Regarding Google’s advice to learning app designers

There is a growing public perception that “most educational apps stink” in today’s app stores, in part because they are ineffective. That’s partly why I’m so happy to see Google promoting quality apps in their new app store for educators:

“Apps submitted to Google Play for Education will be evaluated by a third-party educator network, which will review them based on alignment with Common Core Standards and other factors.” The demo video reveals that CUE is the third party doing the reviewing.

I’m also very happy to see Google offering design advice to educational app designer/developers.  In this article I suggest ways Google could improve that advice.

In this first section, I argue that Google should require app developers to prove their app is effective.  I then review Google’s advice more broadly.

If I could make only one change…

If I could make only one change to Google’s list (quoted below), I would add this:

  • Prove your app is effective.

For example, developers should be required to say, “Students who played the game [Motion Math] for 20 minutes for five days improved on a fractions test by an average of 15%.” (link). Pearson offers a free, generic framework (link), and many other similar resources exist.

I’m not talking about screening out low-quality apps. I’m talking about screening out apps that don’t measure anything at all.

Google told learning app designers (here):

Apps with highest educational value will have these characteristics:

  • Designed for use in K-12 classrooms.
  • Aligned with a Common Core standard or supports Common Core learning.
  • Simple, easy to use, and intuitive for the grade levels the app is targeting. App is relatively easy to navigate without teacher guidance. Not distracting or overwhelming to students.
  • Enjoyable and interactive. App is engaging to students and lets them control their experience.
  • Versatile. App has features that make it useful for more than one classroom function or lesson throughout the school year.
  • Supports the “4Cs”:
  1. Creativity — Allows students to create in order to express understanding of the learning objectives, and try new approaches, innovation and invention to get things done.
  2. Critical thinking — Allows students to look at problems in a new way, linking learning across subjects and disciplines.
  3. Collaboration — Allows students and (if appropriate) educators to work together to reach a goal.
  4. Communication — Allows students to comprehend, critique and share thoughts, questions, ideas and solutions.

Edutainment, initially hailed as an educational revolution, failed to disrupt classroom practice. One of the many reasons, argued MIT researchers, was the products’ frequent lack of efficacy (link). Google could help the latest generation of developers avoid repeating this clear and well-understood mistake.

Bad learning apps can actually hurt learning. Some popular learning products are widely believed to be ineffective (such as toddler DVDs), but it is less commonly known that bad learning apps can do harm, not just fail to do good. “Zimmerman, Christakis, and Meltzoff (2007) empirically demonstrated that for each hour children, ages 8 to 16 months, were exposed to commercially available audiovisual programs (e.g., Baby Einstein and Brainy Baby), the children developed 6 to 8 fewer receptive vocabulary words (i.e., words they understand) than their counterparts who were not exposed to such stimuli.” (Christakis 2009). Google should keep ineffective products from being confused with products of unknown or proven efficacy. Requiring any sort of efficacy evidence would be a simple way to screen out many of these products.

Obviously not all 1-person app developers can afford to run a “proper” randomized controlled trial, but I believe anyone can do a simple pre-post efficacy test. Some learning goals are less obviously testable: how does one evaluate the efficacy of “systems thinking”? It can be done, if only with very qualitative, unstructured interviews.
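
To show how low this bar could be, here is a minimal sketch (in Python, with made-up numbers) of the kind of pre/post calculation even a one-person shop could run on a small pilot group; the scores are purely illustrative:

```python
# Minimal sketch of a pre/post efficacy check; all scores are hypothetical.
from statistics import mean

pre_scores  = [55, 60, 48, 72, 65, 58]   # fractions-test scores before using the app (%)
post_scores = [63, 66, 55, 80, 70, 69]   # same students, after a week of short sessions (%)

gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
avg_gain = mean(gains)

print(f"Average gain: {avg_gain:.1f} percentage points")
print(f"Relative improvement over the pre-test mean: {avg_gain / mean(pre_scores):.0%}")
```

It is crude (no control group, tiny sample), but even this much would separate apps that measure something from apps that measure nothing at all.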

I wish Google would require all apps in the education store to

  1. make a clear, specific claim of efficacy,
  2. provide evidence of that claim, and
  3. have that evidence validated or reviewed by a third party.

Google’s CUE approval system is a good first step toward the third point, but I hope for more: I want a scale rating, not just approved/not approved, so proven apps appear first on the list and the reasons are clear.

A Broader Critique

Next, I want to talk more broadly about Google’s design advice: Is this list good advice? 

Advice is cheap to give, but VERY EXPENSIVE to follow. Every point on Google’s list adds huge cost and risk for the app developer.

Specifically, I ask:

  • How should developers decide which, if any, of these to follow?
  • Should other stakeholders, say publishers, critique an app using this advice?
  • If so, how?

There is a difference between a wishlist and useful design advice. For example, consider this design advice: a quality car should include as many of these features as possible:

  • seats 12
  • 100 mpg
  • 0-100-0 in 4 seconds
  • Less than $10,000
  • Made from environmentally friendly materials
  • Looks awesomer than a Lamborghini
  • Parks in half a parking space

I hope we can agree that this list is near-impossible for commercial, practicing car designers to adhere to, and that it is unlikely to be useful to its audience. Compare that silly list to Google’s list, and note what the two lists have in common, as you read the following questions.

  • Where did this advice come from? Who wrote this? What are their qualifications?
  • Is the source ‘data’ trustworthy?  Is this a wishlist of a naive enthusiast?  Is it based on lessons learned from a single case study? Is it a broad summary of the academic literature, written in an ivory tower?
  • Does this advice apply equally across the entire diverse landscape of the field? Should learning-game apps built for practice be more collaborative than instructional apps?
  • Does this advice fit with other expert design advice? See below for examples. Are there conflicts or commonalities between this advice and the existing, prevailing views of experienced designer/researchers? What reasons are given for any variance?
  • Is this advice realistic? Is it even possible to build an app that fits all, or even most, of this advice?
  • What are some examples of apps that follow this advice?  Discuss merits and weaknesses of exemplary designs.
  • Could and should this advice be used by stakeholders other than developers to assess or critique apps?
  • Is there any evidence or reason to believe this advice will yield improved learning apps?  Are there cautions on any dangerous combinations?

I hope the reader can, by comparing it to the silly car-design list, see why and how Google’s advice might be improved upon.

How useful is broad advice to 1-person app developers?

What use is design advice for a “car”?  Minivans, supercars, and econoboxes all have very different use cases.  There is precious little design advice that applies to all.

A naive advisor might argue that these traits are all desirable. What’s the problem with advising designers to aim for such traits? The problem comes in assuming all learning apps are essentially similar.

Consider a supercar designer who is told that cars should be affordable. Should they try to make a $10,000 supercar? Of course not. It would not be possible to meet the key requirements of a supercar (performance, style, etc.) under a $10,000 cost ceiling. Why not try to make minivans take half a parking space? Again, the value of a minivan is its hauling capacity. A tiny minivan is not a minivan anymore. It’s a different type of car.

Good learning apps are not essentially similar. Teaching the concepts of algebra has little in common with reviewing cultural norms in 17th-century Africa. Proponents of gamification suggest we can reuse mechanics for a variety of purposes. Applied to cars, that’s like saying we can all adapt a Ford Taurus to our needs: farmers can add a roof rack, instead of buying a pickup truck, for hauling brush; racers can put chrome rims on and, bingo, teen revheads have a cool car.

How many e-learning apps are basically flash cards? Show material, then ask a multiple-choice question. Such e-learning designs can be effective, but designers should work hard to improve on that weak interaction. Such designs are not the best we can do with the power of Android apps. I believe Google offered this advice intending designers to aim higher, as Devlin explains well here.

So, what should the advice be? Following the car metaphor, supercar designers should be discussing specifics: the merits of carbon fiber in interior detailing, for example.

However, there is a need for basic advice aimed at one-person learning app designers who didn’t necessarily study e-learning design principles in school. Such designers are perhaps akin to kit-car builders:

  • They need a few basic ideas (more rubber on the road means more traction, but higher friction). I think this was Google’s intent with this list, and I give some of my favorite examples of such advice at the end of this post.
  • They need many specific tips (e.g., slant your kingpins to make the car steer straight). This is tough to deliver on paper – it needs to be “just in time” and very simple, and pushed to designers as they work.
  • They don’t need broad goals (make your car use less gas). I think Google accidentally delivered much of this type of advice.

There are some general points, such as those made at http://sgeducation.wordpress.com/2008/10/07/failure-of-edutainment/

Much design advice should be specific to the intended learning goal, age, and nature of the outcome (practice, etc.). Learning designers ask:

Should we repeat material? Is it worth building a proper simulation, or just semi-faking it with a simple one-variable interactive element? Where does learning really occur in apps? How can we collaborate yet avoid the blind leading the blind off the cliff? There are some clues and a few outright answers in the literature (it’s not very accessible or easy to find, but that’s a separate rant). That’s the design advice we need.

The end.

PS Further Reading

Finding good advice ain’t easy. I’ll give three personal favorites for classroom learning game design.

  1. MIT’s “Moving Learning Games Forward” paper here,
  2. Gee’s numerous excellent principles here (summarized by Draper here).
  3. For math learning games specifically, Devlin’s blog here.

These three examples are specific to learning games, part of the vast literature on e-learning (a random example of which is here).

<whap> Thank you sir. May I have another?

I am considering writing a review where I compare, point by point, Google’s advice to prevailing views from Gee and Osterweil, specifically for learning game designers. (If that’s something you’d be interested to see, let me know.)