Why a GAME for good? A response to Keogh’s critique

In his critique of McGonigal’s “Play, don’t Replay!” campaign, Brendan Keogh said: “To the games evangelists, games become hammers and all the world starts to look like a nail.”

I feel that all game-based intervention projects should start with a solid reason why a game, and not a billboard, brochure, or e-learning-style app, is the appropriate medium. Games are expensive to build, risky to design, and many intervention goals are simply not best addressed with a game.

Sadly, I predict that this very important point will be lost because the rest of Keogh’s article is inflammatory. I’m not particularly disagreeing with his other points; e.g. while it’s not wrong that promotion of games for good can provide a “veneer of respectability for a far broader (and lucrative) “gamification” industry where expertise quickly translates into speaking events, consultancy roles, and book deals,” I doubt that kind of statement will start a productive conversation.

It’s useful to all of us when academics are critical, as long as they’re constructively critical and aiming to ensure scientific rigor is not lost during the rise of games for good. For example, Keogh’s larger points might have been better supported had he cited several such biased campaigns rather than attacking just one; sadly, there are plenty to choose from. Still, I found the article useful and I’m glad he wrote it.

 

ToyWorld: immersive toy concept

OK, here is a random ambitious idea for an immersive toy. I call it ToyWorld. It is a way to make ANY real-world toy come alive onscreen.

It requires a hardware accessory for a game console or PC. Imagine a plastic breadbox that is a 3D scanner (a motorized turntable, a button, and two USB webcams – COGS maybe $30). A child puts any toy inside and clicks “scan”. The toy model appears fully textured onscreen. Auto-rigging 3D software finds any limbs and inserts bones, so the plastic toy immediately starts running (slithering, flying) around a virtual world.

The child then designs the toy’s character. She clicks one of a few basic AIs: “good / bad / boss / minion.” The toy immediately starts acting like that role (e.g. attacking props vs. throwing them in the air). The child selects one of four generic settings: scifi, dinosaurs, Barbie-style modern, medieval. A 3D environment with props (trees, buildings, paths) appears, and the toy begins running around the world, acting in character – bad guys recruit minions and take over planets…or castles, etc. The child places basic level items like gold coins, spikes, and fences. The toy starts grabbing coins, avoiding spikes, and running around fences.
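To make the role idea concrete, here’s a hypothetical sketch of how the “good / bad / boss / minion” picker might drive a toy’s behavior. Every name below is my own illustration; nothing here is a real ToyWorld API.

```python
# Each role maps to a list of behaviors the rigged toy cycles through.
ROLE_BEHAVIORS = {
    "good": ["toss_props_in_air", "collect_coins", "greet_pets"],
    "bad": ["attack_props", "steal_coins", "scare_pets"],
    "boss": ["attack_props", "recruit_minions", "take_over_castle"],
    "minion": ["follow_boss", "collect_coins"],
}

def next_action(role: str, tick: int) -> str:
    """Return the behavior a toy with this role performs on a given game tick."""
    behaviors = ROLE_BEHAVIORS[role]
    return behaviors[tick % len(behaviors)]

# The moment the child clicks "bad", the toy starts attacking props:
print(next_action("bad", 0))  # attack_props
```

In a real system the role would seed a much deeper behavior tree, but the point is the same: one click, instant in-character behavior.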

The worlds get better constantly: updates provide ever-deeper trees of AI and interactive behavior, following Minecraft’s model of continuous updates. Pets appear. Weather. Aliens attack. Fires. Viking ships invade.

Players can share and play with EACH OTHER’s toys. Remotely. Angus can get Shoni’s “bad dinosaur” and stick him in his space world.

OK, it’s kind of ambitious. :)

“Dumb Ways to Die” – lessons intervention designers can learn from this smashingly successful campaign

Here’s the point of this posting: If you wanted to engage the public enough to deliver a super-important safety message, your do-good instincts may be your own worst enemy.

Let’s talk about train safety. People keep dying after being hit by urban trains, and metro systems around the world continue to try to raise awareness of this danger.

One recent campaign stands out (watch this YouTube video: http://youtu.be/IJNR2EpS0jw, and check out the game at dumbwaystodie.org) at a level that (maybe I’m going out on a limb here, but I doubt it) no previous campaign has reached.

Some have written on why this campaign is so successful: http://dumbwaystodie.org/why-dumb-ways-to-die-is-an-award-winning-campaign/. Maybe so, but as a designer of interventions, I find these statements aren’t really that helpful in improving my practice.

Here’s what I think we intervention designers can learn from this campaign:

Suppress your do-good instincts, and put the fun first. These guys use an ironic, almost mean-spirited concept. There is absolutely zero clue this is a do-good campaign until you get to the last verse. Even then, as the music slows and they draw your attention, they do NOT hammer home their do-good punchline. They slip it in casually. It’s so minor you might miss it.

Your do-good instincts can often lead you astray on what’s compelling about your mission. The intuitive thing for people who witness death on the rails is to try to scare the public into awareness. Laughing at rail death is counter-intuitive at best…but which campaign will save more lives?

To wrap: if you’re a subject matter expert using an entertainment-oriented medium like YouTube (or video games) to deliver do-good messages, your biggest risk is failing to be entertaining. I suggest you skew your thinking by prioritizing entertainment as MORE important than your actual message. Not equally important, MORE important. Don’t worry, your audience won’t miss your point. Much more likely is that they’ll be turned off by earnest preaching before they ever hear it.

 

Three FAQs for Juicy Game Design

What is juicy design?  The general idea is expressed poetically here: “the satisfying feeling we get when potential energy is converted to kinetic energy. That point where we release energy from a design in a way that creates surprise, delight,…”

Most hit casual games are loaded with examples, but PopCap’s games are most commonly cited, with good reason. Plants vs Zombies, Peggle, and earlier PopCap games are amazing tutorials in making tiny actions, whose individual meaning is vanishingly small, feel satisfying, and in building that up into holistic player satisfaction.

Let’s discuss the concrete implications (aimed at the beginning game designer, as a FAQ).

Q: How is Juicy different from basic good software interface design practice?

A: Most designers are comfortable with logical or factual design lenses: e.g. a “click” sound helps the user realize they clicked a button; simplify the screen so the important ideas pop out. These make sense. By contrast, juiciness is not as logical. Juiciness is an emotional lens on design. A well-designed juicy game matches the player’s subconscious feelings of fairness and reward/punishment: “I did that well, so something good should happen.”

Q: So, Juicy means good reward / punishment, Skinner Box type game design?
A: No.  Juicy is about tiny player action.  When you collect a coin in Plants vs Zombies, after doing a successful move, notice your FEELING of expectation of “good stuff”.  Notice the satisfaction of the coin appearing. Then, before you click, imagine the coin just vanishing when you clicked it.  Now, click it. That little flash and spinning of the coin, traveling to your points? That’s juiciness. It’s all the small stuff.
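For concreteness, here’s a minimal sketch of the coin moment described above. This is not any real engine’s API, just an illustration of the idea: the “juice” is the coin’s brief, eased journey to the score counter, rather than an instant score update.

```python
def ease_out(t: float) -> float:
    """Decelerating easing: fast start, soft landing, which reads as satisfying."""
    return 1 - (1 - t) ** 2

def coin_path(start, target, frames):
    """Yield the coin's position each frame as it travels to the score counter."""
    sx, sy = start
    tx, ty = target
    for f in range(1, frames + 1):
        t = ease_out(f / frames)
        yield (sx + (tx - sx) * t, sy + (ty - sy) * t)

# The juicy version: flash, spin, and travel over a few frames, THEN add the points.
path = list(coin_path(start=(120, 300), target=(0, 0), frames=10))
assert path[-1] == (0.0, 0.0)  # coin lands exactly on the counter
```

The unjuicy version would be a single line that increments the score; the juicy version spends ten frames confirming the player’s expectation of “good stuff”.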

Q: So, add fancy animation and your game is juicy?
A: Maybe. A poorly-designed juicy game has fancy animations that don’t relate to the player’s experience. It will feel ‘tacked on’, or unrelated to the core game activity. Or it’s overdone: imagine audience cheering sounds for every tiny decision. It is too much – it feels false.

Q: Does juicy relate to the big picture – the game’s purpose – or is it more about UI in the moment?
A: It’s both, in a gluey way. Good juicy features connect the moment to the big picture and the reward system: the player’s intention is responsively and satisfyingly reflected back by the game.

Q: Is a Juicy design approach better than other ways of designing games?
A: No. “Juicy” is merely a narrow but useful lens through which to view a game’s design. One cannot simply “make a game juicy” and be certain it’s better. For example, consider characters in a casual game. A mascot game character, like the Bookworm worm, reacts to player choices and personifies game outcomes. The worm’s primary function is to mirror and validate the player’s internal, emotional state (though it also provides hints). This is “juicy” character design.

Now consider the player character in a serious first-person immersive war simulation game. Can the enemy see the player’s head above the barrel? This is not an emotional, “juicy” design decision. This is a rational design issue. The player’s character is highly functional: it shows the player’s position in the field, the action the player chose, and the reaction or impact.

The primary purpose of most 3D game player characters is not to reflect the player’s emotional state (though that is part of the purpose – for a richer discussion see Gee). Imagine “improving” a primarily strategic game by having the player character think snarky comments, celebrate head shots, or wipe tears away. Hopefully it’s obvious that making this character more “juicy” could easily hurt the player’s overall satisfaction.

Your comments or critiques are welcome.

Wanted: Better Labeling for Health Apps in App Stores

Health apps (including games for health) are being used today by consumers and medical professionals to treat diagnosable health problems. They are currently unregulated, though the FDA this month announced the first round of regulation.

The FDA has specifically said they are not aiming at app stores (link).  However, there is clear need for improvement in today’s app stores.

Consumers should know if the health app they’re buying is effective, but today, there is no such information available.

The design of app stores (e.g. Google Play, Apple App Store) strongly influences the information developers disclose. Right now, there are no guidelines for health app developers; they are required to supply only the same information as entertainment app developers.

I feel the health app community (mHealth, Games for Health, the FDA, academics, and consumer advocacy organizations) should band together to produce or endorse a single set of design guidelines, aimed at Google, Apple, insurance agencies, and other leading distributors of health apps to consumers.

App stores are the aisle in the drugstore. They are where consumers shop and compare. Like the labels on over-the-counter medicine, developers of health apps should inform their consumers:

- what does this app aim to achieve? (reduce depression? weight loss? injury rehabilitation?)
- is the app effective? (an independent, 5-star rating from “promising” to “proven”)
- who is the app intended for? (age, condition)
- what are signs that further help is needed, and where can the user find that help?
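As a sketch of what a machine-readable, drug-facts-style label could look like, here is one possible structure mirroring the four questions above. The field names and the 1-to-5 scale are my own illustration, not any store’s actual schema.

```python
from dataclasses import dataclass

@dataclass
class HealthAppLabel:
    aim: str                 # what the app aims to achieve
    efficacy_rating: int     # 1 ("promising") to 5 ("proven"), independently assigned
    intended_audience: str   # age range, condition
    escalation_advice: str   # signs further help is needed, and where to find it

# A hypothetical example entry:
label = HealthAppLabel(
    aim="reduce mild-to-moderate depression symptoms",
    efficacy_rating=3,
    intended_audience="adults 18+ with mild depression",
    escalation_advice="If symptoms worsen, contact a doctor or a crisis line.",
)
```

A store could render this as a standard label panel and sort or filter on the efficacy rating.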

I feel that, given the state of affairs, voluntary guidelines strike a good balance between overregulation and the total confusion in today’s app stores.

If you agree or have comments, please add a comment below or get in touch directly: josh@whit-kin.com

Useful Theory for Learning Game Designers: Core Mechanic

You’re a learning game designer, looking for ways to explain the basic ideas behind your designs to stakeholders. You need “greatest hits” theory that’s simple enough to explain to anyone.

Here’s Wolfgang’s great diagram and explanation of the components of a game:

Wolfgang’s 4-layer cake of game design

It obviously doesn’t describe ALL designs, but it’s a great starting point.

http://www.funstormgames.com/blog/2012/06/designing-around-a-core-mechanic/

Gender Contamination: A clear term for a fuzzy but real idea

I love it when someone names and explains a truth I’ve been half-aware of for some time. This just happened today with Jill Avery’s term “Gender Contamination”.

This article explains:

“Gender contamination” is the loaded and fascinating term coined by HBS senior lecturer Jill J. Avery to describe just how uncomfortable women and (more often than not) men become when a product they use to symbolize their gender is extended to appeal to another gender. She first noticed this phenomenon while working at Gillette, where the company was careful to call its women’s line “Gillette for Women” in order to create separation between pink razors that smell like papayas and black manly-man razors that smell like manly-man things. This piece describes a couple of other examples stemming from the research, including the struggle to get men to drink diet sodas (black cans and avoidance of the word “diet” help) and how gents on Porsche message boards managed their insecurities when the car company came out with an SUV.

It’s a really cool idea.

I’ve observed one particular instance of Gender Contamination that I found interesting. This phenomenon is sort of a variation on the idea above.  It happened in 3 steps:

  1. A product started gender-neutral,
  2. It became dominated by one gender (by accident or design), and
  3. now is being marketed explicitly to another gender.

I’m thinking of Lego. Lego used to mean generic, gender-neutral building blocks. Then came themed kits, which were a key reason Lego is the second-largest toymaker in the world. By selling kits instead of bulk blocks, Lego can differentiate and market an endless series of products. This is especially important because Lego’s patents recently expired.

Here’s where the gender contamination idea comes in. Lego’s history of gender-neutral themes did not continue with their kits. A few kit themes are gender-neutral today (e.g. Lego City), but most are not. Lego has found huge commercial success in making kits themed and licensed around boy-friendly storyworlds (Star Wars, Indiana Jones, Iron Man, Batman), and has churned out many original storyworlds around action and violence (e.g. Chima).

The sad result is many, even most, girls today don’t play with Legos after Duplo.  Lego became a boy’s toy.

Now, Lego’s trying to fix that. They’ve recently launched a girl-targeted storyworld called the Friends series.

Maybe it’s just because it’s new, or because I’m a guy, but to me, Friends feels …weird… even though I’m not opposed to Lego’s intent.  In fact I think it’s great that Lego is reaching out across gender boundaries, even if their motives are impure.

What’s wrong with Lego Friends?  It’s certainly got some clear weaknesses, as amazing game designer Erin Robinson explained so powerfully (here).

However, I feel something else uncomfortable about Lego Friends, and I can’t explain it. I wonder if Gender Contamination theory (or this minor variant of it) might be part of the key to articulating that other nagging feeling.

I’d love to hear Jill and Erin discuss how the idea of Gender Contamination explains the Mysterious Discomfort of Lego Friends. Here’s hoping they’re inspired to explore it together.


Regarding Google’s advice to learning app designers

There is a growing public perception that “most educational apps stink” in today’s App Store, in part because they are ineffective.  That’s partly why I’m so happy to see Google promoting quality apps in their new App Store for Educators:

“Apps submitted to Google Play for Education will be evaluated by a third-party educator network, which will review them based on alignment with Common Core Standards and other factors.”  In the demo video, it is revealed that CUE is the 3rd party doing the reviewing.

I’m also very happy to see Google offering design advice to educational app designers/developers. In this article I suggest ways Google could improve that advice.

In this first section, I argue that Google should require app developers to prove their app is effective.  I then review Google’s advice more broadly.

If I could make only one change…

If I could make only one change to this list, I would add this:

  • Prove your app is effective.

For example, developers should be required to say, “Students who played the game [Motion Math] for 20 minutes for five days improved on a fractions test by an average of 15%.” (link). Pearson offers a free, generic framework (link), and many other similar resources exist.

I’m not talking about screening low quality apps.  I’m talking about screening apps that don’t measure anything at all.

Google told learning app designers (here, my bold):

Apps with highest educational value will have these characteristics:

  • Designed for use in K-12 classrooms.
  • Aligned with a common core standard or support common-core learning.
  • Simple, easy to use, and intuitive for the grade levels the app is targeting. App is relatively easy to navigate without teacher guidance. Not distracting or overwhelming to students.
  • Enjoyable and interactive. App is engaging to students and lets them control their experience.
  • Versatile. App has features that make it useful for more than one classroom function or lesson throughout the school year.
  • Supports the “4Cs”:
  1. Creativity — Allows students to create in order to express understanding of the learning objectives, and try new approaches, innovation and invention to get things done.
  2. Critical thinking — Allows students to look at problems in a new way, linking learning across subjects and disciplines.
  3. Collaboration — Allows students and (if appropriate) educators to work together to reach a goal.
  4. Communication — Allows students to comprehend, critique and share thoughts, questions, ideas and solutions.

Edutainment, initially hailed as an educational revolution, failed to disrupt classroom practice. One of the many reasons, argued MIT researchers, was the products’ frequent lack of efficacy (link). Google could help the latest generation of developers avoid repeating this clear and well-understood mistake.

Bad learning apps can actually hurt learning. Some popular learning products are widely believed to be ineffective (such as toddler DVDs), but it is less commonly known that bad learning apps can do harm, not just fail to do good.   “Zimmerman, Christakis, and Meltzoff (2007) empirically demonstrated that for each hour children, ages 8 to 16 months, were exposed to commercially available audiovisual programs (e.g., Baby Einstein and Brainy Baby), the children developed 6 to 8 fewer receptive vocabulary words (i.e., words they understand) than their counterparts who were not exposed to such stimuli.” (Christakis 2009).  Google should prevent ineffective products from being confused with unknown or proven good educational products.  Requiring any sort of efficacy evidence would be a simple way to screen many of these products.

Obviously not all one-person app developers can afford a “proper” randomized controlled trial, but I believe anyone can do a simple pre-post efficacy test. Some learning goals are less obviously testable: how does one evaluate the efficacy of “systems thinking”? It can be done, if only by using very qualitative, unstructured interviews.
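As a concrete illustration of how small such a pre-post test can be (the scores here are invented for the example), the whole calculation needs nothing beyond the standard library:

```python
from statistics import mean

pre = [40, 55, 62, 48, 70, 51]    # fractions-test scores before playing
post = [52, 60, 75, 55, 78, 58]   # scores after five 20-minute sessions

gains = [b - a for a, b in zip(pre, post)]
mean_gain = mean(gains)
pct_improvement = 100 * mean_gain / mean(pre)

print(f"Average gain: {mean_gain:.1f} points ({pct_improvement:.0f}% improvement)")
```

This is not a substitute for a controlled trial (there is no comparison group), but it demonstrates that something was measured, and that is exactly the screen I am proposing.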

I wish Google would require all apps in the education store to

  1. make a clear, specific claim of efficacy,
  2. provide evidence of that claim, and
  3. have that evidence validated or reviewed by a 3rd party

Google’s CUE approval system is a good first step toward the third point, but I hope for more: I want a scale rating, not just approved/not approved, so proven apps are first on the list and the reasons are clear.

A Broader Critique

Next, I want to talk more broadly about Google’s design advice: Is this list good advice? 

Advice is cheap to make, but VERY EXPENSIVE to follow. Every point on Google’s list adds huge cost and risk to the app developer.

Specifically, I ask:

  • How should developers decide which, if any, of these to follow?
  • Should other stakeholders, say publishers, criticize an app using this advice?
  • How?

There is a difference between a wishlist and useful design advice. For example, consider this design advice: a quality car should include as many of these features as possible:

  • seats 12
  • 100 mpg
  • 0-100-0 in 4 seconds
  • Less than $10,000
  • Made from environmentally friendly materials
  • Looks awesomer than a Lamborghini
  • Parks in half a parking space

I hope we can agree that this list is near-impossible for practicing commercial car designers to adhere to, and that it is unlikely to be useful to its audience. Compare that silly list to Google’s list, and note what the two lists have in common as you read the following questions.

  • Where did this advice come from? Who wrote this? What are their qualifications?
  • Is the source ‘data’ trustworthy?  Is this a wishlist of a naive enthusiast?  Is it based on lessons learned from a single case study? Is it a broad summary of the academic literature, written in an ivory tower?
  • Does this advice apply equally across the entire diverse landscape of the field? Should learning apps focused on practice be more collaborative than instructional apps?
  • Does this advice fit with other expert design advice?  See below for examples. Are there conflicts or commonalities between this advice and existing, prevailing views of experienced designer/researchers?  What reasons are given for this variance?
  • Is this advice realistic? Is it even possible to build an app that fits all, or even most, of this advice?
  • What are some examples of apps that follow this advice?  Discuss merits and weaknesses of exemplary designs.
  • Could and should this advice be used by stakeholders, other than developers, to assess or critique?
  • Is there any evidence or reason to believe this advice will yield improved learning apps?  Are there cautions on any dangerous combinations?

I hope the reader can, by comparing it to the silly car-design list, see why and how Google’s advice might be improved upon.

How useful is broad advice to one-person app developers?

What use is design advice for a “car”?  Minivans, supercars, and econoboxes all have very different use cases.  There is precious little design advice that applies to all.

A naive advisor might argue that these traits are all desirable, so what’s the problem with advising designers to aim for them? The problem comes in assuming all learning apps are essentially similar.

Consider a supercar designer who is told: cars should be affordable. Should they try to make a $10,000 supercar? Of course not. It would not be possible to meet the key requirements of a supercar (performance, style, etc.) under a $10,000 cost ceiling. Why not try to make minivans take half a parking space? Again, the value of the minivan is its hauling capacity. A tiny minivan is not a minivan anymore; it’s a different type of car.

Good learning apps are not essentially similar. Teaching the concepts of algebra has little in common with reviewing cultural norms in 17th-century Africa. Proponents of gamification suggest we can reuse mechanics for a variety of purposes. Applied to cars, that’s like saying we can all adapt a Ford Taurus to our needs: farmers can add a roof rack, instead of buying a pickup truck, for hauling brush; racers can put chrome rims on and, bingo, teen revheads have a cool car.

How many e-learning apps are basically flash cards? Show material, then ask multiple-choice questions. Such e-learning designs can be effective, but designers should work hard to improve on that weak interaction; such designs are not the best we can do with the power of Android apps. I believe Google offered this advice intending designers to aim higher, as Devlin explains well here.

So, what should the advice be? Following the car metaphor, supercar designers should be discussing specifics: the merits of carbon fiber in interior detail, for example.

However, there is need for basic advice aimed at one-person learning app designers who didn’t necessarily study e-learning design principles in school. Such designers are perhaps akin to kit-car builders:

  • They need a few basic ideas (more rubber on the road means more traction, but higher friction). I think this was Google’s intent with this list, and I give some of my favorite examples of such advice at the end of this post.
  • They need many specific tips (e.g. slant your kingpins to make the car steer straight). This is tough to deliver on paper – it needs to be “just in time” and very simple, and pushed to designers as they work.
  • They don’t need broad goals (make your car use less gas). I think Google accidentally delivered much of this type of advice.

There are some general points, such as those made by http://sgeducation.wordpress.com/2008/10/07/failure-of-edutainment/

Much design advice should be specific to the intended learning goal, age, and nature of outcome (practice, etc).  Learning designers ask:

Should we repeat material? Is it worth building a proper simulation, or just semi-faking it with a simple one-variable interactive element? Where does learning really occur in apps? How can we collaborate yet avoid the blind leading the blind off the cliff? There are some clues and a few outright answers in the literature (it’s not very accessible or easy to find, but that’s a separate rant). That’s the design advice we need.

The end.

PS Further Reading

Finding good advice ain’t easy.  I’ll give three personal favorites, for classroom learning game design.

  1. MIT’s “Moving Learning Games Forward” paper here,
  2. Gee’s numerous excellent principles here (summarized by Draper here).
  3. For math learning games specifically, Devlin’s blog here.

These three examples are specific to learning games, part of the vast literature on e-learning (a random example of which is here).

<whap> Thank you sir. May I have another?

I am considering writing a review that compares, point by point, Google’s advice to prevailing views from Gee and Osterweil, specifically for learning game designers. (If that’s something you’d be interested to see, let me know.)

New Job Title: Researcher/Designer, Northwest Media

I’m happy to report that I’m at Northwest Media designing interventions for social good based on learning games. We’ve got a number of projects on the go:

  • VSG, a Sims-RPG hybrid game designed to help at-risk kids, especially foster teens, realize the need to develop life skills before jumping into the real world.  It’s a NIH-funded SBIR Phase II, and we’ve got most of a year left to finish.
  • InTouch, a serious game that aims to help foster and birth parents develop a form of mentalization called “Parental Reflective Functioning”.

It’s exciting stuff that combines innovation, simple execution of what is known, and all for social good.  Nice!

 

Doctoral Dissertation – shipped!

I am so happy to report that I submitted my doctoral dissertation back in December.

Here’s the abstract.

This research aims to improve the practice of designing educational video games (“learning games”). This thesis aims to both validate and extend Shelton’s theory of activity-goal alignment, which focuses on the relationship between a player’s activity and the designer’s intended learning goal in any learning game. The thesis develops and evaluates two novel tools. First, an autoethnographic account of a recent learning game project confirms Shelton’s prior findings that activity-goal alignment theory meets an important need in learning game design practice and that Shelton’s theory might be made more accessible to practising designers. The AGA Scoring Tool is developed, and both it and Shelton’s theory are evaluated through analytic discussions of designs of several existing learning games: activity-goal alignment theory is found useful, and scoring activity-goal alignment is argued to be clearer than Shelton’s narrative-based approach. Secondly, this thesis argues that there is need for improved tools for assessment of learning games. A critical review of existing assessment tools yields a list of criteria for any learning game assessment tool.  A basis for a new learning game assessment tool is developed from three theories: Higher Order Learning theory, Gee’s principles of Deep Learning, and Shelton’s activity-goal alignment. These three theories are argued to comprise an important, prevailing position within the learning game design literature. A new tool, the AGA-Based Assessment Tool, is proposed and exercised in critical discussions of several learning games. Important gaps between learning game design practice and theory are revealed using the tool.  The thesis concludes that scoring activity-goal alignment is useful to the learning game designer because it makes an important theoretical position from the learning game design literature clear and simpler to apply in practice.

So good to have this sent! It’s being examined now, and I’ve been trying not to think about it for the last seven months. I’ve already had one paper, adapted from the autoethnographic chapter, accepted at the Games for Health Europe conference #GFHEU in October, so that’s a good feeling!