Monday, August 23, 2010

Games for assessment

Another Twitter conversation, another blog post...another thought-stream based out of ADL's Implementation Fest #ifest (Srsly, this is the most a conference has inspired me to write in a long time. A good, and bad, thing...).

So, let's talk about the opportunities and problems with using games for assessment. Especially when you throw the "s" word in there..."standardized" assessment.

Deep breath.

First, I'll preface by saying that games are a natural environment for assessment...in essence, they are assessing your performance just by nature of the game structure itself. Unless, of course, there aren't clear success metrics and you "win" by collecting more and more meaningless stuff (like Farmville)...but that's a whole other topic. So let's assume there are success metrics built into the game and that those metrics align with your learning objectives. It's logical that by having someone play a game, you'll see how well they know something or know how to do something. Right?

Nothing is ever that easy. Many aspects of game play depend greatly on how the game was designed. For one, games have an intrinsic layer of cognitive overhead that may not exist in real life. For example, as I've been learning how to play Call of Duty 4, I first have to master the use of my PS3 controller. No, this isn't a learning game, but the same principles apply...it's why real guitar players get irritated playing Guitar Hero. There are skills you need to develop to be successful in a game that don't exist in real life, or don't mirror the skills necessary to be successful at real-life tasks. I think this becomes clear in first-person shooter games, where your ability to operate your game controller does not directly translate to being able to accurately fire an automatic weapon in a combat environment. So in using games for assessment, you run the risk of measuring how well someone plays the game rather than how well they have mastered the real skill or content you are hoping to assess.

Another issue with games for assessment is the gender difference in how people play games. I'm about to make broad generalizations, so bear with me and recognize that some women game like "guys" and some guys game like "girls"...but there are different ways that people approach game environments, and those differences do tend to follow gender lines. Men tend to be bigger risk-takers and explorers; women tend to prefer being guided, understanding the environment, and following the rules. Depending on how you design your game, you risk alienating a whole group of players if you don't consider these differences in game play. Worse, if you are using games for standardized assessment, you could be putting about half of the people you are assessing at a disadvantage just by the nature of the game design. Given the general acknowledgment that standardized tests are already biased by race and class, adding a layer of gender bias in the game design risks making the concept of "standardized" even more meaningless.

Do I think games can be used effectively for assessment? Yes. Look at surgical simulations and flight simulators...close approximations of performing tasks in real life. Research has shown that successful performance in these simulated environments correlates with successful performance at the actual tasks. Where you can map game performance to real performance in this way, I think games are a brilliant and useful form of assessment. But without careful design, thoughtful reflection on what the game environment adds to assessment, and an honest look at the trade-offs with other forms of assessment, we risk creating yet another assessment environment that falls short of measuring true capability, potential, or performance.

2 comments:

  1. Not my area of expertise, so looking to learn something here. Doesn't your first issue hold true for assessment in general? There are some of us who are really good at pattern recognition, memorization, writing on deadline, etc. In some sense almost any form of assessment judges our test-taking skills as much as our content knowledge. To follow your analogy, you have to know how to properly fill in the bubbles with your No. 2 pencil (use the controller) in order to take the test (play the game).

    Is the challenge/answer just keeping in mind that there is a certain amount of this in any assessment and design for/around it? (Will now be randomly thinking about this all day and how it applies to when people should use SCORM. It's interesting.)
