- OpenSim
- Unity3D
- Alternate reality games (ARGs)
- Jibe
- Augmented reality
- Geolocation technologies
- access to the technologies (firewalls, hardware, etc.)
- interface design issues
- ease of use
Another Twitter conversation, another blog post...another thought-stream based out of ADL's Implementation Fest #ifest (Srsly, this is the most a conference has inspired me to write in a long time. A good, and bad, thing...).
So, let's talk about the opportunities and problems with using games for assessment. Especially when you throw the "s" word in there..."standardized" assessment.
Deep breath.
First, I'll preface by saying that games are a natural environment for assessment...in essence, they are assessing your performance just by nature of the game structure itself. Unless, of course, there aren't clear success metrics and you "win" by collecting more and more meaningless stuff (like Farmville)...but that's a whole other topic. So let's assume there are success metrics built into the game and that those metrics align with your learning objectives. It's logical that by having someone play a game, you'll see how well they know something or know how to do something. Right?
Nothing is ever that easy. There are lots of aspects of game play that depend greatly on how the game was designed. For one, games have an intrinsic layer of cognitive overhead that may not exist in real life. For example, as I've been learning how to play Call of Duty 4, I first have to master the use of my PS3 controller. No, this isn't a learning game, but the same principles apply...it's why real guitar players get irritated playing Guitar Hero...there are skills you need to develop to play a game, or to be successful in a game, that don't exist in real life or don't mirror the skills necessary to be successful at real-life tasks. This becomes especially clear in first-person shooter games, where your ability to operate a game controller does not directly translate to being able to accurately fire an automatic weapon in a combat environment. In using games for assessment, then, you run the risk of measuring how well someone plays the game rather than how well they have mastered the real skill or content...not the objectives you are actually hoping to assess.
Another issue with games for assessment is the gender difference in how people play games. I'm about to make some broad generalizations, so bear with me and recognize that some women game like "guys" and some guys game like "girls"...but there are different ways that people approach game environments, and those differences do tend to follow along gender lines. Men are bigger risk-takers and explorers; women like to be guided, understand the environment, and follow the rules. Depending on how you design your game, you risk alienating a whole group of players if you don't consider these gender differences in game play. Worse, if you are using games for standardized assessment, you could be putting about half of the people you are assessing at a disadvantage just by the nature of the game design. Given the general acknowledgment that standardized tests are racially and class biased, adding a layer of gender bias in the game design risks making the concept of "standardized" even more meaningless.
Do I think games can be used effectively for assessment? Yes. Look at surgical simulations and flight simulators...close approximations of performing tasks in real life. Research has shown that successful performance in these simulated environments correlates with successful performance on the actual tasks. Where you can mirror game performance to real performance in this way, I think games are a brilliant and useful form of assessment. But without careful design and thoughtful reflection on what the game environment adds to assessment, and on the trade-offs compared to other forms of assessment, we risk creating another assessment environment that falls short of measuring true capability, potential, or performance.
Coming out of the #ifest Twitter stream, I once again heard how America's Army was a shining example of how games had been used to improve recruitment efforts. I posed the question...has it improved recruitment of women at the same rate it has improved the recruitment of men? So far, all I've heard back is *crickets*.
For two weeks, I have been looking for data or research on America's Army that mentions gender as a research parameter, but so far, I've found nothing. If you know of any research, I'd love to see it. My hypothesis? America's Army did not improve the recruitment of women nearly as much as it improved the recruitment of men. If that's the case, what does that say about the relative value of recruiting women vs. men into our military?
What makes a game successful? Is it ok for public institutions (government, schools, etc.) to measure the success of a serious game without looking at differences in outcomes along the most basic parameters (gender, class, race)? Is it ok to say a game is successful in achieving its goals if we don't consider those issues as part of the discussion?
I'm tired of hearing the marketing spin and the hype around how games can change the world if we're not even asking the most basic questions about WHO games are changing and HOW they are changing them. You won't find a bigger advocate of games for learning, for raising awareness, and for supporting behavior change. But not all games are created equal. We have to be vigilant and constantly question our design to ensure we're achieving the outcomes we seek. Ignoring questions of gender, class, and racial bias in serious game design makes me question the motives of the design itself and the motives of those promoting a game's "success."
As always, I welcome anyone's comments who can prove me wrong...
We are pleased to announce the first official Tandem Learning Innovation Community event, scheduled for Friday, August 27th, 2010 at 9:00 am SLT / 12:00 pm EDT. We'll be hosting an open house on our new Second Life island. The official networking event, led by Koreen Olbrish (SL: Nina Sommerfleck), will be an informal discussion of community topics, including:
• New technologies for the TLIC to explore
• Major challenges in technology adoption for the community to address
• TLIC at DevLearn 2010 – call for interested parties to be showcased
Please let us know if you plan to attend by emailing Jedd Gold via LinkedIn or at jedd.gold@tandem-learning.com. We will be sending out the SLURL the day before the event to everyone who RSVPs.
If you haven't yet joined the Tandem Learning Innovation Community, you can request to join here.
We are looking forward to seeing you there!
Even though I'm attending through the Twitter stream, ADL's Implementation Fest #ifest is getting me fired up about some learning technology industry issues that just can't be explored in 140 characters. For example, yesterday there was some lively conversation around the usefulness of learning tools.
And I, in a rash statement, said that most learning tools suck.
But let me clarify, because there can be a broad definition of what a learning tool is.
For me, a learning tool is not what I use to design learning experiences (those things might include pen and paper, whiteboard, PowerPoint, Visio, etc.). A learning tool is NOT a reference tool like Wikipedia. Wikipedia is an information portal where you can go, read, and maybe learn something new...but it was not designed as a learning experience. It does not facilitate learning, even though it can enable it. Can you learn from a reference tool? Sure! But good reference tools have good user experience design, not instructional design...which makes them reference tools, not learning tools. There IS a difference.
A learning tool, to me, is something that you use to develop a learning experience. In other words, a tool that allows you to "design" a learning experience and then output it...voila!...as a learning experience. Input = content, output = training. And here's why I think most learning tools suck.
Most tools intrinsically limit what you can design through their functionality. Let's take PowerPoint. What you're going to get is slides. Pretty didactic. Maybe a little embedded video, some nifty animations...but you're not going to get much in the way of learner interaction.
"what's ur expected output from tools? Learning Content? Why ask the architect to output a house?"
I've been following the ADL Implementation Fest #ifest stream on Twitter today, and some of the conversation with my PLN has sparked some thoughts, maybe perspective, on how, or where, I see the government being able to lead the way in training...and, what prompted me to write this post, the ways in which it's misdirecting its energies.
First, let me say, there are some great examples of people in government doing things the right way. Just from my immediate experience, Dr. Alicia Sanchez, who works for DAU, is the games czar who is helping integrate gaming into their curriculum. Mark Oehlert, also at DAU, is integrating social media technologies to support learning and knowledge management. Judy Brown at ADL is an industry-recognized expert in mobile technologies and how they can be leveraged for learning. (Just realized, ironically, that these three will also be showcasing their knowledge at DevLearn 2010. You should go.) These three people, who happen to be people I know and respect, understand the unique positions they hold and their opportunity to leverage technology for innovative applications. In short, they recognize that they have the chance to DESIGN really cool applications of existing technologies within the government and talk about how these projects are helping to improve learning, collaboration, and communication.
What I'm hearing out of iFest (so far...it's the first day...;) is that the focus is still really on what technology can do for you and what technology initiatives ADL has been focusing on. To which I say...REALLY?!?! Sigh.
I don't need or want government agencies to fancy themselves technology companies. They aren't startups, nor are they Microsoft. In short, there are companies who actually do that. And those companies need to make money doing it, which means they need to build things that the market needs (even if the market doesn't want them...that's a totally different thing...).
What I'd love to see is agencies within the government starting to look at what REALLY helps support learning...good design. I'd love it if they saw themselves as master implementers, not builders. Our government has tons of people who need training, and it's got tons of money and resources...why not leverage them for those things? Try innovative solutions. Experiment with design. Conduct research to establish best practices. That's what the learning technology industry needs. The government could provide this...it could LEAD this. But for the most part, it's not.
If a technology is needed, the market will push it, because that's what the market looks for: meeting unmet needs to make money. I'm tired of hearing about how a technology the government is developing is going to solve some problem. Let's face it, even Google has struggled with implementing innovative technologies (see: Wave, Lively), and that's their business...it's what they do to make money...and they are arguably the best at it.
I'm hoping that as I hear more from iFest, it's focused on design. Fingers crossed. If not, it's an opportunity lost...
Whether or not we like to admit it...there actually IS a limit to what we can accomplish in the little time we have. I'm not saying that our potential is limited...I'm saying that we have capacity issues. There are only 24 hours in a day. We actually need to sleep, eat...we can't just keep going full steam ahead, running full out, all of the time.