Koreen Olbrish of Tandem Learning has a post about using games to assess learning, and she addresses both opportunities and problems.
Games are a natural environment for assessment…in essence, they are assessing your performance just by nature of the game structure itself. Unless, of course, there aren’t clear success metrics and you “win” by collecting more and more meaningless stuff (like Farmville)…but that’s a whole other topic.
So let’s assume there are success metrics built into the game and those metrics align with what your learning objectives are.
Koreen’s main topic is game design, but I want to talk about that last idea:
the game’s success metrics need to align with your learning objectives.
This sounds like Instructional Design 101, since it is Instructional Design 101. Even more fundamental (Instructional Design 100, maybe) are these questions:
What do you want people to do?
Why aren’t they doing that now?
How will this make things better?
No, the first question isn’t about instruction at all. Nor is it about, “How do you want them to act?”
It’s about what you want people to get done.
When you can’t articulate what you want people to accomplish, it hardly matters what interventions you try. You have no way to measure progress. Might as well just run them all through whatever you feel like.
Making your goals less fuzzy
“Sheep dip” refers to a kind of chemical bath intended to prevent or combat infestations of parasites. (Videos of the older, plunge-style and newer, spray-style processing of sheep.)
Farmers dip or spray sheep because… well, I’m no farmer, but here are some guesses:
- It’s more cost-effective than diagnosing the needs of each sheep.
- A dip-tank of prevention is better than a barnful of cure.
- Sheep on their own rarely propose new pest-management processes.
Ultimately, sheep farming has a few key outputs: leather, wool, mutton. While the sheep play an essential role, I don’t think you can successfully argue that these are accomplishments for the sheep. So what matters is the on-the-job performance of farm workers.
Speaking of on-the-job, many industries and organizations impose mandatory, formal training. Even there, the accomplishment shouldn’t be “training completed.”
One client delivered “equal-employment awareness” training annually to every employee. The original charter was full of “increase awareness” and “understand importance.” Here’s what that looked like after a lot of “how can I tell they’re more aware?”
- You can recognize examples of discriminatory behavior on the job.
- You can state why the behavior is discriminatory.
- You can describe steps for resolving the discrimination.
That’s not exhaustive (and the legal department would probably say you need to sprinkle “alleged” all over the place), but the three points are a first step toward a success metric that connects the individual and the organization.
Sometimes, it is a training problem
When people in an organization can articulate overall goals, it’s easier for them (as individuals and in groups) to think about how their activities and their results relate to those goals. They’re also likelier to be better problem-solvers, because they won’t corral every problem into a formal-training solution.
Even when a major cause of a performance problem is the lack of skill or knowledge, you benefit from revisiting those Design 100 questions:
- What are the results you expect when people apply the skills they currently lack?
- What could interfere with their applying them?
- How will this approach help them learn and apply the skills?
Slightly more diplomatic language led that EEO-awareness client to decide that knowing the date of the Americans with Disabilities Act didn’t have much impact on deciding whether, in a job interview, you can ask an applicant, “Do you have a handicap?”
I’m no expert on workplace games, but I’m pretty sure I get what Koreen Olbrish is talking about. It’s the workplace first, then the learning goal, and then the application of good design in pursuit of worthwhile results.
The same is true for any planned effort to support learning at work. You need to focus on what’s important, on how you know it’s important, on why you think training will help.
Then you use that information to guide your decisions about how to help people acquire and apply those skills when it matters.
Mindlessly grinding out courses (instructor-led, elearning, webinars, whatever) isn’t the answer, regardless of how many completion-hours people rack up.
It’s just…well, you know.
Bigg’s Sheep Dip (Glenovis) adapted from this photo by Riv / Martyn.
Bigg’s Dips (yellow/black) by Maurice Michael.
Quibell’s Sheep Dips by Peter Ashton aka peamasher.
3 thoughts on “Sheep dip and success metrics, or, don’t flock up”
Nice article, wonderful insights.
I couldn’t agree more: priorities, then planning.
Learning and development without prioritizing the necessities leads to nowhere.
Hi Dave, I could not agree more about observable learning objectives.
My favorite is the “understand” objective. When I see that as an objective, I am fairly certain what follows will be “sheep dip”.
When designing, I then have to start with the old questions:
How would you know if they “understood”?
What would you see them DO if they “understood”?
Chris, it’s good to have you chime in.
I’m in complete agreement with “understood” — except when it (a) seems to be part of the client’s culture and (b) seems to be shorthand for something observable. In other words, I don’t try to force them into my language; I just try to reach a consensus that “understand the inventory system” means “retrieve appropriate reports, correctly interpret data, select suitable responses, transmit vendor orders,” et blooming cetera.