In my Building Job Aids workshop (presented last Tuesday at DevLearn 2015), participants analyze multiple case studies, applying techniques and using job aids to, well, build job aids. Among the skills they practice are the ability to choose the right type of job aid for a task, and the ability to use that type effectively.
There’s a lot of thinking and writing: I make an effort to avoid explaining much before an exercise. Instead, there’s a minimal introduction, and much of what I would otherwise have explained becomes a print resource to consult as needed.
One potential downside is that thinking and writing, especially an hour or so after lunch, are conducive to dozing off.
At the same time, my assumption was that participants would want and need additional practice on relevant examples. How could I give someone the chance to assess different job aids and rate their effectiveness? Would she think the samples were likely to produce the desired result? How did they align with the ideas in our workshop?
The challenge wasn’t so much finding the examples as structuring the evaluation. The tradeoffs I saw (or believe now that I saw):
- Time constraints
- Relevance
- My desire for multiple elements in a rating system
- My desire for a simple, overall total
Then the format presented itself in three words: Best in Show.
I liked this title so much that I was determined to use it. But I’ve learned not to be literal about this kind of borrowing. What so often makes Jeopardy!-style games in training a dumb idea (even a counterproductive one) is not necessarily Jeopardy! itself; it’s the mismatch between the content and a format best suited to recalling isolated facts.
Some characteristics of dog shows that I thought suited my goals: I had widely different types of job aids, like the different dog breeds. I had limited time, which at least for me was like the dog-judging segment where the trainer fast-walks the dog in a set pattern before the judge. Plus judging.
That’s where I had the most trouble. How could I get multiple rating points, an overall total per judge, and a logistically sane process? I started with a three-item scale, rating each job aid on its fit (is this a good job aid for this kind of task?), its function (is it likely to produce the desired result?), and its format (how does it stack up against the job aid guidelines in the workshop?).
I could score each of those from 1 to 3, with an extra point thrown in for personal preference. No matter how I squinted, though, it looked like way too much math.
Then I remembered the Apgar score, a quick assessment of a baby at birth. Five qualities, such as heart rate and respiration, are each assigned a score of 0, 1, or 2. The total describes the baby’s physical condition on a scale of 0 to 10.
So I came up with a five-item scale for Best in Show (the arithmetic is sketched just after the list):
- Aptness: how well the job aid fit the task and the setting
- Payoff: how likely it was to achieve the desired result
- Guidelines: how well it followed the guidelines in general and those for its particular type of job aid
- Appearance: overall effectiveness of the design
- Response: the judge’s own reaction to the job aid.
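To make the arithmetic concrete, here’s a minimal sketch in Python of how one ballot rolls up. The item names follow the list above, but the function, the dictionary shape, and the example scores are my own invention, not part of the actual score sheet: each item gets 0, 1, or 2, so the total lands on the same 0 to 10 range as an Apgar score.

```python
# Minimal sketch of one Best in Show ballot: five items, each scored 0, 1, or 2,
# summed to a 0-10 total (the same arithmetic as an Apgar score).
# Names and example scores are illustrative, not the real score sheet.

ITEMS = ("aptness", "payoff", "guidelines", "appearance", "response")

def ballot_total(ballot: dict) -> int:
    """Validate and total one judge's ballot for one competitor."""
    for item in ITEMS:
        score = ballot[item]
        if score not in (0, 1, 2):
            raise ValueError(f"{item} must be 0, 1, or 2 (got {score})")
    return sum(ballot[item] for item in ITEMS)

example = {"aptness": 2, "payoff": 2, "guidelines": 1, "appearance": 2, "response": 2}
print(ballot_total(example))  # 9 out of a possible 10
```

Capping each item at 2 keeps the mental math trivial for the judges, which was the whole point of borrowing from Apgar.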
As you can see, each item had a line for its score, with a box on top for the total.
In the interest of time, I limited myself to six competitors. This was the score sheet:
Off to the show
I was pretty sure I’d have a decent internet connection. I made a slide with links to my six examples. I explained the scoring, distributed the ballots, and showed each competitor for 30 to 60 seconds, with some contextual commentary as needed.
If I’d had a large group, my plan was for each person to fold the completed ballot between the six boxes, so as to tear it into six individual sheets. I’d have had one person total the ballots for competitor A, one for B, and so on. My workshop group was small enough that I could divide a sheet of flipchart paper into six sections as Voting Headquarters. It was little trouble for me to write down the scores by candidate and then total them.
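If Voting Headquarters ever outgrows flipchart paper, the tally itself is just a sum of ballot totals per competitor. Here’s a rough sketch, with judges, competitors, and scores I’ve invented purely for illustration:

```python
# Rough sketch of tallying ballots by competitor; all data below is made up.
from collections import defaultdict

# One entry per torn-off ballot box: (judge, competitor, ballot total out of 10)
ballots = [
    ("judge 1", "A", 9),  ("judge 1", "B", 6), ("judge 1", "C", 8),
    ("judge 2", "A", 10), ("judge 2", "B", 7), ("judge 2", "C", 8),
]

totals = defaultdict(int)
for _judge, competitor, score in ballots:
    totals[competitor] += score

print(dict(totals))                 # {'A': 19, 'B': 13, 'C': 16}
print(max(totals, key=totals.get))  # 'A' takes Best in Show
```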
How it went
Best in Show was a success, both as a change of pace and as an exercise in judging job aids. It also broadened exposure: half the competitors were new; the other half had been seen only briefly, as examples, earlier in the day.
An unexpected plus: everyone could see all the individual totals. One job aid received solid 10s except from one person who rated it a 7. Another participant said to her, “I want to know why you rated it a 7.” The question was not a challenge but rather genuine interest in how another person applied the principles of the workshop.
Aftermath
I’m really pleased this went as well as it did. I’m thinking of ways to make it work better (one participant was confused by my instructions and rated on a scale of 1 to 3 rather than 0 to 2).
And if I have more time, I’ll have a follow-on exercise: Raise the Runt. The idea would be to see which job aid scored the lowest, and then talk about why and about how to improve it.