Sheep dip and success metrics, or, don’t flock up

Koreen Olbrish of Tandem Learning has a post about using games to assess learning, and she addresses both opportunities and problems.

Games are a natural environment for assessment…in essence, they are assessing your performance just by nature of the game structure itself. Unless, of course, there aren’t clear success metrics and you “win” by collecting more and more meaningless stuff (like Farmville)…but that’s a whole other topic.

So let’s assume there are success metrics built into the game and those metrics align with what your learning objectives are.

Koreen’s main topic is game design, but I want to talk about that last idea:  
the game’s success metrics need to align with your learning objectives.

This sounds like Instructional Design 101, since it is Instructional Design 101.  Even more fundamental — Instructional Design 100, maybe — are these questions:

What do you want people to do?
Why aren’t they doing that now?
How will this make things better?

No, the first question isn’t about instruction at all.  Nor is it about, “How do you want them to act?”

It’s about what you want people to get done.

When you can’t articulate what you want people to accomplish, it hardly matters what interventions you try.  You have no way to measure progress.  Might as well just run them all through whatever you feel like.

Making your goals less fuzzy

“Sheep dip” refers to a kind of chemical bath intended to prevent or combat infestations of parasites.  (Videos of the older, plunge-style and the newer, spray-style processing of sheep.)

Farmers dip or spray sheep because… well, I’m no farmer, but here are some guesses:

  • It’s more cost-effective than diagnosing the needs of each sheep.
  • A dip-tank of prevention is better than a barnful of cure.
  • Sheep on their own rarely propose new pest-management processes.

Ultimately, sheep farming has a few key outputs: leather, wool, mutton.  While the sheep play an essential role, I don’t think you can successfully argue that these are accomplishments for the sheep.  So what matters is the on-the-job performance of farm workers.

Speaking of on-the-job, many industries and organizations impose mandatory, formal training.  Even there, the accomplishment shouldn’t be “training completed.”

One client delivered “equal-employment awareness” training annually to every employee.  The original charter was full of “increase awareness” and “understand importance.”  Here’s what that looked like after a lot of “how can I tell they’re more aware?”

  • You can recognize examples of discriminatory behavior on the job.
  • You can state why the behavior is discriminatory.
  • You can describe steps for resolving the discrimination.

That’s not exhaustive (and the legal department would probably say you need to sprinkle “alleged” all over the place), but the three points are a first step toward a success metric that connects the individual and the organization.

Sometimes, it is a training problem

When people in an organization can articulate overall goals, it’s easier for them (as individuals and in groups) to think about how their activities and their results relate to those goals. They’re also likelier to be better problem-solvers, because they won’t corral every problem into a formal-training solution.

Even when a major cause of a performance problem is the lack of skill or knowledge, you benefit from revisiting those Design 100 questions:

  • What are the results you expect when people apply the skills they currently lack?
  • What could interfere with their applying them?
  • How will this approach help them learn and apply the skills?

Slightly more diplomatic language led that EEO-awareness client to decide that knowing the date of the Americans with Disabilities Act didn’t have much impact on deciding whether, in a job interview, you can ask an applicant, “Do you have a handicap?”

I’m no expert on workplace games, but I’m pretty sure I get what Koreen Olbrish is talking about.  It’s the workplace first, then the learning goal, and then the application of good design in pursuit of worthwhile results.

The same is true for any planned effort to support learning at work. You need to focus on what’s important, on how you know it’s important, on why you think training will help.

Then you use that information to guide your decisions about how to help people acquire and apply those skills when it matters.

Mindlessly grinding out courses (instructor-led, elearning, webinars, whatever) isn’t the answer, regardless of how many completion-hours people rack up.

It’s just…well, you know.


CC-licensed images:
Bigg’s Sheep Dip (Glenovis) adapted from this photo by  Riv / Martyn.
Bigg’s Dips (yellow/black) by Maurice Michael.
Quibell’s Sheep Dips by Peter Ashton aka peamasher.

Novice learners: feeling safe to be dumb

Are you in a corporate training environment? Dick Carlson in his mild-mannered way muses on how learners feel about training (Learner Feedback? You Can’t STAND Learner Feedback!).

Dick and I have some differences — I think dogs ought to have noses that they themselves can see — but not in this area.  The core of Dick’s post is the ultimate assessment: can you now accomplish whatever this training was supposed to equip you to accomplish?

On completing this module, the learner will be able to...  (Yes, that does mean “that you couldn’t accomplish before due to a lack of skill or knowledge.”  Don’t be cute.)

Because — if we start with a true skill deficit that prevented you from producing worthwhile results — that’s vastly more important than whether the training fit your purported learning style, whether the ratio of graphics to text was in a given range, and whether the person helping you called herself a trainer, a teacher, a facilitator, a coach, or the Bluemantle Pursuivant.

If you need to learn how to recover from an airplane stall or how to control paragraph borders through a class in CSS, learning assessment comes down to two words: show me.

With all that, I do think that how the learner feels about what’s going on does influence the learning situation.  I just want to make clear: that’s very different from saying that those feelings matter in terms of assessing the learning.

High profile?  You bet your assessment.

I was once in charge of instructor training and evaluation for an enormous, multi-year training project.  In the final phase, we trained over 2,000 sales reps to use a laptop-based, custom application.  Ninety percent of them had never used a personal computer.

Which was a drawback: the client decided that as long as the sales reps were coming for training on the custom application, we should “take advantage of the opportunity” to teach them email.

And word processing.  And spreadsheets.  And a presentation package.  And connection to two different mainframe applications using simple, friendly 3270 emulation software.

In a total of five days (one 3-day session, a 2-day follow-on one month later).

Our client training group was half a dozen people, so we hired some 30 contractors and trained them as instructors.  I mention the contractors because we needed a high degree of consistency in the training.  When a group of sales reps returned for Session 2, we needed to be confident that they’d mastered the skills in Session 1.

(If the informal learning zealots knew how we electrified the fences within which the instructors could improvise, they’d have more conniptions than a social media guru who discovered her iPhone is really a Palm Pre in drag.)

We used a relentlessly hands-on approach with lots of coaching, as well as “keep quiet and make them think” guidance for the instructor.  The skills focused on important real-world tasks, not power-user trivia: open an account.  Cancel an order.  Add a new contract.

We conducted nearly 600 classroom-days of training, and participants completed end-of-day feedback forms after about 80% of those days.  I never pretended this was a learning assessment.  I’m not sure it was an assessment at all, though we might have called the summary an assessment, because our client liked that kind of thing.  We had 10 or so questions with a 1-to-4 scale and a few Goldilocks questions (“too slow / too fast / just right”), as well as space for freeform comments.

Why bother?

I made the analogy with checking vital signs at the doctor’s or in the hospital.  Temperature, pulse rate, blood pressure, and respiration rate aren’t conclusive, but they help point the way to possible problems, so you can work on identifying causes and solutions.

So if we asked how well someone felt she could transmit her sales calls, we knew about the drawbacks of self-reported data.  And we had an instructor who observed the transmit exercise.  We were looking for an indication that on the whole, class by class, participants felt they could do this.

(Over time, we found that when this self-reporting fell below about 3 on the four-point scale, it was nearly always due to… let’s say, opportunity for the instructor to improve.)

When we asked the Goldilocks question about pace, it wasn’t because we believed they knew more about pacing than we did.  We wanted to hear how they felt about the pace.  And if the reported score drifted significantly toward “too fast” or “too slow,” we’d decide to check further.  (2,204 Session 1 evaluations, by the way, put pace at 3.2, where 1 was “too slow” and 5 was “too fast.”)
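That “drifted significantly” rule can be sketched as a simple check.  This is a hypothetical illustration, not the project’s actual procedure; the 0.5 tolerance and the function name are my assumptions:

```python
# Illustrative sketch of a Goldilocks-question drift check.
# Scores: 1 = "too slow", 3 = "just right", 5 = "too fast".
# The 0.5 tolerance is an assumed threshold, not the real cutoff.

def pace_flag(scores, midpoint=3.0, tolerance=0.5):
    """Return 'too fast', 'too slow', or None based on mean drift from the midpoint."""
    mean = sum(scores) / len(scores)
    if mean >= midpoint + tolerance:
        return "too fast"
    if mean <= midpoint - tolerance:
        return "too slow"
    return None  # within tolerance; no follow-up needed

# A class averaging 3.2 (like the Session 1 aggregate) stays within
# tolerance, so it would not trigger a closer look.
```

The point isn’t the arithmetic; it’s that the number only tells you where to look, not what’s wrong.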

Naturally, to keep in good standing with the National Association for Smile-Sheet Usage, we had free-form comments as well.  We asked “What did you like best?” and “What did you like least?”  (In earlier phases of this project, we asked them to list three things they liked and three they didn’t.  Almost no one listed three.  When we let them decide for themselves what they wanted to list, the total number of comments per 100 replies went up.)

Early in the project, our client services team sat around one evening, pulling out some of the comment sheets and reading them aloud.  It was my boss at the time who found this gem, under “what did you like best?”

My instructor made me feel
safe to be dumb.

Everybody laughed.  Then everybody smiled.  And then everybody realized we had a new vision of what success in our project would mean.

We wanted the learners to feel safe to be dumb.  Safe to ask questions about things they didn’t understand.  Safe to be puzzled.  Because if they felt safe, they felt comfortable in asking for help.  And if they felt comfortable asking, that meant they felt pretty sure that we could help them to learn what they needed to learn.

What about weaving their feedback into the instructional design?  In general, newcomers to a field don’t know much about that field, which means they’re not especially well equipped to figure out optimal ways to learn.

Please note: I am not at all saying newcomers can’t make decisions about their own learning.  In fact, I think they should make ’em.  In a situation like this, though, my client wasn’t the individual learner.  It was (fictionally named) Caesar International, and it had thousands of people who needed to learn to apply a new sales-force system as efficiently as possible.

Mainly procedural skills.  Low familiarity with computers, let alone these particular applications.  High degree of apprehension.

(By the way, Ward Cunningham installed WikiWikiWeb online eight months after our project ended, so don’t go all social-media Monday-morning-quarterback on me.)

I felt, and still feel, that our design was good.  So did the Caesar brass: within six months of the end of the project, a nearly 25% increase in market share for Caesar’s #1 product, and the honchos said that resulted from successfully training the reps to use the new sales software on the job.

When you feel safe to be dumb, you don’t stay dumb long.

CC-licensed images:
Yes / no assessment by nidhug (Morten Wulff).
“Cover-the-content” adapted from this photo by antwerpenR (Roger Price).


Hiking trails and barriers to learning

In a cartoon I saw years ago, two Romans are sitting high in the Coliseum, watching people being thrown to the lions.  One man says to the other, “You know, I’m a Christian, too — I’m just not a fanatic about it.”

I’m kind of that way about hiking, and about learning design.  In terms of hiking, my idea of enjoyably strenuous is Lowe’s Bald Spot, a “small subordinate peak below Mount Washington” in New Hampshire’s White Mountains.  Which may explain why I enjoy hiking (okay, walking) along converted railbeds like Québec’s Parc Lineaire Le P’tit Train du Nord.  Just last week, my wife and I ambled along a similar, smaller route in the town of Annapolis Royal, Nova Scotia.

What’s that got to do with learning design?

Well, learning is what someone does–either through active pursuit or through the relentless looping of stimulus, response, and feedback.  Thus what you learn, where, and how all depend on your context, which includes the experiences and inclinations that you bring to the new setting.

If you’re not much of a hiker, then the hiking equivalent of learning design is an effort to help you achieve a satisfactory experience.  It took us a little while to figure out that the gravelly path winding past a marsh had been a railbed, though I had my suspicions.  Then I saw a clear, orderly fork, a place where one track had split from another, and I knew.

That kind of trail doesn’t need to provide a lot of guidance–though for newcomers, it’s helpful to make clear it is a trail, and to set forth some basics:

We’d entered from a side route and only found this gate as we approached the beginning of the trail.  Seems obvious that you’re not supposed to drive here.  At least that was my take.  But then we noticed the adjoining sign:

Additional user guidance, I guess.  What the trail planner (or the town attorney) had in mind, I suppose, was someone tooling along in his car on the approach to the trail, at night, and perhaps not noticing the metal gate.  A standard road sign format might help.

Then we moved a bit further away:

A lot of corporate and organizational learning is intended to increase effectiveness.  We want people to be more productive, able to do things more quickly, or to a higher level of quality.  It’s the mantra of better, faster, cheaper, more.

That’s fine.  That’s what you should aim for in an organization, because when you’re better at what you do, you can achieve the goals you had in mind.

A lot of corporate and organizational learning, though, hews doggedly to the throughput model.  Give people stuff.  Explain.  Direct.  Tell.  Don’t waste time having folks fumble around trying things.

What’s more, I believe many people in those organizations–the folks attending the formal learning–expect that approach.  Boil it down.  Don’t waste my time.  Gimme facts.  For heaven’s sake, don’t be a fanatic about making me do stuff in the hope that I’m going to learn.

Combine that with the urge that “learning professionals” have to be helpful, and you can end up with a day’s formal training that includes half an hour of icebreakers, another half hour on ground rules and objectives, 15 minutes’ recap before lunch and 15 minutes afterward, to say nothing of end-of-the-day reviews, reactions, and ritual bows toward the flipchart-sheet parking lot.  That’s a whole lotta time going to paralearning.

I’m going to post that third picture in my office as a not-too-subtle reminder that I shouldn’t make things too obvious.


Twilight, LOLcats, and sales training

I haven’t read any of the Twilight books by Stephenie Meyer, and now I don’t have to, thanks to the reviews at Pop Suede.  (I started with the third, the one for Twilight: Eclipse, but here they’re in what I think is the proper sequence.)

Review of Twilight:

i is vampire!  rawr!

Review of Twilight: New Moon

oh hai. is me. bella.

Review of Twilight: Eclipse

Twilight Eclipse -- i is jes sum hansum dude gettin offa da bus

What’s the point (other than a teensy bit of humor)?

It struck me that, based on the little I’d picked up from newspapers and online, the Pop Suede folks have done a great job of capturing the plot of each book, then tweaking it enough that you see both the textual source and the satiric object.  It’s like a wildly informal approach to… a book report.

Understand: I no more want everyone churning out lolcats book reviews than I want another couple thousand terabytes of online-learning Jeopardy quiz.  But think what it took to put these things together: you had to grasp the key points of the original book, weed stuff out, and then express your understanding in a way that communicates.

It’s that kind of reworking and recasting of a complicated set of ideas that helps foster learning, not a 20-item multiple-guess test at the end of the half-day module on Twilight: New Moon.

I once needed to mitigate the effect of the typical marketing department information dump.  New victims (sorry: employees) were sentenced to hear 90 minutes’ worth of feeds and speeds about three major products.  So I asked the product managers to agree to a new format in which they’d present for only an hour, take a short break, and then participate in a discussion with the new hires.

This is how I explained the “discussion” to the sales folks, immediately before the first presentation:

We’re going to have three one-hour presentations today.

Yeah, I know, but after two of them, you get a 15-minute break.

Look on the back of your name card.  You’re in one of three groups based on the colored dot.

At the end of each presentation, I’ll name one of the colors.   During the break, that color group has 15 minutes to make a pitch on “the 10 main ways to sell [whatever the product is].”

After the break, you make your pitch.  The rest of you get to ask questions, kibitz, figure stuff out.
At the end, the Product Manager will jump in.

Yeah, it was manipulative.  Hey, I’d been working with sales reps for a while.

Some of the things I had in mind:

  • Reduce potential product-manager-induced sleep by 33% (one hour instead of 90 minutes).
  • Increase attention, at least in the first session, since a sales rep didn’t know whether his group would have to make the pitch until after the presentation was over.
  • More breaks than expected (a feature, but for most folks, a benefit).
  • Rethinking / reworking by the sales reps replaced canned product-manager summary.
  • Product manager got to hear what the sales reps thought were the main sales ideas.

In a way, it was very formal learning: one-time, face-to-face, scheduled.  We even had mediocre coffee, pastries, and PowerPoint.  But we also got the salespeople doing what their jobs called for: thinking about the products and how they could sell them to potential customers.


Front-end analysis: not baby-sitting, not psychotherapy

In an online conversation, I found myself again quoting Joe Harless.  In this case, the quote was from a March 1975 interview with Training magazine.  I haven’t found this online anywhere, so I thought I’d summarize a bit here.

A little background: Harless coined the term front-end analysis.  As he wrote in a workshop guide, the aim is to help our client achieve its business or organizational goals:

We begin at the end and work backwards in the basic progression:

  1. We first find out what goals are not being achieved satisfactorily, or what the new goals are when they are set by the client.
  2. We then find out what accomplishment is not being produced satisfactorily that is causing the goal not to be met.
  3. We then find out what behaviors are not being obtained that cause the deficient accomplishment.
  4. Then, and only then, can we determine which of the influences need to be manipulated.

The process just described is called Front-End Analysis.

The Training interview asked if FEA were “just the Joe Harless shtick.”  Harless replied that it was real “if you define real as having a definite set of procedures…and data and case histories” along with people who are applying these things.

Front-end analysis began with the realization that we could produce excellent training packages, ones that pleased not only the developer but the client.  And yet follow-up evaluation (“which…we jokingly called rear-end analysis”) revealed that, as often as not, skills didn’t transfer to the job.

So Harless wondered why.  “Being devotees of the scientific method, we advanced certain hypotheses… [And] we began testing these hypotheses.”

To Harless and his collaborators, rear-end analysis asks, “Why didn’t the training produce the intended result?”  Front-end analysis asks three other questions:

  • What are the symptoms that a problem exists?
  • What is the performance problem producing those symptoms?
  • What is the value of solving that problem?

And that’s where the quote comes from:

Training: Value in terms of what?

Harless: In terms of money. Front-end analysis is about money first and foremost.  So is training.  If not, you’re baby-sitting or doing psychotherapy.

Harless said this as an aside to the main theme of his interview.  Even so, this is a lodestone for anyone working in organizational learning.  I agree that the individual needs to have some personal investment in order to learn effectively on the job.  She wants to raise her skills, or master a new task, or prepare for a new position, or gain satisfaction from resolving new challenges.

Those are her variables.  The organization has variables as well; the relationship between the two sets is an effort to balance the work-equation.  How can those skills, those tasks, those challenges make sense for her in the organization’s context?  “Is it worth spending X to achieve Y?”  Solve for the organization.  Solve for your personal goals.

I’m not trying to reduce this purely to dollars, and I don’t think Harless was, either.  (The same people who get nit-picky about “ROI for training” are strangely silent when a merger like Daimler-Chrysler–financially analyzed, you’d think, to a fare-thee-well–ends up vaporizing billions of dollars.)

When Harless says, “Value in terms of money,” I see it as shorthand.  Money is the most common and most convertible indicator of value in group activity.  You can choose other indicators; you just have to work harder.

1975 was fairly early in the history of performance improvement, though I don’t think we’ve yet reached the Golden Age.  Here’s the Reverend Harless preaching on a related theme:

You know, trainers are forever going around looking for respectability.  They’re always asking, “How can we sell management on the idea of training?”

Well, the answer is, you don’t.  You sell management on the benefits of solving human performance problems. You make it clear to management that you are there to avoid training when it’s not cost-effective.

That’s how you get to be a hero.  That’s how you get to be respectable…That’s how you avoid being stuck off in some personnel department somewhere.

By the way, Guy Wallace’s Pursuing Performance blog has a 2008 video interview with Joe Harless:

“Almost always, the client came to us requesting the development of some kind of training intervention… [in a typical situation, the workers] already knew how to detect and correct…defects….They were not doing so because…they were being paid for the quantity of production rather than the quality of the production.”