This post is my contribution to the October edition of the Working / Learning blog carnival, hosted at the Xyleme Learning Blog.
(What’s a blog carnival? Details here. If you blog about learning in a work setting, or about working deliberately at learning, you should take part. Don’t be shy.)
In a recent post, Beyond Training, Harold Jarche (in one of his comments) gives his rule of thumb: “Training is the last resort, when all other performance improvement alternatives (which are usually cheaper) have been discounted.”
Instead of “discounted,” I might have said “examined.” Otherwise, Harold’s highlighting a dilemma that corporate and organizational training departments (by whatever name) have been struggling with for decades.
Here’s the deal: there are all kinds of ways to instruct efficiently and effectively. You can design (as Bob Mager and Peter Pipe said more than 30 years ago) criterion-referenced instruction so you don’t waste people’s time “teaching” them what they already know. You can sequence, you can use increasing approximations of the real-life job, you can avoid war stories and nice-to-know. You can avoid spoon-feeding. You can emphasize hands-on, problem-based exercises.
But… a lot of the time you don’t need to do those things at all. How much of “training” is a kind of corporate Clearasil applied to the zits of a counterproductive computer system, or of an alleged process that’s really the business equivalent of the cowpath that became a paved street?
How much of what some subject-matter expert or department head thinks people really oughta know (or, worse, really oughta wanna know) actually matters?
It may be that people don’t know this stuff (whatever “this stuff” is). It’s less clear that traditional training is the way to change the outcomes.
For many people, the father of “performance improvement” was Tom Gilbert; I had the chance to meet him several times, and his thinking has permanently influenced my own. Some time back I quoted his model for creating incompetence. Consultants Joseph and Jimmie Boyett published a crisp article (PDF) explaining why the performance-improvement model makes sense.
It’s worth a look; it tracks with Harold’s point about training as a last resort. In essence, Gilbert would approach a performance problem (a gap between the results you want and the ones you have) like this:
- Do people have the information they need?
(Notice, that’s not “do they know?” Gilbert is talking about information about how to perform and about how well you’re doing.)
- Do they have the instruments they need — tools, methods, technology, whatever? You can train pharmaceutical workers in all kinds of good manufacturing practice, but if (as at one location I worked in) people have to walk from packaging line A to line B because line A doesn’t have the right kind of scale — and you’re measuring residue in fractions of a gram — you risk not getting the accuracy you claim you need.
- Do you have incentive systems that support the performance you need? If the customer comes first, do you punish people for not completing their end-of-the-day paperwork by a set time? If your speeches are about relationship selling, are the annual award winners the salespeople who pushed product?
- Only after examining these other influences on performance would Gilbert ask whether people have the skills and knowledge to perform. As the Boyetts say,
By correcting deficiencies in information, instruments, and incentives first, you make sure you don’t end up training people to use tools that could be redesigned, or to memorize data they don’t need to remember, or to perform to standards they are already capable of meeting and would meet if they knew what these standards were.
I love working in this field; I get excited when people in client organizations produce better results on the job. What has mystified me since I read Mager in grad school and Gilbert’s Human Competence in the late 1970s is why otherwise sensible organizations waste millions of dollars (and millions of worker hours) trying to talk or PowerPoint or click-enter or multiple-choice people into worthwhile results.
Photo of criterion-based traffic test by Birger Hoppe.