Improving performance

Doing more than “training”

Mar 07 2013

My French isn’t that good: I can hold a conversation (sometimes) but I couldn’t hold a job. One way I try to get better is to read and listen to more French. I recently came across the Langue Française section of the TV5Monde site, which has an almost overwhelming range of features.

One of them is 7 jours sur la planète (7 Days on the Planet). It’s a regular feature  with three segments from the week’s TV news. For each segment, you can watch the video clip, read a transcript, and then test your comprehension with three levels of questions (elementary, intermediate, and advanced).

7jours exercises

I watched the first clip in the grid above, about fish fraud (one species of fish passed off as another). I got the gist, then brought up the transcript to spot words I didn’t know, or catch meanings I might have mistaken.

That’s when I discovered Alexandria. TV5Monde’s site is set up so that on a page with a special icon (red circle with a question mark in the upper right of the following image), you can double-click any word to bring up a multi-language dictionary:

Alexandria’s pop-up dictionary

 

In this example, I double-clicked l’étiquette. Alexandria popped up with a French-language dictionary, which reminded me that une étiquette is a little card or tag with the price, origin, or instructions for some product or item of merchandise.

You can set the dictionary to translate into any of more than two dozen languages:

Alexandria’s list of target languages

(“Choose your target language.”)

What impresses me about this approach is that TV5Monde doesn’t have to create specialized hypertext for certain words. As far as I can tell, Alexandria’s dictionary works with any word on the page.
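
That page-wide behavior is easy to picture: one listener on the whole document, rather than special markup on each word. Here’s a minimal sketch of the idea in TypeScript; the dictionary endpoint and response shape are my inventions for illustration, not Alexandria’s actual API.

```typescript
// Minimal sketch of a page-wide, double-click dictionary lookup.
// The lookup service below is hypothetical -- not Alexandria's real API.
document.addEventListener("dblclick", async () => {
  // In most browsers, double-clicking selects the word under the cursor.
  const word = window.getSelection()?.toString().trim();
  if (!word) return;

  // Made-up dictionary endpoint standing in for whatever Alexandria calls.
  const response = await fetch(
    `https://dict.example.com/define?word=${encodeURIComponent(word)}&lang=fr`
  );
  const entry: { definition: string } = await response.json();

  // Alexandria shows a styled pop-up; logging will do for a sketch.
  console.log(`${word}: ${entry.definition}`);
});
```

Because the handler hangs off the document itself, every word on the page can be looked up–no specialized hypertext required.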

If you don’t know any French, of course, this would be a terrible way to learn it. You wouldn’t have any background to decide between one meaning and another, and a dictionary can’t tell you much about syntax or context.  The title of the segment in French, La fraude  aux poissons passe à travers les filets, could be read as “Fish fraud passes through the nets.” But even my paperback French-English dictionary has 27 main entries for passer,  and given the subject, I’d translate the title as “Fish fraud is slipping through the nets.”

If you’ve got a low-to-intermediate level of ability with French, this is a powerful tool to help you understand more of what you read on the TV5Monde site.

It looks like there’s a lot more to Alexandria–more than I can spend time on this morning. I have the impression you can link any web page to the dictionary’s features. I haven’t tested that yet, but I will.

Oct 11 2012

I learned late yesterday that Joe Harless, who listed himself on LinkedIn as “independent think tank professional,” died on October 4th.  (Here’s a report in the Newnan, Georgia Times-Herald.)

I don’t know how widely known Joe was outside the world of ISPI, the International Society for Performance Improvement, prior to his semi-retirement from that arena. (Later in life, as the Times-Herald article explains, he was involved in improving the impact of high school education near his home in Newnan and throughout the state.) I’m pretty sure he wasn’t known widely enough, which is a shame for people who worked in what used to be called the training and development field.

That’s because Joe, like many of his colleagues, realized that the real goal of that field should not be doing training. Here’s Joe in 1992, writing in the ISPI journal, then called Performance and Instruction:

My behaviorism roots conditioned me to observe what people do, rather than what they say they do.

Tom Gilbert taught me to give more value to what people produce (accomplishment) than what they do or know.

I learned from observations of other technologies (medicine, engineering, plumbing, building the Tower of Babel, etc.) the wisdom of common purpose and agreeing on definitions….

I get confused when people say they are Performance Technologists but always produce training / informational / educational type interventions for every project. This confuses me because examination of more than 300 diagnostic front-end analyses done by our company and our clients shows the information / training / education class of intervention was the least frequently recommended.

More than 30 years ago, I attended JAWS – the Job Aid Work Shop that Joe developed. I’d been working for Amtrak, developing training for ticket clerks, reservations agents, and others. JAWS provided me with a systematic way of looking at how people accomplish things on the job and figuring out where it made a lot more sense to create a job aid than to try (usually fruitlessly) to have them memorize the information that the job aid contains.

A side benefit of JAWS was getting to know Joe, a man serious about his work, gracious in his dealings with others, and good-humored in his presentation. Like an old-time Southern preacher, he’d become Reverend Joe and say things like, “An ounce of analysis is worth a pound of objectives.”

(Meaning: it’s terrific to have sound, behavioral objectives for your training–but maybe the problem you’re dealing with is not one that training can solve.)

He also nudged ISPI toward a name change by saying that “the National Society for Performance and Instruction” was the equivalent of “the National Society for Transportation and Bicycles.”

Again from that 1992 article:

Trainers sell training. They are usually commanded to do so, and are rewarded for the volume of training developed and delivered. Educators are conditioned to teach “subject-matter,” not to impact performance.  Most vendors hawk a given product, not a process. Buyers typically want to buy things, not analysis. Our letterheads read Training Department, or Education Center, or Learning, Inc., etc. The names of our organizations do not imply: performance improvement sold here.

I took a number of Joe’s workshops, including ones on instructional design and on front-end analysis. As he began working on what became his Accomplishment-Based Curriculum Development system, he invited a number of people like me, who’d used his workshops in our own organizations, to participate in a tryout for one of the new components. He was especially eager to hear our candid opinions. He knew what he was doing, but he was pretty sure he didn’t know everything.

I attended my first professional conference around 1978, when ISPI (then NSPI) met in Washington DC, where I was working (no travel cost!). After one session, I was speaking with Stephanie Jackson, an experienced practitioner, when Joe Harless came up–Stephanie had worked for him previously. We three talked for a bit, and it was clear to me that these two were good friends. Joe said to Stephanie, “Let’s get a beer.” I said something about letting them catch up with each other, to which Joe responded, “Don’t you like beer?”

In my career, I’ve learned a lot from many people, but Joe Harless was the right person at the right time for me, opening doors and sharing ideas, hearty and enthusiastic and curious.  What he did was to make concrete for me ways to enable other people to produce better results on the job. He combined analytical skills with an openness to new ideas and an interest in other fields that has always inspired me.

We once talked about the job aid workshop, which I gave any number of times at Amtrak and GE. At one point, he had a segment where he’d present examples of good job aids and bad ones.  “Not any more,” he told me. “Now I put ‘em all out and let the participants figure out which ones are good and why.”

I had a conversation on Twitter yesterday with Guy Wallace of EPPIC. I said that for me, “It’s practically hero worship, but you know how Joe would have laughed at that.”

I was holding back, I think, because of the immature connotation of “hero worship.” But Joe has had more direct influence on my career than anyone I can think of. I learned from his ideas, I was energized by his search for data as evidence, and although he probably did the same for many people, I loved that he called me “Cousin Dave.”

If someone’s influence in your life makes you want to do better, if his work and his interaction inspire you to dig deeper and reach further, then that person’s a hero.

You could do a lot worse than hear Joe himself talk about performance–on-the-job accomplishment–as the heart of the matter.  Guy has a number of videos on YouTube, including a 90-minute one from a discussion in Toronto earlier this year at ISPI’s 50th anniversary.  I’ve set this link to start at the 8-minute mark, when Joe begins speaking.  You might find it worth a few minutes of your time, even with some of the callouts to old friends and inside jokes.  I’ve included a few comments here as highlights, all of which come in the first seven minutes of Joe speaking.

…Even in the heyday of programmed learning in the Sixties there were some of us who were arguing that we should be about developing instructional technology, not just programmed instruction, if we truly wanted to revolutionize training and education.

…Not willing to let good enough alone, there were some of us who were then arguing that we should be about the development of what? Performance technology, that would subsume instructional technology and have as its process, at the beginning, a process that was like medical diagnosis. I called my version of the diagnostic process Front End Analysis…

The genesis of my front-end analysis was the confounding realization that many of the training– the training that we developed for our clients didn’t seem to make any difference in the on the job situation, even after the trainees, the learners, successfully acquired the knowledge we so carefully taught them. I don’t know — a rough analogy, I suppose, is that we gave them good medicine but it didn’t cure the disease….

We conducted follow-up investigations with the aid of some of our cooperative clients…. In a shocking number of cases, we found that a lack of skill and knowledge was not the predominant cause of the non-job-performing situations…. Thus all the training in the world would do little to help the performance.

I’m going to miss Joe a lot. I do already.

Jul 11 2012

Thanks to David Glow, whose mention of it I happened to notice on Twitter last night, I found a blog post by Steve Flowers that I hadn’t seen: Just a Nudge–Getting into Skill Range. He’s talking about skill, mastery, and the (ultimately futile) “pursuit of instructional perfection.”

Steve starts with a principle from law enforcement: only apply the minimum force necessary to produce compliance.  (This is why those “speed limit enforced by aircraft” signs rarely mean “cops in helicopter gunships.”) Then he works on a similar principle for, as he puts it, “instruction performance solutions.”

Trying to design training / instruction for skill mastery can hinder–or defeat–the learning process, he says. That’s because mastery, in whatever form reasonable people would define it, is likely the outcome of a long period of practice, reflection, and refinement.

“Mastery” sounds good, which is why the corporate world is hip-deep in centers of excellence and world-class organizations.  A lot of the time, though, “world-class” is a synonym for “fine,” the way you hear it at the end of a TV commercial: “available at fine stores everywhere.”  Meaning, stores that sell our stuff.

He’s not saying there’s no place for formal learning, nor for a planned approach to helping people gain skill.  What he is saying is that we need “to design solutions to provide just the right nudge at just the right moment.”

Most of the time, we don’t need mastery on the job, he says, and I agree.  We do need competence, which is what I believe he means by helping the performer move into a “skill range” — meaning the performer has the tools to figure out a particular problem or task.

From a blog post by Steve Flowers
(Click image to view his post.)

I’ve been mulling some related ideas for some time but hadn’t figured out how to even start articulating them. One theme has to do with the role of job aids and other performance support–things that Steve believes strongly in. I despair at the server farms full of “online learning” that shows (and shows), and tells (and tells and tells) while failing to offer a single on-the-job tool.

Listen: the only people who’ll “come back to the course” for the embedded reference material are (a) the course reviewers, (b) the utterly bored, and (c) the utterly desperate.

A second theme has to do with the two different kinds of performance support that van Merriënboer and Kirschner talk about in Ten Steps to Complex Learning. In their terminology, you have:

  • Procedural information: this is guidance for applying those skills that you use in pretty much the same way from problem to problem.  That’s the heart of many job aids: follow this procedure to query the database, to write a flood-insurance policy for a business, or to update tasks in the project management system. You can help people learn this kind of information through demonstration, through other presentation strategies, and through just-in-time guidance.
  • Supportive information: as vM&K say, this is intended to bridge the gap between what learners already know, and what they need to know, to productively apply skills you use differently with different problems.  “Updating the project management system” is procedural; “deal with the nonperforming vendor” is almost certainly a different problem each time it arises.  (That’s why Complex Learning uses the somewhat ungainly term “non-recurrent aspects of learning tasks.”) Types of supportive information include mental models for the particular field or area, as well as cognitive strategies for addressing its problems.

As the complexity of a job increases, it’s more and more difficult to help people achieve mastery. That’s not simply because of the number of skills, but because of how they relate, and because of the support required.

Rich learning problems

Part of the connection I see, thanks to Steve’s post, is that the quest for perfect instruction ignores how people move toward mastery (gradually, over time, with a variety of opportunities, and guided by relevant feedback). In many corporations and organizations, formal learning for most people gets squeezed for time and defaults to the seen-and-signed mode: get their names on the roster (or in the LMS) so as to prove that learning was had by all.

We focus on coverage, on forms, on a quixotic or Sisyphean effort to cram all learning objectives into stuff that boils down to a course. I’m beginning to wonder, frankly, whether any skill you can master in a course is much of a skill to begin with. At most, such a skill is pretty near the outer border on Steve Flowers’ diagram. So the least variation from the examples in the course–different circumstances, changed priorities, new coworkers–may knock the performer outside the range of competence.

(Images adapted from photos of F. Scott Fitzgerald and Ernest Hemingway from Wikimedia Commons.)

Jun 11 2012

At the Innovations in e-Learning Symposium this week, Dan Bliton and Charles Gluck from Booz Allen Hamilton presented a session on “failure-triggered training.” I was really impressed by their description of a study that explored different approaches to reducing the risk of phishing attacks in a corporate setting. For one thing, as I told Charles immediately after the session, they invented the flip side of a job aid.  But I’m getting ahead of myself.

In this post:

  • Their session description (from the symposium brochure)
  • My summary of the session, with a few excerpts from their presentation
    (I’ll repeat this link a few times in this post; all those links are for the same set of materials. You don’t need to click more than once.)
  • (At least) three implications for improving performance

The session description

Study Results: Failure-Triggered Training Trumps Traditional Training

We didn’t expect our highly interactive eLearning (that generated great post-test scores) to be completely ineffective in changing behaviors in the work environment! Could the same eLearning be made effective if delivered as failure-triggered training? Come learn the outcomes of a blind study of nearly 500 employees over nine months which analyzed multiple training approaches. The study shows that the same eLearning was significantly more effective when delivered as spaced events that employed learning at the point of realization. This combination of unannounced exercises and failure-triggered training (a See-Feel-Change approach) significantly reduced improper responses to phishing attacks by 36%.

I didn’t ask Bliton or Gluck about this, but “see-feel-change” seems related to what John Kotter talks about here: making a seemingly dry or abstract concept more immediate and concrete.

What I heard: BAH’s study

(Note: this is my own summary. I’m not trying to put words in their mouths, and may have misunderstood part of the session. If so, that’s my fault and not theirs.  In no way am I trying to take credit either for the work or for the presentation by Dan Bliton or Charles Gluck.)

The Booz Allen Hamilton (BAH) study, involving 500 employees over 9 months, analyzed different training approaches to “phishing awareness.”  The training aimed at making employees aware of the risks of phishing attacks at work, with the goal of reducing the number of such attacks that succeed.

The study wanted to see whether interactive awareness training produced better results than static, page-turner training. In addition, the study used failure-triggered training, which Bliton and Gluck explain this way:

Unannounced, blind exercises [simulated phishing attacks] delivered in spaced intervals, combined with immediate, tailored remedial training provided only to the users that “fail” the exercises.

In other words, if you click on one of the fake phishing attempts, you immediately see something like this:

BAH phishing failure
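
Mechanically, the exercise is simple to picture: the link in each mock email points at a tracking URL, and whoever clicks it is logged and redirected straight to the remedial training. Here’s a minimal sketch of that plumbing in TypeScript–my own guess for illustration, since the route, parameters, and logging are assumptions, not anything BAH described.

```typescript
// Sketch of the tracking link behind a simulated phishing exercise.
// Route, parameters, and logging are my assumptions, not BAH's implementation.
import express from "express";

const app = express();

// Each mock email embeds a per-user link like /exercise/3/click?user=jdoe
app.get("/exercise/:id/click", (req, res) => {
  const exercise = req.params.id;
  const user = String(req.query.user ?? "unknown");

  // Clicking *is* the failure, so the click itself is the data point.
  console.log(`exercise ${exercise}: ${user} clicked the simulated link`);

  // Deliver the tailored training at the point of realization,
  // rather than weeks away on a course calendar.
  res.redirect(`/training/phishing?exercise=${exercise}`);
});

app.listen(3000);
```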

BAH divided the study participants into three groups:

  • The control group received generic “training” about phishing that did not tell them how to respond to attacks.
  • The wiki group’s training consisted of a non-interactive pageturner, copied from a wiki.
  • The interactive group’s training included practice activities (how to identify likely phishing, how to respond).

In post-training comments, the Interactive group gave their training an overall score of 3.8 out of 5.  As the presenters noted somewhat ruefully, the Wiki group gave theirs 3.7  — and the control group gave theirs 3.4.  (See slide 11 in the presentation materials.)  The page-turning Wiki group actually felt better prepared to recognize phishing than the Interactive group.

Posttest questions indicated that 87.8% of the Wiki group and 95.6% of the Interactive group knew whom to notify if they spotted suspicious email.

From the response to the first simulated attack, however, Dan and Charles learned there was no significant difference between the three groups (Control, Wiki, Interactive) — nearly half the participants in each group clicked the link or replied to the email.

What happened next at BAH

Over six months, participants received three “exercises” (mock phishing attempts). “Failure” on these exercises consisted of either clicking an inappropriate link (producing an alert like the example above) or replying to the email — hence, “failure-triggered training.”

The study provided good data about actual performance, since it captured information like who clicked a link or replied to the simulated phishing.

Incorrect responses fell dramatically between the first and second exercises, and further still between second and third:

Booz Allen results from failure-triggered training

 Bliton and Gluck attribute this decrease to two main factors: the spaced-learning effect produced by the periodic exercises, and “learning at the point of realization,” since what you could think of as failure-feedback occurred just after someone responded inappropriately to what could have been an actual phishing attack.

If you’re familiar with ideas like Gottfredson and Mosher’s Five Moments of Need, which Connie Malamed summarizes nicely, this is #5 (“when something goes wrong”).

I’ve left out plenty; if you’ve found this description intriguing, take a look at their presentation materials. I can tell you that although Bliton and Gluck’s presentation at IEL12 had a relatively small audience, that audience really got involved: questions, opinions, side conversations–especially striking at 4 o’clock on the last afternoon of the symposium.

What I thought, what I think

This approach is much more than training, in the sense of a structured event addressing some skill or knowledge need. I told Charles Gluck that it’s almost the flip side of a job aid.  A job aid tells you what to do and when to do it (and, of course, reduces the need to memorize that what-to-do, since the knowledge is embedded in the job aid).

At first I thought this approach was telling you what not to do, but that’s not quite right, because you just did what you shouldn’t have.  You can think of it  as being like a ground fault circuit interrupter (GFCI), a special type of safety device for an electrical circuit.

GFCIs can respond to a problem too small for a circuit breaker to detect. So you’re blow-drying your hair, when click! the wall outlet’s GFCI trips, safely breaking the circuit and interrupting your routine.  Not only do you avoid a shock; you also have feedback (if you know about how GFCIs work) that you’d been at risk from electrical hazard.

In the same way, BAH’s mock-phishing exercise interrupts the flow of work. By following the interruption with immediate, relevant, concrete feedback, as well as an offer for further details via a brief training program, this short circuit is turned into a smart circuit.

Which to me opens the door to — let’s use a different term instead of “failure-triggered” — task-triggered performance support. Like a virtual coach, the BAH exercises detect whether I responded inappropriately and then help me to recognize, and even practice, what to do instead.

What I’m leaving out

This was a study and had limits.  For one thing, because of the failure-trigger, we don’t know much about the people who didn’t click on the phishing attempts: have they really mastered this skill, or did they just not happen to click on these trials?

There’s also some data about the best response (don’t click the link, do report the attempt), though the numbers seem very small to me.  (I don’t recall anyone asking about the details on this topic, so I could well be misunderstanding what the numbers represent.)

BAH study: attempt reported

On the corporate-culture side, what happens within the organization?  Does this seem Orwellian?  Can the organization apply it as formative feedback intended to help me improve, or do I just end up feeling that Somebody’s Watching? I’d like to look for some data about the effects of retail mystery-shopper or secret-shopper programs, a similar activity that can seem either like trickery or like process improvement.

What about habituation? Will the effectiveness of this approach fade over time?

Most intriguing: can you harness this as a form of ongoing training?  For example, along with informing people about some new security threat, create and send out further exercises exemplifying such a threat. Their purpose would be to provide a kind of on-the-job batting practice, with “failure” producing two-part feedback (“You missed this security threat, which is…” “To find out more, do this…”).

Dan Bliton, Charles Gluck, and their colleagues have done more than make BAH more secure from phishing.  They’ve also shared a creative, practical experiment.

 

Feb 28 2012

This entry is part 3 of 3 in the series When to Build a Job Aid.

The first two parts of this series, in one line each:

  • Is a job aid mandatory? If not, does speed or rate on the job prohibit the use of a job aid?
  • Do the characteristics of the task tell you that a job aid makes sense?

If they do, you might feel ready to leap right into design.  But in the real world, people don’t just perform a task; they work within a complex environment.  So the third part of your decision is to ask if any obstacles in that environment will hamper the use of a job aid.

Part 3: what obstacles does your job aid need to overcome?

You could ask these questions in either order, but physical barriers are sometimes easier to address than social ones.

Often people have to work in settings where a job aid might be a hindrance or even a danger.  Someone repairing high-tension electrical lines, for example.  Or someone assembling or disassembling freight trains at a classification yard:

You don’t need to watch this video about humping railroad cars, but as the narrator points out around the 4:00 mark, in the distant past a worker would have to ride each car as gravity moved it down a manmade hill (the hump), applying the brake by hand if the car was moving faster than about 4 mph. It would have been impossible to give the brakeman a job aid for slowing the car, so his training (formal or otherwise) would have required lots of practice and feedback about judging speed.  And possibly trial and error.

Amarillo by morning?

Texas highway map, 1936

Rather than develop impractical job aids for aspects of this set of tasks, modern railroads rely on computers to perform many of them.  For example, radar monitors the speed of cars more accurately than a person could, and trackside retarders act to moderate that speed.

Remember, the goal is not to use job aids; the goal is to produce better on-the-job results.  Sometimes you can do that by assigning difficult or repetitive tasks to machinery and automation.

In many cases, though, you can overcome physical obstacles to the use of a job aid  by changing its form.  No law requires a job aid to be on an 8 1/2 by 11 inch laminated piece of paper. Nor on the formerly ubiquitous, multifolded paper of a highway map.

A road map can support different kinds of tasks.  You can use it at a table to plan where you’re going to go, to learn about the routes.  No barriers to such use.  But for a person who’s driving alone, a paper road map is at best a sub-optimal support.  It’s hard to use the map while trying to drive through an unfamiliar area.

In a quarter mile, turn left

Deep in the heart of Oslo

Real-time support for the driver now includes GPS satellites, wireless technology, a constantly updated computer display–and a voice.

That voice is transformative: it’s a job aid you don’t have to read. Because the GPS gives timely, audible directions, there’s no need to take your eyes off the road and decipher the screen.

Other examples of overcoming physical barriers: attach the job aid to equipment. Use visual cues, like a change of color as movement or adjustment gets closer to specification.  Combine audio with voice-response technology (“If the relay is intact, say ‘okay.’ If the relay is damaged, say ‘damaged.'”)

But he had to look it up!

Overcoming physical barriers is one thing.  Overcoming social barriers is…a whole bunch of things. Your job aid will fail if the intended performer won’t use it.

Popular culture places a great value on appearing to know things.  When someone turns to an external reference, we sometimes have an irrational feeling that she doesn’t know what she’s doing–and that she should.  In part, I think we’re mistaking retention of isolated facts for deep knowledge, and we think (reasonably enough) that deep knowledge is good.

Don't go off on the wrong track.

At its worst, though, this becomes the workplace equivalent of Trivial Pursuit. A railroading example might be someone who can tell you not only the train numbers but the locomotive numbers that ran on a certain line decades ago–but who can’t issue you a ticket in a prompt, accurate, courteous manner.

The performer herself may believe that performance guided by a job aid is somehow inferior.  Coworkers may hold that belief, putting pressure on the individual.  Even clients or other stakeholders may prefer not to see the performer using a job aid.

Maybe there’s a way around this bias.  The job aid could be embedded in a tool or application, such that the performer is merely applying one feature.  That’s essentially what a software wizard does.  Watch me turn this data into a chart–I just choose what I want as I go along.

(And doesn’t “choose what I want” sound much more on top of things than “look stuff up?”)

For an injection gun used for immunizations in third-world settings, healthcare workers occasionally had to make adjustments to clear jams and similar equipment glitches.  Some senior workers did not want to seem to need outside help to maintain their equipment, but couldn’t retain all the steps.  (Remember in Part 2?  Number of steps in task, complexity of steps?)  So the clearing instructions were attached to the equipment in such a way that the worker could follow the job aid while clearing the gun.

♦ ♦ ♦

The considerations here aren’t meant as either exhaustive or exclusive.  They are, however, important stops to make, a kind of reality check before you hit the on-ramp to job aid design.  The reason for building a job aid is to guide performance on the job while reducing the need for memorization, in order to achieve a worthwhile result.  If the performer can’t use it because of physical obstacles, or won’t use it because of social ones, the result will be… no result.

 

CC-licensed photos:
1936 Texas highway map by Justin Cozart.
Norwegian GPS by Stig Andersen.
1879 Michigan Central RR timetable from the David Rumsey Map Collection.