When to build a job aid: ask the task

This entry is part 2 of 3 in the series When to Build a Job Aid.

The previous post in this series covered the initial go/no-go decisions: are you required to build a job aid?  Does a need for rate or speed make a job aid impractical?

If the answer in both cases is no, then you don’t have to build a job aid, yet there’s no reason not to (so far).  A good way forward at this point is to consider the characteristics of the real-world performance you have in mind.  This is related to, though not the same as, task analysis.  I have my own name for it: Ask the Task.

What that means is: use what you know about the task to help determine whether building a job aid makes sense.  You can go about this in many ways, but the following questions quickly cover a lot of the territory.

♦ ♦ ♦

How often does someone perform the task?

“Often” is a relative term–in fact, most of the questions in Ask the Task are relative.  That doesn’t mean they’re not pertinent.  Asking “how frequent is frequent?” turns your attention to the context of the task and the people who typically carry it out.

Frequency isn’t the same thing as regularity.  Some tasks are frequent and predictable, like a weekly status update.  Some are more random, like handling a payment by money order.  And some are much more rare, like a bank teller in Vermont handling a money transfer from Indonesia.

Whether you end up building a job aid, designing training, or just tossing people into the deep end of the performance pool, you need some idea of how frequent “frequent” is, and where the specific task might fall along a job-relevant frequency scale.

Think about what frequency might tell you about whether to build a job aid.  Yes, now.  I’ll tell you more at the end of the post, but we both know you ought to do some thinking on your own, even if we both suspect few other people will actually do that thinking while they read this.

♦ ♦ ♦

How many steps does the task have?

It’s true, some tasks don’t really seem to have steps.  Or they have very few: look up the attributes of the HTML <br> tag.  And some tasks have so many that it might make sense to break them up into logical subgroups: setting up the thermoformer.  Testing the thermoformer.  Troubleshooting problems after the test.

Think of “step” as the lowest level of activity that produces a result that makes sense to the performer on the job.  If I’m familiar with creating websites, then “create a new domain and assign it to a new folder in the public_html directory” might be two steps (or maybe even one).  If I’m not familiar with creating websites, I’m going to need a lot more steps.

That makes sense, because a job aid is meant to guide a particular group of performers, and the presumption is that they share some background.  If you have widely differing backgrounds, you might end up with two versions of a job aid–see the Famous 5-Minute Install for WordPress and the more detailed instructions.  Essentially, that’s two job aids: one for newcomers (typically with more support) and one for more experienced people.

As with frequency, you need to think about how many steps the task involves, and whether you think of those as relatively few steps, or relatively many.

♦ ♦ ♦

How difficult are the steps?

You can probably imagine tasks that have a lot of steps but not much complexity.  For someone who’s used to writing and who has solid, basic word processing skills, writing a 25-page report has plenty of steps, but few of them are difficult (other than getting reviewers to finish their work on time).

In the same way, a task can have relatively few steps, but many of them can be quite difficult.

That’s the reason for two step-related considerations when you Ask the Task whether a job aid makes sense: how many? How hard?

Pause for a moment and think which way you’d lean: if the steps in a task are difficult, does that mean “job aid might work,” or does that mean “people need to learn this?”

♦ ♦ ♦

What happens if they do it wrong?

This question focuses on the consequences of performing the task incorrectly.  Whether a person has a job aid or not is immaterial–if the task isn’t performed correctly, what happens?  Personal injury? Costly waste or rework? Half an hour spent re-entering the set-up tolerances? Or simply “re-enter the password?”

As with the other questions, you need to think about the impact of error in terms of the specific job.  And, if you haven’t guessed already, about the relationship between that impact and the value of building a job aid.

♦ ♦ ♦

Is the task likely to change?

We’re not talking about whether the job aid will change, because we still haven’t figured out if we’re going to build one.  We’re talking about the task that a job aid might guide.  What are the odds the task will change?  “Change” here could include new steps, new standards, new equipment, a new product, and so on.

♦ ♦ ♦

Ask the task, and the job aid comes out?  Right!

You’ve probably detected a pattern to the questions.  So the big secret is this:

The more your answers tend to the right–less frequent, more steps, more difficult, higher impact of error, more likely to change–the stronger the case for a job aid.

What follows is the 90-second version of why.  (As you read the lines, just add “all other things being equal” to each of them.)

  • The less frequently someone performs a task, the likelier it is that he’ll forget how to do it.  If you’re an independent insurance agent whose practice mostly involves homeowner’s and driver’s insurance, and you write maybe six flood insurance policies a year, odds are that’s not a task you can perform without support. Job aids don’t forget.
  • The more steps involved in the task, the more challenging it will be for someone to retain all those steps correctly in memory and apply them at the right time. Job aids: good at retention.
  • The more difficult the steps are, the harder the performer will find it to complete each step appropriately. A job aid can remind the performer of criteria and considerations, and even present examples.
  • The higher the impact of error, the more important it is for the performer to do the task correctly.  You certainly can train people to respond in such circumstances (air traffic control, emergency medical response, power-line maintenance), but usually that’s because the performance situation or the time requirement demands such learning.  Otherwise, a well-designed job aid is a good way to help the performer avoid high-cost error.
  • The more changeable the task, the less sense it makes to train to memory.  Mostly that’s because when the change occurs, you’ll have to redo or otherwise work at altering how people perform.  If instead you support the likely-to-change task with job aids, you’re avoiding the additional cost of full training, and you mainly need to replace the outdated job aid with the new one.
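The five bullets above amount to an informal scoring heuristic, and it can be sketched in code.  To be clear, this is only an illustration: the questions come from the post, but the 1-to-5 ratings, the averaging, and the cutoff values below are invented assumptions of mine, not anything the post prescribes.

```python
# Hypothetical "ask the task" scorer. The five questions come from the post;
# the 1-5 scale, the averaging, and the cutoffs are illustrative assumptions.
# Rate each answer from 1 to 5, where 5 is the end of the scale that favors
# a job aid (rarely performed, many steps, difficult steps, costly errors,
# likely to change).

QUESTIONS = [
    "How often does someone perform the task? (5 = rarely)",
    "How many steps does the task have? (5 = many)",
    "How difficult are the steps? (5 = very)",
    "What happens if they do it wrong? (5 = severe consequences)",
    "Is the task likely to change? (5 = very likely)",
]

def job_aid_leaning(ratings):
    """Return a rough leaning based on one 1-5 rating per question."""
    if len(ratings) != len(QUESTIONS):
        raise ValueError("expected one rating per question")
    if not all(1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must be between 1 and 5")
    average = sum(ratings) / len(ratings)
    if average >= 3.5:
        return "strong case for a job aid"
    if average >= 2.5:
        return "could go either way"
    return "leans toward training to memory"

# A rare, many-step, high-stakes task (like the flood-insurance example):
print(job_aid_leaning([5, 4, 3, 4, 3]))  # -> strong case for a job aid
```

The point isn’t the arithmetic; it’s that all five questions push in the same direction, so a quick tally of where your answers fall gives you a defensible first read.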

Here are the ask-the-task questions, together once more:

  • How often does someone perform the task?
  • How many steps does the task have?
  • How difficult are the steps?
  • What happens if they do it wrong?
  • Is the task likely to change?

When to build a job aid: go / no-go

This entry is part 1 of 3 in the series When to Build a Job Aid.

If you’re wondering whether you should build a job aid to support some task, this is the first of a three-part guide to help you figure things out.

Should you, or shouldn't you (part 1)?

That first consideration (“Is a job aid required?”) isn’t as daft as it might seem.  If your organization mandates a job aid for some task, then you’re stuck.  You want to do the best job you can with it (or maybe you don’t), but unless you convince the right people to reverse the policy, somebody’s going to be building a job aid.

Which means you can skip the rest of the “should I build?” stuff that will appear in Parts 2 and 3.

Assuming that a job aid isn’t mandatory, the next question is whether speed or rate is a critical factor in performing whatever the task is.  The short answer is that if speed matters, a job aid isn’t going to work.

First, when it comes to routinely high-volume work like factory production or air-traffic control, that normal high-volume state doesn’t allow the performer time to consult a job aid.  Successful results depend on learning–on committing skill and knowledge to memory, and on retrieving and applying those things appropriately.

I’m a pretty fast typist (65–80 words per minute if I’ve been writing a lot), but the moment I glance down at the keyboard my rate drops, because the visual signal interferes with the virtually automatic, high-rate process I normally use at a keyboard.

That’s rate.  As for speed, many jobs call for you to apply knowledge and skill  in an unscheduled fashion, but quickly.  Think about safely driving a car through a tricky situation, much less an emergency.  You don’t have the opportunity to consult a job aid.  If a kid on a bike suddenly pulls out in front of you, you can’t look up what to do.

Anyone who’s helped train a new driver knows what it’s like when the novice is trying to decide if it’s safe to turn into traffic.  We experienced drivers have internalized all sorts of data to help us decide without thinking, “Yes, there’s plenty of time before that bus gets here; I can make the left turn.” In the moment, the newcomer doesn’t have that fluency but has to be guided toward it–just not via a job aid.

What’s next?

Once you’ve determined that you’re not required to build a job aid, and that there’s no obstacle posed by a need for high speed or high rate, you’ll look at the nature of the performance for clues that suggest job aids.  That’ll be the next post: Ask the Task.

CC-licensed image of seabirds by Paul Scott.

Designing learning, or, what would Elmore do?

Okay, I confess.  Elmore Leonard does not have any advice for better training.  Not that I know of.  But a book review I read yesterday reminded me (as if that were necessary) how much I’ve enjoyed his writing.

I also enjoy the list of rules for writing he says he’s picked up along the way. An example:

5. Keep your exclamation points under control.

You are allowed no more than two or three per 100,000 words of prose.

One thing he strives for, he says, is “to remain invisible when I’m writing a book.”  So the rules are “to help me show rather than tell what’s taking place in a story.”

That’s a pretty decent starting point to take if you’re creating something that’s meant to help other people learn.  So I thought I’d see if you could adapt his rules to designing for learning in the workplace.  Or to supporting learning.  Or at least to keeping away from CEUs and the LMS.

1.  Never open with “how to take this course.”

Angry Birds is a software application in which you launch suicidal birds from an imaginary slingshot to retaliate against green pigs who’ve stolen eggs and are hiding in improbable shelters.  Twelve million people have purchased Angry Birds in the past two years, none of them because of the “how to play Angry Birds” module.

Honestly, there are only  two groups of people who look for “how to take this course.”   In the first group are those who designed the course, along with the lowest-ranking members of the client team.  In the second group are folks who still have their Hall Monitor badge from junior high.

2.  Never begin with an overview.

I can’t do any better than Elmore Leonard on this one:

[Prologues] can be annoying, especially a prologue following an introduction that comes after a foreword.  But these are ordinarily found in nonfiction. A prologue in a novel is backstory, and you can drop it in anywhere you want.

Cathy Moore worked with Kinection to create the military decision-making scenario, Connect with Haji Kamal. If you haven’t seen it, click the link. It takes about 10 minutes.  And notice: the overview on that first page is 17 words long.

3.  Never use “we” when you mean “you.”

Maybe I was a grunt for too long.  Maybe I’m just contrary.  But anytime I run across some elearning that’s yapping about “now we’re going to see,” I think, “who’s this we?”

“We” is okay when you’re speaking in general about a group to which you and your intended audience both belong.  But especially in virtual mode, it wears out quickly.

4.  Don’t act like you’re the marketing department. Even if you are.

This is a first cousin to the “we” business.  Once at Amtrak, a group of ticket clerks was learning a marketing-oriented approach to questions about our service.  When a customer asks, “What time do you have trains to Chicago?” the proactive response is to fill in the formula, “We have ___ convenient departures at (time), (time), and (time).”

For several stations in my area, there was one train a day in each direction.  From Detroit to Chicago at the time, there were two.

It’s not only bombastic to talk like this, it also confuses a feature with a benefit, a distinction any good salesperson would explain if marketing just asked.  It doesn’t matter if you have sixteen departures a day if I, the customer, don’t find any of them convenient.

5.  Keep your ENTHUSIASM under control!!

I suppose there’s less of this around than there used to be.  I’m a staunch believer in the value of feedback.  I believe just as firmly that feedback needs to be appropriate to the context.  Shouting “That’s great!” for trivial performance mainly makes people feel like they’ve time-traveled back to an ineffective third grade.

6.  Don’t say “obviously.”

The thing about the obvious is: people recognize it.  That’s why so few of us are surprised when we press the button for the fifth floor and the elevator eventually stops there.  Except in a humorous tone (and remember, tragedy’s easy; comedy’s hard), words like “obviously” and “clearly” can sound maddeningly condescending.

7.  Use technobabble sparingly.

When does tech talk become babble?  When it doesn’t pertain to the people you’re talking with. If I’m discussing interface design with people who work in design-related areas, then “affordances” probably makes sense to them.  But, for example, if in a post here on my Whiteboard I say that Finnish has features of both fusional and agglutinative languages, I can think of perhaps one frequent reader who has any idea what that means.  Accurate as those terms might be for linguists, they’re a dead loss for a general audience.

8. Avoid detailed descriptions of things that don’t matter to people doing the work.

This goes with the backstory remarks above, but I’m also thinking of any number of computer-user training sessions I’ve seen. One GE executive told me that the typical Fortune 100 company has more than 50 mainframe-based computer systems, most of which don’t talk well to each other.

What does that have to do with training people to use them?

The typical worker could not possibly care less whether it’s a mainframe, whether it uses Linux, who built it, where it stores data. If he thinks about cloud computing at all (unlikely), he suspects the phrase is mostly puffery, the IT equivalent to “available at fine stores everywhere.”

The introductory course at a federal agency dealing with pensions was stuffed to stupefaction with that sort of data-processing narcissism. What the participants in the course needed to know was: What’s my job, and how do I do it?

The answer is never “the QED Compounder pre-sorts input from the Hefting database in order to facilitate the Rigwelting process.”  Even if these things are technically true (and who could tell?), they’re meaningless.

Not to say that a quick summary of the process is worthless.  It simply has to make sense:

“The Intake Group reviews personnel records from a new pension plan and makes sure they can go into our system so we can analyze them. Once the Intake Group finishes, we cross-check the new account in our system to uncover any conflicts with the data as it appeared in the original system.  The team at the Resolution Desk handles the conflicts that can’t be fixed quickly.”

9. Don’t go into great detail describing the wonderfulness of the business, the product, or the CEO.

As Bear Bryant said once about the motivational codswallop so beloved at alumni dinners, “People love to hear that shit. Winning inspires my boys.”

Certainly you want people to recognize the good qualities–what makes the company and its product valuable to the customer (and thus to the shareholder). Since people in structured organizational learning already work for the outfit, they’ve already got plenty of information, and most likely an opinion, about its world-class, paradigm-shifting splendor.

10. Try to leave out the parts people don’t learn.

What don’t people learn?

They don’t learn what’s trivial (except to get through the unavoidable Jeopardy-style quiz).  They don’t learn what doesn’t relate to their job (or to a job they’d like to have). They don’t learn what they don’t get a chance to practice. They don’t learn what they don’t need to learn because they already know where to look it up when they need to.

And they especially don’t learn what they knew before they got there.

If it sounds like “training,” redo it.

Training, learning, performance — these are all variations on a theme.  I believe if you talk too much about the process of how you train, or how you learn, people nod off quickly. This is especially true of the beloved rituals of the Stand-Up Instructor: icebreakers, going-around-the-room introductions, that creative nine-dot puzzle, and your expectations for this course.

I do think it’s good to find out what people want or hope or expect, but really: if this is a workshop on designing job aids, then assuming I could read the sign at the front, I’m not here for Assumptive-Close Selling for the INFP.

Behavior and accomplishment, or, what did you do?

Guy Wallace said this on Twitter a while back:

I always wanted the Client to own the analysis & design data rather than me. I don’t convert their words to mine – or to Noun-Verb patterns.

He’s summarized a lot of good ideas about the consulting process.  And his phrase about noun-verb patterns reminded me of  two principles I keep in mind when dealing with performance problems (and with their much-smaller subset, training problems):

  • Behavior you take with you; accomplishment you leave behind.
    (That’s Tom Gilbert talking.)
  • A result is a noun; doing is a verb.

“Doing” is a good example of a fuzzy label, and that’s part of why I’m writing this.  When someone starts talking about what people do at work, it seems to me it’s easy for their focus to shift.

Sometimes when someone’s talking about what people do at work, he might mean the actions they perform, the processes they go through.  That’s how a person works.  It’s the doing part of what they do.  It’s a verb.

If you need to repair the water damage in your basement, then what the contractor does at work–the verbs he carries out–are things like measuring dimensions, testing materials, inspecting damage, examining structures, considering costs,  and calculating square footage.

Alternatively, when someone’s talking about what people do at work, he might mean the things those people produce, what they accomplish–the result of their work.

A result is a thing.  It’s a noun. That’s the case even in so-called knowledge work and in service occupations. From this angle, what the contractor does–what he produces–is an estimate for the job.  Or a list of suggested approaches (written or verbal), or a series of questions for you to ask yourself–a kind of contractor’s initial consultation.

That’s what’s left behind when the contractor leaves.

I think this behavior/accomplishment distinction is crucial when you’re talking about performance on the job.  Companies and organizations are crammed to the institutional rafters with ritualized behavior that continues in the absence of any real accomplishment except that the behavior got done: sales people are required to make 30 cold calls a day–not because of any company data about the effectiveness of cold calling, but because Veronica, the sales director, had three early successes from cold calls.

Working from the other direction, Umberto, who handles accounting for your department, tells you to charge the new software under “training materials.”  Not that the software is going to be used for training–but your boss has authority for twice as much expenditure under that category as he has under “computer resources.”  Changing the accounting codes is such a slow process that not only would the software be out of date, but Umberto and you would both be retired before it happened.  So Umberto’s helped accomplish a result (software deployed), but if you’re caught, you and he will both be assigned to Purchasing Refresher Training.

When you focus on results, you can work backward through the factors that influenced those results.  Sometimes (most of the time, actually) you’ll find that “lack of skill or knowledge” on the part of the performer is not the major hindrance to accomplishment.  That, of course, means you’ll likely be wasting your time (and the performer’s) if you try to resolve the situation through training.

As Guy Wallace said, you want the client to own the analysis and the design data.  I’ve said before that while I myself try not to use “understand” as a learning objective, I might go along with the client’s use of it, so long as the client and I can agree on what it looks like (what the results are) when someone “understands” how to perform a FEMA elevation certification.

Now, if the client is hell-bent on ignoring any data that doesn’t say “deliver training,” I’m pretty sure I’m not the right person to be working with that client.

Images adapted from this CC-licensed photo by Sebastian Werner / blackwing_de.

 

Traction or distraction: phones in class

On LinkedIn’s Learning, Education, and Training Professionals group, two months ago, a member kicked off a discussion with this question:

Increasingly, we are finding that people bring their phones, computers and Blackberrys to class expecting that it will be OK to use them. How are you dealing with this issue?

As of this morning, there are 83 contributions to the discussion.  Although I’ve disagreed strongly with some of the opinions and suggestions, I’ve come to see this question as yet another example of a complex problem–in other words, one without a single, correct solution.

Here’s my paraphrase of what several participants said.  To minimize my biases, I chose every 8th comment.  Well, I left out one, which happened to be my own.  (Just coincidence that it fell into the every-eighth sequence.)

  • I display a slide with logistics (breaks, fire exits, etc.) that asks people to turn off phones or at least put them on vibrate.
  • I show a humorous YouTube video and say this is what I did with the last phone that rang during my presentation. I make everyone take out their phone and turn them off in front of everyone.  I include a 20-30 minute break several times a day.
  • Ask the class to set the rules.  You are there to learn.  If people were on vacation instead of training, why would they check email?  They can do that during lunch.
  • Sometimes people are using BlackBerries and other devices to take notes.
  • Lately I don’t even mention phones.  I trust adults to act like adults.  I do like (another person’s) suggestion of asking people to turn them on to integrate outside information.
  • Set your phone to ring 3 minutes into the session.  Pretend to talk with the president of the company, who wants to know if everyone’s turned their phones off. Exception: if you expect the president to call, or if someone’s seriously ill. I also believe people are adults who must make their own decision.
  • There’s no right or wrong answer.  Some teaching strategies are still focused on a society that no longer exists.  Use appropriate technology at the appropriate time.
  • Go with the flow.  I can get irritated if a phone rings, but if the class is good and people are engaged, they’ll take their own responsibility.
  • I like letting the learners decide how to deal with device interruptions.

I don’t do much formal instruction any more, by which I mean acting as the primary source (and predominant voice) in a scheduled learning event.  I’ve done quite a bit of that, but over time found that people seemed to learn best when I talked less and they did more.

Yes, when people are new to a topic, they generally need some grounding and some concepts.  Most of my experience has not been with people new to the organization and the industry, however.  That means they tend to need less “before we begin” than a lot of instructors (and instructional designers) seem to think.  Even for a topic as information-dense as Amtrak’s reservation system, I found that a lean approach (less talking, more doing) suited the goal of having people able to use the system.

The LinkedIn discussion does provide a glimpse at the many ways that people working in this field view cell phones, PDAs (does anyone say PDA any more?), and smartphones.  (Almost none of the comments address computers as such.) I see a kind of clustering around “they’re here to learn (from me),” and a smaller one around “I’m here to help them learn.”

My own phone, like my computer, is as basic a tool as pen and paper.  Yes, I take paper notes, but when I have the choice, I take electronic ones so I can tag, search, re-use, copy, paste–all of which are tougher to do with PowerPoint handouts or handwritten notes.

I don’t want someone else telling me how to capture or retrieve information.   If they say things that I find condescending or just plain silly (“enter the world of civilized people,”  “phones are an interruption to learning”), I’ll get the message–though it may not be the one intended.

CC-licensed images:
Retro phone photo by Robert Bonnin.
Classroom sign photo by Ben+Sam.