On the job

Otherwise, it’s babysitting or psychotherapy.

Apr 25, 2014

I’ve spent most of the past four months learning how public-sector pension administration is affected by WESA (the Wills, Estates and Succession Act that went into effect in British Columbia on March 31).

That’s because my assignment was to design WESA-related training for the people in my organization who deal with members of the different plans we administer at BC Pension Corporation.

In the training / learning networks that I connect to, I frequently see discussion (and occasional grousing) about “compliance,” which often seems to mean “having to comply with some picayune requirement.” At the moment there’s a video clip making the rounds of Twitter and Facebook, with a flight attendant joking through the standard safety announcement.

I thought it was funny (though it wouldn’t be if I had to hear it five times a month), but the mandatory pre-flight announcement is also near the bottom rung of the compliance ladder. Where I work, compliance can mean “make sure we meet the legal and fiduciary requirements established to protect the interests of the individual members of pension plans and of their public-sector employers.”

Putting that in terms of accomplishments, we want to be able to:

  • Accurately describe the options a plan member has for nominating (designating) who will receive:
    • Any benefit available if you die before retirement
    • Any benefit available if you die after retirement
  • Correctly explain options for allocating benefits among multiple beneficiaries
  • Review nominations submitted by members for completeness and accuracy
  • Correctly enter that information into our system
  • Update information based on changes from the member or the employer

…and to handle a number of other changes to how we’d done things before this legislation.

So one part of this post is to say “yes, compliance can matter,” and the other is just to talk a bit about how fortunate I’ve been in this new job. I was assigned to the WESA project in my first week on the job, because my acting manager believed it was good for people to have a project of their own.

I worked with the person writing our procedures related to nominations; he guided me through the initial thickets of terminology, acronyms, and workflow. My colleague Chris, the senior member of our group of instructional designers, helped me plan my project and gave invaluable ideas from a course he’d developed on a similarly complicated topic. I also got to work with several subject-matter experts who “work in the plans,” as we say — their day jobs involve dealing with the members of one or another of the BC public-sector plans, so they know this stuff.

Best of all, the experts who served as instructors were eager to avoid information dumps and talk shops. Ultimately we created three versions of our course, tailored to three different job categories, with lots of practice cases — including simple ones the instructors walked participants through, so people could see the relevant part of the system and update a (fictional) member’s records instead of just having someone tell them how they’d do it back on the job.

I’m thinking of writing a bit more about this. I need to find the right balance between describing what I think is worth talking about, safeguarding specifics about our members and our systems, and putting people to sleep with more information about nominating beneficiaries than they might want to know.

I’ll figure that out, and I’ll try to get my posting frequency up a bit. I’ve been missing the thinking-out-loud for quite some time.

Jan 31, 2014

I recently came across a link to this infographic by Julian Hansen.

Infographic by Julian Hansen

I don’t see most infographics as job aids. They usually aren’t intended to guide you through a task, and they don’t usually serve well as reference job aids (my term for information that’s been organized for quick reference). I don’t think this one would serve as a true job aid for most non-designers: it’s really busy, and the criss-crossing paths could easily confuse someone.

As this Fontfeed article states, though, that wasn’t really Hansen’s goal.

 Instead of simply browsing through type specimens, Julian wondered if he could come up with something more rational, a systematic approach [to choosing typefaces]. His project took the form of a flowchart on a poster. Studying different type finders made him come to the conclusion that selecting type really could be a matter of taste…. This made Julian decide that his poster should not only be useful, but also be light-hearted and make fun of stereotypes. This made him throw in options like “is it an Italian restaurant?” for instance. His ultimate goal was to show that typefaces convey a whole lot of meaning that “ordinary” people just don’t see.

Assuming that’s true, I see the chart as one way to demonstrate understanding: here’s what I think about fonts and when to use them. This is part of what I think Jane Bozarth means when she says, “We learn by doing, and by telling what we’re doing, and by watching others do things, and by showing others how we did something.”

Personally, I’m not much into fonts.

That’s not the point, though. Work like Hansen’s has the potential to trigger further interest in people.  For example, after reading his chart and the Fontfeed article, I happened to see a tweet by @MizMinh linking to an article on The Next Web:

The Science Behind Fonts (and How They Make You Feel)

Personally, all my working out loud lately has been done on site, in my new job. I’m not unhappy about that; I’m working on an engaging project and I have collaborative colleagues. But I’ve been neglecting other avenues, and this post is one effort to overcome that neglect.

Oct 19, 2012

Part of an email I received yesterday; I’ve changed a few [specifics] for privacy’s sake:

My friend [Veronica], a retired lawyer, has just started training as a part-time [Stratosphere Airlines] ticket agent at [Overcrowded Airport].

She has started with some computer-based self instruction that seems to dump lots of info on students before any application, like memorizing the airport codes for cities that Stratosphere flies to. She says there will be interaction–role-playing-like simulations of conversations and problems a ticket agent will predictably experience.

She said the trainer has confessed to her that the info dump without application is neither his own preference nor his design. It is imposed.

Have a good flight, sugar.

I’ve never worked for Stratosphere Airlines; I’ve hardly ever flown them. But I recognize this situation and this approach, because they’re a direct flight to 1975, when this dull-witted, learn-X-before-Y approach was pandemic in the travel industry. It’s how I started learning what was then Amtrak’s reservation system: memorize a trainload of facts.

One of the many unfortunate assumptions here is that such memorization is valuable in itself. Like Latin or limp broccoli in your school lunch, it’s supposed to be good for you. In the context of becoming competent as a ticket agent, though, it’s as misguided as memorizing the name of every street along your 25-mile commute rather than learning the most sensible route and then the useful variations, like when to avoid driving past the high school.

The assumption in Veronica’s training program is that you have to know the city code before you can look up a schedule. The reality is that you have to have the code, which isn’t the same thing. Let me demonstrate with an example based on Amtrak’s old reservation system:

Use the A (Availability) entry to find the schedule between two cities.
Here’s how to check availability between Chicago (CHI) and Los Angeles (LAX) on July 5th:

A 5JUL CHI LAX
(don’t use spaces; they’re here just to make the example clear)

How would you check the schedule for May 9th from San Francisco (SFO) to Portland (PDX)?

The odds are that 80% of people, given that example, will come up with one of the two correct answers (A9MAYSFOPDX or A09MAYSFOPDX; the leading zero in the date is optional). Which means that for them an instructor or course can respond, “That’s right,” and then show what the reservation system would show: the schedule from San Francisco to Portland on May 9th.

I’m skipping some nuance here, like taking note of the leading zero if the person uses it (and pointing out it’s optional). I’m also skipping what a good instructor or course would do with what I call expected wrong answers–someone using the correct city codes in the wrong order, or using a code from the example.
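If it helps to see that entry as a pattern rather than a memorized string, here’s a minimal sketch in Python. It’s purely hypothetical — the format comes only from the example above, not from any real reservation system’s documentation — but it shows how an instructor or course could compose an availability entry and check whether a learner’s answer at least has the right shape, leading zero or not.

```python
# Purely hypothetical sketch: the entry format comes only from the example
# above (A + date + month + origin + destination, e.g. A9MAYSFOPDX), not
# from any real reservation system's documentation.
import re

MONTHS = {"JAN", "FEB", "MAR", "APR", "MAY", "JUN",
          "JUL", "AUG", "SEP", "OCT", "NOV", "DEC"}

ENTRY_RE = re.compile(r"^A(\d{1,2})([A-Z]{3})([A-Z]{3})([A-Z]{3})$")

def availability_entry(day: int, month: str, origin: str, dest: str) -> str:
    """Compose an availability entry such as A9MAYSFOPDX."""
    return f"A{day}{month.upper()}{origin.upper()}{dest.upper()}"

def looks_right(entry: str) -> bool:
    """Check the shape of an answer: a plausible day, a month abbreviation, two different city codes."""
    match = ENTRY_RE.match(entry.upper())
    if not match:
        return False
    day, month, origin, dest = match.groups()
    return 1 <= int(day) <= 31 and month in MONTHS and origin != dest

print(availability_entry(9, "MAY", "SFO", "PDX"))  # A9MAYSFOPDX
print(looks_right("A09MAYSFOPDX"))                 # True -- the leading zero is optional
print(looks_right("A9MAYSFOSFO"))                  # False -- same city twice
```

The point isn’t the code; it’s that the learner is producing and checking a small, realistic entry rather than reciting codes in the abstract.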

To me, this example is a bite-sized authentic task: it’s a small accomplishment that makes sense in a workplace context. Your customer asks what the schedule is from Point A to Point B, and you find out. Looking up city codes is a useful, even essential skill (if you don’t have the city code for Moose Jaw, you can’t find a flight that goes there), but it’s just one component in a cluster of authentic tasks.

What’s more, I can put together a logical sequence of such bite-sized tasks into a complete customer transaction suited to a novice ticket agent. And I can then expand parts of that sequence to give practice in applying the system’s power (which is to say, its complexity) to meet a customer’s requests: “Is there a flight that will get me to Moose Jaw by 3? Can I leave Moose Jaw on Saturday morning? Is there a discount fare when traveling with small children on the weekend?”

I worked in an Amtrak ticket office for four years, and no customer ever asked me for a city code. We used them all the time, but our practice was to teach ticket agents to look codes up at first, not guess. We also had job aids for frequently requested cities, storing the information in the job aid instead of trying to cram it into someone’s head. The training-wheels effect would kick in, so that after a week or two on the job in Detroit, you did memorize the code for Jackson, Michigan (JXN), a destination people requested from us far more often than they did Jacksonville, Florida (JAX) or Jackson, Mississippi (JAN).

There’s an awful lot of stuff to learn in a railroad or airline reservation system. I often use ticket-agent training as an example of a potential drawback to using only informal learning approaches. Yes, it’s true, if I dropped you in the middle of Budapest with a tattoo on your forehead that said “Kérem, ne beszélj velem angolul” (“Please don’t speak to me in English”), you’d probably start picking up Hungarian quickly. Depending on your interests, though, a scenario-based course on Hungarian for Travelers–focused on realistic situations that made sense to you–might be a better idea.

It’s almost certainly a better idea than beginning by studying verb conjugations. You’ll need those eventually, but you can probably find out what time the flight arrives without having to study the subjunctive first. Except, maybe, if the Stratosphere Airlines flight from 1975 were to arrive.

CC-licensed image by Robert Huffstutter

Jun 28, 2012

There’s this:

Blended learning

“Stir the mixture well / Lest it prove inferior…”

And there’s this:

Blended learning and job aids

“…then put half a drop / Into Lake Superior.”

Even conceding that many of the “blended learning” hits are from formal education (schools, academia), it’s a little depressing that only 3% of them mention job aids. I personally doubt it’s because everyone uses job aids. It’s almost as if developers, yearning to produce ever-more-engrossing courses, are blind to this kind of performance support.

This is closely related to what Cathy Moore says in the opening minute of the following clip:

And here, at 4%… is what is possibly the least expensive and most effective approach [for blended learning]: on-the-job training tasks. Apparently we are still stuck in the mindset that training is a course.

The clip actually covers a lot of territory in six minutes, including realistic tasks, application, relevant examples, and so on, but I want to focus here on the aspect of figuring out how not to train — or, more accurately, how to not train. Cathy demonstrates the use of “a mega job aid” to enable on-the-job learning. This is her term for combining a job aid (which stores information or guidance so you don’t have to remember it) with instruction (which tells you how to apply what’s in the job aid to a specific task).

How do you know it’s a job aid?

  • It’s external to the individual.
  • It reduces the need to memorize.
  • People use it on the job.
  • It enables accomplishment.

I asked Cathy for some comments about job aids.

“Before designing formal training, consider whether a job aid is all you need.”

Here, she’s asking: what makes you think you need formal training for X? Is there another way to help people accomplish the desired result?

“If you decide training is necessary, make sure the job aids are top-notch, and consider having the ‘course’ teach people how to use the job aids.”

It’s not a job aid if you don’t use it while you’re performing the task. So if you build a job aid but find that people need to practice using it, that practice should be like on-the-job use. They’re not going to be doing the real-world task from within the LMS (unless, poor devils, their real-world job is managing the LMS). Embalming a job aid inside a course is like disabling an elevator in hopes that people will learn how to get from the 3rd to the 9th floor without “cheating.”

“Don’t duplicate the job aid info in the course.”

Part of the decision about whether to build a job aid involves the nature of the task. Among the considerations:

How likely is it that the task will change?

The likelier it is that the task will change (and thus that the steps for accomplishing it will change), the more sense it makes to build a job aid — and the less sense it makes to duplicate the job aid inside a formal course.

Instead, as part of your formal training, use the same job aid people will use on the job. And figure out how to make updates easily available.

No matter what learning management ideology claims, there are only three kinds of people who return to an online course for reference information:

  • People who work for the vendor.
  • Actors appearing in the vendor’s materials.
  • People on the job who are really bored or really desperate.

Because she involves herself with what people actually do on the job, Cathy has some inexpensive yet highly effective ideas about where to get started:

To evaluate and improve job aids, physically visit learners’ work stations and look around. What support materials have people created for themselves? Often someone on the job has already created a good job aid and you just need to “borrow” it.

Even if it’s a less-than-ideal job aid, the fact that someone’s created it and is using it suggests both that the task is important and that people feel the need for support as they’re carrying out the task. That’s one heck of a head start, and you haven’t had to create a single “at the end of this training program” statement.

Jun 11, 2012

At the Innovations in e-Learning Symposium this week, Dan Bliton and Charles Gluck from Booz Allen Hamilton presented a session on “failure-triggered training.” I was really impressed by their description of a study that explored different approaches to reducing the risk of phishing attacks in a corporate setting. For one thing, as I told Charles immediately after the session, they invented the flip side of a job aid.  But I’m getting ahead of myself.

In this post:

  • Their session description (from the symposium brochure)
  • My summary of the session, with a few excerpts from their presentation
    (I’ll repeat this link a few times in this post; all those links are for the same set of materials. You don’t need to click more than once.)
  • (At least) three implications for improving performance

The session description

Study Results: Failure-Triggered Training Trumps Traditional Training

We didn’t expect our highly interactive eLearning (that generated great post-test scores) to be completely ineffective in changing behaviors in the work environment! Could the same eLearning be made effective if delivered as failure-triggered training? Come learn the outcomes of a blind study of nearly 500 employees over nine months which analyzed multiple training approaches. The study shows that the same eLearning was significantly more effective when delivered as spaced events that employed learning at the point of realization. This combination of unannounced exercises and failure-triggered training (a See-Feel-Change approach) significantly reduced improper responses to phishing attacks by 36%.

I didn’t ask Bliton or Gluck about this, but “see-feel-change” seems related to what John Kotter talks about here: making a seemingly dry or abstract concept more immediate and concrete.

What I heard: BAH’s study

(Note: this is my own summary. I’m not trying to put words in their mouths, and may have misunderstood part of the session. If so, that’s my fault and not theirs.  In no way am I trying to take credit either for the work or for the presentation by Dan Bliton or Charles Gluck.)

The Booz Allen Hamilton (BAH) study, involving 500 employees over 9 months, analyzed different training approaches to “phishing awareness.”  The training aimed at making employees aware of the risks of phishing attacks at work, with the goal of reducing the number of such attacks that succeed.

The study wanted to see whether interactive awareness training produced better results than static, page-turner training. In addition, the study used failure-triggered training, which Bliton and Gluck explain this way:

Unannounced, blind exercises [simulated phishing attacks] delivered in spaced intervals, combined with immediate, tailored remedial training provided only to the users that “fail” the exercises.

In other words, if you click on one of the fake phishing attempts, you immediately see something like this:

BAH phishing failure

BAH divided the study participants into three groups:

  • The control group received generic “training” about phishing that did not tell them how to respond to attacks.
  • The wiki group’s training consisted of a non-interactive page-turner, copied from a wiki.
  • The interactive group’s training included practice activities (how to identify likely phishing, how to respond).

In post-training comments, the Interactive group gave their training an overall score of 3.8 out of 5. As the presenters noted somewhat ruefully, the Wiki group gave theirs 3.7, and the Control group gave theirs 3.4. (See slide 11 in the presentation materials.) The page-turning Wiki group actually felt better prepared to recognize phishing than the Interactive group.

Posttest questions indicated that 87.8% of the Wiki group and 95.6% of the Interactive group knew whom to notify if they spotted suspicious email.

From the response to the first simulated attack, however, Dan and Charles learned there was no significant difference among the three groups (Control, Wiki, Interactive): nearly half the participants in each group clicked the link or replied to the email.

What happened next at BAH

Over six months, participants received three “exercises” (mock phishing attempts). “Failure” on these exercises consisted of either clicking an inappropriate link (producing an alert like the example above) or replying to the email — hence, “failure-triggered training.”
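To make the mechanism concrete, here’s a minimal sketch in Python of that failure-triggered logic, assuming a very simple model of one exercise. The names and structure are mine, not anything Bliton and Gluck described; the point is the branching, where only a “failure” triggers the immediate remedial step while everyone stays in the pool for the next spaced exercise.

```python
# Hypothetical sketch of the failure-triggered logic described above,
# assuming a simple model of one simulated phishing exercise.
# This is an illustration, not BAH's actual implementation.
from dataclasses import dataclass

@dataclass
class Response:
    employee: str
    clicked_link: bool = False
    replied: bool = False
    reported: bool = False

def is_failure(response: Response) -> bool:
    """In the study, 'failure' meant clicking the link or replying to the email."""
    return response.clicked_link or response.replied

def handle(response: Response) -> str:
    """Remedial training goes only to the people who fail the exercise."""
    if is_failure(response):
        # Immediate, tailored feedback at the point of realization.
        return f"{response.employee}: show the alert page and offer the short remedial module"
    if response.reported:
        return f"{response.employee}: acknowledge the report (the best possible response)"
    return f"{response.employee}: no action now; include in the next spaced exercise"

for r in [Response("alice", clicked_link=True),
          Response("bob", reported=True),
          Response("carol")]:
    print(handle(r))
```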

The study provided good data about actual performance, since it captured information like who clicked a link or replied to the simulated phishing.

Incorrect responses fell dramatically between the first and second exercises, and further still between the second and third:

Booz Allen results from failure-triggered training

 Bliton and Gluck attribute this decrease to two main factors: the spaced-learning effect produced by the periodic exercises, and “learning at the point of realization,” since what you could think of as failure-feedback occurred just after someone responded inappropriately to what could have been an actual phishing attack.

If you’re familiar with ideas like Gottfredson and Mosher’s Five Moments of Need, which Connie Malamed summarizes nicely, this is #5 (“when something goes wrong”).

I’ve left out plenty; if you’ve found this description intriguing, take a look at their presentation materials. I can tell you that although Bliton and Gluck’s presentation at IEL12 had a relatively small audience, that audience really got involved: questions, opinions, side conversations, which was especially striking at 4 o’clock on the last afternoon of the symposium.

What I thought, what I think

This approach is much more than training, in the sense of a structured event addressing some skill or knowledge need. I told Charles Gluck that it’s almost the flip side of a job aid.  A job aid tells you what to do and when to do it (and, of course, reduces the need to memorize that what-to-do, since the knowledge is embedded in the job aid).

At first I thought this approach was telling you what not to do, but that’s not quite right, because you just did what you shouldn’t have.  You can think of it  as being like a ground fault circuit interrupter (GFCI), a special type of safety device for an electrical circuit.

GFCIs can respond to a problem too small for a circuit breaker to detect. So you’re blow-drying your hair when, click! the wall outlet’s GFCI trips, safely breaking the circuit and interrupting your routine. Not only do you avoid a shock; you also get feedback (if you know how GFCIs work) that you’d been at risk from an electrical hazard.

In the same way, BAH’s mock-phishing exercise interrupts the flow of work. By following the interruption with immediate, relevant, concrete feedback, as well as an offer for further details via a brief training program, this short circuit is turned into a smart circuit.

Which to me opens the door to — let’s use a different term instead of “failure-triggered” — task-triggered performance support. Like a virtual coach, the BAH exercises detect whether I responded inappropriately and then help me not only recognize but even practice what to do instead.

What I’m leaving out

This was a study and had limits.  For one thing, because of the failure-trigger, we don’t know much about the people who didn’t click on the phishing attempts: have they really mastered this skill, or did they just not happen to click on these trials?

There’s also some data about the best response (don’t click the link, do report the attempt), though the numbers seem very small to me.  (I don’t recall anyone asking about the details on this topic, so I could well be misunderstanding what the numbers represent).

BAH study: attempt reported

On the corporate-culture side, what happens within the organization?  Does this seem Orwellian?  Can the organization apply it as formative feedback intended to help me improve, or do I just end up feeling that Somebody’s Watching? I’d like to look for some data about the effects of retail mystery-shopper or secret-shopper programs, a similar activity that can seem either like trickery or like process improvement.

What about habituation? Will the effectiveness of this approach fade over time?

Most intriguing: can you harness this as a form of ongoing training?  For example, along with informing people about some new security threat, create and send out further exercises exemplifying such a threat. Their purpose would be to provide a kind of on-the-job batting practice, with “failure” producing two-part feedback (“You missed this security threat, which is…” “To find out more, do this…”).

Dan Bliton, Charles Gluck, and their colleagues have done more than make BAH more secure from phishing.  They’ve also shared a creative, practical experiment.