At the Innovations in e-Learning Symposium this week, Dan Bliton and Charles Gluck from Booz Allen Hamilton presented a session on “failure-triggered training.” I was really impressed by their description of a study that explored different approaches to reducing the risk of phishing attacks in a corporate setting. For one thing, as I told Charles immediately after the session, they invented the flip side of a job aid. But I’m getting ahead of myself.
In this post:
- Their session description (from the symposium brochure)
- My summary of the session, with a few excerpts from their presentation (I’ll repeat the link to their materials a few times in this post; all those links point to the same set, so you don’t need to click more than once.)
- (At least) three implications for improving performance
The session description
Study Results: Failure-Triggered Training Trumps Traditional Training
We didn’t expect our highly interactive eLearning (that generated great post-test scores) to be completely ineffective in changing behaviors in the work environment! Could the same eLearning be made effective if delivered as failure-triggered training? Come learn the outcomes of a blind study of nearly 500 employees over nine months which analyzed multiple training approaches. The study shows that the same eLearning was significantly more effective when delivered as spaced events that employed learning at the point of realization. This combination of unannounced exercises and failure-triggered training (a See-Feel-Change approach) significantly reduced improper responses to phishing attacks by 36%.
I didn’t ask Bliton or Gluck about this, but “see-feel-change” seems related to what John Kotter talks about here: making a seemingly dry or abstract concept more immediate and concrete.
What I heard: BAH’s study
(Note: this is my own summary. I’m not trying to put words in their mouths, and may have misunderstood part of the session. If so, that’s my fault and not theirs. In no way am I trying to take credit either for the work or for the presentation by Dan Bliton or Charles Gluck.)
The Booz Allen Hamilton (BAH) study, involving 500 employees over 9 months, analyzed different training approaches to “phishing awareness.” The training aimed at making employees aware of the risks of phishing attacks at work, with the goal of reducing the number of such attacks that succeed.
The study wanted to see whether interactive awareness training produced better results than static, page-turner training. In addition, the study used failure-triggered training, which Bliton and Gluck explain this way:
Unannounced, blind exercises [simulated phishing attacks] delivered in spaced intervals, combined with immediate, tailored remedial training provided only to the users that “fail” the exercises.
In other words, if you click on one of the fake phishing attempts, you immediately see something like this:
BAH divided the study participants into three groups:
- The Control group received generic “training” about phishing that did not tell them how to respond to attacks.
- The Wiki group’s training consisted of a non-interactive page-turner, copied from a wiki.
- The Interactive group’s training included practice activities (how to identify likely phishing, and how to respond).
In post-training comments, the Interactive group gave their training an overall score of 3.8 out of 5. As the presenters noted somewhat ruefully, the Wiki group gave theirs 3.7 — and the control group gave theirs 3.4. (See slide 11 in the presentation materials.) The page-turning Wiki group actually felt better prepared to recognize phishing than the Interactive group.
Post-test questions indicated that 87.8% of the Wiki group and 95.6% of the Interactive group knew whom to notify if they spotted a suspicious email.
From the responses to the first simulated attack, however, Dan and Charles learned there was no significant difference among the three groups (Control, Wiki, Interactive): nearly half the participants in each group clicked the link or replied to the email.
What happened next at BAH
Over six months, participants received three “exercises” (mock phishing attempts). “Failure” on these exercises consisted of either clicking an inappropriate link (producing an alert like the example above) or replying to the email — hence, “failure-triggered training.”
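The failure-trigger mechanism described here (send a mock exercise, detect a click or reply, deliver remedial training only to those who fail) can be sketched roughly as follows. This is a minimal illustration, not BAH’s actual system; all names and structures are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class Participant:
    name: str
    failed_exercises: list = field(default_factory=list)
    remediations: int = 0

def run_exercise(participants, exercise_id, responses):
    """Send one mock phishing exercise. 'Failure' means the participant
    clicked the link or replied to the email; only those who fail receive
    immediate remedial training (the failure trigger)."""
    failures = []
    for p in participants:
        action = responses.get(p.name, "ignored")
        if action in ("clicked", "replied"):
            p.failed_exercises.append(exercise_id)
            p.remediations += 1  # tailored training at the point of realization
            failures.append(p.name)
    return failures

# Three spaced exercises; fewer people fail each round, as in the study.
people = [Participant(n) for n in ("ana", "ben", "cho")]
round1 = run_exercise(people, 1, {"ana": "clicked", "ben": "replied"})
round2 = run_exercise(people, 2, {"ana": "clicked"})
round3 = run_exercise(people, 3, {})
```

The key design point is that the training is gated on observed behavior rather than scheduled for everyone, so the feedback arrives at the moment the risky action occurs.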
The study provided good data about actual performance, since it captured information like who clicked a link or replied to the simulated phishing.
Incorrect responses fell dramatically between the first and second exercises, and further still between the second and third:
Bliton and Gluck attribute this decrease to two main factors: the spaced-learning effect produced by the periodic exercises, and “learning at the point of realization,” since what you could think of as failure-feedback occurred just after someone responded inappropriately to what could have been an actual phishing attack.
If you’re familiar with ideas like Gottfredson and Mosher’s Five Moments of Need, which Connie Malamed summarizes nicely, this is #5 (“when something goes wrong”).
I’ve left out plenty; if you’ve found this description intriguing, take a look at their presentation materials. I can tell you that although Bliton and Gluck’s presentation at IEL12 had a relatively small audience, that audience really got involved: questions, opinions, side conversations, all especially striking at 4 o’clock on the last afternoon of the symposium.
What I thought, what I think
This approach is much more than training, in the sense of a structured event addressing some skill or knowledge need. I told Charles Gluck that it’s almost the flip side of a job aid. A job aid tells you what to do and when to do it (and, of course, reduces the need to memorize that what-to-do, since the knowledge is embedded in the job aid).
At first I thought this approach was telling you what not to do, but that’s not quite right, because you just did what you shouldn’t have. You can think of it as being like a ground fault circuit interrupter (GFCI), a special type of safety device for an electrical circuit.
GFCIs can respond to a problem too small for a circuit breaker to detect. So you’re blow-drying your hair, when click! the wall outlet’s GFCI trips, safely breaking the circuit and interrupting your routine. Not only do you avoid a shock; you also have feedback (if you know how GFCIs work) that you’d been at risk of an electrical hazard.
In the same way, BAH’s mock-phishing exercise interrupts the flow of work. By following the interruption with immediate, relevant, concrete feedback, as well as an offer for further details via a brief training program, this short circuit is turned into a smart circuit.
Which to me opens the door to what I’d rather call, instead of “failure-triggered,” task-triggered performance support. Like a virtual coach, the BAH exercises detect whether I responded inappropriately and then help me not only recognize but even practice what to do instead.
What I’m leaving out
This was a study and had limits. For one thing, because of the failure-trigger, we don’t know much about the people who didn’t click on the phishing attempts: have they really mastered this skill, or did they just not happen to click on these trials?
There’s also some data about the best response (don’t click the link, do report the attempt), though the numbers seem very small to me. (I don’t recall anyone asking about the details on this topic, so I could well be misunderstanding what the numbers represent.)
On the corporate-culture side, what happens within the organization? Does this seem Orwellian? Can the organization apply it as formative feedback intended to help me improve, or do I just end up feeling that Somebody’s Watching? I’d like to look for some data about the effects of retail mystery-shopper or secret-shopper programs, a similar activity that can seem either like trickery or like process improvement.
What about habituation? Will the effectiveness of this approach fade over time?
Most intriguing: can you harness this as a form of ongoing training? For example, along with informing people about some new security threat, create and send out further exercises exemplifying such a threat. Their purpose would be to provide a kind of on-the-job batting practice, with “failure” producing two-part feedback (“You missed this security threat, which is…” “To find out more, do this…”).
Dan Bliton, Charles Gluck, and their colleagues have done more than make BAH more secure from phishing. They’ve also shared a creative, practical experiment.