Assembling a better widget vat, or, Excel is not a verb

Last week, I had a Twitter conversation with Jane Bozarth and David Glow. All three of us, I think, started by having fun with the, um, suboptimal material Jane had been given to start with. I briefly described a spectacular example of a wrongheaded job aid, but we moved pretty quickly to the idea that mockery (however well deserved) wasn’t enough.

So rather than post this fictionalized version as a candidate for the Worst Job Aid Ever, I thought I’d try doing two things: talk about why this attempt works so poorly, and suggest some things I’d do differently if invited. I’m grateful to David and Jane for encouraging me in this.

It’ll be a bit wordy, so I’ll break it into two posts. This is the first.

Here are two pages taken from a set of assembly instructions for a piece of industrial equipment. I’ve edited the pages, but only to conceal the source. The layout is exactly the same as the original. (You can click the images to see them full size in a new window.)

QAC 02 page 1 1
Assembly instructions, page 1 – 1

 (By the way, when I said the layout is exactly the same as the original, I also meant that fifteen of the seventeen pages have an identical layout. Each page has space for three dropped-in photos; each has three borderless boxes of instructions [centered vertically, with text almost always in ALL CAPS]; each has the six boxes you see for safety, tools, quality, and so on.)

Assembly instructions, page 1 – 5

You may have noticed that the assembly instructions were created in Excel. I confess that I’ve kept this unique document for selfish reasons: it’s one of the most bizarre attempts at guiding performance that I’ve ever seen.

I’m not here to talk about bizarre–at least not today. I’m here to talk about why these instructions fail so badly.

What’s not working, and why

It’s true that never before nor since have I seen an assembly guide written in a spreadsheet, but that’s more a symptom than the underlying problem. A more important question: what’s Quasimodo’s problem? In other words, what are they trying to get done?

I’d say their goal is to have assembled widget vats that pass inspection and meet cost guidelines.

I’ve covered a lot of territory with “pass inspection,” but the second-last sheet makes reference to a number of Quasimodo standards. (Click this image to open it in a new window.)

QAC 05 inspection
Inspection guidelines and standards

Although I never saw the actual equipment, it’s clear from photos that the finished product is about the length and width of a roomy parking space. Two or three people assemble it in a factory, from start to finish (meaning, not on an assembly line–the parts come to the assemblers). Assembly involves over 50 steps, several of which are variations of “repeat steps 1-5 for all four edges.”

So what?

Well, mostly these steps are sequences of actions with few decisions:

  1. Position center end wall panel CAD on assembly fixture.
    • Use two people or a jib crane to lift panels.
    • Check submittal for end wall connection and bottom connection orientation.
  2. Fasten center end wall panel CAD to floor plane using 5/16 x 3/4 LG tappers.
  3. Fasten center end wall panel CAD to floor support channel ZCA using 3/8 x 1-1/4 LG screw.
  4. And so on and so forth…

I imagine these instructions were printed, pages slipped into sheet protectors and stored in a ring binder at the assembly area. They’re like a recipe from an industrial cookbook. So the fact they were created in Excel, while non-typical, is less relevant than the barriers created by their overall layout and especially by their approach to guiding behavior.

What else do we know about assembling the widget vat?

  • Workers need safety equipment like hearing protection and safety glasses.
  • Parts of the task involve specialized equipment (like that jib crane).
  • Some ways of working are more efficient or more effective than others (“start at the center and work your way out to the ends”).
  • Detail often matters (as in the note above to check the submittal, a kind of specification for one specific assembly job).

And why doesn’t this attempt work well?

QAC 05 1 5 detail

To shoot the biggest fish in the barrel: Excel isn’t a word processor. It’s not a publishing tool (unless you’re publishing numbers and charts, or else tables of data). Creating this guide in a spreadsheet needlessly complicates the task of updating and revising — and even searching.

This isn’t even well-done document publication via Excel. Here’s a portion of page 1-5 (the second image above).  Note that the photo includes two callouts labeled CA while the accompanying text refers to panels CAA, CAB, and CAF. If you had the Excel file, you could enlarge the two CA callouts in the picture, and then you’d see that one of them actually reads CAA while the other reads CAB.

So Doctor Spreadsheet may not have been a proofreading whiz.

But who cares? The reason Excel is a poor choice is that nothing calls for Excel. There isn’t a single calculation in the entire document. You might as well have produced this in PowerPoint. Or taken photos of alphabet magnets you arranged on your fridge.

From a graphics standpoint, the lockstep layout assigns equal weight to two areas (task photos and procedural steps), and the same total weight to six blocks, one of which (quality) reads “n/a” on all but one of the 15 pages.  Most of the time, a third of the space on a page is sitting around doing nothing. Nothing except confining the actual performance steps to their all-cap prison.

From a job-aid standpoint:

  • “Before you begin” information (like equipment and parts to have) should appear before the steps in the procedure.
  • Visuals, when necessary, should appear next to the step they illustrate.
  • Information that’s not needed on a page shouldn’t appear there, and it shouldn’t hold a reserved parking space just in case it shows up on a later page.
  • Steps should be clearly delineated, not crammed into a trio of one-size-fits-all holding pens:
Detail from page 1-6

The vertical centering manages to complicate reading even more, a remarkable feat for such a small amount of text. Another complication: in this example, step 5 says to repeat steps 1 through 5–a good recipe for an endless loop. I think the assemblers would figure out what the designer meant, but it’s no thanks to the designer.
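To make that endless loop concrete, here’s a toy sketch. It’s my construction, not anything from the QAC instructions (the step counter, the iteration cap, and the edge names are all invented): one function follows step 5 exactly as written, the other does what the designer presumably meant.

```python
# Toy illustration of the "repeat steps 1-5" problem. Step names, the
# iteration cap, and the edge list are invented for this sketch.

def run_steps_literally(max_iterations=10):
    """Follow 'step 5: repeat steps 1 through 5' exactly as written."""
    passes = 0
    step = 1
    while passes < max_iterations:  # safety cap; as written, this never ends
        if step < 5:
            step += 1               # steps 1-4: do the work
        else:
            step = 1                # step 5: "repeat steps 1 through 5"
            passes += 1
    return passes  # always hits the cap -- the procedure never terminates

def run_steps_as_intended(edges=("front", "back", "left", "right")):
    """What the designer presumably meant: steps 1-4 once per edge, then stop."""
    done = []
    for edge in edges:
        done.append(edge)           # steps 1-4 for this edge
    return done                     # terminates when the edges run out
```

The literal version only stops because of the artificial cap; the intended version stops on its own, because the repetition is bounded by something in the world (the remaining edges) rather than by the step that contains it.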

So, as they exist, the QAC assembly instructions are hard to update, hard to read (from a graphics standpoint), and hard to follow (from a getting-your-work-done standpoint). They don’t seem likely to get Quasimodo to the goal of assembled widget vats that pass inspection at a cost acceptable to the company — at least not until the workforce builds enough of these things not to need the instructions.

In my next post, I’ll show some possible revisions and talk about why I think they’d help. But I don’t have to have all the fun: add your comments. Ask your questions–if I’m able to, I’ll answer them. Let’s see if we can get to IPI sign-off a bit faster. You know, so we can excel.

Robust practice: where corporate learning misses the train

New! Improved! Now with TripleHorn API!
CC-licensed image by Kudu Photo

In the world of organizational learning, there’s always another bandwagon. In fact, they have to widen the L&D Highway every few years to accommodate late-model bandwagons, as well as service vehicles full of batteries, charging cords, and templated reports. Which is great–if you’re in the bandwagon-supply business, rather than the far less trendy business of helping people accomplish things on the job.

I work for an organization where much of the staff works with complex computer systems. In our case, the systems help us administer pension programs — enrolling people when they start a job covered by one of our plans, producing annual statements of benefits, calculating estimates, processing transactions related to salary and service, and managing all the pension-payment transactions.

Hundreds of thousands of people–I suspect millions, actually–work in similar jobs in any industry you can think of. I am pretty familiar with hotel, airline, and passenger rail reservation systems, and I’ve helped train people to use systems for managing inventory, conducting pharmaceutical trials, monitoring accounts in the consumer packaged goods industry, trading natural gas, and tracking shipping containers.

Almost none of these systems had a way for people to practice tasks in a robust way.

By robust practice, I mean features or capabilities that let people practice complete tasks, as they would on the job, without risk to data, resources, vendors, or clients. It’s my contention that most large corporate systems, although they’re the workhorses of the organization, have no way for someone to practice what he’s supposed to do in the production system–except by working with real data.

I say “production system” as shorthand for the ability to reserve actual airline seats or handle actual pension benefits or process actual bank loans. And I’m going to say “practice system” when I mean the ability to exercise the same steps and skills without reserving actual seats, handling actual pensions, or issuing actual loans.

In a financial system for managing bank branches, for example, if you wanted to practice the steps to process a car loan, you had to pretend you were issuing a loan to yourself. And you really had to remember not to click approve, or customer-you would have a loan.

“Don’t push Approve!” is good guidance in that situation, but a lousy way to train someone in loan approvals.

Is this your experience, too?


Simulations are at best a half-measure. (At worst, they’re a fantastic way to set money on fire.) It takes forever and two weeks to develop a simulation, and the Thursday after it’s released, IT relabels the Pending tab and adds two new buttons on the Estimation form.

(Added to original post: a comment from Clark Quinn on Twitter made me realize that this last paragraph could be misconstrued. I was thinking of simulations of some complicated corporate computer system–in other words, a standalone imitation of the production system, one that requires at least as much maintenance as the production system does.)

True robust practice doesn’t mimic the production system; it mirrors or piggybacks on production’s capabilities while ensuring that anything that goes wrong in the practice system stays in the practice system.

At Amtrak, we created a set of imaginary passenger trains–the “training trains.” They had robust practice features, and also safety features.

Robust features:

  • Training trains ran on the same routes as actual trains, with the same schedules, fares, and accommodations.
  • The set of training trains contained every accommodation available, so you could practice reservations that might rarely or never come to you otherwise.
  • Training train schedules, fares, features, and services were created using production-system data, so the training trains always reflected real-world conditions.
  • Practice IDs could retrieve such production-system information as schedules, fares, routes, on-board services, and operating status (“Is train 353 on time?”).
  • You could log into a practice ID from any terminal connected to the production system.

Safety features:

  • A person using a production ID could not display the training trains. (You couldn’t make a reservation for a real person on an imaginary train.)
  • A person using a practice ID could not reserve space, create reservations, or issue tickets on an actual train.
  • Practice users could issue the “print ticket” command, but the actual printed ticket clearly read TEST TICKET / NOT GOOD FOR TRAVEL.
  • Practice users could not enter or alter production-system information — e.g., they could not report an actual train as departing on time or change the hours of a ticket office.
  • To log into a practice ID, you had to log out of your production ID.

Additional safety measures guaranteed, for instance, that imaginary revenue from training train reservations was not counted as actual revenue in the production system.
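The training-train setup amounts to a read/write partition, and a minimal sketch can make that concrete. Everything below is my invention for illustration (the class names, the boolean flags, the ticket wording): the real reservation system worked nothing like a few Python classes, but the rules mirror the bullet lists above. Practice IDs can read production data freely; writes require that the session mode match the train type.

```python
# Hypothetical sketch of the practice/production separation described
# above. All names and structure are invented for illustration.

class Train:
    def __init__(self, number, is_training):
        self.number = number
        self.is_training = is_training  # True for imaginary "training trains"

class Session:
    def __init__(self, user_id, is_practice):
        self.user_id = user_id
        self.is_practice = is_practice  # practice ID vs. production ID

def can_display(session, train):
    # Production IDs never see training trains (no booking a real person
    # on an imaginary train); practice IDs can read everything, including
    # real schedules, fares, and operating status.
    if not session.is_practice and train.is_training:
        return False
    return True

def can_reserve(session, train):
    # Writes only when the session mode matches the train type: practice
    # IDs book only imaginary trains, production IDs only real ones.
    return session.is_practice == train.is_training

def ticket_text(session):
    # Practice output is unmistakably marked, as on the printed tickets.
    if session.is_practice:
        return "TEST TICKET / NOT GOOD FOR TRAVEL"
    return "VALID FOR TRAVEL"
```

The design choice worth noticing is the asymmetry: reads piggyback on production so the practice world stays current, while writes are walled off so anything that goes wrong in practice stays in practice.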

We try so hard as learning professionals to create authentic, high-value formal training. And we talk a lot about the problem of transfer, the need for spaced practice, the value of whole tasks, and the like. But we don’t seem to advocate for measures that would deliver robust practice to the workstation, and many of us are a little uneasy about what was described to me as “letting people run around inside a practice system.”

Such running around, which I think of as “deciding what I want to practice, then practicing it,” seems to me to have far greater potential for performance support than run-of-the-mill elearning, no matter how many images it has of people in suits, talking on phones.

Supporting performance, or, that’s l’étiquette

My French isn’t that good: I can hold a conversation (sometimes) but I couldn’t hold a job. One way I try to get better is to read and listen to more French. I recently came across the Langue Française section of the TV5Monde site, which has an almost overwhelming range of features.

One of them is 7 jours sur la planète (7 Days on the Planet). It’s a regular feature  with three segments from the week’s TV news. For each segment, you can watch the video clip, read a transcript, and then test your comprehension with three levels of questions (elementary, intermediate, and advanced).

7jours exercises

I watched the first clip in the grid above, about fish fraud (one species of fish passed off as another). I got the gist, then brought up the transcript to spot words I didn’t know, or catch meanings I might have mistaken.

That’s when I discovered Alexandria. TV5Monde’s site is set up so that on a page with a special icon (red circle with a question mark in the upper right of the following image), you can double-click any word to bring up a multi-language dictionary:



In this example, I clicked on l’étiquette. Alexandria popped up with a French-language dictionary, which reminded me that une étiquette is a little card or tag with the price, origin, or instructions for some product or item of merchandise.

You can set the dictionary to translate into any of more than two dozen languages:

(“Choose your target language.”)

What impresses me about this approach is that TV5Monde doesn’t have to create specialized hypertext for certain words. As far as I can tell, Alexandria’s dictionary works with any word on the page.
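That design choice (runtime lookup rather than authored hyperlinks) can be sketched in a few lines. The function and the glossary entries below are hypothetical, not Alexandria’s actual API; the point is only that the lookup key is whatever the reader happens to select, so no word on the page needs special markup.

```python
# Hypothetical sketch of lookup-at-click-time, the idea behind the
# Alexandria feature. Glossary entries are invented for illustration.

GLOSSARY = {
    "étiquette": "a small card or tag with the price, origin, or instructions for a product",
    "filet": "a net",
}

def lookup(double_clicked_text):
    # Clean up whatever the reader selected: trailing punctuation,
    # capitalization, and an elided article like l'. Then consult the
    # dictionary at runtime -- no per-word markup on the page.
    word = double_clicked_text.strip().strip(".,;:!?«»\u201c\u201d\"").lower()
    if word.startswith("l’") or word.startswith("l'"):
        word = word[2:]
    return GLOSSARY.get(word, "(no entry found)")
```

A few helper lines of normalization replace what would otherwise be thousands of hand-authored glossary links, which is why the feature can work on any page that carries the icon.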

If you don’t know any French, of course, this would be a terrible way to learn it. You wouldn’t have any background to decide between one meaning and another, and a dictionary can’t tell you much about syntax or context. The title of the segment in French, La fraude aux poissons passe à travers les filets, could be read as “Fish fraud passes through the nets.” But even my paperback French-English dictionary has 27 main entries for passer, and given the subject, I’d translate the title as “Fish fraud is slipping through the nets.”

If you’ve got a low-to-intermediate level of ability with French, this is a powerful tool to help you understand more of what you read on the TV5Monde site.

It looks like there’s a lot more to Alexandria–more than I can spend time on this morning. I have the impression you can link any web page to the dictionary’s features. I haven’t tested that yet, but I will.

Joe Harless, my hero

Joe Harless
Newnan GA Times Herald

I learned late yesterday that Joe Harless, who listed himself on LinkedIn as “independent think tank professional,” died on October 4th.  (Here’s a report in the Newnan, Georgia Times Herald.)

I don’t know how widely known Joe was outside the world of ISPI, the International Society for Performance Improvement, prior to his semi-retirement from that arena. (Later in life, as the Times-Herald article explains, he was involved in improving the impact of high school education near his home in Newnan and throughout the state.) I’m pretty sure it wasn’t widely enough, which is a shame for people who worked in what used to be called the training and development field.

That’s because Joe, like many of his colleagues, realized that the real goal of that field should not be doing training. Here’s Joe in 1992, writing in the ISPI journal, then called Performance and Instruction:

My behaviorism roots conditioned me to observe what people do, rather than what they say they do.

Tom Gilbert taught me to give more value to what people produce (accomplishment) than what they do or know.

I learned from observations of other technologies (medicine, engineering, plumbing, building the Tower of Babel, etc.) the wisdom of common purpose and agreeing on definitions….

I get confused when people say they are Performance Technologists but always produce training / informational / educational type interventions for every project. This confuses me because examination of more than 300 diagnostic front-end analyses done by our company and our clients shows the information / training / education class of intervention was the least frequently recommended.

More than 30 years ago, I attended JAWS – the Job Aid Work Shop that Joe developed. I’d been working for Amtrak, developing training for ticket clerks, reservations agents, and others. JAWS provided me with a systematic way of looking at how people accomplish things on the job and figuring out where it made a lot more sense to create a job aid than to try (usually fruitlessly) to have them memorize the information that the job aid contains.

A side benefit of JAWS was getting to know Joe, a man serious about his work, gracious in his dealings with others, and good-humored in his presentation. Like an old-time Southern preacher, he’d become Reverend Joe and say things like, “An ounce of analysis is worth a pound of objectives.”

(Meaning: it’s terrific to have sound, behavioral objectives for your training–but maybe the problem you’re dealing with is not one that training can solve.)

He also nudged ISPI toward a name change by saying that “the National Society for Performance and Instruction” was the equivalent of “the National Society for Transportation and Bicycles.”

Again from that 1992 article:

Trainers sell training. They are usually commanded to do so, and are rewarded for the volume of training developed and delivered. Educators are conditioned to teach “subject-matter,” not to impact performance.  Most vendors hawk a given product, not a process. Buyers typically want to buy things, not analysis. Our letterheads read Training Department, or Education Center, or Learning, Inc., etc. The names of our organizations do not imply: performance improvement sold here.

I took a number of Joe’s workshops, including ones on instructional design and on front-end analysis. As he began working on what became his Accomplishment-Based Curriculum Development system, he invited a number of people like me, who’d used his workshops in our own organizations, to participate in a tryout for one of the new components. He was especially eager to hear our candid opinions. He knew what he was doing, but he was pretty sure he didn’t know everything.

I attended my first professional conference around 1978, when ISPI (then NSPI) met in Washington DC, where I was working (no travel cost!). After one session, I was speaking with Stephanie Jackson, an experienced practitioner, when Joe Harless came up–Stephanie had worked for him previously. We three talked for a bit, and it was clear to me that these two were good friends. Joe said to Stephanie, “Let’s get a beer.” I said something about letting them catch up with each other, to which Joe responded, “Don’t you like beer?”

In my career, I’ve learned a lot from many people, but Joe Harless was the right person at the right time for me, opening doors and sharing ideas, hearty and enthusiastic and curious.  What he did was to make concrete for me ways to enable other people to produce better results on the job. He combined analytical skills with openness to new ideas and an interest in other fields that has inspired me always.

We once talked about the job aid workshop, which I gave any number of times at Amtrak and GE. At one point, he had a segment where he’d present examples of good job aids and bad ones.  “Not any more,” he told me. “Now I put ’em all out and let the participants figure out which ones are good and why.”

I had a conversation on Twitter yesterday with Guy Wallace of EPPIC. I said that for me, “It’s practically hero worship, but you know how Joe would have laughed at that.”

I was holding back, I think, because of the immature connotation of “hero worship.” But Joe has had more direct influence on my career than anyone I can think of. I learned from his ideas, I was energized by his search for data as evidence, and although it was probably true for many people, I loved that he called me “Cousin Dave.”

If someone’s influence in your life makes you want to do better, if his work and his interaction inspire you to dig deeper and reach further, then that person’s a hero.

You could do a lot worse than hear Joe himself talk about performance–on-the-job accomplishment–as the heart of the matter. Guy has a number of videos on YouTube, including a 90-minute one from a discussion in Toronto earlier this year at ISPI’s 50th anniversary. I’ve set this link to start at the 8-minute mark, when Joe begins speaking. You might find it worth a few minutes of your time, even with some of the callouts to old friends and inside jokes. I’ve included a few comments here as highlights, all of which come in the first seven minutes of Joe speaking.

…Even in the heyday of programmed learning in the Sixties there were some of us who were arguing that we should be about developing instructional technology, not just programmed instruction, if we truly wanted to revolutionize training and education.

…Not willing to let good enough alone, there were some of us who were then arguing that we should be about the development of what? Performance technology, that would subsume instructional technology and have as its process, at the beginning, a process that was like medical diagnosis. I called my version of the diagnostic process Front End Analysis…

The genesis of my front-end analysis was the confounding realization that many of the training– the training that we developed for our clients didn’t seem to make any difference in the on the job situation, even after the trainees, the learners, successfully acquired the knowledge we so carefully taught them. I don’t know — a rough analogy, I suppose, is that we gave them good medicine but it didn’t cure the disease….

We conducted follow-up investigations with the aid of some of our cooperative clients…. In a shocking number of cases, we found that a lack of skill and knowledge was not the predominant cause of the non-job-performing situations…. Thus all the training in the world would do little to help the performance.

I’m going to miss Joe a lot. I do already.

Learning: it’s complicated

Thanks to David Glow, whose mention of it I happened to notice on Twitter last night, I found a blog post by Steve Flowers that I hadn’t seen: Just a Nudge–Getting into Skill Range. He’s talking about skill, mastery, and the (ultimately futile) “pursuit of instructional perfection.”

Steve starts with a principle from law enforcement: only apply the minimum force necessary to produce compliance. (This is why those “speed limit enforced by aircraft” signs rarely mean “cops in helicopter gunships”). Then he works on a similar principle for, as he puts it, “instruction performance solutions.”

Trying to design training / instruction for skill mastery can hinder–or defeat–the learning process, he says. That’s because mastery, in whatever form reasonable people would define it, is likely the outcome of a long period of practice, reflection, and refinement.

“Mastery” sounds good, which is why the corporate world is hip-deep in centers of excellence and world-class organizations.  A lot of the time, though, “world-class” is a synonym for “fine,” the way you hear it at the end of a TV commercial: “available at fine stores everywhere.”  Meaning, stores that sell our stuff.

He’s not saying there’s no place for formal learning, nor for a planned approach to helping people gain skill. What he is saying is that we need “to design solutions to provide just the right nudge at just the right moment.”

Most of the time, we don’t need mastery on the job, he says, and I agree.  We do need competence, which is what I believe he means by helping the performer move into a “skill range” — meaning the performer has the tools to figure out a particular problem or task.

From a blog post by Steve Flowers
(Click image to view his post.)

I’ve been mulling some related ideas for some time but hadn’t figured out how to even start articulating them. One theme has to do with the role of job aids and other performance support–things that Steve believes strongly in. I despair at the server farms full of “online learning” that shows (and shows), and tells (and tells and tells) while failing to offer a single on-the-job tool.

Listen: the only people who’ll “come back to the course” for the embedded reference material are (a) the course reviewers, (b) the utterly bored, and (c) the utterly desperate.

A second theme has to do with the two different kinds of performance support that van Merriënboer and Kirschner talk about in Ten Steps to Complex Learning. In their terminology, you have:

  • Procedural information: this is guidance for applying those skills that you use in pretty much the same way from problem to problem.  That’s the heart of many job aids: follow this procedure to query the database, to write a flood-insurance policy for a business, or to update tasks in the project management system. You can help people learn this kind of information through demonstration, through other presentation strategies, and through just-in-time guidance.
  • Supportive information: as vM&K say, this is intended to bridge the gap between what learners already know, and what they need to know, to productively apply skills you use differently with different problems.  “Updating the project management system” is procedural; “deal with the nonperforming vendor” is almost certainly a different problem each time it arises.  (That’s why Complex Learning uses the somewhat ungainly term “non-recurrent aspects of learning tasks.”) Types of supportive information include mental models for the particular field or area, as well as cognitive strategies for addressing its problems.

As the complexity of a job increases, it’s more and more difficult to help people achieve mastery. That’s not simply because of the number of skills, but because of how they relate, and because of the support required.

Rich learning problems

Part of the connection I see, thanks to Steve’s post, is that the quest for perfect instruction ignores how people move toward mastery: gradually, over time, with a variety of opportunities, guided by relevant feedback. In many corporations and organizations, formal learning for most people gets squeezed for time and defaults to the seen-and-signed mode: get their names on the roster (or in the LMS) so as to prove that learning was had by all.

We focus on coverage, on forms, on a quixotic or Sisyphean effort to cram all learning objectives into stuff that boils down to a course. I’m beginning to wonder, frankly, whether any skill you can master in a formal course is much of a skill to begin with. At most, such a skill is pretty near the outer border on Steve Flowers’ diagram. So the least variation from the examples in the course–different circumstances, changed priorities, new coworkers–may knock the performer outside the range of competence.

(Images adapted from photos of F. Scott Fitzgerald and Ernest Hemingway from Wikimedia Commons.)