(This is a continuation of a previous post based on John M. Carroll’s The Nurnberg Funnel)
The main elements in the Minimal Manual test (a task-centric approach to training people to use computer software) were lean documentation, guided exploration, and realistic exercises. So the first document that learners created was a letter. In earlier, off-the-shelf training, the first task had been typing a description of word processing, “something unlikely to be typed at work except by a document processing training designer.”
This sort of meta-exercise is very common, and I think almost always counterproductive. Just as with Amtrak’s training trains that (as I said here) didn’t go over real routes, trivial tasks distract, frustrate, or confuse learners. They don’t take you anyplace you wanted to go.
Not that the practice exercise needs to look exactly like what someone does at his so-called real job; the task simply needs to be believable in terms of the work that someone wants to get done.
Into the pool
After creating the Minimal Manual, Carroll’s team created the Typing Pool test. They hired participants from a temp agency and put them in a simulated office environment, complete with partitions, ringing phones, and office equipment. These people were experienced office workers with little prior computer knowledge. (Remember, this was in the 1980s; computer skills were comparatively rare. And Carroll was testing ways to train people to use computer applications.)
Each group of two or three participants was given either the Minimal Manual (MM) or the systems-style instruction manual (SM). Participants read and followed the training exercises in their manuals and periodically received performance tasks, each related to particular training topics. (You can see the task list by enlarging the image on the right.)
Some topics were beyond the scope of either the MM or the SM; interested participants could use an additional self-instruction manual or any document in the system reference library.
After finishing the required portion of training material, participants took the relevant performance test. They were allowed to use any of the training material or the reference library. They could even call a simulated help line. This last resource was staffed by an expert on the system who was familiar with the help-line concept but unaware of the goals of the study.
So what happened? Carroll provides a great deal of detail; I’ll summarize what seem to me to be the most important points.
Minimal learning was faster learning.
In all, the MM participants used 40% less learning time than the SM participants: 10 hours versus 16.4. (“Learning time” refers to time spent with either the MM or SM materials, not including time spent on the performance tasks.) This was true both for the basic tasks (1 through 3 on the list) and for the advanced ones.
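A quick check of that 40% figure, using the two times just quoted (the arithmetic is mine, but it confirms the number Carroll reports):

\[
\frac{16.4 - 10}{16.4} = \frac{6.4}{16.4} \approx 0.39 \approx 40\%
\]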
In addition, the MM group completed 2.7 times as many subtasks as the SM group. One reason was that some SM participants ran out of time and were unable to try some of the advanced tasks. Even for those tasks that both groups completed, the MM group outperformed the SM group by 50%.
We were particularly satisfied with the result that the MM learners continued to outperform their SM counterparts for relatively advanced topics that both groups studied in the common manual. This indicates that MM is not merely “quick and dirty” for getting started… Rather, we find MM better than SM in every significant sense and with no apparent trade-offs. The Minimal Manual seemed to help participants learn how to learn.
A second study, more analytical though more limited in scope, found similar results. In this study, Carroll’s group also compared learning by the book (LBB) with learning while doing (LWD). The LBB group were given training manuals and assigned portions to work with. After a set period of learning, they were given performance tasks. This cycle was repeated three times. The LWD learners received the first task at the start of the experiment; as they completed each task, they received the next one. There were also an SM by-the-book group and an SM learn-while-doing group.
So there are two ways to look at the study: MM versus SM, as with the previous study, and LWD versus LBB within each of those formats. To make that clear: both sets of LWD learners received the training materials and the relevant performance tasks at the start; both sets of LBB learners had a fixed amount of time to work with the training materials (which included practice) before receiving the performance tasks.
Among the things that happened:
- MM learners completed 58% more subtasks than SM learners did.
- LWD learners completed 52% more subtasks than LBB learners did.
- MM learners started the system up twice as fast as SM learners.
- MM learners made fewer errors overall, and tended to recover from them faster.
Mistakes were made.
One outcome was the sort of thing that makes management unhappy and training departments uneasy: the average participant made a lot of errors and spent a lot of time dealing with them. Carroll and his colleagues observed 6,885 errors and classified them into 40 categories.
Five error types seemed particularly important: alone they accounted for over 46 percent of the errors; all were at least 50 percent more frequent than the sixth most frequent error…
The first three of these were errors that the MM design specifically targeted. They were important errors: learners spent an average of 36 minutes recovering from the direct consequences of these three errors, or 25 percent of the average total amount of error recovery time [which was 145 minutes, or nearly half the total time].
The MM learners made significantly fewer errors in each of the top three categories, in some cases nearly 50% less often.
This to me is an intriguing, tricky finding. A high rate of errors that includes persistence and success can indicate learning, though I wonder whether the participants found this frustrating or simply an unusual way to learn. I’m imagining variables like time between error and resolution, or number of tries before success. Do I as a learner feel like I’m making progress, or do I feel as though I can’t make any headway?
The LWD participants (both those on MM and on SM) had a higher task-completion rate and a higher overall comprehension test score than their by-the-book counterparts. So perhaps there’s evidence for the sense of progress.
Was that so hard?
Following the trial, Carroll’s team asked the participants to imagine a 10-week course in office skills. How much of that time would they allow for learning to use the word processing system they’d been working with? The SM people thought it would need 50% of the course; the MM people, 20%.
Slicing these subjective opinions differently: the LBB (learning by the book) group estimated less time than the LWD (learning while doing) group. In fact, LBB/MM participants estimated 80 hours, while LWD/MM participants estimated 165.
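To connect the two ways of slicing these estimates (the arithmetic here is mine, and it assumes a standard 40-hour work week, which Carroll doesn’t specify): a 10-week course would run about 400 hours, so 80 hours is 20% of the course and 165 hours is roughly 41%:

\[
10 \times 40 = 400 \text{ hours}; \qquad \frac{80}{400} = 20\%; \qquad \frac{165}{400} \approx 41\%
\]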
What this seems to say is that, in general, the MM left people feeling that word processing would be easier to learn than the SM did, but also that learning while doing made the task feel more time-consuming than learning by the book.
♦ ♦ ♦
The post you’re reading and its predecessor are based on a single chapter in The Nurnberg Funnel, and not even the entire chapter. Subsequent work that Carroll discusses supports the main design choices:
- Present real tasks that learners already understand and are motivated to work on.
- Get them started on those tasks quickly.
- Encourage them to rely on their own reasoning and improvisation.
- Reduce “the instructional verbiage they must passively read.”
- Facilitate “coordination of attention” — working back and forth between the system and the training materials.
- Organize materials to support skipping around.
I can see (in fact, I have seen) groups of people who’d resist this approach to learning. And I don’t only mean stodgy training departments; sometimes the participants in training have a very clear picture of what “training” looks like and what “learning” feels like, and spending half their time making errors doesn’t fit easily into those pictures.
That’s an issue for organizations to address: what it really means to learn in the context of work. And it’s an issue for those whose responsibilities include supporting that learning. Instructional designers, subject-matter experts, and their clients aren’t always eager to admit that explanation-laden, application-thin sheep-dip is ineffective and even counterproductive.
CC-licensed image: toy train photo by Ryan Ruppe.