Minimal training: a plunge into the typing pool

(This is a continuation of a previous post based on John M. Carroll’s The Nurnberg Funnel)

The main elements in the Minimal Manual test–a task-centric approach to training people in using computer software–were lean documentation, guided exploration, and realistic exercises. So the first document that learners created was a letter. In earlier, off-the-shelf training, the first task had been typing a description of word processing, “something unlikely to be typed at work except by a document processing training designer.”

You call this training?

This sort of meta-exercise is very common, and I think almost always counterproductive. Just as with Amtrak’s training trains that (as I said here) didn’t go over real routes, trivial tasks distract, frustrate, or confuse learners. They don’t take you anyplace you wanted to go.

Not that the practice exercise needs to look exactly like what someone does at his so-called real job; the task simply needs to be believable in terms of the work that someone wants to get done.

Into the pool

After creating the Minimal Manual, Carroll’s team created the Typing Pool test.  They hired participants from a temp agency and put them in a simulated office environment, complete with partitions, ringing phones, and office equipment. These people were experienced office workers with little prior computer knowledge. (Remember, this was in the 1980s; computer skills were comparatively rare. And Carroll was testing ways to train people to use computer applications.)

Tasks for the Typing Pool test

Each group of two or three participants was given either the Minimal Manual (MM) or the systems-style instruction manual (SM). Participants read and followed the training exercises in their manuals and periodically received performance tasks, each related to particular training topics. (You can see the task list by enlarging the image on the right.)

Some topics were beyond the scope of either the MM or the SM; interested participants could use an additional self instruction manual or any document in the system reference library.

After finishing the required portion of training material, participants took the relevant performance test. They were allowed to use any of the training material or the reference library.  They could even call a simulated help line, staffed by an expert on the system who was familiar with the help-line concept but unaware of the goals of the study.

So what happened?  Carroll provides a great deal of detail; I’ll summarize what seem to me to be the most important points.

Minimal learning was faster learning.

In all, the MM participants used 40% less learning time than the SM participants — 10 hours versus 16.4. (“Learning time” refers to time spent with either the MM or SM materials, not including time spent on the performance tasks.) This was true both for the basic tasks (1 through 3 on the list) and the advanced ones.

In addition, the MM group completed 2.7 times as many subtasks as the SM group. One reason was that some SM participants ran out of time and were unable to try some of the advanced tasks. Even for those tasks that both groups completed, the MM group outperformed the SM group by 50%.

We were particularly satisfied with the result that the MM learners continued to outperform their SM counterparts for relatively advanced topics that both groups studied in the common manual. This indicates that MM is not merely quick and dirty for getting started… Rather, we find MM better than SM in every significant sense and with no apparent trade-offs. The Minimal Manual seems to help participants learn how to learn.

A second study, more analytical though more limited in scope, found similar results. In this study, Carroll’s group also compared learning by the book (LBB) with learning while doing (LWD).  The LBB group was given training manuals and assigned portions to work with. After a set period of learning, they were given performance tasks. This cycle was repeated three times. The LWD learners received the first task at the start of the experiment; as they completed each task, they received the next one. There was also an SM by-the-book group and an SM learn-while-doing group.

So there are two ways to look at the study: MM versus SM, as with the previous study, and LWD versus LBB for each of those formats. To make that clear: both sets of LWD learners received at the start both the training materials and the relevant performance task to complete; both sets of LBB learners had a fixed amount of time to work with the training materials (which included practice) before receiving the performance tests.

Among the things that happened:

  • MM learners completed 58% more subtasks than SM learners did.
  • LWD learners completed 52% more subtasks than LBB learners did.
  • MM learners were twice as fast to start the system up as SM learners.
  • MM learners made fewer errors overall, and tended to recover from them faster.

Mistakes were made.

One outcome was the sort of thing that makes management unhappy and training departments uneasy: the average participant made a lot of errors and spent a lot of time dealing with them.  Carroll and his colleagues observed 6,885 errors and classified them into 40 categories.

Five error types seemed particularly important–alone they accounted for over 46 percent of the errors; all were at least 50 percent more frequent than the sixth most frequent error…

The first three of these were errors that the MM design specifically targeted.  They were important errors: learners spent an average of 36 minutes recovering from the direct consequences of these three errors, or 25 percent of the average total amount of error recovery time [which was 145 minutes, or nearly half the total time].

The MM learners made significantly fewer errors in each of the top three categories–in some cases nearly 50% less often.

This to me is an intriguing, tricky finding. A high rate of errors that includes persistence and success can indicate learning, though I wonder whether the participants found this frustrating or simply an unusual way to learn. I’m imagining variables like time between error and resolution, or number of tries before success. Do I as a learner feel like I’m making progress, or do I feel as though I can’t make any headway?

The LWD participants (both those on MM and on SM) had a higher rate for completing tasks and a higher overall comprehension test score than their by-the-book counterparts. So perhaps there’s evidence for the sense of progress.

Was that so hard?

Following the trial, Carroll’s team asked the participants to imagine a 10-week course in office skills.  How long would they allow for learning to use the word processing system that they’d been working with?  The SM people thought it would need 50% of that time; the MM people, 20%.

Slicing these subjective opinions differently, the LBB (learn-by-book) group estimated less time than the LWD (learn-while-doing) group. In fact, LBB/MM estimated 80 hours while LWD/MM estimated 165.

What this seems to say is that in general the MM seemed to help people feel that word processing would be easier to learn compared with SM, but also that LWD would require more time than LBB.

♦  ♦  ♦

The post you’re reading and its predecessor are based on a single chapter in The Nurnberg Funnel–and not the entire chapter.  Subsequent work that Carroll discusses supports the main design choices:

  • Present real tasks that learners already understand and are motivated to work on.
  • Get them started on those tasks quickly.
  • Encourage them to rely on their own reasoning and improvisation.
  • Reduce “the instructional verbiage they must passively read.”
  • Facilitate “coordination of attention” — working back and forth between the system and the training materials.
  • Organize materials to support skipping around.

I can see–in fact, I have seen–groups of people who’d resist this approach to learning.  And I don’t only mean stodgy training departments; sometimes the participants in training have a very clear picture of what “training” looks like, what “learning” feels like, and spending half their time making errors doesn’t fit easily into those pictures.

That’s an issue for organizations to address–focusing on what it really means to learn in the context of work.  And it’s an issue for those whose responsibilities include supporting that learning. Instructional designers, subject-matter experts, and their clients aren’t always eager to admit that explanation-laden, application-thin sheep-dip is ineffective and even counterproductive.

CC-licensed image: toy train photo by Ryan Ruppe.

Yoda was wrong–of course there’s “try”

Are we having funnel yet?

The Nuremberg Funnel, according to Wikipedia, is a humorous expression for a kind of teaching and learning.  It implies knowledge simply flowing effortlessly into your brain as you encounter it–or else a teacher cramming stuff in the mind of a dullard.

(The term dates to at least 15th-century Germany, and I suspect the notion of funneling or otherwise stuffing knowledge into someone is a few months older than that.)

The Nurnberg Funnel is humorous as well, in a slightly drier way. John M. Carroll’s 1990 book, subtitled Designing Minimalist Instruction for Practical Computer Skill, describes efforts to help people learn to use computers and software.  In 1981, Carroll and his colleagues analyzed problems that people had learning then-new technology like the IBM Displaywriter and the Apple Lisa.

Minimal sense

In one extended experiment, Carroll and his colleagues had volunteers work with the Lisa, its owner’s guide, and the documentation for LisaProject.  The goal was to find out what interested but untrained users actually did with these materials.

Mostly what they did was struggle.

On average, the learners took three times the half hour estimated by Apple and enthusiastic trade journals–just to complete the online tutorial. “Two [learners] who routinely spent more than half of their work time using computers… failed to get to our LisaProject learning task at all.”

Carroll calls into question what he refers to as the systematic or systems approach to user training. To him this means “a fine-grained decomposition of target skills” used to derive an instructional sequence: you practice the simple stuff before you go on to more complex tasks they contribute to.

Carroll believes that “the systems approach to instructional design has nothing in common with general systems theory.” What’s worse is that in the workplace, the highly structured step-by-step approach just doesn’t work.

If only people would cooperate!  But they don’t.

The problem is not that people cannot follow simple steps; it is that they do not… People are situated in a world more real to them than a series of steps… People are always already trying things out, thinking things through, trying to relate what they already know to what is going on…

In a word, they are too busy learning to make much use of the instruction.

(that emphasis is Carroll’s, not mine — DF)

After further experiments, Carroll and his colleagues created what they called the Minimal Manual.  Earlier they’d made up a deck of large cards “intended to suggest goals and activities” for learners, and useful as quick-reference during self-chosen activity. In chapter 6 of The Nurnberg Funnel, he describes the next stage–a self-instruction manual designed on the same minimalist model.

Training on real tasks

The Minimal Manual used titles like “Typing Something” or “Printing Something on Paper” rather than suboptimal, system-centric ones in the original Displaywriter materials.  Carroll’s materials also eliminated material that was not task oriented–like the entire chapter entitled “Using Display Information While Viewing a Document.”

At the same time, the experiment included essential material not well covered in the original document.  It was easy for learners to accidentally add blank lines but difficult for them to get rid of them.  The Minimal Manual turned this into a goal-focused task that made sense to the learner: “Deleting Blank Lines.” While not catchy, that title’s a big improvement on “how to remove a carrier return control character.”

Getting started fast

In the Minimal Manual the learner switches on the system and begins the hands-on portion of instruction after four pages of introduction.  In the systems-style instruction manual, hands-on training begins after 28 pages of instruction.

Learners created their first document only seven pages into the Minimal Manual…. In the commercial manual, the creation of a first document was delayed until page 70.

Carroll shows several ways in which the comprehensive systems-style manual bogs down, overloads the learner, and gets in the way of doing anything that seems like real work.  I can remember endless how-to-use-your-computer courses that spent 45 minutes on file structure and hierarchy before the target audience had ever created a document that needed to be saved.  This is like studying the house numbering scheme for a city before learning how to get to your new job.

Reasoning and improvising

The Minimal Manual approach included “On Your Own” work projects–for example, make up a document and compose the text yourself.  Then try inserting, deleting, and replacing text.

Some explanation is always necessary, but the minimalist approach kept that to… a minimum.  “The Displaywriter stores blank lines as carrier return characters.”  That’s it.  You don’t really have to know what a carrier return character is–what’s important to you as a user is (a) it’s what creates blank lines, and (b) if you delete it, you delete the blank line.

In general, this approach introduced a procedure only once.  The three-page chapter “Printing Something on Paper” was the only place that printing was explained.  Elsewhere, exercises simply told the learner to print.  If he wasn’t sure how, he’d have to go back to that chapter.

In part, the team chose this approach because of the endless and often fruitless searching that learners had done in earlier trials, losing themselves in thickets of manuals and documents.  The fewer pages you have and the clearer their titles, the easier it is to find what you’re looking for.

Here’s the entire explanation for the cursor control keys:

Moving the cursor

The four cursor-movement keys have arrows on them (they are located on the right of the keyboard).

Press the ↓ cursor key several times and watch the cursor move down the screen.

The ↑, ←, and → keys work analogously.  Try them and see.

If you move the cursor all the way to the bottom of the screen, or all the way to the right, the display “shifts” so that you can see more of your document.  By moving the cursor all the way up and to the left, you can bring the document back to where it started.

Connecting the training to the system

Carroll’s subhead here is actually “Coordinating System and Training,” but I wanted to be more direct.  His team deliberately used indirect references in order to encourage learners to pay attention to the system they were learning.  In those long-ago days, for example, computers had two floppy-disk drives.  The Minimal Manual didn’t tell learners which drive to put a diskette in.  “We left it to the learner to consult the system prompts.”

Supporting error recognition and recovery

As with other parts of the experiment, Carroll and his colleagues used error information from previous testing to guide the support provided by the Minimal Manual.  Multi-key combinations (hold down one key while pressing another) baffled many learners, especially when the labels on the keys were meaningless to them (“press BKSP, then CODE + CANCL”).  And then there was this:

A complication of the Code coordination error is that the recovery for pressing Cancel without holding the Code key is pressing Cancel while holding the Code key.

Good thing we never see anything like that any more, huh?

Exploiting prior knowledge

It’s easy to forget how confusing word processing can be–at least till you try learning some new application for which you have very little background.  (I’ve taken a stab at learning JavaScript, and I can see that’s probably not the basis of my next career.)  The Minimal Manual strove to counter the relentless, technocratic, system-centric thinking in the original.  “The impersonal term ‘the system’ was replaced by the proper name…the Displaywriter.”

I can hear IT people I’ve worked with sniffing “so what?”  I’ve actually had a programmer say to me, of a useful but very complicated tool, “If they can’t understand this, they don’t deserve it.”

One particularly useful approach: document names.  Back when most white-collar work did not involve computers, people created paper documents all the time, but rarely thought of documents as requiring a name.  (What’s the name of a letter?  What’s the name of a memo?) So the bland instruction “Name your document” seems like one more small technical obstacle in the way of getting something useful done.

Carroll’s team had learned that naming created lots of problems for learners, and so found a way to ease learning of this unfamiliar concept.

In the terminology of the Displaywriter you will be “creating a document” — that is, typing a brief letter.  You will first name the document, as you might make up a name for a baby before it is actually born.  Then you will assign the document to a work diskette — this is where the document will be stored by the Displaywriter.  And then, finally, you will type the document at the keyboard, and see the text appear on the screen.

It might still feel odd to have to name a document, but the baby analogy brings the idea a bit closer to what the average person already knows.

  ♦  ♦  ♦

There’s a great deal more in chapter 6 that I’ll have to return to in another post.  I wanted to share what’s here, though, because I think it’s extremely relevant to the future of learning at work.

That omnipresent quotation from a movie puppet often exasperates me.

Of course there’s try–in fact, it’s the effort involved in genuinely trying that’s essential.  Otherwise, no Jedi training and not much need for a master; Yoda could just take a seat behind Statler and Waldorf.

Trying and succeeding leads to conclusions that may or may not be correct–sometimes they’re simplistic, sometimes they’re downright erroneous.  Trying and falling short, in an environment where such trying is encouraged, can lead to analysis, to greater awareness of the available steps, inputs, and tools, and to improved performance.

The bigger lesson, I am more and more convinced, is that comprehensive systems training is a myth.  People might spend extended time in formal classes, or labor their way through highly structured text or tutorials, but most of the time they’re looking for how to accomplish something that seems valuable to them.  Just tell me how to get these images posted.  Let me create a series of blog posts that have automatic navigation.  How can I search this mass of data to find things that are X, Y, and Z, but not Q?

As I put it in a different context (vendor-managed inventory), I don’t want to know about standard deviation.  I want to know whether the grocery warehouse computer’s going to order more mayonnaise–and how to tell it not to, if that’s what I think is best.

In no way am I saying that analysis doesn’t matter.  It matters a lot–witness the skillful observation and analysis of user testing that led Carroll and his associates to the Minimal Manual.  That for them was a starting point–they examined data from their testing to gain further insight and to guide decisions about supporting learning.

(I wrote a follow-up to this post:
Minimal Training: a Plunge into the Typing Pool)

Training’s like dieting, or, weighting for results

About a year and a half ago, I decided to try losing weight by following the Weight Watchers program that my wife had enrolled in. After a few months, I began to view weight management as a kind of performance improvement project (see this post and this one).

(Here on my Whiteboard, I focus mainly on topics like workplace learning and performance improvement, areas I’ve worked in for decades.  No one in his right mind would pay me for advice on cardiovascular health, weight-change dynamics, or the physiology of nutrition and exercise.  I’m extrapolating from my experience to make a point about accomplishments at work, not telling people they should eat less or exercise more.)

I’m no longer such a big deal

Although I didn’t say so at the time, my ultimate goal was to lose 60 pounds, 50 of them in the first year. Some 20 months after I started, I’ve lost 43. 

You could say “that’s great!”  Or you could argue I’ve fallen short of my goal.  I’ve felt especially frustrated by months-long stretches where I didn’t seem to lose any weight at all.  This in spite of what I think of as the bank-account approach to weight: there are 3,500 calories in a pound, so reducing your daily intake by 500 calories should have you losing a pound a week, give or take.

The New York Times recently ran Why Even Resolute Dieters Often Fail, in which Jane E. Brody reported on a study by Dr. Kevin D. Hall and his associates. The study, which appeared in the August 27 issue of The Lancet, makes a number of striking points.  (By the way, that link to The Lancet leads to a summary of the study.  For the complete study, use the free registration option at the bottom of the summary.)

Among those points:

  • That 3,500-calorie model leads to “drastically overestimated expectations for weight loss.” Overestimated, as in predicting “about 100% greater weight loss” than the model that Hall and his colleagues set forth.
  • Weight loss requires much more time than many people expect (and more time than many diet-plan promotions imply). 

Although my 60-pound goal is reasonable for me, Hall’s study suggests I’ll see only “half of the [desired] weight change being achieved in about 1 year, and 95%…in about 3 years.”

I’ve read Brody’s article several times, and gone over the Hall study in detail; they helped me understand my own situation.  More to the point here, they offer me an opportunity to compare weight management with improving performance at work.

Training is like dieting: not a bad way to start

When I say “training,” I’m usually thinking of a deliberate effort to close an existing, important gap between current skills and those required for a newcomer to achieve acceptable results in the workplace.  I’ve worked on lots of projects where such training made sense for people like reservation agents, field salespeople, and health-claims adjustors. 

What I think these projects have in common is that it was possible to help people gain new skills so they could produce acceptable performance in a relatively short time.  They aren’t going to be master performers right away, but they’ll be good enough for now.  And they’ll be more likely to improve in the future, because they’ll no longer be complete novices.

What such workers tend to have in common is that they have lots in common: they do similar work,  they have similar job-relevant experience, they have similar skills, and they lack similar skills.  Often they’re in a few physical locations (like, say, central offices or reservation centers), or the organization can assemble them for training (classrooms, workshops) or assemble training for them (online learning).

As for the skills they need to acquire, those are predominantly procedural: how to check availability, how to manage customer accounts, how to conduct intake interviews.

How is this like dieting?  If you’re overweight (e.g., have a BMI over 25) or obese (over 30) and you’d rather not be, there are lots of approaches you can take at the outset.  Noting your caloric intake and decreasing it, so that you’re not taking in as many as you expend, is one approach that may be good enough for starters.  If you don’t have other serious health issues, and if a principal cause of your current weight is a caloric imbalance, then a deliberate reduction in overall calories–a diet–will likely produce results.

Don’t just take my word for it.  “All reduced energy diets have a similar effect on body-fat loss in the short run,” Hall’s study says.  “The assumption that a ‘calorie is a calorie’ is a reasonable first estimation…over short-time periods.”

Even in that short term, you have choices that are more effective and choices that are less so.   The real-world Mayo Clinic Diet (as opposed to the “miraculous,” grapefruit-laden one), for example, will likely produce better results than the kind of “diet” that has you eating nothing but rutabaga and rockfish.

To me, that’s analogous to the difference between “any training is better than no training” and training based on task analysis, needs analysis, and effective ways to help people learn.

From apprentice to journeyman (Deterline was right)

Thus far it seems that Brody, Hall, and I are in agreement, which is pretty classy company for me.  It doesn’t seem to matter much how you start on weight management.  Many different paths will produce results that are good enough in the short term. 

In the workplace, though, short-term thinking rarely pays off long term.  Likewise with job-related skill: good enough for a novice, after a while, isn’t good enough.  If you think of the newcomer to a job as an apprentice, you want him or her to eventually move to the journeyman level: more skilled, able to deal with a wider range of problems, and competent in skills that are not simply procedural.

That’s not easy.  As Bill Deterline once observed, “Things take longer than they do.”  Part of the path from apprentice to journeyman is learning to recognize and deal with complexity.  In the weight-management world, here’s some of the complexity revealed by Hall’s study:

  • When an overweight person begins consuming fewer calories than he expends, he loses weight–but the rate of loss slows as the ratio of fat to lean in his body changes.  (Weight loss is not linear; steady progress is unlikely.)
  • The same increase in caloric intake will result in more weight gain for an overweight person than for someone not overweight–and for the overweight person, more of the gain will be body fat.  (You risk regaining, and you’ll regain quickly.)

Here’s how Hall’s study suggests you think about goals for weight loss:

We propose an approximate rule of thumb for an average overweight adult: every change of energy intake of 100 kJ per day will lead to an eventual bodyweight change of about 1 kg (equivalently, 10 kcal per day per pound of weight change) with half of the weight change being achieved in about 1 year and 95% of the weight change in about 3 years.

How does that rule apply to my original goal?  Let’s assume I was consuming just enough calories to maintain my starting weight.  Yeah, let’s assume that.  To lose 60 pounds would mean (a quick sketch of the arithmetic follows the list):

  • Reducing my intake by 600 calories a day (a kilocalorie is the scientific term for what dieters call a calorie), thus…
  • Losing 30 of those pounds in the first year, and in theory…
  • Losing 58 pounds–by the end of the third year.
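To make the contrast with the bank-account model concrete, here’s a minimal arithmetic sketch in Python. It does nothing more than restate the two rules of thumb described in this post; the function names and the 52-week horizon are my own choices, not anything taken from Hall’s study.

```python
# Back-of-the-envelope comparison of the two weight-loss models discussed above.
# Numbers come from the post: the 3,500-calorie "bank account" model and Hall's
# rule of thumb (10 kcal/day per pound of eventual change; about half of the
# change in 1 year, about 95% in 3 years).

def bank_account_loss(deficit_kcal_per_day, weeks):
    """Naive model: every 3,500-kcal deficit equals one pound, lost linearly."""
    return deficit_kcal_per_day * 7 * weeks / 3500.0

def hall_rule_loss(deficit_kcal_per_day):
    """Hall's rule of thumb: (eventual loss, loss after 1 year, loss after 3 years)."""
    eventual = deficit_kcal_per_day / 10.0   # pounds of eventual change
    return eventual, 0.5 * eventual, 0.95 * eventual

deficit = 600  # kcal/day reduction assumed in the post

print("Bank-account model, first year: %.0f lb" % bank_account_loss(deficit, 52))
eventual, year1, year3 = hall_rule_loss(deficit)
print("Hall's rule: eventual %.0f lb, about %.0f lb at 1 year, %.0f lb at 3 years"
      % (eventual, year1, year3))
```

Run as written, the naive model predicts about 62 pounds lost in the first year, roughly double the 30 pounds from Hall’s rule–which lines up with the “about 100% greater weight loss” overestimate quoted earlier. (The 57-versus-58 difference at year three is just rounding.)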

From Hall’s viewpoint, I’m on track–I’m more than halfway to my goal, and I’ve managed to maintain that loss.  In a sense, I’m no longer a weight-management apprentice.  

What happens after a good start

I said that training is like dieting.  But I’ve implied (and I’m now stating outright) that most of the time neither one is sufficient for long-term results.  “Diet” in the traditional sense is a short-term planned restriction on caloric intake in order to produce weight loss.  “Training” in the traditional organizational sense tends to be a group-focused, short-term effort to provide people with mainly procedural skills that they currently lack, in order to produce acceptable results on the job.

Just in case it’s unclear, I keep harping on “acceptable results” because if training doesn’t relate to on-the-job accomplishment, I don’t quite get why the organization bothers.  I keep harping on a lack of skill because if people already have the skill needed but the organization is “training” them anyway, mostly what people learn is that the organization isn’t all that bright.

The Brody article and the Hall study reinforce what I think of as a movement from losing weight to maintaining health.  On the job front, it’s like the difference between a hotel employee’s using the hotel reservation system correctly and that same person successfully resolving a customer service problem.

Even entry-level positions involve some judgment, some decision-making, some degree of tacit knowledge.  You can’t train for these things specifically; you need to develop models, offer examples, offer opportunities to practice and reflect.

Thus Hall’s 3-year timeframe is one tool that an individual can use to set his or her own expectations regarding the rate of weight loss and the likelihood of plateaus, along with similar research-based principles like these:

  • We can’t estimate a person’s “initial energy requirements” (daily caloric need) without an uncertainty of 5% or even greater.  (Your reduced-calorie target is only an estimate.)
  • People are often inaccurate in describing or recording their food intake, either before or during a weight-loss program.  (Your munchage may vary.)

As Brody points out in her New York Times article:

Studies of the more than 5,000 participants in the National Weight Control Registry have shown that those who lost a significant amount of weight and kept it off for many years relied primarily on two tactics: continuing physical activity and regular checks on body weight.

How about that?  Behavioral change, the specifics of which vary, the results of which are higher levels of caloric expenditure.  And a monitoring system to track data and assist in further analysis.

(I weigh myself at the same time every day that I’m home, and have done so for 20 months.  Not only does the momentum of the practice itself carry me along, but I have a good sense for what the typical variation is.  Of course, if I’ve gained weight, that’s just a fluctuation, but if I’ve lost weight, that’s progress.  You go with the evaluation system that makes the most sense.)

I do think there’s a role for formal organizational learning (in my mind, a much better term than “training”)–though it’s a narrow role, in the same way that diet-as-restriction has a narrow role in managing overall health.  Both may in certain circumstances be good enough to start with, but both are likely to fall short over time.

In other words, I believe that letting new hires figure out the inventory-management system for themselves is probably a suboptimal approach.  You’re deluding yourself, though, if you think you can procedurize your way to workplace mastery.  If you’re trying to increase your organization’s effectiveness, you have to do better than telling people to eat more grapefruit.

CC-licensed images:
Balance-beam scale by wader.
Car-hire image by Send Chocolate (Tina Cruz).
Nighttime road by Axel Schwenke.

The value of metrics, and vice-versa

For a while there, I thought Joe Gerstandt was full of crap.

Somebody tweeted a line from one of Joe’s blog posts:  “We are not accountants. We are Jedi.  We play on a completely different field.”

“Jedi” alone is often enough to make me go find something else to do, but instead I read the full post, The False Tyranny of Metrics.  And for a while, I continued to think Gerstandt was full of crap.  Or maybe just way out there, because my initial skimming said that he was saying metrics don’t matter.

That wasn’t the case.  It took me longer than I like to admit to realize that the Talent Anarchy blog is (at least in part) a dialog between Gerstandt and his business partner, Jason Lauritsen.  So this post was part of their thinking out loud about what matters.

The heart of what Gerstandt is talking about emerges in a follow-up post (and at the end of my post, I’ve put links to several posts from Talent Anarchy):

And maybe I do not think that measurement is evil…measurement is a tool after all, so it boils down to how you use it. But this is what I do believe:

  • one: We over-prioritize things that come with metrics.
  • two: We have told ourselves some great lies about what we can measure.
  • three: The outcome of our use of metrics is often evil.

The conversation really struck me because of several themes or issues  running through my life right now.  One of them is a client I’ll call Hephaestus. I’ll say they make  household fans and heaters.  As a manufacturer, Hephaestus has some serious metrics having to do with production–rate, quality, reject rate, cost, all the sorts of things you’d expect.  And the sorts of things that make sense there.

Does Hephaestus have other ways of knowing how they’re doing?  I’m pretty sure they do, though the project I’m dealing with doesn’t extend that far.  I haven’t been called in by the CEO or the VP of manufacturing.  Even so, I see potential wisdom for me and for my client in the Talent Anarchy discussion.

Our project is about how to bring new manufacturing workers to competency.  If you’ve worked in a plant, you have some idea what these jobs can be like.  At a GE appliance factory, I observed workers in charge of powder-paint application, wire-harness installation, and similar jobs.

How do you help a new person do that safely and accurately–and with acceptable progress to the necessary speed?

It’s not all feeds and speeds, either with regard to turning out those appliances, or with regard to how people learn.  I think there’s a lot of value in questioning assumptions, especially those we don’t even recognize as assumptions.

Here are links to the posts in the discussion at Talent Anarchy, along with a quote pulled from each.  Worth the time to go through.  You’re likely to find value in the comments as well:

The Measurement Imperative (Jason)

(the post in which Jason starts the discussion)
I know that measurement and metrics aren’t your favorite thing to talk about, but what do you think?  Where does measurement fit into the work we do?

The False Tyranny of Metrics (Joe)

(from a comment on this post)
I was talking with my boss about the situation [of half the staff at a health facility frequently arriving late] when he asked me if my team cared about the people we served and if they were dedicated to helping those folks achieve outcomes. I answered yes – they excelled at achieving outcomes. He then challenged me by pointing out that the only reason I was on my time-clock tirade was because I could hit a button on the computer and spit out the metrics related to the situation. Punch reports were the metrics I had available so that was what I managed to.

Despite What You May Have Heard, Measurement Isn’t Evil (Jason)

What I heard you say is that putting metrics and measurement before the actual work, or worse, substituting it as the work is really damaging and counter-productive.  And I would agree with that.  When the metric becomes what you are trying to accomplish, you have lost.

More Metrics Madness (Joe)

I do understand the importance of profit.  I am a business owner myself…I get it.  But the purpose of my business is not profit. I work, at least partly, because I need to make a living, but I do the particular work that I do for reasons that have nothing to do with profit.  Profit is mandatory, I am not in any way confused about that, but saying that an organization exists for the purpose of profit is kind of like saying that the purpose of a person’s life is breathing (which also can be measured quite well by the way).

A Defense (of a sort) of Metrics
(a guest post by Mark D. Hirschfeld and F. Leigh Branham)

We may not be able to measure honesty, compassion, and courage, but we can measure the results that those traits produce–lower voluntary turnover, lower quit rates, fewer grievances filed, more internal job progressions allowed, more customers returning more frequently and referring their friends, more managers coaching (often confronting), recognizing (more often) and giving constructive feedback, more new employees being hired through referrals from happier, more engaged employees–all measures of not just more, but of better places to work that do indeed serve as measures of progress toward becoming a remarkable workplace.

CC-licensed production parts image by iamphejom.

A lawyer’s advice for a learning professional

Stephen Ellis is a partner at the law firm Tucker Ellis & West.  Thanks to David Maister’s blog (which Maister discontinued in January 2010), I came across a commencement address Ellis gave at Case Western Reserve School of Law in May 2008.

Unlike the Case grads, I’m far from the beginning of my career, but I found Ellis’s ideas both pertinent and refreshing.

And that was before this section:

The fact is, our profession has become increasingly unhappy over the past couple of decades. I am convinced the vast majority of that unhappiness derives from a single seemingly innocuous event in the late 1980’s: The American Lawyer magazine began publishing the AM LAW 100, and listed the profits per partner of the 100 largest firms. Virtually all of the firms in this country immediately bought in to that statistic as the only credible measure of success. The game was on – we lawyers would now take our measure almost entirely from money, at least in terms of what was publicly discussed. Without question, integrity, service and professionalism were important, but how we measured ourselves was money.

This was a terrible mistake and now, more and more of us see its dark implications: the bragging rights on how many billable hours we charge (and the matching lost weekends and evenings); rates that are topping $1000 an hour; and clients who believe their files are being worked to death by armies of inexperienced associates.

Here’s the title of Ellis’s address: On Being a Happy (and Successful) Lawyer.  He speaks well (judging from the transcript), and I thought many of his points would be worthwhile to pull out and adapt to… well, to me, a person in the learning profession.

So I’m putting several of those points here, for when I don’t have time to reread the entire address.  Where he talks about the law, I just mentally edit things to read “the learning profession.”  It’s worth the effort.

First, be someone others can count on.

Clients come to you because they have a situation they cannot solve on their own. Most are not looking for an analysis of the law. Most want you to solve a problem. So solve it, don’t add to their problem by being hard to find, by missing deadlines, or by simply describing their problem back to them.

Second, be an interesting person.

Force yourself to be able to talk about more than law – read books, go to movies, be part of politics, go to lectures. You’ll meet people, you’ll be able to talk about things that other people find interesting, and you won’t burn out on your job.

Look out for yourself.

Mentors are important, but they are only a resource.  Accept that you are in charge of your success…. If you think you need experience in an area, make it your business to go get it.

Determination matters.

Great careers are the result of day after day deciding to do good work and being someone who others count on.

Be enthusiastic.

We lawyers take pride in being the first one to find fault with an idea.  Makes us look smart…. clients want to do things.  They don’t call you so they can not do things.  They want to stay in the borders of the law, but they want to be told how to do what they want to do…

There is no better way to end a client meeting than saying “This is going to be great” and to mean it.

Trust yourself.

Among the most important conclusions I came to as a young lawyer was that if I didn’t understand something, it was because the thing in fact didn’t make sense, not because I was stupid.

Most of the times I’ve found myself in hot water it’s because I let a conversation continue past the point where I understood what was being said. And virtually every time I would say “Stop, I’m not following this,” someone would come up to me after the meeting and say “Boy, I’m glad you said that. I had no idea what we were talking about.”

People I admire talk a lot about organizational culture.  Whatever image I had of the culture of a law firm, I’ve had to modify it (at least as it applies to a good law firm) based on Ellis’s thoughts.  I know these are good reminders for my own professional life as well.