What I read

The strongest memory is weaker than the palest ink.

Feb 9, 2013

I’ve been reading All I Really Need to Know (about Creative Thinking) I Learned (by Studying How Children Learn) in Kindergarten, by Mitchel Resnick of the MIT Media Lab. This was the suggested reading for the first session of the Learning Creative Learning online course.

This paper argues that the “kindergarten approach to learning” – characterized by a spiraling cycle of Imagine, Create, Play, Share, Reflect, and back to Imagine – is ideally suited to the needs of the 21st century, helping learners develop the creative-thinking skills that are critical to success and satisfaction in today’s society. The paper discusses strategies for designing new technologies that encourage and support kindergarten-style learning, building on the success of traditional kindergarten materials and activities, but extending to learners of all ages, helping them continue to develop as creative thinkers.

Resnick’s image of kindergarten learning

Resnick is referring to the kind of kindergarten where kids are not “filling out phonics worksheets and memorizing flash cards” — more like the one I remember, with huge wooden blocks, a full-size rolltop desk, and nothing that I can recall as an effort to get me ready for the LSAT.

His diagram’s a spiral because the steps in this process aren’t as distinct or sequential as describing or depicting them might imply.

It’s through this process that kindergarteners “develop and refine their abilities as creative thinkers.” And, as they grow, they need resources beyond wooden blocks and finger paint.

I like his stress on little-c creativity (“creativity within one’s personal life”). Not everyone’s going to be the next Freeman Dyson or Linus Torvalds, but everyone can “become more creative in the ways they deal with everyday problems.”

In the Imagine section, he points out that many kindergarten materials encourage the imagination–they don’t over-structure. By contrast, a lot of “education technologies are overly constrained” — you can only do what they’re set up to do.

It’s like all that fun drill and practice.

He offers the example of Crickets, which I hadn’t heard of: small programmable devices, suited to children, that they can interconnect, modify, and program. Don’t take my word for it, though.

In the article, he says:

The design challenge is to develop features specific enough so that children can quickly learn how to use them, but general enough so that children can continue to imagine new ways to use them.

For some reason, this reminded me of explanations of “simple machines” in long-ago science classes–things like inclined planes, wedges, screws, and pulleys. I’d been told that a screw was a kind of inclined plane, but when it came to pulleys, I don’t think we ever actually rigged up a bunch of pulleys to experience how the right combination would let us lift a load we otherwise could not.
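(The pulley math, for the record, is simple: ignoring friction, the force you need to pull is the load’s weight divided by the number of rope segments supporting it, so a rig with four supporting segments lets you lift with roughly a quarter of the effort. That’s easy to state and, I suspect, another thing entirely to feel.)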

While reading the Create section, I read this line three times:

With Mindstorms and Crickets, for example, children can create dynamic, interactive constructions — and, in the process, learn concepts related to sensing, feedback, and control.

It’s the last part that got me. What it brought to mind was the first course I wrote in the computer-based training system we used for reservations training at Amtrak. Things I had learned about learning (like using a minimalist approach, or providing feedback without giving away the answer) clicked. I could create a course that would help someone learn how to request and interpret train schedules–and I wouldn’t have to be there when that happened.

Resnick says (sensibly) that playing and learning ought to be linked. “Each at its best involves…experimentation, exploration, and testing.” This is part of why he disliked “edutainment” (and not just for its overripe, marketeerish name).

Studios, directors, and actors provide you with entertainment; schools and teachers provide you with education… In all of these cases, you are viewed as a passive recipient. If we are trying to help children develop as creative thinkers, it is more productive to focus on “play” and “learning” (things you do) rather than “entertainment” and “education” (things that others provide for you).

Also in this section, he mentions Scratch, a programming language that kids can use to create interactive stories. I haven’t gone into this, but just the illustrations of the code remind me of the MIT App Inventor that I used to build a smartphone app (touch a picture of a cat, hear a purring sound, after which the image changes to a cow).

A scrap of Scratch: say meow, then switch to the cow.
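If you haven’t seen Scratch, a script along these lines would do what that caption describes. This is my guess at the blocks, not a transcription of the screenshot, and the sound and costume names are invented:

    when this sprite clicked
    say [Meow!] for (2) seconds
    switch costume to [cow]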

Scratch is one way that Resnick’s article moves into the Share section. He quotes Marvin Minsky as saying that the Logo programming language has great grammar but not much literature.

So the Scratch website is an example of “both inspiration and audience.” And, to my way of thinking, even if Scratch projects aren’t what you want to share, you at least see how sharing can happen.

Resnick is talking about children, but I come to this from a career mostly involving helping adults to learn. And perhaps the single biggest drawback to learning in the workplace (well, after you get past icebreakers and listening-as-learning and endless recordkeeping) is the dearth of support for reflection.

What are you doing? Why are you doing it? How’s it going? What do you think made that happen (for all kinds of outcomes)?

A colleague I respect recently said he’s decided to propose his first professional-conference presentation. I was surprised that he hadn’t presented already, but no matter. I can recall the first one I did. I wanted to share with people, but I was nearly paralyzed by the idea that I didn’t have all that much to say.

And you know, maybe I didn’t, depending on what measurements you choose.

What I did have was my particular experience (using a complex computer-based training system) combined with the data-based, lean approach to helping people improve, which I’d learned from folks like Geary Rummler and Dale Brethower.

My point is that thinking about what I’d been doing, and trying to uncover value it might have for other people, helped me see the everyday in a new light. That’s the goal of useful reflection.

* * *

I’ve written this post both to help me process the ideas in Resnick’s article and to set down thoughts of my own. In addition, I found myself noting in a separate document things I wanted to know more about (like Crickets, epistemic games, and Lev Vygotsky). To me those were sidelights; I might discuss them one on one, but this post is plenty long as is.

 

Jan 29, 2013

This is my summary and reaction to the first part of Sharon Boller‘s whitepaper, Learning Trends, Technologies, and Opportunities. Boller is the president of Bottom-Line Performance, a learning design firm based in Indiana.

The 26-page whitepaper has two main sections:

  • Six truths about today’s learning environment
  • Emerging trends and technologies

I think it’s well worth reading in its entirety. Here on the Whiteboard, I wanted to summarize some of those truths in part one, and add comments for which Boller has no responsibility whatsoever.  

ILT is not dead.

When I read this, I tell you, you could have knocked me over with a smilesheet.

I’m not mocking Boller–far from it. Among the many useful features in her whitepaper are summaries of facts. For instance, ASTD said last year that 59.4% of companies reported using instructor-led classroom training, and another 13.3% reported using instructor-led online or remote delivery (such as video).

Self-paced online? 18.7%, and a whopping 1.4% are using mobile as a distribution method.

I looked at this summary from ASTD about the State of the Industry report that Boller mentions. While this isn’t the entire report, I found a comment about “content distribution” striking:

Technology-based methods have rebounded to account for 37.3 percent of formal hours available across all learning methods.

If I read that right, then non-tech methods (you know, like instructor-led classroom training) account for more than 60% of “formal hours available across all learning methods.”

Even the phrase “learning method” is telling. I’m not the kind of fanatic who goes around correcting punctuation on menus; I can even hold a civil conversation with someone who uses “understand” as part of a training objective–because I’m inclined to see it as shorthand for something that can eventually be observed.

So I do understand that people in the industry use “learning method” for things that can only aspire to encourage learning. I do think it’s helpful to state that explicitly from time to time. Absolutely, you can design and create activities, experiences, exercises, games, what have you, that are aimed at supporting, encouraging, and so on, just as you can find recipes, buy ingredients, set a table, and prepare dishes. What you can’t do is guarantee that people will eat your food.

mLearning: lots of talk, little action

That ASTD report tells us that 1.4% of formal learning is delivered via mobile. Like Boller, I’m sure the current figure is higher. After all, an increase of nearly 50% would get you all the way up to 2.1%.

I can’t help wondering whether one serendipitously limiting factor is that you can’t easily cram a 300-slide barrage of PowerPoint onto a smartphone screen. Tablets are an easier target for this pumpkin-headed kind of leveraging, though, and are probably already plagued with far more legacy content than the Geneva Conventions should permit.

I want to underscore that in this first section, Boller’s talking about the way things are, not how they will or should be.

I confess that I’m a little leery of “mobile learning” in a learning-industry context. I fear it’ll be stacking and tracking: loading stuff up because it can go onto a mobile device, and then using ever-better software to track whatever somebody thinks ought to be tracked. It’s always easier to track a score on a quiz than the quality with which someone handled an actual problem from an actual customer.

Outside vendors matter.

One thing Boller says in this section is really about attitudes inside an organization:

Most companies are NOT in the L&D business; they are in business to do something else.

This ought to be obvious, but it’s sometimes only a ritual nod that L&D makes toward the reason there’s an organization at all.

Employees don’t get much formal training.

31 hours a year is the average in ASTD’s data, or 1.5% of a year’s worth of 40-hour weeks (31 out of roughly 2,080 working hours).

There’s a way in which much “formal learning” in the workplace is really “focused introduction with maybe a little practice.”  31 hours is like a 2-credit course in college (which may explain my level of skill when it comes to History of Art).

Boller says she thinks of this time spent in formal training like driver’s education. “Would you rather have your kid spending more hours in the classroom… or more hours behind the wheel practicing driving with a qualified adult providing constant feedback?”

In Maryland, where I live, the formal training requirements for a new driver, regardless of age, include completing a standardized driving course with at least 30 hours of classroom instruction and 6 hours behind the wheel.

That’s the formal-training requirement. But obtaining a provisional license also requires 60 hours of driving with “a qualified supervising driver (parent, guardian, or mentor)” who completes and signs a practice log documenting those 60 hours.

I can picture the diagram in my driver’s ed textbook that explained how to parallel park. That was helpful, in the way that a dictionary definition of a word is helpful. But if your goal is more than “repeat the definition when asked,” you’ve got to work up to fitting your car in between two others on the street.

That might not take 30 hours–but it will take spaced practice; it will take varying conditions; it will probably benefit from scaffolding (such as starting with a span of three empty spaces behind a parked car).

And that’s just the parking part of the driving-a-car set of skills.

Majority of eLearning “doesn’t match” what’s optimal.

I can’t possibly improve on what Boller says:

Clients ALWAYS say they want something that is “engaging” and not too content-heavy. Yet the stuff we routinely see looks very much “Text and Next” with tons of content and little relationship to any behaviorally-based outcomes. Sometimes this is the result of a subject matter expert who ruled with an iron fist in terms of focusing on content rather than outcomes. Other times it was the result of an internal person who decided to get Articulate or Captivate and started creating his or her own stuff – with no background in learning design.

Most of the people we talk to inside organizations HATE taking eLearning courses (including lots of folks who hire us to produce it). They hate it because most of it is boring, bad or it’s not really eLearning – it’s a communication piece squished into an eLearning shell so someone’s completion can be tracked via an LMS.

My only quibble is with the “not really eLearning” part. My hunch is that most people in organizations hate elearning because it’s far more about the E (as in ease of delivery and easily outsourced and easily tracked) than it is about the learning.

LMS: few pull data, but they all think they need it.

We’ve got to get everyone on board.

Boller says that the majority of people “do not actually access or use the data available to them within an LMS.”

This sounds so much like the SCORM evangelism I used to hear–“there’s so much good stuff in there; it’s just not implemented right.”

To which my (occasionally spoken) reaction was, “No kidding.”

There must have been places where SCORMification actually helped increase the likelihood that people learned on the job–but that’s a belief on my part, or perhaps a hope. My own experience with projects where the management team included a SCORM hall monitor was that the fetishization of the SCO could overrule any argument based on ephemera like principles of learning or on-the-job relevance.

Just as with mainframe-based CBT back in the olden days, just as with the 12-inch laser disks and players grafted between the PC and its VGA monitor, just as with the nearly unavoidable audio response systems that have reanimated the multiple-guess question, there are convention halls full of vendors eager to explain how their particular magic beans are just the thing you want to trade your corporate cow for.

CC-licensed image of bandwagon by Jed Sullivan.

Dec 30, 2012

Today’s New York Times business section included Adam Bryant’s Corner Office interview with Karen May, vice president for people development at Google. The interview is short (the feature takes up a bit less than half a page), but well-focused, particularly on two topics: training and feedback.

Asked about common mistakes she’s seen with regard to training programs for employees, May says:

One thing that doesn’t make sense is to require a lot of training… If people opt in, versus being required to go, you’re more likely to have better outcomes.

Well, there goes the whole compliance-training industry, and a good percentage of elearning producers with them.

Yes, May seems to have in mind training-as-an-event, but I think that was implicit in the question. She’s clearly not an idealist:

Another “don’t” would be thinking that because some training content is interesting, everyone should therefore go to it.

I don’t know whether the other bigwigs at Google listen to her (I suspect, without evidence, that the proportion of formal training there is on the low side), but I can think of a few elsewhere who’d benefit from heeding this. How many large organizations plunge into some flavor of the month because of what was said on the golf course to the vice-president in charge of things beginning with R?

May segues from training to feedback by talking about performance. “Don’t use training to fix performance problems,” she says. I’ve said something similar (not that I’m a vice president for people development), though what she’s referring to is problems of individual performance.

In her view, managers will sometimes send a person to training if that person isn’t performing well.

I agree that’s generally a dumb idea–when the cause isn’t a skill deficit, and especially when no one’s looked for evidence of the cause.

May discusses the difficulty people have in giving candid feedback–especially “difficult feedback,” which I take to mean feedback intended to help change current behavior.  There’s the potential for great value in frank feedback, of course, and she believes it’s often realized:

People can do something with the feedback probably 70 percent of the time. And for the other 30 percent, they are either not willing to take it in, it doesn’t fit their self-image, they’re too resistant, in denial, or they don’t have the wherewithal to change.

(I do think she’s left out the possibility that the person giving the feedback is mistaken. That’s not necessarily a common situation, but it’s hell on the person who’s on the receiving end, because attempts to correct a misimpression can easily be seen as unwillingness, resistance, denial, what have you.)

May does say that many of the executives she’s coached needed help “in relationships with others, and understanding the impact they have on the people around them.”  Of the need for empathy, listening, and so on, she says, “It wasn’t usually from a lack of willingness to do those things, but they didn’t have a strong muscle.”

 

 

 

Jul 11, 2012

Thanks to David Glow, whose mention of it I happened to notice on Twitter last night, I found a blog post by Steve Flowers that I hadn’t seen: Just a Nudge–Getting into Skill Range. He’s talking about skill, mastery, and the (ultimately futile) “pursuit of instructional perfection.”

Steve starts with a principle from law enforcement: only apply the minimum force necessary to produce compliance. (This is why those “speed limit enforced by aircraft” signs rarely mean “cops in helicopter gunships”.) Then he works on a similar principle for, as he puts it, “instruction performance solutions.”

Trying to design training / instruction for skill mastery can hinder–or defeat–the learning process, he says. That’s because mastery, in whatever form reasonable people would define it, is likely the outcome of a long period of practice, reflection, and refinement.

“Mastery” sounds good, which is why the corporate world is hip-deep in centers of excellence and world-class organizations.  A lot of the time, though, “world-class” is a synonym for “fine,” the way you hear it at the end of a TV commercial: “available at fine stores everywhere.”  Meaning, stores that sell our stuff.

He’s not saying there’s no place for formal learning, nor for a planned approach to helping people gain skill. What he is saying is that we need “to design solutions to provide just the right nudge at just the right moment.”

Most of the time, we don’t need mastery on the job, he says, and I agree.  We do need competence, which is what I believe he means by helping the performer move into a “skill range” — meaning the performer has the tools to figure out a particular problem or task.

From a blog post by Steve Flowers

I’ve been mulling some related ideas for some time but hadn’t figured out how to even start articulating them. One theme has to do with the role of job aids and other performance support–things that Steve believes strongly in. I despair at the server farms full of “online learning” that shows (and shows) and tells (and tells and tells) while failing to offer a single on-the-job tool.

Listen: the only people who’ll “come back to the course” for the embedded reference material are (a) the course reviewers, (b) the utterly bored, and (c) the utterly desperate.

A second theme has to do with the two different kinds of performance support that van Merriënboer and Kirschner talk about in Ten Steps to Complex Learning. In their terminology, you have:

  • Procedural information: this is guidance for applying those skills that you use in pretty much the same way from problem to problem.  That’s the heart of many job aids: follow this procedure to query the database, to write a flood-insurance policy for a business, or to update tasks in the project management system. You can help people learn this kind of information through demonstration, through other presentation strategies, and through just-in-time guidance.
  • Supportive information: as vM&K say, this is intended to bridge the gap between what learners already know, and what they need to know, to productively apply skills you use differently with different problems.  “Updating the project management system” is procedural; “deal with the nonperforming vendor” is almost certainly a different problem each time it arises.  (That’s why Complex Learning uses the somewhat ungainly term “non-recurrent aspects of learning tasks.”) Types of supportive information include mental models for the particular field or area, as well as cognitive strategies for addressing its problems.

As the complexity of a job increases, it’s more and more difficult to help people achieve mastery. That’s not simply because of the number of skills, but because of how they’re related, and because of the support required.

Rich learning problems

Part of the connection I see, thanks to Steve’s post, is that the quest for perfect instruction ignores how people move toward mastery (gradually, over time, with a variety of opportunities, guided by relevant feedback). In many corporations and organizations, formal learning for most people gets squeezed for time and defaults to the seen-and-signed mode: get their names on the roster (or in the LMS) so as to prove that learning was had by all.

We focus on coverage, on forms, on a quixotic or Sisyphean effort to cram all learning objectives into stuff that boils down to a course. I’m beginning to wonder, frankly, whether any skill you can master in a course is much of a skill to begin with. At most, such a skill is pretty near the outer border on Steve Flowers’ diagram. So the least variation from the examples in the course–different circumstances, changed priorities, new coworkers–may knock the performer outside the range of competence.

(Images adapted from photos of F. Scott Fitzgerald and Ernest Hemingway from Wikimedia Commons.)

Mar 9, 2012

(This is a continuation of a previous post based on John M. Carroll’s The Nurnberg Funnel)

The main elements in the Minimal Manual–a task-centric approach to training people in using computer software–were lean documentation, guided exploration, and realistic exercises. So the first document that learners created was a letter. In earlier, off-the-shelf training, the first task had been typing a description of word processing, “something unlikely to be typed at work except by a document processing training designer.”

You call this training?

This sort of meta-exercise is very common, and I think almost always counterproductive. Just as with Amtrak’s training trains that (as I said here) didn’t go over real routes, trivial tasks distract, frustrate, or confuse learners. They don’t take you anyplace you wanted to go.

Not that the practice exercise needs to look exactly like what someone does at his so-called real job; the task simply needs to be believable in terms of the work that someone wants to get done.

Into the pool

After creating the Minimal Manual, Carroll’s team created the Typing Pool test.  They hired participants from a temp agency and put them in a simulated office environment, complete with partitions, ringing phones, and office equipment. These people were experienced office workers with little prior computer knowledge. (Remember, this was in the 1980s; computer skills were comparatively rare. And Carroll was testing ways to train people to use computer applications.)

Tasks for the Typing Pool test


Each group of two or three participants was given either the Minimal Manual (MM) or the systems-style instruction manual (SM). Participants read and followed the training exercises in their manuals and periodically received performance tasks, each related to particular training topics.

Some topics were beyond the scope of either the MM or the SM; interested participants could use an additional self-instruction manual or any document in the system reference library.

After finishing the required portion of training material, participants took the relevant performance test. They were allowed to use any of the training material or the reference library. They could even call a simulated help line, staffed by an expert on the system who was familiar with the help-line concept but unaware of the goals of the study.

So what happened?  Carroll provides a great deal of detail; I’ll summarize what seem to me to be the most important points.

Minimal learning was faster learning.

In all, the MM participants used 40% less learning time than the SM participants — 10 hours versus 16.4. (“Learning time” refers to time spent with either the MM or SM materials, not including time spent on the performance tasks.) This was true both for the basic tasks (1 through 3 on the list) and the advanced ones.

In addition, the MM group completed 2.7 times as many subtasks as the SM group. One reason was that some SM participants ran out of time and were unable to try some of the advanced tasks. Even for those tasks that both groups completed, the MM group outperformed the SM group by 50%.

We were particularly satisfied with the result that the MM learners continued to outperform their SM counterparts for relatively advanced topics that both groups studied in the common manual. This indicates that MM is not merely quick and dirty for getting started… Rather, we find MM better than SM in every significant sense and with no apparent trade-offs. The Minimal Manual seems to help participants learn how to learn.

In the second study, more analytical though more limited in scope, similar results were found. In this study, Carroll’s group also compared learning by the book (LBB) with learning while doing (LWD). The LBB group was given training manuals and assigned portions to work with. After a set period of learning, they were given performance tasks. This cycle was repeated three times. The LWD learners received the first task at the start of the experiment; as they completed each task, they received the next one. There was also an SM by-the-book group and an SM learn-by-doing group.

So there are two ways to look at the study: MM versus SM, as with the previous study, and LWD versus LBB for each of those formats. To make that clear: both sets of LWD learners received at the start both the training materials and the relevant performance test to complete; both sets of LBB learners had a fixed amount of time to work with the training materials (which included practice) before receiving the performance tests.

Among the things that happened:

  • MM learners completed 58% more subtasks than SM learners did.
  • LWD learners completed 52% more subtasks than LBB learners did.
  • MM learners were twice as fast to start the system up as SM learners.
  • MM learners made fewer errors overall, and tended to recover from them faster.

Mistakes were made.

One outcome was the sort of thing that makes management unhappy and training departments uneasy: the average participant made a lot of errors and spent a lot of time dealing with them. Carroll and his colleagues observed 6,885 errors and classified them into 40 categories.

Five error types seemed particularly important–alone they accounted for over 46 percent of the errors; all were at least 50 percent more frequent than the sixth most frequent error…

The first three of these were errors that the MM design specifically targeted. They were important errors: learners spent an average of 36 minutes recovering from the direct consequences of these three errors, or 25 percent of the average total amount of error recovery time [which was 145 minutes, or nearly half the total time].

The MM learners made significantly fewer errors for each of the top three categories–in some cases nearly 50% less often.

This to me is an intriguing, tricky finding. A high rate of errors that includes persistence and success can indicate learning, though I wonder whether the participants found this frustrating or simply an unusual way to learn. I’m imagining variables like time between error and resolution, or number of tries before success. Do I as a learner feel like I’m making progress, or do I feel as though I can’t make any headway?

The LWD participants (both those on MM and on SM) had a higher rate for completing tasks and a higher overall comprehension test score than their by-the-book counterparts. So perhaps there’s evidence for the sense of progress.

Was that so hard?

Following the trial, Carroll’s team asked the participants to imagine a 10-week course in office skills. How long would they allow for learning to use the word processing system that they’d been working with? The SM people thought it would need 50% of that time; the MM people, 20%.

Slicing these subjective opinions differently, the LBB (learn-by-book) group estimated less time than the LWD (learn-while-doing) group. In fact, LBB/MM estimated 80 hours while LWD/MM estimated 165.

What this seems to say is that the MM left people feeling that word processing would be easier to learn than the SM did, but also that LWD left them expecting it to take more time than LBB did.

♦  ♦  ♦

The post you’re reading and its predecessor are based on a single chapter in The Nurnberg Funnel–and not the entire chapter. Subsequent work Carroll discusses supports the main design choices:

  • Present real tasks that learners already understand and are motivated to work on.
  • Get them started on those tasks quickly.
  • Encourage them to rely on their own reasoning and improvisation.
  • Reduce “the instructional verbiage they must passively read.”
  • Facilitate “coordination of attention” — working back and forth between the system and the training materials.
  • Organize materials to support skipping around.

I can see–in fact, I have seen–groups of people who’d resist this approach to learning.  And I don’t only mean stodgy training departments; sometimes the participants in training have a very clear picture of what “training” looks like, what “learning” feels like, and spending half their time making errors doesn’t fit easily into those pictures.

That’s an issue for organizations to address–focusing on what it really means to learn in the context of work.  And it’s an issue for those whose responsibilities include supporting that learning. Instructional designers, subject-matter experts, and their clients aren’t always eager to admit that explanation-laden, application-thin sheep-dip is ineffective and even counterproductive.

CC-licensed image: toy train photo by Ryan Ruppe.