ASTD’s T&D for January includes E-Learning: What’s Old is New Again, by Allison Rossett and James Marshall. They wondered what e-learning looks like in the real world and surveyed nearly a thousand practitioners.
In her book on training needs analysis, Rossett talks about actuals and optimals: finding out how things really are, and determining what they could be. She and Marshall take a similar approach here. They summarize responses about how things are, e-learning-wise, and they speculate about how things could be.
I think the article’s worth reading in full, especially for people who don’t work in corporate or organizational settings (two-thirds of the respondents do). I agree that for many people, the workplace is changing, as is the definition of work. At the same time, most of my own clients have been and are large organizations with multiple locations, often with a significant effort to provide structured learning (a term I prefer to “formal”).
I was especially struck (not to say “depressed”) by the last response in the first of several charts in the article:
Our structured training uses realistic situations, encourages choice, supports learning from that choice — less than "some of the time"?
Sadly, I think that's accurate, and a true indictment of the organizations in which this happens. Formal training departments may be complicit, but so too are organizational leaders. Often, in the aeries just below C-level executives, there's a touching faith in magic beans: nice, clear solutions to nagging problems that don't look like they're the organization's real business.
7 thoughts on “Rossett and Marshall, is and ought (or “could be”)”
That last question says to me that L&D practitioners are frustrated with their jobs and organizations. The expected response to that question was, of course, "most of the time," because who (in their right mind) would say they create unrealistic scenarios where employees don't get to make choices? It says, to me, "I make crap e-learning and I'm not happy about it." I'd be more interested to know why.
From my experience e-learning frequently includes “realistic situations”, but does not encourage or allow choice within a curriculum or within individual learning units. Most of the time there is a specific path to “guarantee” completion of objectives.
Janet: I'm inclined to agree about the frustration, especially among practitioners who've moved past the "talking is teaching" stage. Because Rossett and Marshall's survey was anonymous, the answers they received may have been more candid than what you'd hear within an organization.
Kelly: possibly because of my own biases, I read “make choices and learn from the results” as meaning decision / action choices — which for most people means something more than a multiple-guess question.
I read your comment as choice within the offerings in a curriculum, or within topics within a course. That’s a different level, though also pertinent: whose hands are on the steering wheel?
…another thought triggered by both of you: many organizations don’t seem to realize what it takes for real learning to occur, including rich experience, spaced practice, challenging yet realistic options.
We're dealing with similar issues. In an attempt to move beyond the single affordance of "next" forward navigation, our group tried removing that cue completely and replacing it with choice-based progression and organic cues: completely surrounding concepts with authentic contexts and decision-based feedback.
The folks who run the program decided there was "no good reason to deviate from established navigation standards" and forced part of the group to remove the organic, decision-based design elements and push a back/next rail control back into the package. Still fighting that battle.
I think this is a big part of the problem. Folks are used to seeing a certain pattern of execution and packaging, and that familiarity has a more powerful influence over the way things are done than "try something new and evaluate."
And by the way, we did evaluate the results in multiple test groups, and the response was overwhelmingly positive. Not that evaluation results seem to carry more weight than "that's-not-what-I'm-used-to-ism" ;-D
Steve, I’ve seen that sort of resistance many times. It boils down to function following form: if we’re training, we gotta do X because that’s what training looks like. (“Established navigation standards” often translates to “we’ve always done it this way” which in turn can be code for “I prefer it my way.”)
One depressing consequence can be that the learners are browbeaten into thinking that this is how training is supposed to work (meaning “look”). Same is true for bullet-ridden PPT talkathons, mandatory icebreakers, and similar impedimenta.
I once had the same problem, but it wasn’t management demanding the return of the ‘next’ button – it was the instructional designers.
Even though, at a macro level, the course was still rather linear, the absence of a ‘previous’ and ‘next’ button caused great consternation amongst the IDs.
How is it that a function supposedly meant to help organizations navigate change is itself so wary of it?