Janet Clarey asked recently how you measure effectiveness when moving from a content-presentation to a distributed-learning model. There’s a lot going on there, both in her post and in the extended comments.
Janet believes that learning is embedded in social experience (true, though some of those experiences are pretty indirect — I can and do learn from, for example, Head First HTML, for which the main social component is that two other people wrote the book). Thus, she says, social media are the better choice for supporting the learner. “The role then for the [instructional designer] is to provide access to immersive environments…powered by the tools that foster social experience.”
The crux, though, is how to evaluate effectiveness. And that has me going off on my own tangent, because the next project coming up for me will involve (as Janet says) a more-or-less bean-counting workplace.
Because, you know, unless you work for yourself, pretty much every place counts beans. Joe Harless spoke once about adding value for the client. “Value in terms of what?” Training Magazine asked. “Value in terms of money,” said Joe. “Otherwise, you’re babysitting, or doing psychotherapy.”
The lure of traditional assessments — and far too recently I saw multiple-choice questions for judging the ability of paralegals to interpret case law — is that they’re easy to understand and to track. Hell’s bells, that’s the main reason for the success of learning management systems: they administer.
As you get into more and more complex behavior, it’s harder to define what’s effective, harder to retain the attention of stakeholders (as in, upper management), harder to avoid everyone’s favorite nit to pick: “paralysis by analysis.”
I worked on a series of cold-call roleplays for people selling electronic data interchange services. EDI is a big deal in the corporate world (e.g., Target requires all its suppliers to accept purchase orders and to send invoices electronically — no more fax, no more phone). The salespeople had a wide range of skills they needed to apply: getting past the gatekeeper, determining whether the “client” had (or was aware of) any problems, doing “client triage” to avoid nonproductive sales calls. You couldn’t grade this kind of thing.
I worked with the group’s management to come up with scales and checklists. Ad hoc? You bet — but informed ad hoc. As a developer, I sometimes have an irresistible urge to help my clients more than they want help. In a performance system, though, it’s the people who make up the system who matter, not the outsider like me.
If Leo, the cold-caller manager, chose to spend all his time in exaggerated roleplay (wildly hostile customer, preternaturally bored customer), the best I could do was ask whether many prospects actually responded to cold calls in this way. In other words, did he have evidence that these were high-value reactions to encounter and respond to? Or was he just messing with his staff’s heads?
Whatever the “new learning” is, it involves the same neurons we developed over a far longer period of time than the lifespan of phrases like “web 2.0.” And in organizations that have to earn their way, someone’s going to look for evidence of effectiveness. If you can’t work with your partners/colleagues/clients to find that evidence, sooner or later they’ll find someone who will.
For all involved, then, even if we use iMacs, it’s good to keep in mind the advice of the noted theorist James Thurber: “It’s better to know some of the questions than all of the answers.”