George Siemens mused about anthropomorphic robots the other day. With my long career in the auto industry (I worked at a stamping plant one summer), I wasn’t so sure that robots are “generally” human-shaped.
Not that I thought George was insisting they were. In fact, his reply to my comment included this: “…aside from the use in helping people with disabilities…it seems that designing a robot as a person creates more complexity than functionality.”
In popular culture, “robot” tends to connote a mechanical humanoid… though if you click the picture on the right, you’ll find a page of media robots, including one of Doctor Who’s Daleks, once described as looking like five-foot salt shakers.
But robot doesn’t necessarily mean the Tin Woodsman with microchips. How about “judgment-making tool?” (I’m not trying to brand here, just musing about points of view.)
The welding robots in an assembly plant, the Mars rovers, and the nanorobots that Ray Kurzweil imagines in the doctor’s office: each has some range of decision-making (“Is the doorframe here? Is it in the right position?”) and work it can perform as a tool (“Okay, the frame’s ready — fire!”).
I looked for some robot-related quotes. I found one purportedly by Adam Smith, a real gem, since Smith died in 1790 and Karel Čapek didn’t coin the word “robot” until his 1920 play R.U.R.
This isn’t exactly about robots, but seemed to fit:
The past went that-a-way. When faced with a totally new situation, we tend always to attach ourselves to the objects, to the flavor of the most recent past. We look at the present through a rear view mirror. We march backwards into the future.
I ended up putting McLuhan and Siemens together and thinking about instructional design. Often people look at classroom-based or online learning and start with the surface, so to speak: the appearance of the materials, the “outside” of the exercises. It’s much harder at first to grasp the purpose of these things — why is the exercise set up the way it is? Why would someone do A before B?
I try to help clients look at a situation in reverse: what result aren’t you getting that you’d like, or what result are you getting that you don’t want? (Another facet of the “design” part is finding ways to create useful learning conditions — like examples and practice that build on each other, to encourage thought and broaden understanding while not frustrating someone who’s got a job she needs to do. But that’s musing for another time.)
By focusing on the result, you can often avoid getting caught up in technical details — what IBMers used to call “speeds and feeds” — and look at a broader picture.
A simple example: I once headed a project to design training for over 2,000 sales reps working for a consumer products company. Over 90% of the reps had never used computers, but their employer was giving each of them a laptop to replace a dauntingly paper-intensive process.
In three days of classroom training, the reps not only had to learn to use their new, custom sales-force-automation application effectively — they also had to learn computer basics like moving the trackball, single-clicking, double-clicking, and dragging, as well as generalized skills like selecting, editing, copying, and deleting text.
Early on we realized that, for much of the computer interface, it doesn’t much matter what you call something; it matters what you do with it. So, for example, if a rep came to a screen like this (an example, not part of the SFA application):
…we’d ask if the volume was being adjusted for playback. We’d ask how they knew (“because ‘recording’ is clicked”), and we’d ask them to set it to adjust playback volume (“click ‘playback’ instead”).
In other words, we didn’t teach the label “radio button,” though we might mention the term in passing. You don’t have to teach people they can only choose one radio button in a group, because radio buttons don’t allow you to choose more than one. (This led to another design mantra for us: “If you can’t do it, you can’t do it.”)
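That “if you can’t do it, you can’t do it” property can be modeled in a few lines. This is a toy sketch, not anything from the project’s actual application (the class and method names are mine): selecting a radio option automatically replaces any previous choice, while checkboxes toggle independently.

```python
# Toy model of the two control types discussed above.
# RadioGroup: selecting one option deselects the rest (mutually exclusive).
# CheckboxGroup: each option toggles on or off independently.

class RadioGroup:
    def __init__(self, options):
        self.options = list(options)
        self.selected = None  # at most one choice, ever

    def select(self, option):
        if option not in self.options:
            raise ValueError(f"unknown option: {option}")
        self.selected = option  # any previous choice is simply replaced


class CheckboxGroup:
    def __init__(self, options):
        self.checked = {opt: False for opt in options}

    def toggle(self, option):
        self.checked[option] = not self.checked[option]


volume = RadioGroup(["recording", "playback"])
volume.select("recording")
volume.select("playback")   # "recording" is no longer selected
print(volume.selected)      # playback

brands = CheckboxGroup(["Brand A", "Brand B", "Brand C"])
brands.toggle("Brand A")
brands.toggle("Brand C")    # both A and C stay checked
print([b for b, on in brands.checked.items() if on])
```

There’s nothing to teach about mutual exclusivity here because the control enforces it: the rep can’t get the group into an invalid state no matter what she clicks.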
In a way, then, the function of the training was to empower the sales reps so they could do the work they needed to do: update accounts, record sales, solve problems. Most of them weren’t going to develop computer interfaces, so we didn’t need to teach them:
- Low-value terminology like “radio button.”
(We did mention the term, and we were very specific about how instructors taught some terms, mainly so that the sales reps could find answers in online help, and could explain any problems clearly to the company’s help desk.)
- No-value concepts like “you can have only one item selected in a group of radio buttons.”
(We did demonstrate, very briefly, that you can click more than one checkbox, but we did it in context: e.g., what product brands does this store sell? The same was true for using radio buttons in context: what type of store is this? The company’s guidelines said the store was one of six types, and so the reps were used to thinking that way.)
The reps got very excited about the new application; within a week of completing training, they were on the job and freed from reams of paper.
In the context of this rambly post, I’d say that the application might not have been a true robot, but it was a purposeful, decision-making (or -assisting) tool. And like any tool, it needs a worker with a purpose to apply it in context.