Clusters, chains, and part-task sequencing

This entry is part 6 of 21 in the series Ten Steps to Complex Learning (the book).

(Note: this is a continuation of the previous post in this series.
Together they deal with Step 2, sequencing the task classes.)

Clusters, chains, and parts

“In exceptional cases,” van Merriënboer and Kirschner say in Step 2 of Ten Steps to Complex Learning, “it might not be possible to find a task class easy enough to start the training with.”  They’re thinking of very complex training–like, say, complete programs for doctors or pilots. “If you are not dealing with such an exceptional case, you may skip this section.”

Thanks, guys.

I’m extending this discussion of Step 2 (“sequence the task classes”) because, while vM&K say part-task sequencing is an exception to usual practice, I’ve used some of the techniques in less-than-complex situations.

Parts: on the whole, they’re incomplete

“Part-task sequencing” has a special meaning in the Ten Steps approach.  It refers to how you decide which clusters of constituent skills to deal with, in which order, because you see the overall skill–the whole task–as too complex to confront a learner with.

A cluster is a group of interrelated constituent skills that make up an authentic real-life task, even if it’s not the complete task.  If you think of “diagnosing a patient” as the whole task, one cluster might involve conducting physical exams while another might involve reviewing test results and reports from exams done by others.  Each cluster makes sense on its own (it’s something a physician would do in the real world); each is a part of the whole task of diagnosis.

I’m feeling a little sympathy for vM&K.  There’s lots of terminology like “task” that can apply at different levels.  The authors end up with lots of phrases like “whole task” and “part-task sequencing.”  I suppose  the alternative is to try nailing a term to a particular level.  Good luck.

One advantage to clusters is that they reduce difficulty: the learner has fewer things to attend to.  The tradeoff is that the clusters hinder integration (since you’re leaving out some of the skills) and limit opportunities to coordinate the skills.

In a sense, that coordination is headquarters for “the whole is more than the sum of the parts.”  The complex task is not simply what you get when you add up its constituent skills; you also have to retrieve and deploy combinations of those skills–coordinate them–on the job.

Linking the tasks

Imagine the job of a first-level supervisor.  For the sake of example, we’ll look at the managerial parts of the job (as opposed to industry-specific ones).  As a supervisor/manager/leader, you’ve got whole tasks like:

  • Maintain a skilled team (e.g., make sure people have or acquire necessary skills; make sure they get to develop them)
  • Manage the performance of your team
  • Assess the performance of your team (monitor, provide feedback, conduct performance reviews)

You might choose different clusters, and so might I.  You’ll likely agree that each of the three bullets at least implies a related group of skills, and that each cluster might itself be so complex as to need additional breaking down.  Concentrating on one of these clusters at a time, and training within that cluster, is what part-task sequencing means.

And, yes, you’ll still have to integrate the clusters eventually.  Yes, that’s going to take time and cost money.  Complexity is complex.

In forward chaining, you address the constituent skills in a logical order, typically the way they’re performed.  Take the task of “managing the performance of your team.”  In forward chaining, you might work through these constituent skills:

  • Communicate requirements and standards
  • Monitor individual performance
  • Discuss performance with individual
  • Document results of discussion

Backward chaining works in the opposite direction; you’d train tasks in this order:

  • Document results of discussion
  • Discuss performance with individual
  • Monitor individual performance
  • Communicate requirements and standards

With either method, you can use snowballing.  You combine each subsequent task with what’s been taught before.  The forward version would have task classes like these:

  • “Communicate” tasks
  • “Communicate and monitor” tasks
  • “Communicate, monitor, and discuss” tasks
  • “Communicate, monitor, discuss, and document” tasks

Van Merriënboer and Kirschner believe the most effective combination is backward chaining with snowballing.  Since you start near the completion of the task, you need to provide the learner with outcomes from earlier stages not yet covered:

  • “Document” tasks (given: the outcome of “communicate, monitor, and discuss”)
  • “Discuss and document” tasks (given: the outcome of “communicate and monitor”)
  • “Monitor, discuss, and document” tasks (given: the outcome of “communicate”)
  • “Communicate, monitor, discuss, and document” tasks

This last approach, they argue, should be the default mode for part-task sequencing, which in turn is an exception reserved for only highly complex whole tasks.  As for the other combinations:

  • Backward chaining without snowballing is what you do when you lack time for including snowballing.
  • Forward chaining with snowballing is only for part-task practice.
  • Forward chaining without snowballing is for part-task practice when you lack time.
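The two chaining orders and their snowballed combinations lend themselves to a small sketch.  This is my own illustration, not anything from vM&K; the function name `snowball` and the dictionary layout are assumptions, and the four skills come from the “manage performance” example above.

```python
# Sketch of forward/backward chaining with snowballing.
# Each task class trains a growing run of constituent skills;
# in backward chaining, the outcomes of not-yet-trained earlier
# skills must be supplied to the learner as "givens."

def snowball(skills, backward=False):
    """Return task classes as growing combinations of constituent skills."""
    classes = []
    if backward:
        # Start near task completion and work back toward the beginning.
        for i in range(len(skills) - 1, -1, -1):
            classes.append({"train": skills[i:], "given": skills[:i]})
    else:
        # Start at the beginning and add one skill per task class.
        for i in range(1, len(skills) + 1):
            classes.append({"train": skills[:i], "given": []})
    return classes

skills = ["communicate", "monitor", "discuss", "document"]

for c in snowball(skills, backward=True):
    print(c)
```

The first backward class trains only “document,” with the outcomes of communicating, monitoring, and discussing supplied as givens–matching the sequence in the bullets above.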

“Part-task practice” is another specialized term; it’s Step 10 of the Ten Steps, and it’s meant to develop automaticity for recurrent tasks.  That means the procedural stuff you do the same way each time.  It’s how you learned your times tables; it’s how you practice scales on the cello.

Two more combinations

Earlier we talked about whole-task sequencing: each task class presents the whole task, with the early task classes providing simple examples and the late ones providing complex examples.  This post looked at part-task sequencing, something you do when the whole task is too complex for a learner to tackle.  So, naturally,

Whole-task sequencing and part-task sequencing may be combined in two ways, namely whole-part sequencing and part-whole sequencing.

See why I broke Step 2 into two posts?

It’s not as bad as it sounds–if I understand vM&K.  Here’s what I think they’re talking about.  Remember that constituent task for the supervisor, “Manage performance of your team”?  Its four skills (communicate, monitor, discuss, document) might apply in two contexts: coaching someone to improve current performance, and counseling someone to correct a deficiency.  (Or “how to do even better” versus “you need to do better.”)

Here’s how I think that works on a whole-task class for the “manage performance” skill.  This class is made up of relatively simple, straightforward situations.  Individual task problems may involve coaching or may involve counseling.

  • First skill cluster: documenting skill. Learning via backward chaining and snowballing, the supervisor is asked to document various simple counseling and coaching situations.   (The givens include the outcomes of communicating, monitoring, and discussing.)
  • Second cluster: the supervisor now works on problems that involve discussing performance with an employee, then documenting the discussion. (Givens: communicating and monitoring.)  Again, some are coaching situations, some are counseling.

And so on.  This is whole-part because the class involves the whole task (“managing performance”); within the class, we sequence by parts of the performance.

vM&K say you could also have part-whole sequencing.  I take that to mean, in terms of this example, that you have a class where you document easy cases, then medium ones, then difficult ones, and then move on to a class where you discuss and document, and so on.

Since their clear preference is for whole-part (if you have to have part-task sequencing), and since they didn’t provide any examples of their own, I’m going to tiptoe away from Step 2.

Next in the series: Step 3, Setting Performance Objectives.

CC-licensed image of Vasarely works by notalike.

Step 3: performance objectives (the how of the what)

This entry is part 7 of 21 in the series Ten Steps to Complex Learning (the book).

In Ten Steps to Complex Learning, van Merriënboer and Kirschner’s third step is set performance objectives.  (Click for a diagram of all ten steps and the four components into which they fit.)  The main sections of the chapter are:

  • Skill decomposition (or, academic language rides again)–figuring out the constituent skills of the whole task
  • Creating performance objectives
  • Classifying performance objectives
  • Performance assessment

The authors emphasize one point so often, I’m putting it first:

Many instructional design models use performance objectives…as the main input for the design decisions to be made….Instructional methods are selected for each objective, and each objective has its corresponding test item(s)….

This is certainly not true for the Ten Steps.  In complex learning, it is precisely the integration and coordination of constituent skills described by the performance objectives that must be supported by the training program.

In other words, you can’t tie your instructional methods to a specific objective; you have to connect with interrelated sets of objectives.

I’m willing to ride at least part of the way with vM&K, even though there’s some risk of semantics here.  The subject is complex learning, not how to set up a Facebook page.  Many criticisms of traditional instructional design and of formal learning result from the failure of training interventions to address the whole of a complex skill.

That said, much traditional organizational learning has glossed over the reality that complex things are complex.  For every death-by-PowerPoint three-day workshop, there’s a senior executive somewhere who believes that talking is teaching.  He doesn’t want “paralysis by analysis,” though his fervent belief in Best Practices (especially if they come from outside his organization) is often a case of faith without good works.

Skill decomposition, or, what does it take?

The Ten Steps assumes you’re doing a needs analysis (and if you’re not, how the hell do you know what you should be doing?).   Two additional assumptions follow:

…There actually is a performance problem that can be solved by training
[and an] overall learning goal…a statement of what the learners will be able to do after they have completed the training program.

Since the Ten Steps is iterative, the goal helps you break down the complex skill, and the constituent parts of the skill help you refine the overall goal.  As you identify the learning tasks (Step 1) and sequence them (Step 2), you’re uncovering information about the relationship of the constituent skills.

The top-level goal can be elaborate, like their example for patent examiners:

After having followed the training program, the learners will be able to decide upon the granting of patent applications by:

  • Analyzing the application
  • Searching for relevant prior documents
  • Conducting a substantive examination
  • Delivering a search report where the relevant documents that have been found are cited and their relevancy is acknowledged on the basis of the substantive examination, and
  • Communicating the result of the substantive examination to either
    • The examining division so that a patent can be granted immediately, or
    • The applicant so that, at a later stage, a reply can be filed by the applicant and a patent granted, or the application is refused.

vM&K list several approaches to filling out the set of constituent skills.  A hierarchy is an obvious approach: what specific skills are necessary in order to perform the more general skill?  In my earlier example of supervisor skills, “monitoring individual performance” is a prerequisite for the overall skill of “managing performance.”

If you want to know whether to expand the skills on the same (horizontal) level, there are temporal relationships (skill A must be performed before skill B), simultaneous ones (you perform C and D at the same time, like shifting gears and steering the car through a turn), and transposable ones (perform skills E and F in any order).

I’m not sure why this amount of detail appears here.  My guess (and that’s all it is) is that the relationship information will come into play with the cognitive strategies and rules.


The chapter mentions heterarchical (peer-to-peer) relationships between skills on the same level, and also what vM&K call reitiary but others seem to call retiary relationships–networks of skill or “competence maps.”

The thumbnail on the right links to a competence map for cognitive behavioral therapy, a form of psychotherapy, though the elements on the map aren’t interconnected.

Data: gaining as you gather

Analyzing the skills helps highlight which are similar.  Similarity may facilitate learning or may impede it (when “similar” means “with tiny but important differences”).  vM&K recommend observing skilled performers as they work first on simple whole tasks, and then on more complex ones.

So, no, you don’t sit down and just have the subject-matter expert tell you about things.  As Tom Gilbert often noted, to figure out her tennis skills, you have to watch Martina Navratilova’s feet. Martina has lots to say that’s useful, but the whole skill is serving, not just foot-placement or racquet-holding. And the expert performer is often unaware of how she applies complexes of skills.

This chapter offers suggestions for gathering data.  The authors refer to objects–things that the performer focuses on or changes.  A shift from one object to another suggests a shift in constituent skill.  If the supervisor checks the project schedule and then a worker’s weekly report, the shift might be from “determining individual’s goals” to “tracking individual’s progress.”

Another source of data: the tools the performer uses.  The chapter gives an example of a patent examiner who switches from a highlighter to a search engine to a word processor as he works on a report.  Possible constituent skills are “analyzing applications” (the highlighter), “performing searches” (the search engine), and “writing results of pre-examination” (the word processor).

In addition to working with skilled professionals who demonstrate the desired performance, there’s value in working with deficient performers as well.  The gap between optimals and actuals (as Allison Rossett phrased it) helps target performance-improvement efforts.

The (A)BCDs of building performance objectives

Van Merriënboer and Kirschner see four elements to a performance objective: the action, the tools and objects, the conditions, and the standards.

Not that different from ABCD objectives (actor, behavior, condition, and degree).  True, they left out “actor,” but I’d say it’s pretty obvious.

The action verb part is what you’d expect.  No “understand,” “know,” or “be aware of” allowed.

You specify the tools and objects for several reasons.  One, if they’re used on the job, they’re part of the task, and learners need to learn them.  (In designing training, naturally, you might come up with simple or low-fidelity versions of some of the tools or objects, especially for the early stages.)  In addition, if some of these items change often, you’re forewarned; you know that the training may have to change as well.

Conditions play a major part in complex learning–think of the military surgeon who must perform not only in a well-equipped hospital but also in suboptimal conditions closer to the front.
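To make the four elements concrete, here’s a minimal sketch of an objective captured as a data structure.  The class name, field names, and the example content are my own illustrative assumptions, not vM&K’s notation.

```python
# A performance objective as vM&K describe it has four elements:
# action, tools and objects, conditions, and standards.
from dataclasses import dataclass, field

@dataclass
class PerformanceObjective:
    action: str                                   # observable verb phrase; no "understand" or "know"
    tools_and_objects: list = field(default_factory=list)
    conditions: str = ""                          # circumstances under which performance occurs
    standards: str = ""                           # criteria, values, and attitudes

# Hypothetical example drawn from the supervisor scenario earlier in the series.
obj = PerformanceObjective(
    action="document the results of a performance discussion",
    tools_and_objects=["performance-tracking system", "discussion notes"],
    conditions="within two days of the discussion",
    standards="agreed actions recorded accurately and completely",
)
print(obj.action)
```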

In the Ten Steps approach, the final element, standards, involves not only criteria, but also values and attitudes.  That’s where the next post in the series will begin.

Martina Navratilova image adapted from a CC-licensed photo by Chip_2904.

Criteria for objectives–also, values and attitudes

This entry is part 8 of 21 in the series Ten Steps to Complex Learning (the book).

(Note: this is a continuation of the previous post in this series,
because I can’t seem to summarize and comment on one of the
Ten Steps to Complex Learning in a single post.)

Step 3 is “set performance objectives.”  As the introduction and first section of this chapter emphasize, this is an iterative process, not a linear one.  The real-life tasks in which you perform the complex skill help to determine the overall learning goals and the specific tasks that will help achieve them.

In turn, these help refine understanding of the complex skill and the constituent skills that it embodies.

After analyzing (or “decomposing”) the skills, you create performance objectives.  I’ve discussed Van Merriënboer and Kirschner’s actions and the tools and conditions that apply to the objectives.  It’s a bit tough to talk about standards as they describe them.

Keeping to the standards

The Ten Steps sees standards as having three elements: criteria, values, and attitudes.  Criteria means what you think: minimum requirements for things like accuracy, speed, quality, and so forth.

Values indicate that the constituent skill conforms to some set of rules or regulations.  Two examples vM&K offer: “without violating traffic rules” and “taking the ICAO safety regulations into account.”

My feelings are mixed.  I can see the value of this as shorthand (“wiring for this remodeling must meet the National Electrical Code”).  Is there a little game of gotcha on the side?  Or are we acknowledging that in complex learning, there are areas of performance that matter, even if we’re not going to provide instruction related to them?

I really can’t say; this just feels a bit like a junk drawer in the conceptual cabinet of the Ten Steps.

Feelings about attitudes

If values are the junk drawer of instructional design, attitudes are like scribbling “Get organized!” on a to-do list.  The Ten Steps doesn’t define “attitudes,” but says they’re “subordinate to, but fully integrated with” constituent skills.

Apparently we’ll know them when we see them.  However, they won’t be things like “have a client-centered attitude.”  vM&K say this is a non-example: a research librarian doesn’t need to have such an attitude outside of work, nor does he need to have it when doing tasks that don’t involve clients.

So, whatever an attitude is, it’s not an enduring part of your personality.  I actually think there is such a thing as attitude; I just don’t think  you can influence it directly very well.  The Ten Steps seems to agree:

It is only necessary to specify the attitude in the performance objective for these relevant constituent skills. If possible, all observable behaviors that indicate or demonstrate the attitude should be formulated or specified in a way that they are observable!  The standard “with a smile on your face” is more concrete, and thus more observable, than “friendly;” “regularly performing checks” is more concrete…than “punctual…”

“With a smile on your face?”

This is an unsatisfactory nod toward a complex issue.  Think of medical professionals interacting with patients (so-called bedside manner).  Can it be that helping a surgeon demonstrate interest in the person and not just the condition–“Dr. Manoogian, your gall bladder’s in room 5”–might require this close a focus?

Classifying Objectives

Three dimensions apply to the objectives you develop (remember, these are objectives for the constituent skills that make up the overall complex skill):

  • Teach, or not?
  • Non-recurrent, or recurrent?
  • Make automatic, or not?

The easy part is sorting out the objectives you’re not going to include in your training–either because the typical performer already has these skills, or because the objectives are covered elsewhere.  Those that remain fall into four groups.

Non-recurrent skills, you’ll recall, are applied differently to different problem situations.  They involve schema-based problem solving and reasoning.  They require supporting information like cognitive maps during the training, which is the topic for Step 4.

Recurrent skills are those you apply the same way to different problem situations.  They’re the rule-based skills.  In training, these require procedural information.

vM&K state that any prerequisite skills for a recurrent skill are by definition recurrent.  “A recurrent constituent skill,” they say, “can never have non-recurrent aspects!”  Since they say it with both italics and the second exclamation point in two pages, they must mean it.

The same skill, they go on, could be non-recurrent in one training program, but recurrent in another.  Repair of military aircraft in peacetime might be a non-recurrent skill, because there’s time for diagnosis.  In wartime, one of the criteria is to repair or replace as quickly as possible, which could mean that repair becomes a more procedure-driven and thus recurrent task.

Some recurrent skills require a high level of automaticity.  This involves the part-task practice discussed in Step 10.  Some jobs don’t require this type of automaticity (for example, the recurrent patent-examiner task).  Factors that suggest automaticity include:

  • Enabling other skills higher in the hierarchy. Musicians practice scales, even after achieving a high level of skill, in order to automate basic skills and enable more fluid performance of higher skill.
  • Simultaneous performance with many other skills. Process operators in manufacturing and air-traffic controllers are two types of jobs where the individual reads displays automatically as she analyzes and responds to dynamic situations.
  • High risk in terms of cost, damage to equipment, or danger to life. Pilots and flight attendants practice emergency procedures.

Ten Steps makes a point that not all routine skills need automation.  There’s a cost/benefit consideration — you don’t memorize all the addition possibilities of two numbers from 0 to 999; you do (eventually) automate the skill needed to keep a car moving in a straight line.
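The three classification questions above can be sketched as a small routing function.  This is my own paraphrase of the decision, not vM&K’s wording; the function name and return labels are assumptions.

```python
# The three dimensions from the "Classifying Objectives" section:
# teach or not; non-recurrent or recurrent; make automatic or not.
def classify(teach, recurrent, automate=False):
    """Route a constituent-skill objective to a training component."""
    if not teach:
        # Typical performers already have it, or it's covered elsewhere.
        return "excluded from this training"
    if not recurrent:
        # Schema-based problem solving: needs supportive information (Step 4).
        return "non-recurrent: supportive information"
    if automate:
        # Rule-based AND needing automaticity: part-task practice (Step 10).
        return "recurrent: procedural information plus part-task practice"
    # Rule-based, same way every time, but no automaticity required.
    return "recurrent: procedural information"

print(classify(teach=True, recurrent=True, automate=True))
```

Note that the “double classified” emergency-shutdown example later in the post wouldn’t fit cleanly into one branch: it mixes recurrent and non-recurrent aspects of the same skill.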

Twofers

“Rare situations” exist, according to vM&K, when you’d choose to automate a non-recurrent skill.  They use “double classified” for what seems to be combinations of recurrent and non-recurrent skills, like their example of shutting down a power plant in an emergency.

The shutdown can occur in many ways, depending on circumstances (non-recurrent), but must follow specific procedures (recurrent).  This is an expensive decision and often requires high-fidelity simulation.  In addition, the authors say that learners should be explicitly told that they’ll switch from automated mode to problem-solving mode at times.

The things you left out

Remember that “category” of objectives that you won’t be teaching?

If learners have already mastered a particular constituent skill in an isolated manner, this is no guarantee that they can carry it out in the context of whole-task performance.  Performing a particular constituent skill in isolation is completely different from performing it in the context of a whole task, and automaticity of a constituent skill that has been developed through extensive part-task practice is often not preserved in the context of whole-task performance.

Which is to say that when Bruno gets a perfect score on the loan-application system, it doesn’t necessarily mean he can use it while interviewing a live loan applicant at the bank branch.

Objectives and assessment

It’s pretty obvious that clear, observable objectives relating to clusters of constituent skills that make up the complex skill have many benefits.  You can develop tools to help learners do self-assessment.  You can provide support for a peer, who can help identify areas of improvement (and whose own performance can benefit from helping the other person).

In assessment, values and attitudes are usually measured narratively, or through qualitative scales (very poor, poor, acceptable, good, excellent).

vM&K acknowledge the potential burden of a highly detailed assessment, which is virtually a necessity for complex skills.  They recommend self-assessment and peer assessment.  They also suggest a development portfolio, a collection of assessments for all learning tasks.

This details what the learner’s done and how well he’s done it.  He can choose his next learning tasks based on this information.

In the next post, we’ll (finally) move from the learning task component to the supportive information component.  The corresponding steps are designing supporting information (Step 4), analyzing cognitive strategies (Step 5), and analyzing mental models (Step 6).

Step 4: supportive info (by design)

This entry is part 9 of 21 in the series Ten Steps to Complex Learning (the book).

Getting to Step 4, “design supportive information,” feels like a new stage in Ten Steps to Complex Learning.

As the diagram shows, the first three steps relate most closely to learning tasks; steps 4, 5, and 6 relate to the supportive information component of van Merriënboer and Kirschner’s blueprint.

[Supportive information] bridges the gap between what learners already know and what they should know to fruitfully work on the non-recurrent aspects of learning tasks…

Remember, “supportive information” always means support for non-recurrent aspects of complex skills–the things you do differently when handling different problems.  (The things you do the same each time fall under “procedural information.”  It’ll be a few more posts before I get there.)

So what is supportive information?  First, it’s information about how to solve problems in a particular domain (including how the domain is organized).  It includes examples that illustrate such information.  And it includes cognitive feedback on the quality of the learner’s performance.

You could call it the theory for a field.  In fact, this is where well-meant complex training often goes bad.  “We don’t have much time, so we can’t do any hands-on.  Let’s concentrate on the theory.”

The main parts of the chapter:

  • Providing systematic approaches and domain models
  • Illustrating those approaches and models
  • Presentation strategies
  • Cognitive feedback
  • Supporting information in the training blueprint

Strategies and models: teaming up for learning

A systematic approach to problem-solving (SAP) is a cognitive strategy; it helps you perform tasks and solve problems in a given field–systematically.  vM&K will go further into SAPs in the next chapter.  For now, in terms of learning complex skills, a learner might study an SAP, or might work with a process worksheet that guides him through a task.

Mental models also provide support.  They explain the arrangement of the field–via conceptual models, structural models, and causal models.

The two work together: a cognitive strategy isn’t any good if you don’t have a good mental model of the field; and the mental model’s no good unless you have an effective way to solve problems in that field.

The goal of SAPs is to help the learner establish meaningful relationships among new pieces of information, and to establish meaningful relationships between those new elements and what she already knows.  Suggestions from The Ten Steps:

  • When discussing phases, the learning methods should stress sequence and consequence.  You do job aid analysis after task analysis, because you need to know details about the tasks; you do job aid analysis before designing learning material, because job aids eliminate memorization.
  • When discussing rules of thumb (a significant part of many cognitive strategies), the learning methods should stress cause-effect relationships–effect being the goal of the learner, and cause being what the learner needs to do to bring about the effect.

Those seem closely related to me.  My guess is that the first bullet (“temporal organization” in vM&K’s terms) has to do with broader processes, while the “change relationship”  deals more specifically with decisions and actions.  That at least aligns with the idea that you have both high-level or global SAPs and more detailed ones.

Keeping the model in mind

If SAPs are how experts do things, mental models are how they see things.

Conceptual models help answer the question, “What is this?”  A financial advisor needs to understand the difference between stocks, CDs, options, bonds, mutual funds, 401Ks, 403Bs, IRAs, SEPs, and other types of investment forms and structures.  Some instructional methods to facilitate this:

  • Analyze a particular idea into smaller ideas (what kinds of tax-deferred accounts are there?).
  • Describe main features or characteristics (an SEP is a tax-deferred retirement structure for self-employed people; a mutual fund is a form of indirect investment).
  • Present a more general idea or organizing framework (connect principles of web page design to overall user-interface design).
  • Compare and contrast similar ideas (an ordinary web site compared with a blog).

Since we’re comparing and contrasting, structural models answer the question, “How is this organized?”  The typical focus is on the spatial or temporal relationship of parts.  What-happens-when models (which vM&K call scripts) might include things like life cycles (for organisms or for processes).  What’s-where models (templates) explain how things fit together. Among methods for aiding learning:

  • Explain the relative location of elements in time and space.  (What are the components of a memorandum of law?  Of a court brief?)
  • Rearrange elements and predict effects.  (What if you move elements within the style sheet?  Will digitized video of an actual performer aid or detract from learning?)

Causal models focus on how elements affect each other.  These models help learners interpret processes and make predictions.  They answer the question, “How does this work?”  The simplest form is a principle (for example, the law of supply and demand).  An interrelated set of principles is a theory (for natural phenomena like evolution) or a functional model (for human-built systems).  Methods for presenting causal models stress relationships:

  • Make a prediction of future states.  (What will happen if we post federal earmarks, with locations, amounts, and sponsoring legislator?)
  • Explain a particular state of affairs.  (Why is customer satisfaction so much higher in District Five than in Districts Three and Four?)

Expertise: you can’t practice theory

The eight bullets above, suggesting ways to present cognitive strategies and mental models, are expository.  They don’t provide any practice.  The bulk of this chapter of the Ten Steps deals with how to illustrate strategies and models, and how to activate prior knowledge and elaboration.

In other words: the support needs to make things concrete, and needs to put learners to work.

In instructional materials, modeling examples and case studies are the external counterparts of internal memories, providing a bridge between the supportive information and the learning tasks.


For providing supportive information, modeling examples illustrate SAPs and case studies illustrate domain models, while these same two approaches may be seen as learning tasks with maximum task support.

One criticism of much advanced training is that it’s too theoretical.  vM&K would say, “Well, duh!”  (Perhaps they’d say something more diplomatic.)  Real cognitive strategies combine theory (strategies and models) with real-life practice; otherwise you’re dealing only with abstractions.

Modeling examples (you remember them as part of learning tasks) bring out the expert performer’s hidden mental processes.   One valuable form is seeing how the expert responds when things don’t work out.  How does she respond to the problem?  What does she see, what does she try?  Psychologists in training, for example, study videotaped sessions in which experienced therapists demonstrate specific techniques and strategies.

Case studies, according to the Ten Steps, embody the types of models mentioned earlier.  A programmer might examine a successful interface to develop models for concepts like user-friendliness, metaphors, dialog boxes, and so on.  A structural-model case study might have architects exploring a building and analyzing how well the materials used fit the original purpose.

I’m going to stop here.  Next time, I’ll look at how to match models and case studies with specific types of presentation.  I’ll also go into elaboration, a key requirement for supportive information, and into the way that supportive information in general fits into the Ten Steps blueprint.

CC-licensed sales-strategy image by bschmove.
CC-licensed affinity diagram by Rosenfeld Media.
CC-licensed bridge photo by tour of boring.

Learning to learn (an elaboration)

This entry is part 10 of 21 in the series Ten Steps to Complex Learning (the book).

As with the previous post in this series, I’m on Step 4, “design supportive information,” of van Merriënboer and Kirschner’s Ten Steps to Complex Learning.

To recap briefly, supportive information includes cognitive strategies (different ways that experts go about dealing with problems in a domain) and mental models (the conceptual maps showing how parts of the domain are organized).

This post will summarize the four remaining topics in Step 4:

  • Presentation strategies–how to help people learn by providing supporting information
  • Elaboration–what it is and why it matters
  • Cognitive feedback–another element in learning to learn
  • Positioning–when to provide the theory, when to provide feedback

Deduction, induction–your thoughts?

A deductive approach moves from the general to the specific.  As vM&K note, it’s the default for a lot of technical training: the theory of widgets, the principles of negotiation, Maslow’s hierarchy.

Deduction is hard for novices.  I’d say it’s nearly impossible.  Since by definition they’re new to the field, they have little if anything already in their mental inventory to connect with the new information.

If you wanted to learn Scottish Gaelic and I began by explaining lenition, slenderization, and epenthetic vowels, that’d be taking a deductive approach. Not only would you likely be confused, you couldn’t go into a pub on Lewis and order a drink in Gaelic.

Deductive approaches make sense, the Ten Steps suggest, in certain circumstances:

  • Limited time
  • Learners already knowledgeable about the domain
  • No need for deep knowledge (which certainly applies to my own skill with Scottish Gaelic)

Inductive learning: the specific start

Inductive approaches start with concrete examples and work toward general cases.  Where the deductive approach is expository (“principles of widget design”), the inductive approach is inquisitory (“What do you like about the layout of Amazon’s home page?  What do you like about the layout of Zappos’?”).  Examples and models are like stepping stones to the more general concept.

Maat was as much the personification of justice as a goddess.

What’s going on is that the specifics help to activate what the learner already knows, since it’s easier to grasp a concrete example than a general case.  Using analogies and models, the learner makes connections between what she’s already learned and the new information at hand.

Inductive approaches are inevitably more time-consuming, especially if there’s a great distance between the examples and the broad concept they’re part of.  A short-distance example might be seeing photos or videos of different dogs in order to come up with the broad concept of dogs.  That’s less challenging than seeing examples of behavior and coming up with the concept of “justice.”

Even so, the Ten Steps advocates making induction the default strategy for complex learning.

Guided discovery: you’re on your own

Guided discovery is a third approach.  There’s no presentation; the learner independently identifies and articulates the general information.  vM&K make a distinction between pure discovery and guided discovery; the latter has leading questions and other prompts.

You can see how a detailed serious game, simulation, or virtual world could serve as a vehicle for guided discovery–the structure of the environment, the information available, the decisions to make can all provide opportunities for the learner.

When to consider this approach?

  • Ample time
  • Learners with well-developed discovery skills
  • A need for deep understanding

This chapter discusses ways to promote “activation of prior knowledge” as well as elaboration of new information–for example, through epistemic games (here’s an article; here’s a commercial website).

(“Epistemic forms” have to do with how knowledge is organized and how different facts and concepts are related.)

Let’s do more with elaboration

vM&K say (again) that supportive information–those cognitive strategies and mental models–provides a bridge between what learners know already and the new information they’re working with.  Along with induction, a key learning process is elaboration.

This means in part that the learner searches his memory for ways to understand and connect with the new information.  “Oh, that’s kind of like when I…”  This links what he already knows to the new material, and does so consciously.

The theory is that elaboration aids learning because the more connections that exist (within the parts of the new material, and between the new and what’s already known), the more readily he can retrieve and apply what’s new.

Which also means that a person can learn to elaborate–for example, to deliberately work at incorporating new information, to ask how it might apply in other contexts, and so forth.

Elaboration and induction can produce new cognitive schemas to guide problem-solving and reasoning about a domain.  Usually that guidance is specific, as if you’re following a mental checklist.  (“I saw a problem like this five years ago–let me think, that was for an executor who was transferring funds from a 403(b)…”)

Performers sometimes activate certain schemas often enough that they become automatic.  vM&K don’t think this is all that common, but it may produce the tacit knowledge we think of as the specialist’s knack.

People construct advanced schemas through elaboration or induction, and then form cognitive rules from these as a function of direct experience.  Afterwards, the schemas quickly become difficult to articulate because they are no longer used as such.  The cognitive rules directly drive performance, but are not open to conscious inspection.

To me, this makes sense, and helps explain the notion that it takes around 10,000 hours to become an expert.  That amount of time–the equivalent of five years at a full-time job–allows for increased range and depth in a field, and thus for richer elaboration.

Of course, some people don’t have five years’ experience; they’ve got one year that they repeat five times.

Cognitive feedback’s not for correcting

Feedback on the quality of performance, in the Ten Steps, is cognitive feedback.  It refers only to the non-recurrent aspects of the tasks, and provides information (such as prompts, cues, and questions) to help the learner construct or reconstruct cognitive schemas so that future performance is improved.

Such feedback encourages the learner to reflect both on the problem-solving process and on the solution found.  vM&K say the purpose is not to detect and correct errors but to encourage self-reflection.

This, they say, relates to the concept of double-loop learning put forth by Chris Argyris and summarized in a blog post by Ed Batista.

How can you encourage this?

  • Ask learners to compare their own problem-solving processes with those in systematic approaches to problem-solving (SAPs), with those of other learners, or with modeling examples.
  • Ask learners to compare their own solutions (or partial solutions) with those in case studies, with expert solutions, or with those of other learners.
  • Provide counter-examples.

In that last case, vM&K give the example of a medical student who decides that a patient has a particular disease based on particular symptoms.  The instructor provides a hypothetical patient who has the same symptoms, some of which may have arisen as side effects of medication (rather than from the disease in question).

What goes where?

Van Merriënboer and Kirschner suggest that in a deductive approach, general information (the theory) appears in the form of lectures, textbooks, and other preset formats.  In an inductive approach, learners often have to search for the theory; they start with the specific examples.

And in a guided discovery approach, the general information is never presented in a ready-made form; the learners must articulate it for themselves.

Meanwhile, cognitive feedback makes no sense until learners have finished a learning task.  You can’t see how you did until you’ve done something.  (Note, as vM&K do, that immediate feedback does make sense for the recurrent aspects of tasks, as we’ll see in a later post.)

CC-licensed photo of a Manly NSW street sign by Jeff Croft.
Image of Ma’at, ancient Egyptian “concept of justice,” from Wikimedia Commons.
CC-licensed “strategies” photo by Old Shoe Woman.