Lovin’ Bloom

Reader advisory: this is “repurposed legacy content.” In other words, I’m recycling something I wrote elsewhere.

Quite a while ago, someone in a discussion forum said:

“The first time I met Bloom’s taxonomy terms, I was appalled.  Know, understand, apply, analyze, evaluate are wide open terms with no accountability that would never have been accepted in any RFP that I’d ever seen as a standard of service level commitment or project achievement.”

This, with slight editing, was my reply:

Well, now. This is kind of like criticizing the poet Homer for not painting better pictures.

Objectives, not objections

I can’t help that [this person] was “appalled,” but I’d bet $25 against a box of stale doughnuts that legions of trainers have been thankful for Bloom’s levels, at least for the cognitive domain, and especially for concrete examples to use as models for their own objectives.

Since Bloom and his colleagues called it a taxonomy and not a proposal for a service-level agreement, the first question would be whether their system does in fact provide categories for a body of knowledge.

You may disagree about whether there ought to be six main categories or five or seven, and you may argue with the terms or the subcategories, but it’s hard to dispute that this system provides categories.

For general usefulness in training, if I had $100 in award money to dole out, I’d send $75 to Bloom and $15 to Kirkpatrick, who’s managed an entire career off of a two-page notion.

(I’d keep the rest for expenses.)

Imagine your client says, “I want people to understand the SHILLELAGH System.”  You could get into a Talmudic discussion about what the client means (does the Talmud actually deal with shillelaghs?).  Or you could work with the client to clarify the meaning–in a discussion, not an RFP.

  • “Do you want people simply to be able to describe the system, label parts of it, tell what it does?”  (Remembering)
  • “Do you want people to distinguish the Shillelagh System from its predecessor, the Sassenach system?  Do you want them to describe how it might handle certain situations?  Do you want them to be able to read reports or displays from the system?”  (Understanding)
  • “Do you want people to use the Shillelagh System to enter new orders, track problems, answer customer questions, transfer leprechauns?”  (Applying)
  • “Do you want people to explain the subsystems that make up Shillelagh? Do you want them to describe how data moves from the Banshee database via the Poteen processor into Shillelagh?  Do you want them to identify where the heaviest requests are or the greatest workload occurs?”  (Analyzing)
  • “Do you want them to assess whether we should adopt the Shillelagh system or the competing Napoleonic Nodal system?  Do you want them to make recommendations for using this year’s capital budget as it relates to online ceilis?  Do you want them to produce recommendations to help the senior executive council decide whether to continue work on Shillelagh or outsource to our strategic partners in Bangalore?”  (Evaluating)
  • “Do you want people to develop ideas for new applications within Shillelagh, or for new systems to complement what Shillelagh does?  Do you want them to take data or components from different parts of Shillelagh and develop new reports aimed at greater efficiency or productivity?  Do you want them to describe new demands that are likely to be made on the database as a result of the merger with Connemara Computers?”  (Creating)

Considering how often in forums like this we see that foolishness about how we remember 90% of what we smell but only 10% of what we eat, or however that goes, I couldn’t stand by while Benjamin Bloom got slighted for not writing functional specifications.

Often, when your client says, “I want X,” that means you owe it to yourself and to your client to get a LOT of clarification about what X means, about how you can tell a good X from a bad one, and about what factors affect how well people can produce Xs in real life.  Then you can work with your client to figure out what X is worth, what getting X might cost, and whether, in Tom Gilbert’s phrase, the getting of X is a worthy accomplishment.

Sometimes that means helping your client see that “training” will not produce valuable results, because the X problem doesn’t arise from a lack of skill or knowledge.

Like Bloom, Bob Mager and Peter Block are neither the be-all nor the end-all, but they’re a lot better than a poke in the eye with a sharp stick.  Or with a binder full of metadata.

Do(ugh)nut photo by gillicious.

7 thoughts on “Lovin’ Bloom”

  1. Will, now we’ve got a real three-sided discussion going. I’m not going to tell anyone how long ago I wrote the original version of this; I see it’s still a topic that invites a range of views.

    You’re right about the credit; Bloom edited the original set. His colleagues included Max D. Engelhart, Edward J. Furst, Walker H. Hill, and David R. Krathwohl. They all kind of play the Peter Pipe to Bloom’s Bob Mager.

  2. Interesting discussion. It illustrates potential misapplication at both ends of the scale. On the one side, the three-part-objective nazi who says the taxonomy is too soft to be accountable. On the other, someone who treats the taxonomy as an all-encompassing cubby of verbs to pluck from to fill gaps in, or to begin, their own objectives.

    Evil dragons live on both ends of this scale. I see Bloom’s as a potential organizational / planning scaffold: it’s not permanent, but absent other indicators or experience, it provides a logical organizing framework. I see the formal three-part objective as another tool in the planning process, but saying that an objective must be written in one particular way tends to bind a foundational element with a set of goofy rules. That encourages ‘creative pouring and bending’ of the actual goals and performance into a mold that doesn’t accurately represent what we really want to measure.

    Take, for example, a goal that a learner value a particular set of rules. If our goal is that the learner cares about something enough to use it, most designers will reject that goal, since it’s tough (impossible) to measure, and will make up some three-part’ers to substitute for it or represent it in terms of observable / measurable performance. This can be done right, but most purists will throw out the baby and twist the goal. If this is where we start… you can see where it usually ends up.

    I support the taxonomy as a tool. I support measurable three-part objective construction as a tool. These are great formal process scaffolds. Personally, I take issue with those whose primary position is the establishment of ‘proper objectives’ that miss the mark and then carry into the course to be displayed proudly as advance organizers. /facepalm.

  3. Steve, I’ve seen lots of people digging verbs out of that metaphoric cubby. On a project last year, I was told I didn’t understand what a given verb meant–because I was using it for analysis when it obviously was for synthesis.

    I had the fun of showing where the verb fit in the organization’s official cubby.

    I don’t often get into “care enough to use it” goals–I list my religion as “Reform Behaviorist”–but I get the point. I might settle for having you use the rules, even if it’s just to show you can. My own sense of what I can influence beyond that is low.

    I agree completely with the last paragraph. I see “learning outcomes” (my phrase, not necessarily anyone else’s) as what a course needs: “When we’re done, you’ll be able to create a web page from scratch.” Or, “When you’re done, you’ll be able to try getting a date, in Gaelic.”

    If you think “from scratch” includes building your own computer, maybe you should take a different class.

  4. Haha – guess I should finish expressing my thoughts before hitting submit :)

    We’ve had a lot of recent discussion around the ‘value it’ goal (plenty of our programs have this as an underlying goal – above all, we want people to value the rules enough not to do X, Y, or Z), and it has resulted in a discussion around measuring this NOT at the course / performance-solution level but at the organization level: we baseline where we are now on Alcohol Related Incidents, Sexual Harassment Claims, and the like, then measure how well our proposed solution has affected the end result. If we can isolate our data well enough, run targeted surveys, and so on, we SHOULD be able to tell whether or not we’ve hit the ‘value it’ mark.

    You can’t realistically expect to get this with a ‘fast food drive-thru’ assessment.

    I guess my point is that we should keep the overall goal in mind and scaffold around that goal with the valuable measurables we can put into action. Too often I’ve seen a bent objective completely displace the goal, and the results end up invalidating the effort. If we cannot measure what we ACTUALLY want to measure within the product, we need to be OK with measuring it elsewhere. Measurement for measurement’s sake seems like a futile exercise to me.

    This isn’t to say that we can’t derive abstractions. But if the abstractions don’t support the actual goal, we’re wasting our time.

    I’ve been adjusting a simplified model for representing the relationships among conceptual nuggets, skills, values, and tasks. It’s simplified and clear (not all-encompassing), but the point is to shift the focus away from the task level and toward the concept level in order to support task performance.

    This doesn’t always work… But at the foundations of organization-wide performance, it makes sense.

    http://www.xpconcept.com/conceptRelationship.jpg
