How we learn, or, getting testy

The New York Times reports on research suggesting that if you really want to learn, you should take a test.  Pam Belluck’s article cites work by Jeffrey D. Karpicke and Janell R. Blunt recently published in ScienceExpress (linked article is on Scribd).

There's more than one meaning for "battery."

The researchers looked at “elaborative studying” (in this case, working from a text to create your own concept map) and “retrieval practice” (writing a freeform essay after reading the material).  In the latter case, you’re writing without the material in front of you; hence, you’re retrieving information from memory.

Here’s the researchers’ abstract:

Educators rely heavily on learning activities that encourage elaborative studying, while activities that require students to practice retrieving and reconstructing knowledge are used less frequently.

Here, we show that practicing retrieval produces greater gains in meaningful learning than elaborative studying with concept mapping.

The advantage of retrieval practice generalized across texts identical to those commonly found in science education. The advantage of retrieval practice was observed with test questions that assessed comprehension and required students to make inferences. The advantage of retrieval practice occurred even when the criterial test involved creating concept maps. Our findings support the theory that retrieval practice enhances learning by retrieval-specific mechanisms rather than by elaborative study processes. Retrieval practice is an effective tool to promote conceptual learning about science.

This is the sort of thing that’ll end up on the evening news: “Researcher Says Take Tests, Don’t Study.”  The reality is more nuanced, of course.

As Karpicke and Blunt say, “It is beyond question that activities that promote effective encoding, known as elaborative study tasks, are important for learning.”  What they were questioning, in part, is the notion that retrieval of information is “neutral and uninfluential” in the learning process.

Because each act of retrieval changes memory, the act of reconstructing knowledge must be considered essential to the process of learning.

I’m sorry that most reports about this study use the word “test,” one of those terms (like “training”) that’s a kind of conceptual rent-a-truck; people load them up with all sorts of meaning.

Try thinking outside the box.

I know I tend to.  And despite knowing better, when I hear “test,” I have a hard time not picturing the multiple-guess, factoid-shackled artifact that so often is labeled as a knowledge nugget.

In the world of learning at work, we don’t always consider that “test” can refer to something other than a mid-semester quiz.   This, despite the fact that the workplace is full of other, more robust examples of testing.

Like load tests on a server.  Stress tests for a product.  Market testing for a new product (or for a media campaign).  Engineering testing aimed at continuous improvement in a process.

Even if you’re aiming at (allegedly) objective assessment, you can shoot for more than recall of discrete bits of information.  So in Karpicke and Blunt’s research, the final testing  involved both verbatim questions (for “conceptual knowledge stated directly in the text”) and inference questions that required the learner to relate different points in the original content.

It’s interesting that participants in the study couldn’t predict whether their retrieval practice would help them learn:

Students predicted that repeated studying would produce the best long-term retention and that practicing retrieval would produce the worst retention, even though the opposite was true.

One version of the study, as part of the “final test,” had students create a concept map.  Once again, students who engaged in retrieval practice produced better concept maps (by which I assume “more accurate ones”) than did the students whose study included creating concept maps in the first place.

CC-licensed images:
ASVAB scores by Krista Kennedy.
Test-box photo by Dave Blaisdale.


That knowing feeling

Jonah Lehrer at The Frontal Cortex talks about “feelings of knowing” — how we feel sure we know what we can’t retrieve from memory. He’s talking about tip-of-the-tongue things: you can’t quite remember who played the sheriff of Nottingham in Robin and Marian, but you know he had a short last name that started with S.

Lehrer suggests that this “feeling of knowing” is often highly accurate.  (I hadn’t considered this concept before, so I’m glad Lehrer linked to this study (PDF) by Janet Metcalfe.) This comes into play (as he notes) when Jeopardy contestants click the buzzer without (presumably) knowing the answer: they’re betting that they will know it (retrieve it) within five seconds.

And often, they’re right.

The larger point is that we won’t get a genuinely “human” version of artificial intelligence (not to mention more energy efficient computers) until our computers start to run emotion-like algorithms. What Watson needs isn’t a bigger hard drive or some more microchips – he needs to develop feelings of knowing, which will tell him that he probably knows the answer even if he’s still drawing a blank.

For decades, we’ve assumed that our emotions interfere with cognition, and that our computers will outpace us precisely because they aren’t vulnerable to these impulsive, distracting drives. But it turns out that we were wrong. Our fleeting feelings are an essential aspect of human thought, even when it comes to answering the trivia questions on Jeopardy.

In an update, Lehrer links to a later post by Vaughan Bell at Mind Hacks, who sees the early-buzzing of Jeopardy players as a kind of metacognition. “It’s being able to manage your mental resources based on estimations.”
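For the software-minded, here’s a toy sketch (mine, not Lehrer’s, and certainly not how Watson actually works) of a “feeling of knowing” as a decision rule: estimate, before retrieval has finished, how likely the answer is to surface, and commit only when that estimate clears a threshold.  The familiar-terms set and the threshold below are invented for illustration.

def feeling_of_knowing(clue: str, familiar_terms: set) -> float:
    """Crude metacognitive estimate: what fraction of the clue feels familiar?"""
    words = {w.strip(".,?").lower() for w in clue.split()}
    return len(words & familiar_terms) / len(words) if words else 0.0

def decide_to_buzz(clue: str, familiar_terms: set, threshold: float = 0.5) -> bool:
    """Commit to answering before the answer has actually been retrieved."""
    return feeling_of_knowing(clue, familiar_terms) >= threshold

# The actor's name hasn't surfaced yet, but the clue feels familiar enough
# to bet that retrieval will succeed within the five seconds.
clue = "Sheriff of Nottingham in Robin and Marian"
known = {"sheriff", "nottingham", "robin", "marian"}
print(decide_to_buzz(clue, known))   # True: buzz now, retrieve in a moment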

“Facial paralysis makes me a really good judge of character”

Grad student Kathleen Bogart has Moebius syndrome, a neurological disorder that causes facial paralysis: no smiling, no blinking, no lateral eye movement.  A New York Times article, Seeking Emotional Clues Without Facial Cues, looked at her experience and that of others with Moebius.

When she tried working with refugees from Hurricane Katrina, Bogart often couldn’t connect with them.  They didn’t see sympathy or understanding in her face–because she can’t express those things facially.  People in conversations mirror and react to one another, and we’re usually very skilled at detecting and interpreting very small physical signals: a forced smile, a distracted glance.

This is a complicated area.  It’s not necessarily the case that people with similar paralysis can’t recognize emotion, but the inability to mimic is a barrier.  Some people cope through other channels: eye contact, for example, or voice.  The challenge has turned into a research field for Bogart.

I had no special interest in studying facial paralysis, even though I had it; there were many other things I could have done. But in college I looked to see what psychologists had to say about it, and there was nothing. Very, very little on facial paralysis at all. And I was just — well, I was angry.  Angry.  I thought, I might as well do it because certainly no one else is.

One result was a study of how people with Moebius recognize facial expressions (link is a PDF), demonstrating that the ability to mimic the expressions of others is not essential to recognizing their emotional state.  As the Times article suggests, if the strategies that people with Moebius use to understand emotion are “teachable,…they could help others with social awkwardness, whether because of anxiety, developmental problems like autism, or common causes of partial paralysis, like Bell’s palsy.”

The Times website has a slide show in which Bogart talks about having a face that can’t express emotion.


Mind over matter through reinnervation

You can thank my mother for this.  She gives me a subscription to National Geographic for my birthday.  Each year she asks if I’d still like to get it.  Here’s one reason I always answer “yes.”

The January 2010 issue includes A Better Life with Bionics.  Josh Fischman’s article starts with Amanda Kitts (pictured at right), who lost most of her left arm in an auto accident in 2006.  Kitts is one of the people on the front lines of bionics because of her collaboration with the Rehabilitation Institute of Chicago’s Todd Kuiken.

Traditional prosthetic arms, the article says, rely on cables: the individual presses a lever on a harness to make one of three movements of the pincer hand.  In Kitts’s case, Kuiken “rewired” nerves that used to go all the way down her arm.  That’s reinnervation (New York Times graphic).

The nerves started in Kitts’s brain…which holds a rough map of the body…. In an intricate operation, a surgeon rerouted those nerves to different regions of Kitts’s upper-arm muscles…

“By four months, I could actually feel different parts of my hand when I touched my upper arm.  I could touch it in different places and feel different fingers,” [says Kitts.]

That was the start.  Kitts then received a new bionic arm with electrodes that could pick up electrical signals from those muscles.  How does it know which signals mean what?  Because Kitts also has a phantom arm (a set of electrodes controlling a virtual arm in a computer) that RIC’s Blair Lock uses to fine-tune the connection between muscle signal and the desired motion.
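If you want a feel for what that fine-tuning amounts to, here’s a minimal sketch of the general calibration idea, assuming nothing about the actual RIC hardware or software: record a few muscle-signal readings while the person intends each motion, average them into prototype patterns, then match each live reading to the nearest prototype.  Every number and motion name below is invented for illustration.

import math

def centroid(samples):
    """Average the calibration readings recorded for one intended motion."""
    return [sum(col) / len(col) for col in zip(*samples)]

def calibrate(training):
    """Build one prototype signal pattern per motion from calibration readings."""
    return {motion: centroid(samples) for motion, samples in training.items()}

def classify(reading, prototypes):
    """Pick the motion whose prototype is closest to the live reading."""
    return min(prototypes, key=lambda motion: math.dist(reading, prototypes[motion]))

# Hypothetical calibration data: each inner list is one multi-electrode reading.
training = {
    "close hand": [[0.9, 0.1, 0.2], [0.8, 0.2, 0.1]],
    "open hand":  [[0.1, 0.9, 0.3], [0.2, 0.8, 0.2]],
    "bend elbow": [[0.2, 0.3, 0.9], [0.1, 0.2, 0.8]],
}
prototypes = calibrate(training)
print(classify([0.85, 0.15, 0.2], prototypes))   # close hand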

So, how does it do?  Here’s Kitts in the lab.  (Note: there’s no sound in this video.)


“The world in six songs” (sounds good)

Daniel Levitin used to be a record producer and a professional musician.  His fascination with how we grasp music, emotionally and physically, led to a new career as a professor of psychology and neuroscience at McGill University.  He’s followed an earlier book, This Is Your Brain on Music, with The World in Six Songs.

I’m not far into it, but it’s already a “hey, listen to this” experience.  (Want to see the first chapter?)

Levitin contends that music isn’t simply a distraction or a pastime, but “a core element of our identity as a species, an activity that paved the way for more complex behaviors such as language…”

The six songs of the title aren’t specific songs; they’re categories for how we fit music into our lives.  At the start, he says, he was trying to figure out what all the different forms of song–work songs, love songs, counting rhymes, nearly the entire work of Bobby McFerrin–had in common.

Anthropologist Jim Ferguson (no relation that I’m aware of) told Levitin that was the wrong question.

Quoting the great anthropologist Clifford Geertz, Jim persuaded me that the right question to ask, in trying to understand music’s universality, is not what all musics have in common, but how they differ….

it is in the particulars, the nuances, the overwhelming variety of ways we express ourselves that one can come to understand best what it means to be a musical human.

Levitin sees six types of songs as having shaped human nature: songs of friendship, joy, comfort, knowledge, religion, and love.  Interestingly to me, his definition of “song” is “any music that people make, with or without melody, with or without lyrics.”

I like the inherent complexity (and possible paradox) in that.  “Without lyrics,” for example, opens the door for the effect that deliberate rhythm may have had on human behavior and the evolution of the brain.

I also like insights he includes from Pete Seeger.  Pete pointed out that not all music is intended to be popular.

“Among American Indians,” Seeger explained, “a young man got his eye on a girl and he would make a reed flute and compose a melody.  And when she came down to get a pail of water at the brook, he would hide in the weeds and play her his tune… It was her special tune.  A tune wasn’t thought of as being free for everybody.  It belonged to one person.  You might sing somebody’s song after they’re dead to recall them, but each person had a private song…”

In addition, Seeger says, the power of music comes from its combination of form, structure, and meaning.  “Ordinary speech doesn’t have quite that much organization….and this becomes intriguing, something you can remember.”

Levitin suggests that before there was language, the human brain didn’t have the full capacity to learn language.  That capacity emerged as the brain worked with sounds and verbalizations.  The new structure, he says, made possible three cognitive abilities:

  • Perspective-taking: we could think about our own thoughts, and could realize that others have thoughts different from our own.
  • Representation: we could think and talk about things that aren’t present.
  • Rearrangement: we could “combine, recombine, and impose hierarchical order” on things in the world around us.

I’ve got a number of music- or language-related thoughts circulating.  This post is the first verse.