PC, XT, and me

Wikipedia’s main page today lists, among its “today in history” events, the introduction of the IBM Personal Computer on August 12, 1981. It wasn’t the first, and it probably wasn’t the best, but its open architecture and rapid adoption by business changed the way people thought about harnessing technology.

I especially like that this photo, from a PC World article, shows the ubiquitous manuals in their tidy slipcases.

I never used the original PC (which, as you can see from the picture, didn’t come with a hard disk — only two floppy drives). In late 1983, though, as I started a new job, I received a then-new IBM PC XT.

This technological powerhouse had:

  • A monochrome screen incapable of displaying graphics (other than the ASCII character set).
  • 256 kilobytes of memory (which my boss and I upgraded to 640 thanks to the AST Six Pack).
  • A 10 megabyte hard drive (upgraded after a few years to a whopping 30 megabytes).
  • An external 1200 baud Hayes modem.
  • One of the ubiquitous Okidata dot-matrix printers.

All that for something like $3,500, which would be close to $8,000 in today’s dollars (using the Consumer Price Index to calculate the effect of inflation). You’d be hard-pressed to spend eight grand on a computer today; for that kind of money the Three Bears could each get a MacBook Pro — and if they didn’t go for top of the line, they could probably afford a MacBook Air for Goldilocks.

At the risk of sounding like my great-uncle Rory, talking about hauling wood for railroad ties at a salary of 25 cents per day, it’s astonishing to consider the scale of changes since then.

I’m writing this post on a laptop I bought new for around $1,200. It’s got 2 gigabytes of memory, or roughly 8,000 times the 256 kilobytes the XT started with. (Heck, the cache in my processor has more capacity than the XT did, and I’m ignoring the power of the processor.) And the 120 gigabyte hard drive is about 12,000 times the size of the XT’s original 10 megabyte drive.
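
For the curious, here’s the back-of-envelope arithmetic behind those ratios, as a quick Python sketch. The only assumption is using the XT’s 256 kilobytes as delivered, before the upgrade to 640.

    # Back-of-envelope comparison of the 1983 XT and the laptop this post
    # was written on; all figures come straight from the post.
    KB = 1024
    MB = 1024 * KB
    GB = 1024 * MB

    xt_memory = 256 * KB         # as delivered (later upgraded to 640 KB)
    laptop_memory = 2 * GB

    xt_disk = 10 * MB            # original hard drive (later upgraded to 30 MB)
    laptop_disk = 120 * GB

    print(f"Memory: {laptop_memory / xt_memory:,.0f} times")   # 8,192 times
    print(f"Disk:   {laptop_disk / xt_disk:,.0f} times")       # 12,288 times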

Granted, today’s applications need a lot more memory and a lot more storage. But today’s applications offer a lot more potential than Lotus 1-2-3 and WordStar did, back then at the dawn of time.

On the internet, somebody knows you’re a doc

My current project involves working with statutes and with case law. One of my project partners has built a learning assignment around a court case. Eric Turkewitz has the details (as do many others, including the Boston Globe), but this is the quick summary:

[Stethoscope photo: “Useful medical tool”]

Dr. Robert Lindeman was defending against a malpractice suit in 2007. While Lindeman was on the stand, the plaintiff’s attorney asked if he had a medical blog. He said he did. She asked if he was Flea (posting on the now-vanished drfleablog). He said yes.

The case was settled the following day.

Flea, it turns out, had been blogging before the trial began. He discussed meeting with “an expert on juries” for advice on how to behave on the stand. He also blogged during the trial, commenting on the judge, the sleepy jurors and the appearance of the plaintiff’s attorney.

[Megaphone photo: “Not always as useful a tool”]

Ironically, in a PDF that purports to capture Flea’s site before it was taken down, Flea reports that his lawyer suggested the opposing side “may pull articles from Flea’s ‘legitimate’ web site to use against him.”

This apparently did not cause Lindeman to tell his attorney, “You know, I have a blog, too.”

I don’t know anything about the merits of the court case. I do know that a client needs to help his attorney anticipate potential difficulties. And that blogging, while free, can have costs.

Stethoscope photo by happysnappr / Adrian Clark.
Megaphone photo by LarimdaME / Gene Han.

Blogging about science

P. Z. Myers of Pharyngula wonders, “Where is science blogging going?” His post is a musing about how blogs fit into the overall world of science — one in theory more rigorous than the training / learning / performance arenas I tend to frequent.

He notes that there isn’t much accountability in science blogging.

This is a general problem with solutions that bubble up from the ground rather than being defined from above — they do something very, very well, but it usually isn’t the something that a planner would design, and they often won’t easily do something else that you think they ought to do.

He also suggests that it’s hard to design what’s going to be the next stage. Design, he says, is “a terrible paradigm for adding unexpected newness and potential (which any evolutionary biologist would tell you).”

There are a lot of interesting points of view in the comments on his post as well, like this one from Blake Stacey:

I think it’s important to remember that the nature of the blogosphere is not carved in marble. A few years ago, it didn’t exist. It just is the way it ended up being. When we want something different, it’ll change. Right now, doing anything other than what we normally do might be like hammering nails with a screwdriver, but when every other tool in your toolbox is broken and getting rustier by the day, you start to wonder how you could modify that screwdriver.

It was this lengthy (and rambly) post on “What Science Blogs Can’t Do” by Stacey at Science after Sunclipse that triggered Myers’ post.

Computer input: a heads-up

For about a year, I’ve used voice recognition software off and on while working at my computer. I’ve been doing more of that lately, and one of the unexpected side effects is that my typing skills seem to have deteriorated a bit.

I learned to type when I was in junior high — it seemed easier than trying to improve the quality of my penmanship. So typing, now known to everyone except Mavis Beacon as keyboarding, may well be my most finely honed psychomotor skill. It’ll be a shame to lose that.

A skillful typist can hit 80 or so words a minute, which works out to about 400 characters and spaces. That’s better than six per second, though that’s a difficult pace to sustain for very long. With my speech recognition software, even speaking at a relatively slow pace, I beat that with less effort. And in fact the software is able to keep up with a virtually normal rate of speech.
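
(For anyone checking my arithmetic, here’s the quick version as a Python scrap; the five-characters-per-word figure is the usual typing-test convention, not something I measured.)

    # Typing speed: words per minute to characters per second.
    words_per_minute = 80
    chars_per_word = 5                                      # standard WPM convention (assumed)

    chars_per_minute = words_per_minute * chars_per_word    # 400
    chars_per_second = chars_per_minute / 60                # about 6.7

    print(chars_per_minute, round(chars_per_second, 1))     # 400 6.7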

[Photo: the Emotiv EPOC headset]

In today’s New York Times, “Moving Mountains with the Brain” describes a headset that reads brain activity and translates it into onscreen commands. The headset uses electroencephalography technology to pick up electrical signals, interpret them, and convert them to computer-interface actions.

(And who would have expected the voice recognition software to be able to spell “electroencephalography”?)

You can view a demo at the Emotiv Systems site.

The specific technology, in a way, doesn’t matter. Whether the device works by picking up electrical impulses from the brain or from facial muscles, what’s important is that it provides another way for humans to interact with the computer.

The developers of the headset provide training software that includes practice exercises. After less than a minute of training, you can lift an on-screen block with your thoughts. As with the speech recognition software that I use, the headset learns how you think as you use it, and adapts to your specific “configuration.”
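
To make the general idea concrete, here’s a purely illustrative Python sketch of the kind of pipeline described above: read a window of signal data, interpret it, and map the interpretation to an on-screen action. This is not Emotiv’s actual software or API; every function, name, and threshold below is a made-up stand-in.

    # Illustrative only: a toy signal-to-command loop, not the Emotiv SDK.
    import random

    ACTIONS = {"lift": "raise the on-screen block",
               "push": "move the block away",
               "rest": "do nothing"}

    def read_signal_window():
        """Stand-in for a headset driver: return a window of raw samples."""
        return [random.gauss(0.0, 1.0) for _ in range(128)]

    def classify(window, profile):
        """Stand-in for the interpretation step. A real system would use
        trained pattern recognition; this toy compares the window's energy
        against per-user thresholds, which is the adapts-to-your-configuration
        idea in miniature."""
        energy = sum(s * s for s in window) / len(window)
        if energy > profile["lift_threshold"]:
            return "lift"
        if energy > profile["push_threshold"]:
            return "push"
        return "rest"

    # A per-user profile, the sort of thing the practice exercises would tune.
    profile = {"lift_threshold": 1.5, "push_threshold": 1.1}

    for _ in range(5):
        print(ACTIONS[classify(read_signal_window(), profile)])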