Saturday, July 25, 2009

Scientists Worry

Okay, how could I resist? The New York Times runs an article with the headline "Scientists Worry Machines May Outsmart Man". When I first saw it, I couldn't stop laughing. But it only gets better from there.

"A robot that can open doors and find electrical outlets to recharge itself. Computer viruses that no one can stop. Predator drones, which, though still controlled remotely by humans, come close to a machine that can kill autonomously."

What in this has anything to do with intelligence? Opening doors, maybe. It at least requires a bit of coordination; cats can do it, but most fish can't (though there may be other reasons for that). Being hard to wipe out? Later in the article, viruses are described as having reached a "cockroach" stage because they can't be easily gotten rid of. Yeah, sort of like cockroaches, or, I don't know, viruses? The least intelligent (arguably) living organisms in existence? And killing autonomously -- viruses can do that too, as can every predatory creature in existence, no matter how dumb.

But what really takes the cake is the caption on the photo of a robot that can plug itself in when its batteries run low. I quote: "This personal robot plugs itself in when it needs a charge. Servant now, master later?"

Ahahahahaha! Sorry. I thought I was in a '60s B-movie for a second. I usually put a good deal of trust in the NYTimes to come up with stories about things that are important, or at least coherent. This time, they've really let their readers down. Which is a pity, because the second quarter of the article is actually somewhat meaningful. When they talk about what actual scientists are actually afraid of, it becomes clear that it's not about intelligence at all.

In fact, they're thinking about two main problems: first, the rather pedestrian worry that computers will take human jobs (it's happened before, it'll happen again, and we seem to have survived somehow); and second, the far more important concern that people will have difficulty adapting to advances in technology. This is really something worth writing an article about -- the idea that our current social structures won't stand up to rapidly advancing technology, AI or otherwise. It's the same problem the music industry has been struggling with for years now, without anything even remotely resembling intelligence involved (on either end, heh). They take the time to publish a few interesting questions the scientists came up with: "What could a criminal do with a speech synthesis system that could masquerade as a human being? What happens if artificial intelligence technology is used to mine personal information from smart phones?" Unfortunately, they seem more interested in sounding alarms than actually answering these questions.

After this, the rest of the article is devoted to the usual singularity nonsense, which makes for great sensationalist reading (once) but doesn't really forward the dialogue here. I'm not really the person to pick up the ball the NYTimes dropped, but I'll at least say a little about the issues they should have been talking about.

Computer intelligence isn't really a concern here; in fact, as time goes on, it becomes increasingly clear that the label "AI" exists only to impress and frighten the laypeople (for those who don't know, these days it's mostly statistics). Intelligence isn't the operative variable in determining whether a particular technological breakthrough causes societal problems. What matters is how it can be used in people's daily lives, to do things that are already possible, just differently and more efficiently. It doesn't matter how huge Google's database of personal information is if they don't have efficient algorithms to make sense out of all the numbers. A criminal can already simulate another person's voice with a vocoder or just a talent for imitation, so I'm not so worried about intelligent speech systems. Similarly, I'm not (yet) worried about Ray Kurzweil's singularity, because it's about a completely different mode of existence; it wouldn't so much clash with our society as rewrite it. Someone needs to think about these things, but they're blue-sky next to the real and present concerns (I won't say dangers) caused by any technological innovation that affects our way of life.
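To make the "mostly statistics" point concrete: a lot of what gets branded "AI" boils down to counting things and multiplying probabilities. Here's a toy sketch (my own invented example, not anyone's actual product) of a naive Bayes spam filter, which is about as statistical as it gets -- the entire "learning" step is tallying word frequencies:

```python
from collections import Counter
import math

# Toy training data: (words, label) pairs. Entirely made up for illustration.
train = [
    ("win money now".split(), "spam"),
    ("cheap money offer".split(), "spam"),
    ("meeting at noon".split(), "ham"),
    ("lunch meeting tomorrow".split(), "ham"),
]

# "Training" is just counting words per class.
word_counts = {"spam": Counter(), "ham": Counter()}
class_counts = Counter()
for words, label in train:
    word_counts[label].update(words)
    class_counts[label] += 1

vocab = {w for words, _ in train for w in words}

def classify(words):
    # Pick the class with the highest log-probability (naive Bayes,
    # with add-one smoothing so unseen words don't zero everything out).
    best, best_score = None, float("-inf")
    for label in class_counts:
        score = math.log(class_counts[label] / sum(class_counts.values()))
        total = sum(word_counts[label].values())
        for w in words:
            score += math.log((word_counts[label][w] + 1) / (total + len(vocab)))
        if score > best_score:
            best, best_score = label, score
    return best

print(classify("cheap money".split()))       # "spam"
print(classify("meeting tomorrow".split()))  # "ham"
```

No reasoning, no intent, no servant plotting to become master -- just arithmetic over word counts. That's the gap between the headline and the machinery.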

All right, enough of this rant. I hope the Times does a bit better next time they decide to cover a computer science conference. Really, very few of us take over the world for a living. I suppose I ought to thank them, though, for demonstrating that the infamous Frankenstein complex is alive and well in the modern age. Here's to more enlightened times; the singularity can't get here soon enough.

(Oh, by the way. This post came pretty quickly after the last one, but make sure to read that one too! It has important news and such.)

1 comment:

  1. This article also got a nod from Paul Krugman's blog a few minutes ago.