Sunday, April 11, 2010

The brains we make

Today I thought I'd write a bit about artificial intelligence. AI is an evocative label, suggesting I, Robot, HAL, your favorite computer that's just like a human (or not quite human, or a little too human, etc., etc.). It captured the hearts and minds of both computer scientists and the general public for decades, from Turing's famous test sixty years ago to the field's heyday in the '70s and '80s. You don't hear that much about AI anymore, though, except from the few true believers sitting in their bunkers and praying for the Singularity (sorry, Mr. Kurzweil). My guess is that this is a direct result of a huge shift in the nature of AI research, one that few people outside the field know about but which has dampened a lot of the wild expectations it used to inspire. So, what was AI, and what is it now? What happened to it?

The old AI was, fundamentally, about logic. The approach was exactly what you'd imagine: AI researchers were trying to program machines to think. Of course, by "think" they meant "reason", and the prevailing attitude reflected a certain view of human intelligence as well: people were basically logical, and any deviation from logic wasn't going to be particularly useful in solving important problems. From a certain perspective, this is completely ridiculous -- take language, for instance. Understanding human language has always been one of AI's primary goals, but we don't work out what people mean by looking up a table of grammar rules and checking which ones apply; our ability to understand each other depends on things like shared symbols, word associations, and social context.
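To make the flavor of the approach concrete, here's a minimal sketch of a rule-based parser. The grammar, the lexicon, and every name in it are toy inventions of mine, not any historical system; the point is just that, in this school, meaning is supposed to fall out of matching hand-written rules:

```python
# Toy rule-based parsing: explicit rules, no statistics anywhere.
# Hypothetical grammar: each part-of-speech pattern maps to a "parse".
GRAMMAR_RULES = {
    ("DET", "NOUN", "VERB"): "simple_sentence",
    ("DET", "ADJ", "NOUN", "VERB"): "described_sentence",
}

# Hypothetical lexicon mapping words to parts of speech.
LEXICON = {
    "the": "DET", "a": "DET",
    "cat": "NOUN", "dog": "NOUN",
    "big": "ADJ",
    "sleeps": "VERB", "barks": "VERB",
}

def parse(sentence: str) -> str:
    """Look up each word's category, then match the pattern against the rule table."""
    pattern = tuple(LEXICON.get(word, "UNKNOWN") for word in sentence.lower().split())
    return GRAMMAR_RULES.get(pattern, "no rule applies")

print(parse("The cat sleeps"))     # simple_sentence
print(parse("The big dog barks"))  # described_sentence
```

Anything outside the rule table simply fails to parse, which is exactly the brittleness that made language such a hard target for this approach.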

Inevitably, this approach led to backlashes; the response to each breakthrough was a cry of "that isn't thinking!" Doing advanced math by brute force? Not really thinking. Playing tic-tac-toe unbeatably? Not really thinking. The ELIZA program, which convinced many of the people who talked to it that they were chatting with a human psychotherapist, was obviously not thinking -- its algorithm was ridiculously simplistic. Eventually, researchers came to the conclusion that this whole thinking thing was more complicated than they'd given it credit for, and might just be a matter of perspective anyway. And so began the decline of what's known as "rule-based AI" (or, as it's sometimes called these days, "good old-fashioned AI").
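How simplistic? Here's a sketch in the spirit of ELIZA -- my own simplification, not Weizenbaum's actual script -- showing how far canned templates alone can carry a conversation:

```python
import re

# ELIZA-style response rules: the first regex that matches the input
# fires a canned template. There is no understanding anywhere in the loop.
RESPONSES = [
    (re.compile(r"i feel (.*)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"my (.*)", re.IGNORECASE), "Tell me more about your {0}."),
    (re.compile(r".*"), "Please go on."),  # catch-all when nothing else matches
]

def respond(utterance: str) -> str:
    """Return the canned response for the first pattern that matches."""
    for pattern, template in RESPONSES:
        match = pattern.match(utterance)
        if match:
            return template.format(*match.groups())

print(respond("I feel lonely"))  # Why do you feel lonely?
print(respond("Nothing much"))   # Please go on.
```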

Meanwhile, another school of thought was on the rise. In contrast to the top-down approach of rule-based AI -- making machines think like people -- these researchers thought it would be just as effective, and a lot easier, to make machines act like people. The basis of this approach is statistical rather than logical. If we start a sentence with "the cat", what are the likely next words? If a chess master reaches a certain board position, how likely is he to win? The computer doesn't need to know why certain verbs are connected with cats and others aren't, or why chess masters prefer one move over another. It just needs to know that they are, and do as they do. This is often referred to as "machine learning", but it's learning only in the Pavlovian sense: unthinking response to fixed stimuli (and there's that word "thinking" again). It has proved very powerful in various areas, chess most famously. At the same time, it's not clear that it really merits the label "AI", since it's not about intelligence anymore so much as statistical models and computing power.
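The "the cat" example can be made concrete in a few lines. Below is a minimal sketch of the statistical approach: a bigram model that counts which word follows which, with no idea why. The tiny corpus is made up for illustration; real systems train on millions of words:

```python
from collections import Counter, defaultdict

# Made-up training corpus, already tokenized.
corpus = "the cat sat . the cat ran . the dog sat .".split()

# Count, for each word, what followed it.
follows = defaultdict(Counter)
for prev, word in zip(corpus, corpus[1:]):
    follows[prev][word] += 1

def likely_next(word: str):
    """Rank candidate next words by how often they followed `word` in the corpus."""
    total = sum(follows[word].values())
    return [(w, count / total) for w, count in follows[word].most_common()]

print(likely_next("the"))  # [('cat', 0.67), ('dog', 0.33)] -- roughly
print(likely_next("cat"))  # [('sat', 0.5), ('ran', 0.5)]
```

The model "knows" that cats sit and run, in the sense that it can reproduce the pattern; whether that counts as knowing anything is exactly the question.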

I can't say for sure that the decline in general enthusiasm for AI is linked to the death of rule-based AI and the ascendancy of statistics, but one thing is certain: the old grail of "strong AI" (an AI that thinks like a human) isn't even on the table anymore. Machine learning might yet produce a computer that can pass the Turing test, but it would be more likely to have just figured out how to string words together in a convincing way than to actually understand what it was saying. Of course, this is a fuzzy line, and I've previously espoused the view that anything that appears to be sentient should be assumed so until proven otherwise. Even if in principle we're not shooting for strong AI anymore, we may end up with something that looks a whole lot like it. Nonetheless, I think some of the big dreams and mad-scientist ambition went out of the field with the regime change. And as someone who enjoys logic a whole lot more than statistics, I can't help but feel a bit nostalgic for rule-based AI.

So, why did I bring all this up in the first place? Mainly because a recent news story suggests that maybe rule-based AI isn't as dead as all that. When you lay things out in such an obvious dichotomy, someone's going to try to get the best of both worlds, and a scientist at MIT has just announced that he's doing exactly that. As you might have guessed from the length of this rant, the subject has been near and dear to my heart for a long time, and I'm really excited about the prospect of bringing some of the logic, and maybe even some of the intelligence, back into AI. Is it time to get excited again? Did the Singularity crowd have the right idea after all? (Probably not.) Are we on the verge of a bigger, better AI breakthrough? I'm completely unqualified to answer these questions, but the fact that there's reason to ask is heartening in itself. Spread the word, and warm up your thinking machines: AI is getting cool again.
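The news story doesn't spell out the technical details, so the following is pure speculation on my part, but one way to picture "both worlds" is symbolic rules that carry statistical weights: the structure comes from logic, the numbers from data. Every name here is hypothetical:

```python
import random

# Hypothetical hybrid sketch: the rules are symbolic and human-readable,
# but in a real system their weights would be learned from data,
# not written in by hand as they are here.
WEIGHTED_RULES = [
    # (weight, category predicted, context it applies to)
    (0.7, "NOUN", "the __"),  # after "the", expect a noun most of the time
    (0.3, "ADJ",  "the __"),  # ...but sometimes an adjective
]

def sample_category(context: str) -> str:
    """Pick among the rules matching `context`, weighted by probability."""
    candidates = [(w, cat) for w, cat, ctx in WEIGHTED_RULES if ctx == context]
    weights, categories = zip(*candidates)
    return random.choices(categories, weights=weights)[0]

print(sample_category("the __"))  # usually NOUN, sometimes ADJ
```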