Wednesday, October 6, 2010

EML

I haven't quite figured out what to make of this yet -- "this" being EML, the W3C's proposed Emotion Markup Language -- but I know it's absolutely incredible. I'm going to have to read the whole thing before I can comment (I read the XML spec once when I had too much time on my hands, so this should be light reading in comparison), but at first glance, it's nothing less than a formal language for describing and representing emotional states. I'm torn between awe at the grandiosity of the undertaking and extreme skepticism towards the entire idea of, yes, the standardization of emotions. That said, the skepticism angle is easy and boring, so I'll lay off it for now. I'd rather talk about how awesome it is.

For those of you whose response was "wait, what?" or something along those lines, a bit of explication. W3C is the World Wide Web Consortium, the organization that, among other things, runs the Web. I'll give you a moment to express suitable amusement, incredulity, scorn, etc. But entirely seriously, they're responsible for things like HTML, the backbone of most web pages, and XML, a generic framework for expressing and transmitting data. And now, apparently, they've decided to give emotion a try.

The advantages of having a standard language for emotion are twofold. First, it allows for modularity: some programs can deal with taking input and turning it into EML, and other programs can take EML and respond accordingly. The first type could include, say, facial recognition software that looks at people and decides how they're feeling, or conversation programs that look for signs of anger in internet communication (I made that one up, but it'd be useful, don't you think?). The second could include conversation programs that try to cheer people up, or music selection programs that change a game's soundtrack according to the mood (one of my personal favorite problems). Having a common language for this stuff means that the two types can be decoupled - the conversation program could be made to respond to facial expressions, or analysis of forum posts, or anything else, just by feeding it the analysis produced by the appropriate program. Undoubtedly a boon for all kinds of AI/HCI research.
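To make the decoupling concrete, here's a rough sketch in Python of what the two halves might look like. Everything here is invented for illustration -- the element and attribute names aren't quoted from the spec -- but it shows the shape of the idea: the producer and the consumer only have to agree on the document, not on each other.

    import xml.etree.ElementTree as ET

    def analyze_face(anger_score):
        # Hypothetical "producer": some analysis program (facial recognition,
        # forum-post scanning, whatever) reports its verdict as an EML-style
        # document.  Element names are made up, not taken from the spec.
        emotion = ET.Element("emotion")
        ET.SubElement(emotion, "category", name="anger")
        ET.SubElement(emotion, "intensity", value=str(anger_score))
        return ET.tostring(emotion, encoding="unicode")

    def pick_soundtrack(eml_doc):
        # Hypothetical "consumer": reacts to the document without caring
        # which program produced it.
        emotion = ET.fromstring(eml_doc)
        category = emotion.find("category").get("name")
        intensity = float(emotion.find("intensity").get("value"))
        if category == "anger" and intensity > 0.5:
            return "battle_theme.ogg"
        return "calm_theme.ogg"

    print(pick_soundtrack(analyze_face(0.8)))  # battle_theme.ogg

Swap analyze_face out for a forum-post analyzer, and pick_soundtrack never has to know.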

The second advantage is a bit more dubious. A language implies, to some extent, a model. It shapes the patterns its users apply to the world. Someone who grows up on imperative programming (C, for instance) thinks of sequences of activities in terms of for loops: go through each object, handle it accordingly. Similarly, someone who uses EML, whether that someone is a human programmer or a computer program, is going to have to think in terms of the emotional categories, scales, attributes, etc. that are allowed for in the language. This can be a useful aid to analysis - we need to have some way to break things down if we're going to understand them - but it can also be limiting: for instance, it turns out that C-style for loops are a lot harder to adapt to the world of parallel computing than more functional approaches. And when the thing being standardized is emotion, and the people doing the standardizing are the W3C (a pretty big deal, in my opinion), this could have a huge effect on the way future AI researchers think about the world. And, dare I say it, a huge effect on the way future AIs think about the world.
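A toy illustration of the two habits of mind, with Python standing in for C and the "work" reduced to squaring numbers: the imperative version spells out the order of the steps, while the functional version just describes the transformation, which is why it hands off to parallel workers so easily.

    from multiprocessing import Pool

    def handle(x):
        # Stand-in for "handle it accordingly".
        return x * x

    def main():
        items = list(range(10))

        # Imperative habit: walk the sequence, one item at a time, in order.
        results = []
        for item in items:
            results.append(handle(item))

        # Functional habit: say *what* should happen to each item and let
        # something else decide *how* -- here, a pool of worker processes.
        with Pool() as pool:
            parallel_results = pool.map(handle, items)

        assert results == parallel_results

    if __name__ == "__main__":
        main()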

So, yeah, that's my first-glance reaction. I'm sure I'll have more to say once I've actually read the thing. If anyone feels like working through it with me, I'd love to chat about it! Though, given the impenetrability of most W3C specs, I wouldn't blame you if you'd rather not. Either way, I hope I've managed to give a bit of an idea of why (I think) this is exciting/important. If you're interested, stay tuned!

Friday, July 23, 2010

A conclusive disproof of a conclusive disproof of free will

The New York Times philosophy column recently ran an interesting piece about free will. The thesis was that, whether or not one assumes a deterministic universe, one cannot be responsible for one's actions. What caught my attention here wasn't the thrust of the argument - it's simple enough - but the fact that it was presented, not as an opinion or a way of viewing life, but as a logical inevitability. This set my fallacy radar off something fierce, especially since the conclusion was one I disagreed with. So, if you're interested in things like proving we do or don't have free will, follow me for a bit as I take a look at the argument.

The argument, which the author refers to as Basic for some reason, goes as follows (direct quote):

(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

(3) seems like the most obviously objectionable point here, and indeed, the author immediately informs us that "the key move is (3)." He then goes on to restate the argument inductively, justifying (3) rather than simply assuming it. You don't start out responsible for what you are (we can't help the way we're born), and the rest of the argument shows that when we're not responsible for what we are, we're not responsible for what we do. But what we do determines what we become, so responsibility never gets a foothold at any later stage: we can never be responsible for what we are, and the argument holds.

Not bad, not bad. One can always nitpick, but I think the logic's pretty solid here. But, as I'm sure is obvious by now, I'm not convinced. After all, as any logician knows, perfectly valid logic can easily get you to false conclusions: all you need to do is start with false premises. And in this case, that focus on the oh-so-shocking point (3) obscured a more basic problem: point (1).

What's wrong with point (1)? Well, the author restates it a bit more rigorously later in the argument: "When one acts for a reason, what one does is a function of how one is, mentally speaking." A function? Really? The mathematical definition of a function, and the one being used by this author, is a mapping that takes some inputs and produces exactly one output, completely determined by those inputs. In other words - wait a second - point (1) means that our actions are completely determined by our current state. That's nothing less than determinism! The argument was supposed to hold "whether determinism is true or false". But actually, the very first statement in it assumes determinism!
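If you want the distinction in programmer's terms, here's a toy gloss (the charity scenario is mine, not the author's):

    import random

    def act_as_function(mental_state):
        # Point (1) taken literally: the same "way you are" always
        # produces the same action.  Same input, same output -- which
        # is just determinism by another name.
        return "donate" if mental_state["feeling_generous"] else "walk past"

    def act_with_slack(mental_state):
        # What a non-determinist presumably has in mind: the same state
        # leaves more than one action genuinely open.  This is not a
        # function of its input in the mathematical sense.
        return random.choice(["donate", "walk past"])

    state = {"feeling_generous": True}
    assert act_as_function(state) == act_as_function(state)  # always true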

So much for that, then. If you believe that the universe is deterministic, then this is a pretty compelling argument against moral responsibility. There are a lot of those, though, if your basic assumption is that we have no control over our actions. If you don't take determinism on faith, this whole structure has probably proved nothing you didn't already know. It's refreshing to see a bit of rigor being brought into philosophical arguments presented for popular consumption; I wouldn't mind seeing more of this kind of article in the Times, if only because I so enjoy picking them apart. But I hope that people won't be too easily taken in by so-called conclusive arguments (mine included) without a careful examination of the premises.

Okay, then, hope you had fun!

Sunday, April 11, 2010

The brains we make

Today I thought I'd write a bit about artificial intelligence. AI is an evocative label, suggesting I, Robot, HAL, your favorite computer that's just like a human (or not quite human, or a little too human, etc., etc.). It captured the hearts and minds of both computer scientists and the general public for decades, from Turing's famous test sixty years ago to its heyday in the 70's and 80's. You don't hear that much about AI anymore, though, except for the few true believers sitting in their bunkers and praying for the Singularity (sorry, Mr. Kurzweil). It's my guess that this is a direct result of a huge shift in the nature of AI research, one that few people outside the field know about but which has dampened a lot of the wild expectations it used to inspire. So, what was AI, and what is it now? What happened to it?

The old AI was, fundamentally, about logic. The approach was exactly what you'd imagine: AI researchers were trying to program machines to think. Of course, by "think" they meant "reason", and the prevailing attitude reflected a certain view of human intelligence as well: people were basically logical, and any deviation from logic wasn't going to be particularly useful in solving important problems. From a certain perspective, this is completely ridiculous -- take language, for instance. Understanding human language has always been one of AI's primary goals, but we don't work out what people mean by looking up a table of grammar rules and checking which ones apply; our ability to understand each other depends on things like shared symbols, word associations, and social context.
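For the flavor of the old style, here's a toy in Python: knowledge written down as explicit if-then rules, and "reasoning" as mechanically chaining them together. The rules and facts are invented for the example, not taken from any real system.

    # Each rule says: if all the premises are known facts, conclude something new.
    rules = [
        ({"has_fur", "says_meow"}, "is_cat"),
        ({"is_cat", "is_hungry"}, "will_ignore_you"),
    ]

    def forward_chain(facts):
        # Keep applying rules until nothing new can be concluded.
        facts = set(facts)
        changed = True
        while changed:
            changed = False
            for premises, conclusion in rules:
                if premises <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    changed = True
        return facts

    print(forward_chain({"has_fur", "says_meow", "is_hungry"}))
    # includes 'is_cat' and 'will_ignore_you'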

Inevitably, this approach led to backlashes; the response to each breakthrough was a cry of "that isn't thinking!" Doing advanced math by brute force? Not really thinking. Playing tic-tac-toe unbeatably? Not really thinking. The Eliza program, which convinced hundreds of people that they were talking to a human psychologist, was obviously not thinking -- its algorithm was ridiculously simplistic. Finally, researchers came to the conclusion that this whole thinking thing was more complicated than they'd given it credit for, and might just be a matter of perspective anyway. And so began the decline of what's known as "rule-based AI".
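To give a sense of how little was under the hood, here's an Eliza-flavored sketch (the patterns are made up for the example; the real ELIZA script was longer, but not fundamentally deeper):

    import re

    # No understanding anywhere: just match a pattern, echo the rest back.
    patterns = [
        (r"I am (.*)", "Why do you say you are {}?"),
        (r"I feel (.*)", "How long have you felt {}?"),
        (r"(.*)", "Please tell me more."),
    ]

    def respond(utterance):
        for pattern, template in patterns:
            match = re.match(pattern, utterance, re.IGNORECASE)
            if match:
                return template.format(*match.groups())

    print(respond("I feel anxious about my thesis"))
    # How long have you felt anxious about my thesis?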

At the same time, another school of thought was on the rise. In contrast to the top-down approach of rule-based AI -- making machines think like people -- these researchers thought it would be just as effective, and a lot easier, to make machines act like people. The basis of this approach is statistical rather than logical. If we start a sentence with "the cat", what are the likely next words? If a chess master reaches a certain board position, how likely is he to win? The computer doesn't need to know why certain verbs are connected with cats and others aren't, or why chess masters prefer one move over another. It just needs to know that they are, and do as they do. This is often referred to as "machine learning", but it's learning only in the Pavlovian sense, unthinking response to fixed stimuli (and there's that word thinking again). It's proved very powerful in various areas, chess most famously. At the same time, it's not clear that it really merits the label "AI", since it's not really about intelligence anymore so much as statistical models and computing power.
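The "the cat" example, in miniature (the three-sentence corpus is obviously invented; real systems do the same thing over vastly more text, with more context than a single previous word):

    from collections import Counter, defaultdict

    # Count which word follows which -- no grammar, no meaning, just tallies.
    corpus = "the cat sat on the mat . the cat ate the fish . the dog sat".split()

    counts = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        counts[prev][nxt] += 1

    def likely_next(word):
        # Turn raw counts into probabilities for the most common followers.
        total = sum(counts[word].values())
        return [(nxt, n / total) for nxt, n in counts[word].most_common()]

    print(likely_next("cat"))  # e.g. [('sat', 0.5), ('ate', 0.5)]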

I can't say for sure that the decline in general enthusiasm for AI is linked to the death of rule-based AI and the ascendancy of statistics, but one thing is certain: the old grail of "strong AI" (an AI that thinks like a human) isn't even on the table anymore. Machine learning might yet produce a computer that can pass the Turing test, but it would be more likely to have just figured out how to string words together in a convincing way than to actually understand what it was saying. Of course, this is a fuzzy line, and I've previously espoused the view that anything that appears to be sentient should be assumed so until proven otherwise. Even if in principle we're not shooting for strong AI anymore, we may end up with something that looks a whole lot like it. Nonetheless, I think some of the big dreams and mad-scientist ambition went out of the field with the regime change. And as someone who enjoys logic a whole lot more than statistics, I can't help but feel a bit nostalgic for rule-based AI.

So, why did I bring all this up in the first place? Mainly because a recent news story suggests that maybe rule-based AI isn't as dead as all that. When you put things down in such an obvious dichotomy, someone's going to try to get the best of both worlds, and a scientist at MIT has just announced that he's doing exactly that. As you might have guessed from the length of the rant, this is something that has been near and dear to my heart for a long time, and I'm really excited about the prospect of bringing some of the logic, and maybe even some of the intelligence, back into AI. Is it time to get excited again? Did the Singularity crowd have the right idea after all? (Probably not.) Are we on the verge of a bigger, better AI breakthrough? I'm completely unqualified to answer these questions, but the fact that there's reason to ask is heartening in itself. Spread the word, and warm up your thinking machines: AI is getting cool again.

Monday, February 15, 2010

What we talk about when we talk about business

Every once in a while, my Japanese class delivers an irritating reminder of just how ingrained sexism is in language. Of course, this is at least as true in English as in Japanese -- it's nearly impossible to talk about a person in English without using gendered pronouns like "he" or "she". Japanese, thankfully, generally doesn't have this problem, but it has its own set of issues.

I have the misfortune to be taking "business Japanese" this semester, and today's scenario was a discussion between a V.P. and some subordinates. When men and women are expected to say the lines differently, the teacher writes the changes on the board (usually, though not always, the lines in the textbook are the "male" versions). Usually, these are just small modifications to sentence endings; casual Japanese has distinct "male" and "female" speech styles, though as far as I can tell it's acceptable in modern Japanese for anyone to use the "male" style, or something in between. Today, though, we had a particularly egregious change: in the textbook, the V.P. referred to a subordinate with the familiar suffix -kun, but the teacher asked the female students to use the more respectful -san, and when a student used -kun anyway she was "corrected". Apparently it's disrespectful for a woman to use a familiar term of address to (presumably) a man, even if he's working under her.

Of course, I don't know how representative our teacher's opinions are, or what would be considered acceptable in a real modern Japanese business. I suspect there are plenty of places where these rules don't apply -- during my stay in Japan, for instance, my whole host family spoke in the "male" style. But it was an interesting, and to me rather blatant, example of how sexism comes across in language, even when the subject of gender is nominally unrelated. When speaking or writing in our native language, we're often too close to see the assumptions we make, but when we take a step back they come across loud and clear. Maybe next time I'll try reading the "other side's" lines; after all, it would still show that I understood the dialogue. If I do, I'll let you know how it goes.