Friday, October 7, 2011
Today is Ada Lovelace Day. The internet said it, so it must be true. And that means that we get to spend some time celebrating one of the single most awesome people in history. Ada Byron, Lady Lovelace, for those who haven't heard the tale, was the daughter of Lord Byron, the famous dissipated aesthete. Under the careful tutelage of her psychotic mother (you'd have to be pretty crazy to hate poets and have Lord Byron's kid, though then again the two might be related), she grew into both the manic-depressive romantic heroine any child of Lord Byron should be, and a hero of SCIENCE! in an era when this was not a thing that happened.
A decade or two after Mary Shelley invented science fiction with Frankenstein, Ada Lovelace was working with Charles Babbage on figuring out the World's First Computer. By virtue of this she became, by most counts, the World's First Computer Programmer. The fact that the computer didn't actually exist doesn't really diminish this at all; in fact, it makes it even greater, since it does exist now and we know it actually would have worked. This is both 1) the most steampunk thing ever and 2) incredibly awesome. Somehow I guess there are still people who believe the "women can't do computer science" thing out there, despite such sterling examples as Grace Hopper, Barbara Liskov, and my own advisor. So, for those poor benighted people, YOU'RE DOING IT WRONG. And for boys and girls in CS looking for someone to admire, it's hard to do better than Ada Lovelace. Happy Lovelace Day!
P.S.: Also, there is an amazing webcomic miniseries about Lovelace and Babbage that you all should read.
Tuesday, August 30, 2011
Animals are tasty, but hurting them is wrong(-ish)
Note: If you're familiar with the common arguments for and against vegetarianism, this may be a boring post.
I found myself in an interesting argument last night, in which my position was approximately the following:
Assertion: Animals in some measure are capable of feeling pain.
Assertion: Causing unnecessary pain to animals is undesirable.
Conclusion 1: I might be a better person if I were a vegetarian.
Of course, there are various arguments for and against vegetarianism, and one can't expect a group of people (or at least, a group of people chosen on a non-animal-related basis) to agree with this conclusion unanimously. So I chose a second point to try to prove:
Conclusion 2: All other things being equal, a world in which animals are not hurt is preferable to one in which they are hurt.
Somewhat to my surprise, this point also did not go uncontested. Some of the counterarguments were as follows:
Objection 1: Animals aren't people, so it doesn't matter.
My response to this objection is generally to assume that the person making it actually believes what I believe, namely, that the pain of non-human animals matters much less than the pain (or even the comfort) of humans. This is a valid argument against conclusion 1, but it doesn't refute or even address conclusion 2. If people could be just as comfortable, well-nourished, etc. without eating animals (which is arguably true in the present, and certainly could become true in the future; it's not technologically impossible), then as long as the pain of animals matters even a little tiny bit, a world in which they're not hurt is preferable to one in which they are. On the other hand, there are people (I think) who do not believe my modified form of the statement, but instead believe the statement itself: that the pain of animals literally does not matter at all in a moral sense. There are arguments to be made here too, probably, but I'm more inclined to say that if you believe this, I cannot argue with you. Anyone who accepts as a basic premise that the pain of animals literally does not have any moral significance whatsoever, has premises sufficiently different from mine that our viewpoints are irreconcilable (on this matter, at least). Thankfully, we didn't spend too much time on this point.
Objection 2: This might justify eating free-range, but doesn't justify vegetarianism.
To this, the best I can manage is "well, yes, maybe." This is one of the reasons why I included in conclusion 1 the qualification that I "might" be a better person. I don't know how painful or pleasant the lives of animals are in various situations. I am fairly certain that they are fairly miserable in factory farms and the like. There is also the argument to be made that most farm animals would not exist if they were not being raised for food, to which I can only respond that some of their existences are not preferable to non-existence, and some perhaps are. The assertion that there are existences that are not preferable to non-existence, lives not worth living as it were, is a contentious one. These are all interesting discussions to have, but in a sense all of them miss the mark: while they address conclusion 1, they have no impact on conclusion 2 whatsoever. Raising animals and treating them well (and then maybe even killing them for food) is still preferable to raising animals and torturing them and then killing them for food.
Objection 3: In this hypothetical world, can I still eat animals?
Response: You can eat something that to you is completely indistinguishable from animals.
Objection 3: Then no.
This is a logically void argument, of course, since the objector would have no way of distinguishing between the situation he accepts and the situation to which he objects. If he can't tell whether he's eating animals or the hypothetical food that is completely indistinguishable from animals, which he can't by definition of "indistinguishable", then he can't very well object to the indistinguishable food. However, the existence of this objection does raise an interesting question: can it be ethical for a government to lie to its people? Suppose that the world I conjecture in conclusion 2 has been made technologically possible. A substance (call it food i) has been developed which is as nutritious as meat, tastes the same as meat, costs less to produce than meat, and can be produced without harming animals. (Anyone who responds to this with the argument that this isn't possible may, again, have a viewpoint irreconcilable with mine. This seems easily within the reach of technology to me, and probably feasible within the next 100 years.) The government accepts conclusion 2 as truth, and would like to mandate the replacement of meat products with food i, since this would be a clear moral improvement. However, there are people under this government who make objection 3 despite its logical invalidity, and since their objection is logically invalid, they cannot be convinced otherwise. Would it be right for the government to execute the replacement secretly, since it would be a moral good and the objectors would be literally incapable of telling the difference? Things to ponder.
Anyway, my conclusion here is that none of these arguments really have anything to say against conclusion 2, and in fact I will go so far as to assert that conclusion 2 follows necessarily from my assumptions. This is a risky assertion for a logician to make. So I'm curious: if you're reading this, can you think of any logical objections to conclusion 2? Of course, if you have any other thoughts on the matter, I'd love to hear them too. It's been a while since I've had a proper debate. Looking forward to hearing from you, dear hypothetical readers!
Monday, January 3, 2011
1993
I turned six in 1993. I received two presents: a CD drive, and a CD to go in it. The CD drive was external, meant to be attached to the computer with the sort of thick cable that's been obsoleted by USB. The CD was a game, The Even More Incredible Machine. It's still on my shelf, though I'm not at all confident that it'll run.
Obsolescence isn't what it used to be. When I was 2, the family computer was an Amiga, running its own command-line operating system, AmigaOS. A few years later, it had been replaced by a Packard Bell running Windows 3.1, the first genuinely popular Windows release, with graphics in place of the command prompt (though DOS was still there every time it booted, cursor blinking, waiting for you to type "win" and start the new OS). The next computer had a built-in CD drive, and a little black box called a modem, which brought us email at turtle speed courtesy of America Online. Somewhere along the line we bought into the Zip disk fad, and had an external drive for those too, marveling over the storage space of 100 floppies in a single plastic case. The peripherals were necessary to keep up, if you weren't going to buy a new computer every two years.
That Zip drive turned out to be a bad investment. Modern computers still use CDs. The modem's quietly disappeared in favor of Verizon fiber-optic through a wireless router, and it's hard even to remember that "tower" used to be the opposite of "desktop", not a synonym. Even so, it's hard not to feel like things have settled down. Long before the numbers stopped going up (how can another 2-gigahertz processor inspire those who plugged a Pentium upgrade chip into their 486?), new computers started looking less like brave new worlds, and more like better, faster, cleaner iterations of the previous ones.
Maybe it's the new year, or reading the new novel by New William Gibson (completely unlike Old William Gibson*), or the influence of a certain web site. Whatever the case, I've been thinking about 1993, and about technology, because to me the two are intimately related. I'm not trying to say that computers aren't advancing as fast as they did in the past, or hold up signs proclaiming the End of Moore's Law or anything, though I do believe that we've reached a point where the next revolution (and the previous) won't be driven by advances in hardware. I'm just remembering a time when things felt different, at least to me. Wishing you all a happy and healthy New Year.
*For readers who remember '80s computers with fondness, I highly recommend Digital. Even if you don't, it's a lovely adventure game/visual novel, and worth giving a try.
Wednesday, October 6, 2010
EML
I haven't quite figured out what to make of this yet, but I know it's absolutely incredible. I'm going to have to read the whole thing before I can comment (I read the XML spec once when I had too much time on my hands, so this should be light reading in comparison), but at first glance, it's nothing less than a formal language for describing and representing emotional states. I'm torn between awe at the grandiosity of the undertaking, and extreme skepticism towards the entire idea of, yes, the standardization of emotions. That said, the skepticism angle is easy and boring, so I'll lay off it for now. I'd rather talk about how awesome it is.
For those of you whose response was "wait, what?" or something along those lines, a bit of explication. W3C is the World Wide Web Consortium, the organization that, among other things, runs the Web. I'll give you a moment to express suitable amusement, incredulity, scorn, etc. But entirely seriously, they're responsible for things like HTML, the backbone of most web pages, and XML, a generic framework for expressing and transmitting data. And now, apparently, they've decided to give emotion a try.
The advantages of having a standard language for emotion are twofold. First, it allows for modularity: some programs can deal with taking input and turning it into EML, and other programs can take EML and respond accordingly. The first type of program could include, say, facial recognition software that looks at people and decides how they're feeling, or conversation programs that look for signs of anger in internet communication (made that one up, but it'd be useful, don't you think?). The second could include conversation programs that try to cheer people up, or music selection programs that change a game's soundtrack according to the mood (one of my personal favorite problems). Having a common language for this stuff means that the two types can be decoupled - the conversation program could be made to respond to facial expressions, or analysis of forum posts, or anything else, just by feeding it the analysis produced by the appropriate program. Undoubtedly a boon for all kinds of AI/HCI research.
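To make the modularity idea concrete, here's a tiny sketch in Python of what such a pipeline might look like. The element names and mood categories below are my own stand-ins, not vocabulary taken from the actual spec.

```python
import xml.etree.ElementTree as ET

# Producer side: some analyzer (face recognition, a forum-post scanner, ...)
# wraps its guess in an EML-ish document. The element names here are
# invented placeholders, not the W3C's actual markup.
def emit_emotion(category, intensity):
    root = ET.Element("emotion")
    ET.SubElement(root, "category", name=category)
    ET.SubElement(root, "intensity", value=str(intensity))
    return ET.tostring(root, encoding="unicode")

# Consumer side: a soundtrack picker that only reads the markup and
# never needs to know which analyzer produced it.
def pick_music(emotion_xml):
    root = ET.fromstring(emotion_xml)
    category = root.find("category").get("name")
    intensity = float(root.find("intensity").get("value"))
    if category == "anger" and intensity > 0.5:
        return "battle_theme.ogg"
    return "town_theme.ogg"

document = emit_emotion("anger", 0.8)  # could have come from any producer
print(pick_music(document))            # -> battle_theme.ogg
```

Swap in a different producer and the consumer never notices, which is exactly the point.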
The second advantage is a bit more dubious. A language implies, to some extent, a model. It shapes the patterns its users apply to the world. Someone who grows up on imperative programming (C, for instance) thinks of sequences of activities in terms of for loops: go through each object, handle it accordingly. Similarly, someone who uses EML, whether that someone is a human programmer or a computer program, is going to have to think in terms of the emotional categories, scales, attributes, etc. that are allowed for in the language. This can be a useful aid to analysis - we need to have some way to break things down if we're going to understand them - but it can also be limiting: for instance, it turns out that C-style for loops are a lot harder to adapt to the world of parallel computing than more functional approaches. And when the thing being standardized is emotion, and the people doing the standardizing are the W3C (a pretty big deal, in my opinion), this could have a huge effect on the way future AI researchers think about the world. And, dare I say it, a huge effect on the way future AIs think about the world.
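As a toy illustration of that last point (in Python rather than C, and very much a sketch of my own): the index-driven loop bakes in a sequential, one-at-a-time order, while the functional version only says "apply this to every element", which is why it drops into a parallel map almost unchanged.

```python
from multiprocessing import Pool

def process(x):
    return x * x  # stand-in for some per-element work

data = list(range(10))

# The imperative habit: walk the indices one by one, in order.
results_loop = []
for i in range(len(data)):
    results_loop.append(process(data[i]))

# The functional habit: just "apply process to everything" -- no order assumed...
results_map = list(map(process, data))

# ...which is why it parallelizes with a one-line change.
if __name__ == "__main__":
    with Pool(4) as pool:
        results_parallel = pool.map(process, data)
    assert results_loop == results_map == results_parallel
```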
So, yeah, that's my first-glance reaction. I'm sure I'll have more to say once I've actually read the thing. If anyone feels like working through it with me, I'd love to chat about it! Though, given the impenetrability of most W3C specs, I wouldn't blame you if you'd rather not. Either way, I hope I've managed to give a bit of an idea of why (I think) this is exciting/important. If you're interested, stay tuned!
Friday, July 23, 2010
A conclusive disproof of a conclusive disproof of free will
The New York Times philosophy column recently ran an interesting piece about free will. The thesis was that, whether or not one assumes a deterministic universe, one cannot be responsible for one's actions. What caught my attention here wasn't the thrust of the argument - it's simple enough - but the fact that it was presented, not as an opinion or a way of viewing life, but as a logical inevitability. This set my fallacy radar off something fierce, especially since the conclusion was one I disagreed with. So, if you're interested in things like proving we do or don't have free will, follow me for a bit as I take a look at the argument.
The argument, which the author refers to as Basic for some reason, goes as follows (direct quote):
(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.
(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.
(3) But you can’t be ultimately responsible for the way you are in any respect at all.
(4) So you can’t be ultimately responsible for what you do.
(3) seems like the most obviously objectionable point here, and indeed, the author immediately informs us that "the key move is (3)." He then goes on to restate the argument inductively, justifying (3) as an assumption. You don't start out responsible for what you are (we can't help the way we're born), and the rest of the argument seems to show that we're not responsible for what we do when we're not responsible for what we are. But what we do determines what we become, so we can never be responsible for what we are, and the argument holds.
Not bad, not bad. One can always nitpick, but I think the logic's pretty sound here. But, as I'm sure is obvious by now, I'm not convinced. After all, as any logician knows, sound logic can easily get you to false conclusions: all you need to do is start with incorrect assumptions. And in this case, that focus on the oh-so-shocking point (3) obscured a more basic problem: point (1).
What's wrong with point (1)? Well, the author restates it a bit more rigorously later in the argument: "When one acts for a reason, what one does is a function of how one is, mentally speaking." A function? Really? The mathematical definition of a function, and the one being used by this author, is an operator that takes in some inputs and, based on them, produces precisely one output, completely determined by those inputs. In other words - wait a second - point (1) means that our actions are completely determined by our current state. That's nothing less than determinism! The argument was supposed to hold "whether determinism is true or false". But actually, the very first statement in it assumes determinism!
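To spell out why "function" smuggles in determinism, here's the point in symbols (my own notation, not the column's): write M for the set of possible mental states and A for the set of possible actions. Read literally, point (1) says there is some function

\[
f : M \to A, \qquad a = f(m),
\]

and a function assigns exactly one output to each input, so

\[
m_1 = m_2 \implies f(m_1) = f(m_2).
\]

The mental state you're in uniquely fixes what you do, and that uniqueness just is determinism about action. A genuinely indeterministic picture would instead need a relation R ⊆ M × A that pairs at least some state with more than one possible action.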
So much for that, then. If you believe that the universe is deterministic, then this is a pretty compelling argument against moral responsibility. There are a lot of those, though, if your basic assumption is that we have no control over our actions. If you don't take determinism on faith, this whole structure has probably proved nothing you didn't already know. It's refreshing to see a bit of rigor being brought into philosophical arguments presented for popular consumption; I wouldn't mind seeing more of this kind of article in the Times, if only because I so enjoy picking them apart. But I hope that people won't be too easily taken in by so-called conclusive arguments (mine included) without a careful examination of the premises.
Okay, then, hope you had fun!
Sunday, April 11, 2010
The brains we make
Today I thought I'd write a bit about artificial intelligence. AI is an evocative label, suggesting I, Robot, HAL, your favorite computer that's just like a human (or not quite human, or a little too human, etc., etc.). It captured the hearts and minds of both computer scientists and the general public for decades, from Turing's famous test sixty years ago to its heyday in the 70's and 80's. You don't hear that much about AI anymore, though, except for the few true believers sitting in their bunkers and praying for the Singularity (sorry, Mr. Kurzweil). It's my guess that this is a direct result of a huge shift in the nature of AI research, one that few people outside the field know about but which has dampened a lot of the wild expectations it used to inspire. So, what was AI, and what is it now? What happened to it?
The old AI was, fundamentally, about logic. The approach was exactly what you'd imagine: AI researchers were trying to program machines to think. Of course, by "think" they meant "reason", and the prevailing attitude reflected a certain view of human intelligence as well: people were basically logical, and any deviation from logic wasn't going to be particularly useful in solving important problems. From a certain perspective, this is completely ridiculous -- take language, for instance. Understanding human language has always been one of AI's primary goals, but we don't figure out what people mean by looking up a table of grammar rules and figuring out which ones apply; our ability to understand each other depends on things like shared symbols, word associations, and social context.
Inevitably, this approach led to backlashes; the response to each breakthrough was a cry of "that isn't thinking!" Doing advanced math by brute force? Not really thinking. Playing tic-tac-toe unbeatably? Not really thinking. The Eliza program, which convinced hundreds of people that they were talking to a human psychologist, was obviously not thinking -- its algorithm was ridiculously simplistic. Finally, researchers came to the conclusion that this whole thinking thing was more complicated than they'd given it credit for, and might just be a matter of perspective anyway. And so began the decline of what's known as "rule-based AI".
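For a sense of just how simple that kind of rule-based response can be, here is a tiny Eliza-flavored toy of my own (not Weizenbaum's actual script): match a keyword pattern, reflect the rest of the sentence back as a question.

```python
import re

# A handful of pattern -> template rules, in the spirit of Eliza.
# Purely illustrative; the real program had a larger script and did
# pronoun swapping, but the core trick is this shallow.
RULES = [
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (.+)", re.IGNORECASE), "Tell me more about your {0}."),
]

def respond(utterance):
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return "Please go on."  # default deflection when nothing matches

print(respond("I feel anxious about the future."))
# -> Why do you feel anxious about the future?
```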
At the same time, another school of thought was on the rise. In contrast to rule-based AI's approach of making machines think like people, these researchers thought it would be just as effective, and a lot easier, to make machines act like people. The basis of this approach is statistical rather than logical. If we start a sentence with "the cat", what are the likely next words? If a chess master reaches a certain board position, how likely is he to win? The computer doesn't need to know why certain verbs are connected with cats and others aren't, or why chess masters prefer one move over another. It just needs to know that they are, and do as they do. This is often referred to as "machine learning", but it's learning only in the Pavlovian sense, unthinking response to fixed stimuli (and there's that word thinking again). It's proved very powerful in various areas, chess most famously. At the same time, it's not clear that it really merits the label "AI", since it's not really about intelligence anymore so much as statistical models and computing power.
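The "what comes next?" idea really is this mechanical. Here's a toy sketch of my own (nothing from any real system): count adjacent word pairs in a scrap of text, then predict the most frequent follower.

```python
from collections import Counter, defaultdict

# Count which word follows which in a tiny "corpus".
corpus = "the cat sat on the mat and the cat ate the fish".split()

followers = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    followers[current][nxt] += 1

def most_likely_next(word):
    counts = followers.get(word)
    if not counts:
        return None  # never saw this word, no prediction
    return counts.most_common(1)[0][0]

print(most_likely_next("the"))  # -> 'cat': seen most often after 'the'
```

No grammar, no idea what a cat is, just counts; that's the flavor of the statistical school.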
I can't say for sure that the decline in general enthusiasm for AI is linked to the death of rule-based AI and the ascendancy of statistics, but one thing is certain: the old grail of "strong AI" (an AI that thinks like a human) isn't even on the table anymore. Machine learning might yet produce a computer that can pass the Turing test, but it would be more likely to have just figured out how to string words together in a convincing way than to actually understand what it was saying. Of course, this is a fuzzy line, and I've previously espoused the view that anything that appears to be sentient should be assumed so until proven otherwise. Even if in principle we're not shooting for strong AI anymore, we may end up with something that looks a whole lot like it. Nonetheless, I think some of the big dreams and mad-scientist ambition went out of the field with the regime change. And as someone who enjoys logic a whole lot more than statistics, I can't help but feel a bit nostalgic for rule-based AI.
So, why did I bring all this up in the first place? Mainly because a recent news story suggests that maybe rule-based AI isn't as dead as all that. When you put things down in such an obvious dichotomy, someone's going to try to get the best of both worlds, and a scientist at MIT has just announced that he's doing exactly that. As you might have guessed from the length of the rant, this is something that has been near and dear to my heart for a long time, and I'm really excited about the prospect of bringing some of the logic, and maybe even some of the intelligence, back into AI. Is it time to get excited again? Did the Singularity crowd have the right idea after all? (Probably not.) Are we on the verge of a bigger, better AI breakthrough? I'm completely unqualified to answer these questions, but the fact that there's reason to ask is heartening in itself. Spread the word, and warm up your thinking machines: AI is getting cool again.
Monday, February 15, 2010
What we talk about when we talk about business
Every once in a while, my Japanese class delivers an irritating reminder of just how ingrained sexism is in language. Of course, this is at least as true in English as in Japanese -- it's nearly impossible to talk about a person in English without using gendered pronouns like "he" or "she". Japanese, thankfully, generally doesn't have this problem, but it has its own set of issues.
I have the misfortune to be taking "business Japanese" this semester, and today's scenario was a discussion between a V.P. and some subordinates. When men and women are expected to say the lines differently, the teacher writes the changes on the board (usually, though not always, the lines in the textbook are the "male" versions). Usually, these are just small modifications to sentence endings; casual Japanese has distinct "male" and "female" speech styles, though as far as I can tell it's acceptable in modern Japanese for anyone to use the "male" style, or something in between. Today, though, we had a particularly egregious change: in the textbook, the V.P. referred to a subordinate with the familiar suffix -kun, but the teacher asked the female students to use the more respectful -san, and when a student used -kun anyway she was "corrected". Apparently it's disrespectful for a woman to use a familiar term of address to (presumably) a man, even if he's working under her.
Of course, I don't know how representative our teacher's opinions are, or what would be considered acceptable in a real modern Japanese business. I suspect that, just as during my stay in Japan my host family all spoke in the "male" style, there are places where these rules don't apply. But it was an interesting, and to me rather blatant, example of how sexism comes across in language, even when the subject of gender is nominally unrelated. When speaking or writing in our native language, we're often too close to see the assumptions we make, but when we take a step back they come across loud and clear. Maybe next time I'll try reading the "other side's" lines; after all, it would still show that I understood the dialogue. If I do, I'll let you know how it goes.