Showing posts with label logic. Show all posts

Tuesday, August 30, 2011

Animals are tasty, but hurting them is wrong(-ish)

Note: If you're familiar with the common arguments for and against vegetarianism, this may be a boring post.


I found myself in an interesting argument last night, in which my position was approximately the following:

Assertion: Animals in some measure are capable of feeling pain.
Assertion: Causing unnecessary pain to animals is undesirable.
Conclusion 1: I might be a better person if I were a vegetarian.

Of course, there are various arguments for and against vegetarianism, and one can't expect a group of people (or at least, a group of people chosen on a non-animal-related basis) to agree with this conclusion unanimously. So I chose a second point to try to prove:

Conclusion 2: All other things being equal, a world in which animals are not hurt is preferable to one in which they are hurt.

Somewhat to my surprise, this point also did not go uncontested. Some of the counterarguments were as follows:

Objection 1: Animals aren't people, so it doesn't matter.

My response to this objection is generally to assume that the person making it actually believes what I believe: namely, that the pain of non-human animals matters much less than the pain (or even the comfort) of humans. This is a valid argument against conclusion 1, but it doesn't refute or even address conclusion 2. If people could be just as comfortable, well-nourished, etc. without eating animals (which is arguably true in the present, and certainly could become true in the future; it's not technologically impossible), then as long as the pain of animals matters even a tiny bit, a world in which they're not hurt is preferable to one in which they are.

On the other hand, there are people (I think) who do not believe my modified form of the statement, but instead believe the statement itself: that the pain of animals literally does not matter at all in a moral sense. There are probably arguments to be made here too, but I'm more inclined to say that if you believe this, I cannot argue with you. Anyone who accepts as a basic premise that the pain of animals has no moral significance whatsoever has premises sufficiently different from mine that our viewpoints are irreconcilable (on this matter, at least). Thankfully, we didn't spend too much time on this point.

Objection 2: This might justify eating free-range, but doesn't justify vegetarianism.

To this, the best I can manage is "well, yes, maybe." This is one of the reasons why I included in conclusion 1 the qualification that I "might" be a better person. I don't know how painful or pleasant the lives of animals are in various situations. I am fairly certain that their lives are miserable in factory farms and the like. There is also the argument to be made that most farm animals would not exist if they were not being raised for food, to which I can only respond that some of their existences are not preferable to non-existence, and some perhaps are. The assertion that there are existences not preferable to non-existence, lives not worth living as it were, is a contentious one. These are all interesting discussions to have, but in a sense all of them miss the mark: while they address conclusion 1, they have no impact on conclusion 2 whatsoever. Raising animals and treating them well (and then maybe even killing them for food) is still preferable to raising animals and torturing them and then killing them for food.

Objection 3: In this hypothetical world, can I still eat animals?
Response: You can eat something that to you is completely indistinguishable from animals.
Objection 3: Then no.

This is a logically void argument, of course, since the objector would have no way of distinguishing between the situation he accepts and the situation to which he objects. If he can't tell whether he's eating animals or the hypothetical food that is completely indistinguishable from animals, which he can't by definition of "indistinguishable", then he can't very well object to the indistinguishable food.

However, the existence of this objection does raise an interesting question: can it be ethical for a government to lie to its people? Suppose that the world I conjecture in conclusion 2 has been made technologically possible. A substance (call it food i) has been developed which is as nutritious as meat, tastes the same as meat, costs less to produce than meat, and can be produced without harming animals. (Anyone who responds to this with the argument that this isn't possible may, again, have a viewpoint irreconcilable with mine. This seems easily within the reach of technology to me, and probably feasible within the next 100 years.) The government accepts conclusion 2 as truth, and would like to mandate the replacement of meat products with food i, since this would be a clear moral improvement. However, there are people under this government who make objection 3 despite its logical invalidity, and since their objection is logically invalid, they cannot be convinced otherwise. Would it be right for the government to execute the replacement secretly, since it would be a moral good and the objectors would be literally incapable of telling the difference? Things to ponder.

Anyway, my conclusion here is that none of these arguments really have anything to say against conclusion 2, and in fact I will go so far as to assert that conclusion 2 follows necessarily from my assumptions. This is a risky assertion for a logician to make. So I'm curious: if you're reading this, can you think of any logical objections to conclusion 2? Of course, if you have any other thoughts on the matter, I'd love to hear them too. It's been a while since I've had a proper debate. Looking forward to hearing from you, dear hypothetical readers!

Friday, July 23, 2010

A conclusive disproof of a conclusive disproof of free will

The New York Times philosophy column recently ran an interesting piece about free will. The thesis was that, whether or not one assumes a deterministic universe, one cannot be responsible for one's actions. What caught my attention here wasn't the thrust of the argument - it's simple enough - but the fact that it was presented, not as an opinion or a way of viewing life, but as a logical inevitability. This set my fallacy radar off something fierce, especially since the conclusion was one I disagreed with. So, if you're interested in things like proving we do or don't have free will, follow me for a bit as I take a look at the argument.

The argument, which the author refers to as Basic for some reason, goes as follows (direct quote):

(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.

(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.

(3) But you can’t be ultimately responsible for the way you are in any respect at all.

(4) So you can’t be ultimately responsible for what you do.

(3) seems like the most obviously objectionable point here, and indeed, the author immediately informs us that "the key move is (3)." He then goes on to restate the argument inductively, justifying (3) as an assumption. You don't start out responsible for what you are (we can't help the way we're born), and the rest of the argument seems to show that we're not responsible for what we do when we're not responsible for what we are. But what we do determines what we become, so we can never be responsible for what we are, and the argument holds.
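As a side note, the propositional skeleton of the argument really is valid, and a proof checker will confirm it. Here's a minimal sketch in Lean (my own formalization, with invented proposition names; the original argument is informal):

```lean
-- DoBecauseAre : (1) you do what you do because of the way you are
-- RespDo       : you are ultimately responsible for what you do
-- RespAre      : you are ultimately responsible for the way you are
example (DoBecauseAre RespDo RespAre : Prop)
    (h1 : DoBecauseAre)                         -- premise (1)
    (h2 : DoBecauseAre → (RespDo → RespAre))    -- premise (2)
    (h3 : ¬ RespAre) :                          -- premise (3)
    ¬ RespDo :=                                 -- conclusion (4)
  fun hDo => h3 (h2 h1 hDo)
```

The checker accepts the proof, which is exactly the point: validity is cheap. Everything interesting is hidden in whether the premises are true.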

Not bad, not bad. One can always nitpick, but I think the logic's pretty sound here. But, as I'm sure is obvious by now, I'm not convinced. After all, as any logician knows, sound logic can easily get you to false conclusions: all you need to do is start with incorrect assumptions. And in this case, that focus on the oh-so-shocking point (3) obscured a more basic problem: point (1).

What's wrong with point (1)? Well, the author restates it a bit more rigorously later in the argument: "When one acts for a reason, what one does is a function of how one is, mentally speaking." A function? Really? The mathematical definition of a function, and the one being used by this author, is an operator that takes some inputs and, based on them, produces exactly one output, completely determined by those inputs. In other words - wait a second - point (1) means that our actions are completely determined by our current state. That's nothing less than determinism! The argument was supposed to hold "whether determinism is true or false". But its very first statement assumes determinism!
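The distinction is easy to make concrete in code. Here's a toy sketch (the names and the "mental state" encoding are mine, purely illustrative): a choice that is a function of one's state, in the mathematical sense, versus one that isn't.

```python
import random

def action_as_function(mental_state: tuple) -> str:
    # A function in the mathematical sense: the same state
    # always yields the same action. This is what premise (1)
    # asserts, and it is determinism by another name.
    return "tea" if sum(mental_state) % 2 == 0 else "coffee"

def action_not_a_function(mental_state: tuple) -> str:
    # Not a function of the state alone: repeated calls with
    # the same state can yield different actions.
    return random.choice(["tea", "coffee"])

state = (3, 1, 4)
# The deterministic version gives the same answer every time.
assert all(action_as_function(state) == action_as_function(state)
           for _ in range(10))
```

If you reject determinism, you reject the claim that human action looks like the first version, and premise (1) falls with it.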

So much for that, then. If you believe that the universe is deterministic, then this is a pretty compelling argument against moral responsibility. There are a lot of those, though, if your basic assumption is that we have no control over our actions. If you don't take determinism on faith, this whole structure has probably proved nothing you didn't already know. It's refreshing to see a bit of rigor being brought into philosophical arguments presented for popular consumption; I wouldn't mind seeing more of this kind of article in the Times, if only because I so enjoy picking them apart. But I hope that people won't be too easily taken in by so-called conclusive arguments (mine included) without a careful examination of the premises.

Okay, then, hope you had fun!

Sunday, April 11, 2010

The brains we make

Today I thought I'd write a bit about artificial intelligence. AI is an evocative label, suggesting I, Robot, HAL, your favorite computer that's just like a human (or not quite human, or a little too human, etc., etc.). It captured the hearts and minds of both computer scientists and the general public for decades, from Turing's famous test sixty years ago to its heyday in the 70's and 80's. You don't hear that much about AI anymore, though, except for the few true believers sitting in their bunkers and praying for the Singularity (sorry, Mr. Kurzweil). It's my guess that this is a direct result of a huge shift in the nature of AI research, one that few people outside the field know about but which has dampened a lot of the wild expectations it used to inspire. So, what was AI, and what is it now? What happened to it?

The old AI was, fundamentally, about logic. The approach was exactly what you'd imagine: AI researchers were trying to program machines to think. Of course, by "think" they meant "reason", and the prevailing attitude reflected a certain view of human intelligence as well: people were basically logical, and any deviation from logic wasn't going to be particularly useful in solving important problems. From a certain perspective, this is completely ridiculous -- take language, for instance. Understanding human language has always been one of AI's primary goals, but we don't figure out what people mean by looking up a table of grammar rules and figuring out which ones apply; our ability to understand each other depends on things like shared symbols, word associations, and social context.

Inevitably, this approach led to backlashes; the response to each breakthrough was a cry of "that isn't thinking!" Doing advanced math by brute force? Not really thinking. Playing tic-tac-toe unbeatably? Not really thinking. The ELIZA program, which convinced hundreds of people that they were talking to a human psychologist, was obviously not thinking -- its algorithm was ridiculously simplistic. Finally, researchers came to the conclusion that this whole thinking thing was more complicated than they'd given it credit for, and might just be a matter of perspective anyway. And so began the decline of what's known as "rule-based AI".

At the same time, another school of thought was on the rise. In contrast to the top-down approach of rule-based AI -- making machines think like people -- these researchers thought it would be just as effective, and a lot easier, to make machines act like people. The basis of this approach is statistical rather than logical. If we start a sentence with "the cat", what are the likely next words? If a chess master reaches a certain board position, how likely is he to win? The computer doesn't need to know why certain verbs are connected with cats and others aren't, or why chess masters prefer one move over another. It just needs to know that they are, and do as they do. This is often referred to as "machine learning", but it's learning only in the Pavlovian sense, unthinking response to fixed stimuli (and there's that word thinking again). It's proved very powerful in various areas, chess most famously. At the same time, it's not clear that it really merits the label "AI", since it's not really about intelligence anymore so much as statistical models and computing power.
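The "the cat" example can be sketched in a few lines as a toy bigram model (a hypothetical miniature of the statistical approach; the corpus here is invented for illustration):

```python
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the fish".split()

# Count how often each word follows each other word.
bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1

def likely_next(word: str) -> dict:
    # The model knows nothing about grammar or cats; it only
    # knows which words have followed `word` in the data, and
    # how often.
    counts = bigrams[word]
    total = sum(counts.values())
    return {w: c / total for w, c in counts.items()}

print(likely_next("cat"))  # → {'sat': 0.5, 'ate': 0.5}
```

Scale the corpus up by a few billion words and you have the essence of a statistical language model: no rules, no understanding, just counting.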

I can't say for sure that the decline in general enthusiasm for AI is linked to the death of rule-based AI and the ascendancy of statistics, but one thing is certain: the old grail of "strong AI" (an AI that thinks like a human) isn't even on the table anymore. Machine learning might yet produce a computer that can pass the Turing test, but it would be more likely to have just figured out how to string words together in a convincing way than to actually understand what it was saying. Of course, this is a fuzzy line, and I've previously espoused the view that anything that appears to be sentient should be assumed so until proven otherwise. Even if in principle we're not shooting for strong AI anymore, we may end up with something that looks a whole lot like it. Nonetheless, I think some of the big dreams and mad-scientist ambition went out of the field with the regime change. And as someone who enjoys logic a whole lot more than statistics, I can't help but feel a bit nostalgic for rule-based AI.

So, why did I bring all this up in the first place? Mainly because a recent news story suggests that maybe rule-based AI isn't as dead as all that. When you put things down in such an obvious dichotomy, someone's going to try to get the best of both worlds, and a scientist at MIT has just announced that he's doing exactly that. As you might have guessed from the length of the rant, this is something that has been near and dear to my heart for a long time, and I'm really excited about the prospect of bringing some of the logic, and maybe even some of the intelligence, back into AI. Is it time to get excited again? Did the Singularity crowd have the right idea after all? (Probably not.) Are we on the verge of a bigger, better AI breakthrough? I'm completely unqualified to answer these questions, but the fact that there's reason to ask is heartening in itself. Spread the word, and warm up your thinking machines: AI is getting cool again.