Tuesday, August 30, 2011

Note: If you're familiar with the common arguments for and against vegetarianism, this may be a boring post.
I found myself in an interesting argument last night, in which my position was approximately the following:
Assertion: Animals in some measure are capable of feeling pain.
Assertion: Causing unnecessary pain to animals is undesirable.
Conclusion 1: I might be a better person if I were a vegetarian.
Of course, there are various arguments for and against vegetarianism, and one can't expect a group of people (or at least, a group of people chosen on a non-animal-related basis) to agree with this conclusion unanimously. So I chose a second point to try to prove:
Conclusion 2: All other things being equal, a world in which animals are not hurt is preferable to one in which they are hurt.
Somewhat to my surprise, this point also did not go uncontested. Some of the counterarguments were as follows:
Objection 1: Animals aren't people, so it doesn't matter.
My response to this objection is generally to assume that the person making it actually believes what I believe, namely, that the pain of non-human animals matters much less than the pain (or even the comfort) of humans. This is a valid argument against conclusion 1, but it doesn't refute or even address conclusion 2. If people could be just as comfortable, well-nourished, etc. without eating animals (which is arguably true in the present, and certainly could become true in the future; it's not technologically impossible), then as long as the pain of animals matters even a little tiny bit, a world in which they're not hurt is preferable to one in which they are. On the other hand, there are people (I think) who do not believe my modified form of the statement, but instead believe the statement itself: that the pain of animals literally does not matter at all in a moral sense. There are arguments to be made here too, probably, but I'm more inclined to say that if you believe this, I cannot argue with you. Anyone who accepts as a basic premise that the pain of animals literally does not have any moral significance whatsoever has premises sufficiently different from mine that our viewpoints are irreconcilable (on this matter, at least). Thankfully, we didn't spend too much time on this point.
Objection 2: This might justify eating free-range, but doesn't justify vegetarianism.
To this, the best I can manage is "well, yes, maybe." This is one of the reasons why I included in conclusion 1 the qualification that I "might" be a better person. I don't know how painful or pleasant the lives of animals are in various situations. I am fairly certain that they are quite miserable in factory farms and the like. There is also the argument to be made that most farm animals would not exist if they were not being raised for food, to which I can only respond that some of their existences are not preferable to non-existence, and some perhaps are. The assertion that there are existences that are not preferable to non-existence, lives not worth living, as it were, is a contentious one. These are all interesting discussions to have, but in a sense all of them miss the mark: while they address conclusion 1, they have no impact on conclusion 2 whatsoever. Raising animals and treating them well (and then maybe even killing them for food) is still preferable to raising animals and torturing them and then killing them for food.
Objection 3: In this hypothetical world, can I still eat animals?
Response: You can eat something that to you is completely indistinguishable from animals.
Objection 3: Then no.
This is a logically void argument, of course, since the objector would have no way of distinguishing between the situation he accepts and the situation to which he objects. If he can't tell whether he's eating animals or the hypothetical food that is completely indistinguishable from animals, which he can't by definition of "indistinguishable", then he can't very well object to the indistinguishable food. However, the existence of this objection does raise an interesting question: can it be ethical for a government to lie to its people? Suppose that the world I conjecture in conclusion 2 has been made technologically possible. A substance (call it food i) has been developed which is as nutritious as meat, tastes the same as meat, costs less to produce than meat, and can be produced without harming animals. (Anyone who responds to this with the argument that this isn't possible may, again, have a viewpoint irreconcilable with mine. This seems easily within the reach of technology to me, and probably feasible within the next 100 years.) The government accepts conclusion 2 as truth, and would like to mandate the replacement of meat products with food i, since this would be a clear moral improvement. However, there are people under this government who make objection 3 despite its logical invalidity, and since their objection is logically invalid, they cannot be convinced otherwise. Would it be right for the government to execute the replacement secretly, since it would be a moral good and the objectors would be literally incapable of telling the difference? Things to ponder.
Anyway, my conclusion here is that none of these arguments really have anything to say against conclusion 2, and in fact I will go so far as to assert that conclusion 2 follows necessarily from my assumptions. This is a risky assertion for a logician to make. So I'm curious: if you're reading this, can you think of any logical objections to conclusion 2? Of course, if you have any other thoughts on the matter, I'd love to hear them too. It's been a while since I've had a proper debate. Looking forward to hearing from you, dear hypothetical readers!
Wednesday, October 6, 2010
EML
I haven't quite figured out what to make of this yet, but I know it's absolutely incredible. I'm going to have to read the whole thing before I can comment (I read the XML spec once when I had too much time on my hands, so this should be light reading in comparison), but at first glance, it's nothing less than a formal language for describing and representing emotional states. I'm torn between awe at the grandiosity of the undertaking, and extreme skepticism towards the entire idea of, yes, the standardization of emotions. That said, the skepticism angle is easy and boring, so I'll lay off it for now. I'd rather talk about how awesome it is.
For those of you whose response was "wait, what?" or something along those lines, a bit of explication. W3C is the World Wide Web Consortium, the organization that, among other things, runs the Web. I'll give you a moment to express suitable amusement, incredulity, scorn, etc. But entirely seriously, they're responsible for things like HTML, the backbone of most web pages, and XML, a generic framework for expressing and transmitting data. And now, apparently, they've decided to give emotion a try.
The advantages of having a standard language for emotion are twofold. First, it allows for modularity: some programs can deal with taking input and turning it into EML, and other programs can take EML and respond accordingly. The first type could include, say, facial recognition software that looks at people and decides how they're feeling, or conversation programs that look for signs of anger in internet communication (made that one up, but it'd be useful, don't you think?). The second could include conversation programs that try to cheer people up, or music selection programs that change a game's soundtrack according to the mood (one of my personal favorite problems). Having a common language for this stuff means that the two types can be decoupled - the conversation program could be made to respond to facial expressions, or analysis of forum posts, or anything else, just by feeding it the analysis produced by the appropriate program. Undoubtedly a boon for all kinds of AI/HCI research.
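Just to make the decoupling idea concrete, here's a toy sketch. I haven't actually read the spec yet, so the element names below are invented purely for illustration (they are not the real W3C format); the point is only that the producer and the consumer share nothing but the markup, so either side can be swapped out without touching the other:

    import xml.etree.ElementTree as ET

    def detect_emotion(forum_post: str) -> str:
        """Stand-in for an analysis program: looks at raw text, emits an EML-ish document."""
        angry = forum_post.isupper() or "!!" in forum_post
        name, value = ("anger", "0.9") if angry else ("calm", "0.2")
        return f'<emotion><category name="{name}"/><intensity value="{value}"/></emotion>'

    def pick_soundtrack(eml_doc: str) -> str:
        """Stand-in for a consumer program: only ever sees the EML-ish markup."""
        root = ET.fromstring(eml_doc)
        name = root.find("category").get("name")
        value = float(root.find("intensity").get("value"))
        return "calming strings" if name == "anger" and value > 0.5 else "default theme"

    # Swap facial recognition in, or a different music selector out, and the other half never notices.
    print(pick_soundtrack(detect_emotion("WHY WON'T THIS COMPILE!!")))  # calming strings
    print(pick_soundtrack(detect_emotion("lovely weather today")))      # default theme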
The second advantage is a bit more dubious. A language implies, to some extent, a model. It shapes the patterns its users apply to the world. Someone who grows up on imperative programming (C, for instance) thinks of sequences of activities in terms of for loops: go through each object, handle it accordingly. Similarly, someone who uses EML, whether that someone is a human programmer or a computer program, is going to have to think in terms of the emotional categories, scales, attributes, etc. that are allowed for in the language. This can be a useful aid to analysis - we need to have some way to break things down if we're going to understand them - but it can also be limiting: for instance, it turns out that C-style for loops are a lot harder to adapt to the world of parallel computing than more functional approaches. And when it comes to emotion, with W3C doing the standardizing (in my opinion, at least, they're a pretty big deal), this could have a huge effect on the way future AI researchers think about the world. And, dare I say it, a huge effect on the way future AIs think about the world.
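Since I brought up the for-loop thing, here's a throwaway illustration (in Python rather than C, and simplified to the point of caricature): the first version bakes in a one-at-a-time order of operations, while the second only says what should happen to each element, which is exactly why it hands off to a parallel map without any rewriting.

    from multiprocessing import Pool

    def brighten(pixel: int) -> int:
        return min(pixel + 40, 255)

    pixels = [12, 200, 97, 255, 43]

    # Imperative, C-style habit: walk the sequence one index at a time.
    result_loop = []
    for i in range(len(pixels)):
        result_loop.append(brighten(pixels[i]))

    # Functional habit: say *what* to do per element, not in what order.
    result_map = list(map(brighten, pixels))

    # The second phrasing parallelizes without changing the logic.
    if __name__ == "__main__":
        with Pool(2) as pool:
            result_parallel = pool.map(brighten, pixels)
        assert result_loop == result_map == result_parallel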
So, yeah, that's my first-glance reaction. I'm sure I'll have more to say once I've actually read the thing. If anyone feels like working through it with me, I'd love to chat about it! Though, given the impenetrability of most W3C specs, I wouldn't blame you if you'd rather not. Either way, I hope I've managed to give a bit of an idea of why (I think) this is exciting/important. If you're interested, stay tuned!
Friday, July 23, 2010
A conclusive disproof of a conclusive disproof of free will
The New York Times philosophy column recently ran an interesting piece about free will. The thesis was that, whether or not one assumes a deterministic universe, one cannot be responsible for one's actions. What caught my attention here wasn't the thrust of the argument - it's simple enough - but the fact that it was presented, not as an opinion or a way of viewing life, but as a logical inevitability. This set my fallacy radar off something fierce, especially since the conclusion was one I disagreed with. So, if you're interested in things like proving we do or don't have free will, follow me for a bit as I take a look at the argument.
The argument, which the author refers to as Basic for some reason, goes as follows (direct quote):
(1) You do what you do — in the circumstances in which you find yourself—because of the way you then are.
(2) So if you’re going to be ultimately responsible for what you do, you’re going to have to be ultimately responsible for the way you are — at least in certain mental respects.
(3) But you can’t be ultimately responsible for the way you are in any respect at all.
(4) So you can’t be ultimately responsible for what you do.
(3) seems like the most obviously objectionable point here, and indeed, the author immediately informs us that "the key move is (3)." He then goes on to restate the argument inductively, justifying (3) as an assumption. You don't start out responsible for what you are (we can't help the way we're born), and the rest of the argument seems to show that we're not responsible for what we do when we're not responsible for what we are. But what we do determines what we become, so we can never be responsible for what we are, and the argument holds.

Not bad, not bad. One can always nitpick, but I think the logic's pretty sound here. But, as I'm sure is obvious by now, I'm not convinced. After all, as any logician knows, sound logic can easily get you to false conclusions: all you need to do is start with incorrect assumptions. And in this case, that focus on the oh-so-shocking point (3) obscured a more basic problem: point (1).
What's wrong with point (1)? Well, the author restates it a bit more rigorously later in the argument: "When one acts for a reason, what one does is a function of how one is, mentally speaking." A function? Really? The mathematical definition of a function, and the one being used by this author, is that of an operator that takes in some inputs and, based on them, produces precisely one output, completely determined by the inputs. In other words - wait a second - point (1) means that our actions are completely determined by our current state. That's nothing less than determinism! The argument was supposed to hold "whether determinism is true or false". But actually, the very first statement in it assumes determinism!
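To spell that out in symbols (my paraphrase of the standard definition, not a quote from the article): a function f : S → A assigns to every element s of S exactly one element f(s) of A. If S is the set of possible mental states and A the set of possible actions, then "what one does is a function of how one is" says that the state s fixes the action f(s) uniquely; same state, same action, every time. That's determinism about action in everything but name.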
So much for that, then. If you believe that the universe is deterministic, then this is a pretty compelling argument against moral responsibility. There are a lot of those, though, if your basic assumption is that we have no control over our actions. If you don't take determinism on faith, this whole structure has probably proved nothing you didn't already know. It's refreshing to see a bit of rigor being brought into philosophical arguments presented for popular consumption; I wouldn't mind seeing more of this kind of article in the Times, if only because I so enjoy picking them apart. But I hope that people won't be too easily taken in by so-called conclusive arguments (mine included) without a careful examination of the premises.
Okay, then, hope you had fun!
Sunday, June 7, 2009
Psych
I had a very strange experience today. I was reading a book, intently, when I felt slightly nauseous. At first I didn't pay it any attention, and kept reading, but it steadily grew worse. Eventually I realized that it was what I was reading that was making me nauseous. Intellectually, I didn't see anything wrong with it; emotionally, I didn't feel anything unusual; but physically, I felt sick. I finished the section, and sat down for a while, and after a few minutes it passed; I picked up at the next section and felt fine.
Now, let me add a bit of context. I've read plenty of shocking, graphic, and unpleasant material, and I have a fairly vivid imagination. I've read things that have given me nightmares; I've gotten awful mental images stuck in my head. I'm a fan of fantasy horror; I've read Lovecraft before sleeping, and I've read the Sandman graphic novels, for which I didn't even need to visualize*. The thing I read today wasn't as bad as any of these. It wasn't even particularly objectionable. And yet, as far as I remember, nothing I've read has caused such a dramatic physical response.
A combination of genetics, instincts, and early nurture gives us a package of associations that determine how we react to various perceptions. This is, perhaps, what we'd call human nature. Throughout the process of education and socialization, new associations are created, and existing ones are undone or superseded. This reshaping is necessary because human nature owes nothing to constructed concepts like right and wrong, safe and risky, kind and cruel. It might, in some general sense, tend to encourage the survival of the human species, as was probably the case in this particular incident. However, it often works in ways we wouldn't choose, and depressingly often it can't be overcome by any incentive. We do our best to reprogram ourselves and others to make us, in some sense, better people, but we're constantly fighting against a tendency that doesn't care whether we're good or not, one that can evoke powerful responses on a level that we can't control.
Of course, I'm being incredibly hypocritical here. I've drawn a huge false dichotomy between the human mind and this animal-level "human nature". Our constructed concepts are built on these instincts, and the associations and reactions they provide are the levers by which we can be taught. We wouldn't be able to have a sense of right and wrong if it didn't grow out of our basic responses to perceptions. If we cut out the animal brain, we wouldn't be super-human and super-moral; we'd just die. A little irrational discomfort, a susceptibility to fear, an inability to care about people we've never met and can't put a face to as much as those we've spent all our lives with; these are part of the price we have to pay to be able to have a mind at all. We can't wipe out the roots of our sentience; rather, by understanding how we work, we can come up with new ways to make ourselves better.
Well, that was a large reaction to a relatively minor event. Maybe it doesn't signify anything so grand; maybe it was coincidence, or suppressed neurosis, or something I ate. Still, though, I think there's an important lesson here. Don't take yourself too much for granted -- take the time to think about why you feel the way you feel. Who knows? You might learn something.
* By the way, please don't be put off by this characterization -- both Lovecraft and the Sandman series are great stuff, and highly recommended reading to anyone who doesn't mind a bit of scariness. They won't really give you bad dreams. Probably.
Sunday, May 3, 2009
Worlds in my head, take 2
It might be obvious by now, but it often occurs to me that my favorite entertainments, chiefly science fiction and fantasy, derive much of their charm from their depiction of alternate realities. Whether it's through books, games, movies, or webcomics, I enjoy imagining (and, if what I've said so far is true, to some extent living in) other worlds. This could easily be described as escapist, and prompts (but does not beg) the question: what's wrong with this world? My first instinct is to get defensive, but that's never a particularly convincing approach. Actually, thinking about it, a better response would be: what do you mean by "this world"?
One of the great triumphs of human society, perhaps even its fundamental purpose, is to convince us that we all perceive the same basic reality. It's obvious why this is generally desirable: we can't work together or communicate if we don't believe that what we see somehow correlates to what others see. Whether this is true or not, as I've said before, is out of my scope for the moment. What's important here is that reality as it's conventionally thought of is just cyberspace on a larger scale, a consensual hallucination including nearly the entire human species. Each of us has different perceptions, but despite this we nearly all believe that we're perceiving the same things. Then, to me, what we call "the real world" is the region in which my mental space overlaps with what (I believe) most other people think of as real. That is, rather than thinking of the "virtual worlds" I've talked about as alternatives to reality, it makes more sense to think of reality as just another one of these worlds. We each have our own constructed reality, and insofar as we divide it from fantasy that's a constructed division; if we were taught from birth that everything we imagine is real, there'd be no difference to us between reality and fantasy.
Again, it's obvious why this distinction is desirable; a common perspective seems (from the common perspective) to be useful if not necessary to a productive life. Nonetheless, even from within the bounds of our constructed reality we can feel the desire for other interpretations, and can even find it useful, working the idea, if not the reality, of alternate worlds into our own.
So, why fight the status quo, even though it leaves space for the worlds I so cherish? To a certain extent, it's a game; I enjoy thinking about these things, for many of the same reasons that I enjoy science fiction and fantasy. There is, however, a part of me that believes that some serious reenvisioning of reality is going to be required sooner rather than later. The internet has already brought virtual worlds to the forefront to an unprecedented extent, and if developments in any of various technologies continue, including telepresence, AI, and the awkwardly named VR, we'll soon enough have "real-world" issues that can only be discussed coherently by recognizing the proper place of "unreal" worlds. The current debates over intellectual property might well be among these problems, and of course speculative fiction began to delve into these issues long ago. At any rate, if the future isn't here now, it may well be soon, and if and when it comes this sort of mental exercise will no longer be purely academic. Or so I believe. You have to take anything I say with a grain of salt; after all, I read a lot of science fiction.
Wednesday, April 29, 2009
The worlds inside my head
Maybe it's the weather, but I'm not really in any mood to write coherently. Nonetheless, there's something I want to say, so I'll just go with the flow and hope you can forgive things like rambling, poor organization, and general incoherence. Let's start off with a little question: what's real? There are various different answers, of course. I don't know whether there's such a thing as objective reality -- in fact, it might well be impossible to know whether there is. I take it as an axiom that everything I know (or think I know) is based on my perceptions -- I don't believe in a priori knowledge. So, not "I think, therefore I am", but rather "I can hear myself think, therefore I am".
Then let's forget about objective reality for the moment. Subjectively speaking, what is real? I've already begged the question -- I'm assuming that my subjective reality is determined by my perceptions. Now, of course, I have a problem, because I perceive things all the time that are, intuitively at least, not real. I might think I heard someone call my name, or misread a word, or dream. What then? The first answer that comes to mind is that, for the moment, even those illusions are real. It's not until I realize I was wrong, until I wake up, that I'm able to perceive the difference between what I thought was true and what was actually true, and until I perceive that difference, it doesn't exist. If I start hallucinating and never stop, that hallucination is my new reality.
The upshot of this is that I believe it's reasonable to say that a completely convincing imaginary world is no more or less real than what we think of as actual reality. For instance, suppose that telepresence* technology advances to the point where I can't tell the difference between meeting someone in person and meeting them through teleconferencing. Then it doesn't seem all that crazy to say that I've met someone "in real life", even if I haven't actually physically been in the same room with them. In mathematics, this is called extensionality -- two functions are extensionally equal if, whenever they are given the same input, they give the same output. I'm sure there's a nice philosophical term for it too, but I don't know what it is.
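To give a concrete (and deliberately silly) sketch of what I mean, in Python: the two functions below are written differently and do different work internally, but nothing you can feed them will ever tell them apart, so extensionally they are the same function.

    def double_by_adding(x: int) -> int:
        return x + x  # one implementation

    def double_by_scaling(x: int) -> int:
        return 2 * x  # a different implementation with the same input/output behavior

    # Extensionally equal: every observable test agrees.
    assert all(double_by_adding(n) == double_by_scaling(n) for n in range(-1000, 1000))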
If you guessed that all of this was just an excuse to fantasize about future technology, you'd be more than half right. From this perspective, an AI capable of passing the Turing Test is basically human, and sufficiently realistic augmented or virtual reality is just as good as actual reality. Bringing it back to the present day, consensual hallucinations like the Internet actually exist, not just as side effects of networks and displays, but as parallel worlds generated by the belief of their users. Less convincing illusions like the worlds inside of novels and movies are slightly less real, little toy worlds in our heads that we can start, stop, rewind, and reshape to some extent. The more the world seems to have an existence of its own, independent of the will of the perceiver, the stronger the illusion -- and thus the reality -- of reality. There's no magical line past which a virtual world suddenly pops into existence; rather, it was there all the time, slowly growing more real as it became more convincing. No matter its substrate -- atoms, words on a page, bits in a computer's memory -- if it seems real to me, then it is real to me.
I've run out of steam for the moment, but I greatly enjoy thinking about this topic, so I'm sure you'll hear about it again if you stay tuned. Again, my apologies for the incoherence.
*Telepresence, as sci-fi as it sounds, is just a fancy word for real-time communication methods such as telephone, video conferencing, instant messaging, etc., that allow people to give the impression of "being present from a distance".
Thursday, April 16, 2009
On the nature of nature
A lot of my ideas on what I call dualism (it's possible that nobody else calls it that) come out of an ecology class I once took, in which I learned, among other things, that 1) anything and everything can be thought of in terms of ecology, and 2) the word "unecological" can be used as a scathing insult. I didn't buy wholeheartedly into the professor's philosophy, and I still don't now, but one thing I took away from the class was that people tend to understand things by splitting them into two groups, and then taking sides. This is unecological. The distinctions between Western and Eastern cultures, between animals and machines, between Us and Them are never quite as sharp as we'd like them to be. For some reason, this inspired me. So, I made dualism my enemy (which, now that I think of it, is rather dualistic of me). I decided that it would be fun to promote greater understanding by, whenever I had the chance, demonstrating that two things people tend to think of as separate are actually inseparable from each other - not only can you not have one without the other, but in fact there's no consistent way of drawing a line between them. A great example of this is the idea of nature.
As a technological crusader (in my own mind, at least), one of the things that most bothers me is the view of the world that says that technology and the environment are diametrically opposed. The basis of this attitude is simple enough: technology is by definition used to change our environment to our liking; the environment would rather stay as it is. But what is this "environment" thing, anyway? Most people would agree that plants and animals are part of it. We might also include features of the land (lakes, mountains, and such), and maybe more abstract concepts like habitats or ecosystems. But what about humans? We're animals too, right? Why aren't we part of the environment? And if we are, what about the things we make? If a bird's nest is natural, then why isn't a skyscraper? We can talk about details like materials, but everything we use comes from somewhere, and if we go back far enough we'll find nature. At what point did humans become a class of our own, separate from the world around us?
My answer, of course, is that there is no meaningful difference. A forest, a park, an office building; there's nothing more or less natural about any of these. Our drive to shape our environment isn't artificial, any more than a bird's nesting instinct is. The human capacity for thought, reasoning, and innovation is a natural occurrence; everything that follows from it is a natural consequence. This doesn't mean that there's nothing worth preserving in the parts of the world we haven't yet changed to suit us; it just means that casting us as the villains and the woodland creatures as the heroes (or the other way around, which is something I've done on occasion) isn't a coherent line of argument. This kind of dualism is very easy to create, and it might help us make sense of the world, but it doesn't make sense in itself.
A plea for clemency: these are very rough ideas. There may be holes, there may be avenues of argument I've missed. Part of my goal in putting these up here is to help turn my vague ideas into a coherent philosophy. Please comment, but please be gentle.
(As a side note, half of the aforementioned class was spent in debates between two students who considered themselves to be on the side of nature and the environment, and myself and one other student who considered ourselves to be on the side of technology, human progress, etc. I thought this was kind of ironic.)
Monday, April 13, 2009
A skirmish with an old enemy
Today, for whatever reason, I've been thinking about the divide between the rational and the emotional. We're raised from birth with this idea of the two halves of our mind, the left brain and the right brain, the analytical and the creative, and so on. This has several implications, the most serious being that the whole range of human mental activities can be classified as one or the other, or perhaps placed on a sliding scale between the two. Painting, singing, writing fiction, "self-expression" sit solidly on one side; solving puzzles, conducting experiments, programming, learning facts belong largely to the other. From this it follows that talent in one is linked to lack of talent in the other; overly analytical people can't make or appreciate art, overly creative people can't do math, etc., and if they can, then clearly these are two separate talents, a case of unusual gift in not one but two unrelated areas.
At this point, of course, I point dramatically and shout "objection!" I've done my best not to make a straw man of this argument, but it still looks like it's full of holes to me. In what sense are these two categories different? At the bottom level, thought is just brain chemistry; at the top level, it's impossible to disentangle "rational thought" from "emotion". As a computer scientist, I can assure you that plenty of emotion, and yes, even creativity goes into solving problems classified as technical; as an amateur sociologist, I can suggest that the appreciation of arts such as music, literature, and video games is inextricably linked to analyzing the material in terms of one's social context. Fields like music and architecture are sometimes brought up as rare cases where the two tendencies intersect; I think this is the tip of a broader recognition that the two are intermingled in *every* area of thought.
I think I'll leave it at that for now; I have a tendency to rant, especially on this topic. Rest assured, dualism, we *will* meet again!