Wednesday, February 22, 2006

Evolutionary Context



Another riff from another weblog’s post (since I have no original thoughts right now):

not to get too deep into philosophy, a subject i feel is unsuited for academia

but one of my friends insists that the motivation behind actions is the exclusive determiner of morality. he provided this example - if a baby is drowning in the water, and you save him because you want a reward, then you are immoral.

but it seems obvious to me that the alternative of not saving the baby (this thought experiment is predicated on the fact that you can save the baby, but not without exerting effort) is in fact the immoral action in this situation. and the morally neutral action is to try to save the baby and fail.

therefore if you save him because you want a reward, then you are being less moral than if you had tried and failed, and equivalent to letting the baby drown in the first place. i totally don't understand this. from the perspective of the only person that is affected by your decision, namely, the baby, his preferred option is for you to save him! regardless of what your motivation is, not saving him produces a net decrease in overall 'happiness' for everyone. which, after all, is what morality should be predicated on. the greatest amount of overall good in the system. the ends justify the means (in an odd sort of way)

okay, so maybe saving him for a bad motive is morally neutral, not immoral. whatever it is, it seems key that being motivated for selfish reasons is worse than being motivated for altruistic reasons (IE saving the baby for his sake is more moral, in whatever way, than saving the baby for your own sake). but say you're a hardcore religious person whose entire morals are predicated upon the existence of an afterlife reward/punishment system. in that sense no action they take can possibly be considered moral, for all of their own altruism and kindness is entirely predicated on a desire to be rewarded in the afterlife. like, they may not explicitly SAY that they're doing it to get into heaven, but because their sense of altruistic morality was originally completely built on getting into heaven, on the deepest level they are morally neutral.

in that sense, if you believe that true moral actions must stem from entirely altruistic motivations, then no one who is at all religious can ever commit a truly moral act.

(via here)

There are a few things I want to talk about here, because this is particularly emblematic of why I hate philosophy and why I refuse to take the last six credits for my philosophy BA.

Most classical moral philosophers try to divide morality into intent vs. action. That is, does the moral character of an action derive from the confluence of the actor’s intended outcome and what is amorphously called ‘meaning,’ or does it derive from the actual consequences of the action? Any vaguely sophisticated view quickly devolves into an admission that morality is not exclusively the realm of either, but that one can be more important than the other. Even Mill concedes that absolute utility isn’t a practicable standard, even if it’s the measure of an action. And Kant does some intense equivocation gymnastics to get around problems of consequence.

Let’s take the above example and break it into its component parts. On the one hand, it can be conceded that it’s harmful if the baby drowns, and that the baby is not responsible for its own situation. This isolates variables like responsibility for one’s own predicament (ostensibly the baby was not ice fishing or something) and immediate harms (saving Stalin from drowning might be inherently bad).

Having established those parameters and the need to act, the question becomes: what is the motivation to act? A person is a biological entity. They will never act randomly, and something has to get them from the side of the lake into the water. In the above example, the two motivating factors are “money” (ie: hope of reward) and “altruism” (ie: doing it for its own sake).

Let’s deal with money. There needs to be a distinction made here: if one is saving the baby with the hope of monetary reward (or sexual reward if it’s a MILF), that’s morally different from saving the baby with the intention of holding the baby for ransom. I’m going to assume for the sake of argument that the point being made is that one might expect that, should the baby get saved, the mother would be grateful and provide a reward – not that an element of coercion would be used to extort the mother out of a reward. That complicates the moral framework in the same way “save the baby so you can later use it for human sacrifice” might, and is, I believe, irrelevant to the actual question the situation is meant to illustrate.

But what about altruism? What about someone who, because they’re a ‘good person,’ goes and gets a baby from the water? This is why philosophy is stupid. The actual reason someone would do this is that billions of years of evolutionary hardwiring have created a herding instinct in people, where they’ll save anything that resembles them (ie: that they identify as being human or possessing human characteristics) because it’s beneficial to the survival of their gene pool in a tribal environment. Their brain will create an impetus to act and release adrenaline, and if they succeed, they will be inundated with endorphins, providing a light euphoria. If they fail and the baby dies, their system will be flooded with a sense of mourning and grief (also a chemical state), which acts as a punishment.

Philosophy operates on the assumption that biology is irrelevant, except insofar as people can feel pain/die/whatever. Yet it sees the mind as an autonomous source of reality. The mind is affected by the body but is, in essence, separate from it. The obvious problem is that this is a lie. So a philosopher who has to come to grips with the above scenario will manufacture reasons why the individual might do it. Rawls would say that, behind a veil of ignorance, not knowing if you’re the baby or the individual able to save the baby, you would prefer both survive, and therefore we can stipulate an obligation to act in the way that a rational, objective actor would prefer. Kant would talk about imperatives, Singer would talk about the relative utility to the baby vs. the disutility to you, and so on and so forth. But what is important is this: first, all of these would be explanations, not a priori predictions; and second, all of these would be more complicated than the original understanding of the circumstances.

It’s important to note that, in that situation, where you can save the baby or not, assuming there’s no danger to you – you would do it without thinking. Anyone would. If it were rushing rapids or there were Viet Cong snipers or something, you might freeze as your two instincts (survival and herding) conflict, but if it were just a baby facedown in a lake, you’d definitely save it if it were clearly imperiled. This is morally relevant insofar as, for any of the philosophical rationalizations to be meaningful, they would have to affect the outcome of actions. Your instinct and intuition are the measure of the validity of a theory, and when a strong intuition conflicts with a philosophical theory, intuition always wins.

That’s why when philosophers argue for the morality or immorality of a certain action, they never use logical deductions per se. What they’ll do is take a situation where you have a clear intuition and try to manipulate your perception of the argued action into an analogue of the intuitively clear situation.

Singer’s distance paradigm is a perfect example of this – most people would say that, were a child on their street drowning, they would expend effort to save the child. However, despite children starving in Africa, we tend not to pony up for OXFAM or UNICEF, despite it taking about as much effort. Singer tries to argue that distance should not be meaningful, that we should give up just as much to save the African child as our neighbor’s, and that we have just as much obligation. His claim is that pain is the morally relevant consideration. Yet our intuitions obviously do not support this. We feel sympathy for the afflicted children when we see them on CNN or Sally Struthers’ infomercials, typically because television (or radio, or print) creates an impression of proximity that doesn’t exist. Yet it never, for most people, really invokes a sense of urgency. If you see a child on the street drowning, your instinct kicks in and BAM, you’re trying to save her. With the child on the screen, that sense of urgency is never invoked, so it’s just a general sympathetic response. And hence no action.

If you look at it in terms of biology, it makes perfect sense. The kid in Africa isn’t part of your ‘tribe’ or ‘herd’. So why the hell would you give up valuable calories and resources trying to save him, unless you have some ulterior motive (feeling good about yourself [endorphins])? Technology will mess with your evolutionary responses, but it won’t entirely get around them. Perception and false perception can invoke reactions that wouldn’t have made sense five hundred years ago, because you’re presently in a different evolutionary context. But trying to make up stories about veils of ignorance because there’s a dissonance between responses developed in other contexts and their consequences now is just silly, and seems to be the entire point of academic philosophy.

So if you believe this, then look at the baby example again. Either (a) you are saving the baby with hope of your own reward or (b) you’re saving the baby because you’re programmed to do so in your social self-interest. Regardless of why you do it, it’s essentially for your own benefit. The reason people find it distasteful when someone says, “I’m in it for the money,” is the tacit admission that they wouldn’t have done it otherwise. This violates a rational person’s sensibilities, because it goes against the instincts that have been developed to keep our species going. So, there’s a difference between “I am doing this with an expectation of reward, but I would have done it even if no reward were forthcoming” and “I will do this if there is a reward, but if no reward is coming to me, so much for THAT bright idea.” Our instincts conflict with the second, but not the first (and in fact, most people wouldn't even characterize the first as 'doing it for reward').

Most people tend to be naïve. They think they can overcome billions of years of genetics just because we build houses out of bricks now. They make up elaborate myths to explain themselves as the center of the universe. The irony is, every time a philosopher creates another journal-checked theory of ethics, they’re filling the same basic need that some Pagan priest did thousands of years ago in weaving a tale of descent from the Gods and man’s divine heritage. The verbal concatenations increase in complexity, but avoid any real introspection. Why do we act the way we act? Is it some ethereal force beyond comprehension who will cast us to Hell in the afterlife, or is it to be found in the annals of academia, within whose storied halls the greater intellectual edifice of ethics and morals has been constructed?

People need things to be rational, and in that, we are victims of our own circumstance. Our own minds have created a world that our instincts no longer comport with, so we need ever more elaborate lies to act as intermediaries between our senses and emotions and our conscious, rational brain. What does it mean when you are confronted with a picture and sounds that seem to be there, but aren’t? Our rational brain has to play referee – tell us that the movie on the screen isn’t real. Yet a love story can still move us, a horror story can still scare us, and when the killer jumps out, chainsaw at the ready, to eviscerate another innocent sorority girl – we still jump.

The sense of dissonance people feel in an increasingly technological world isn’t surprising. It’s the same thing that happens if you put a gerbil in a room with a strobe light and loud music. It’s the same thing that happens when any living thing is confronted with a situation it cannot fully grasp. Parts of it shut down. Parts of it simply cannot deal.

Look at the older generations and their crusade against “indecency.” On a purely rational level, we all know that no actual harm comes from swearing on TV. Words, even invective, only have the force they’re given. Yet people’s moral sensibilities would still rather put them in a world they understand than one that works. A human being will always resist changes that make the world sufficiently different from the one they are comfortable in. Morally, culturally, whatever.

Every culture has myths. Everyone has lies they tell themselves. Morality is just one of them. Save the baby, but don’t do it because it’s “altruistic.” Do it because you have to. Because you’re human.

cranked out at 9:25 AM

 