this. It's a blog post from a friend of my sister's, which she insisted I read, because at some point, vaguely in the past, I may have called the author "goofy looking." The essential problem is one where, by a chain reaction of sorts, any action one takes has the potential for infinite "good" utility, or infinite "bad" utility. You can read the post. I linked it for a goddamn reason, you lazy bastards.
Anyway, he says it's a problem, and I'm not quite sure why. It has a few minor suppositional errors. First, it confuses something which is infinite in duration with something which is infinite in magnitude. If I have a ring of dominoes, for example, which is just large enough that I can set them back up really quickly after they fall, such that it is an infinitely continuous loop, that doesn't mean that all the dominoes everywhere are going to fall. Bad analogy, but it's illustrative. Like, the fact that almost any action you take is limited pretty much solely to the inhabitants of a very small area, in general, with a finite lifespan, sort of precludes the infinities he's talking about. So this idea that there's even a "potential" for infinite consequence is questionable off the bat.
Even without the limiting factors of geography (I don't know, spaceships or whatever), there's the fact that actions taken tend to ground out exponentially - that is, even though the ripple effect will allow the effects of an action to continue forever, the magnitude of impact very quickly diminishes, and even if it never actually reaches zero, it gets very, very close, very quickly. It's much like taking the infinite sum of 1/x^9 - while technically you are incrementing the sum forever, you are never going to get beyond a certain point. And this is even assuming that an action will not ground out in actuality - the effects of actions tend to exert influence only above some threshold. Like, the fact that in 1842 Sir Ernest Pinklebottom happened to spank his child may have forced the child to grow up to be cruel, and invent a literary tradition which produced the miscarriage of words we call "White Oleander" (and yes, I will continue taking potshots at the book AND movie of this until the end of time. Infinite utility THAT.), which stole 2 hours of my goddamn life for the movie and much more for the book; but regardless, you can see how quickly the consequences of that will die out, even if they temporarily had a relatively large impact. That is, on the people at the post office I will end up shooting because of the frustration it caused me.
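Since I brought up that sum: here's a quick sketch (in Python, my own toy, not anything from the post) showing how fast 1/x^9 grounds out - the first couple of terms are basically the whole thing.

```python
# Partial sums of 1/x^9: the series converges, and almost all of the
# total shows up in the first term or two (the full sum is about 1.00201).

def partial_sum(n):
    """Sum of 1/x^9 for x = 1 through n."""
    return sum(1 / x**9 for x in range(1, n + 1))

for n in (1, 2, 5, 1000):
    print(n, partial_sum(n))
# the n=2 partial sum is already within about 0.00006 of the total
```

Which is the point: you can "increment forever" and still never get anywhere past a hard ceiling.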
Anyway, that aside, the point of the post was to say that, given the possibility of infinite (dis)utility, the intended consequences of our actions seem to drop out (if you're a utilitarian, which is to say, stupid). This is where the above comes into play. First, it relies on the idea that there is any possibility at all for infinite utility, which I've already noted is sort of silly. Like, the greatest disutility you could POSSIBLY exert is to destroy the world. That in and of itself is a limiting case. Moreover, and I never thought I'd hear myself make this argument, there are a given number of particles in the universe, and hence a given number of potential intelligences, therefore there is also a limiting (albeit somewhat large) case on the utility side of things. Since the whole argument relies on the fact that there is a potential for infinite consequence, it's pretty damning even prima facie that there are such abominably low limits for good or bad.
Even so, it's easy to argue that the expected utility of an action becomes negligible compared to such large amounts of good/bad. That's where the second thing comes into play. Because the period during which any action exerts, in general, its highest amount of benefit or cost to people is immediately following its performance - that is, while the consequences are highly foreseeable. There are a few anomalies with respect to this rule, but those can be dealt with (a blog post is not the place to really go into every classification of utilitarian anomaly out there - things like Riemannian geometry which, unintentionally, helped give us the nuclear bomb, or the fast Fourier transform, which gave us Counter-strike). So the equation actually starts to look something like:
U + Sum[B/x^a, {x, 1, inf}] - Sum[C/y^a, {y, 1, inf}]
Where U ends up, in actuality, being equal to or greater than, in most instances, the first few terms of either sum (the remainder of an inverse power series tends to be close enough to nil to ignore), so even given unintended consequences, the produced utility of almost all actions is exactly what you'd expect. For example, even with big things: say I decide to murder the dictator of Argentina. Weigh the expected loss in security, order, etc., to the people of the country against the benefits of freedom. Then take that, and weigh it against the marginal difference in production technology 10 years later which ends up giving jobs to people in Detroit who were previously out of work, because now they have to ship equipment. The unintended consequences are almost invariably much more minor than the short-term, expected ones.
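To put made-up numbers on that (U, B, and the decay exponent here are all invented purely for illustration, not taken from anyone's post): even a generously fat tail of unintended consequences barely moves the total.

```python
# Toy numbers for the equation above: intended utility U versus the
# decaying tail of unintended benefits B/x^a. All values are made up.
U = 10.0   # expected, foreseeable utility of the action
B = 3.0    # scale of the unintended ripple effects
a = 3      # how fast the ripples decay

ripple = [B / x**a for x in range(1, 10001)]
print(sum(ripple))        # the entire tail of unintended consequences
print(sum(ripple[:3]))    # the first few terms carry almost all of it
# the tail past the first three terms contributes about 0.12 - noise next to U
```

So the sum of every ripple out to forever still comes in under the intended utility, and practically all of what it does contribute arrives in the first, foreseeable, few terms.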
There's obviously much more to be said on the subject, but I'm still drunk and it's still 3 AM, so I'll leave it for another time. This is my instinctual outline of the flaws in such a theory, though.
cranked out at 2:51 AM