(urth) Utilitarians, Severian, and Consequences

Chris rasputin_ at hotmail.com
Sat Apr 9 15:25:09 PDT 2005


>Never: but again is this really unique to utilitarianism?  Can't you be 
>uncertain as to whether you have really fulfilled your duty in a duty-based 
>system, or whether you fell into a fiendish trap whose appearances tricked 
>you?

The error I think you make in the above is that you are thinking of duty in 
a consequentialist framework - that duty is somehow objectively fulfilled by 
a set of consequences that result from the action. But deontological ethical 
systems precisely do *not* work this way. (You're not alone in this by the 
way; Nagel makes almost exactly the same mistake in his critique of Kant in 
"Moral Luck").

In a duty-based system, you are following a rule based on the input you 
have; you can have doubts as to the verity of that input, but ultimately you 
do know when you've followed the rule as well as you are able. You know, for 
example, as you speak that you are trying to follow the rules of language, 
and you can "know" (as well as you "know" anything) that you've followed 
them correctly. Sure, you can introduce radical skeptical doubt, evil 
demons, and the like. But that kind of skepticism is just that: radical. If 
you take it too seriously, then you end up conceding that we're not really 
communicating with each other at all because such communication is 
impossible.

>Sounds like a cop-out: Because you can't be perfect and 100%, just do what 
>everybody else does.
>As the saying goes, 'No one was ever fired for buying IBM.'

It pretty much does sound like a cop-out, just throwing your hands up in the 
air. But you have to understand that Moore is not taking it lightly. It's 
not just a matter of your knowledge being imperfect - it's that you have *no 
basis at all* from which to judge the probabilities of the ultimate results 
(extended off indefinitely into the future) of your actions.

The pragmatic way out, from your standpoint (and this hits on something 
you've talked about in a couple other posts, so I skipped responding to 
those and just made the point here) is to use something of a "discount 
function". A present good is always better than an equal good in the future, 
because contingent circumstances beyond your control might come up to 
prevent you from attaining that future good. Beyond a certain point in time 
the probabilities become so uncertain that you then discount any 
consequences beyond that point entirely because you're unable to weigh them. 
[I am not a utilitarian, but if I were, this is how I'd try to deal with the 
problem.]
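
To make that a bit more concrete, here is a minimal sketch of the kind of discounting I mean - the rate, the horizon, and the utility numbers are all just illustrative assumptions on my part, not anything from Moore or from the books:

def discounted_utility(utilities, rate=0.9, horizon=10):
    """Sum a stream of per-period utilities, discounting each step
    geometrically and ignoring everything past the horizon entirely."""
    total = 0.0
    for t, u in enumerate(utilities):
        if t >= horizon:
            break  # beyond this point the probabilities are unweighable
        total += (rate ** t) * u
    return total

# A present good beats an equal good in the future:
print(discounted_utility([10, 0, 0]))                     # 10.0
print(discounted_utility([0, 0, 10]))                     # 8.1 (same good, two steps later)

# Anything past the horizon simply doesn't count:
print(discounted_utility([0] * 15 + [1000], horizon=10))  # 0.0

The cutoff at the horizon is the important part for the argument above: it's not that distant consequences are worth a little less, it's that past a certain point you have no basis for weighing them at all.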

Most of our actions have pretty short horizons in this regard. We live in a 
very, very contingent world. Severian's case (to bring this back around to the books) is interestingly different because he's been told, to some extent,
what the results of his actions are going to be. So to judge him in 
consequentialist terms, where do we draw the horizon? Well, you could make 
an argument that it would be as far forward as the Green Man's time. I don't 
think this is practical, though, because while the Green Man's time is a 
very good period, it may have been preceded by some pretty awful times, and 
going forward from the Green Man's time may be a pretty terrible future. 
[You could argue that the entirety of the future history of man is cyclical 
and predetermined, but while this would give Severian a broader horizon of 
knowledge, it would also end up making Severian's *choice* predetermined 
too, and so not really a choice at all. This means you don't end up judging Severian by consequences; you end up exempting him from judgment entirely as
a mere tool, as described in my original post on this.]

I think that, in the frozen-Urth future, the horizon extends basically to 
Ash's time. The outcome there is that a lot of people live diminished (but 
certainly not entirely unhappy) lives for a long time [a substantial 
positive value], followed by the evacuation of *most of* the population to other worlds [also a positive], while a minority eventually perishes on Urth [a small negative].

On the Ushas side, you start with an extremely large negative, a lot of 
deaths. It's unclear where to draw the horizon here, but considering the 
small number of survivors (who will generate only the tiniest amount of 
utility in a given generation) we can assume that it would take a very long 
time to repopulate Ushas back to the point where it was producing enough 
positive outcome to outweigh the negative balance you start with. Even if 
you eventually break even, the frozen-Urth branch has been racking up 
utility the entire time; and if a substantial number of people were 
evacuated, their sheer population alone would make it very difficult for the 
Ushas branch to catch up. Ultimately I think that it would take far too long 
for the Ushas future to catch up, even assuming that it could catch up; 
drawing the horizon so far out gets into territory where it's very unlikely 
that Severian can judge probabilities.
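
Just to illustrate the shape of that catch-up problem, here's a back-of-the-envelope sketch with entirely made-up numbers - nothing below is an estimate of actual populations or utilities in the books:

def generations_to_catch_up(urth_per_gen=100.0,    # big population, modest but steady utility
                            ushas_start=-10000.0,  # the deluge: a huge initial negative
                            ushas_per_gen=1.0,     # a handful of survivors to start from
                            ushas_growth=1.2,      # repopulation compounds each generation
                            max_gens=200):
    """Return the first generation at which the Ushas running total
    overtakes the frozen-Urth running total, or None if it never does."""
    urth_total, ushas_total = 0.0, ushas_start
    ushas_gen = ushas_per_gen
    for g in range(1, max_gens + 1):
        urth_total += urth_per_gen
        ushas_total += ushas_gen
        ushas_gen *= ushas_growth
        if ushas_total > urth_total:
            return g
    return None

print(generations_to_catch_up())   # dozens of generations even with these generous numbers

Even granting Ushas exponential recovery, the crossover (if it comes at all) lies dozens of generations out - exactly the kind of horizon at which, per the discounting point above, Severian has no real basis for judging probabilities.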

And so I think that even a strict consequentialist (which, granted, I am 
not) would not find reason for approval without some *other*, additional 
factor.

>Well, I've argued elsewhere that a major factor is the loss of Urth as a 
>habitat, which all things equal (and it appears they are; no one says that 
>Ice Urthians will be brought to a new, just as good planet, which isn't 
>already colonized.) means a drastic reduction in future numbers of humans.

It doesn't mean this at all. It doesn't seem likely that the limiting factor 
for human populations in Urth's future will be the sheer number of planets 
available as habitats. Terraforming seems far from impossible (see: the 
green Moon). So if one more world is needed, it would seem better to terraform one without people already on it.

>If your 'trying' does not do as well as 'actually producing', there is 
>something more than a little screwed up about your efforts,  And how can 
>you 'actually produce' more utility without previously having decided to do 
>what you 'thought' would produce the most utility? Accident?

Sure. Even the slightest bit of bad luck could do it. Or, alternatively, if you chose a suboptimal path and got a little lucky, you could easily come out better.
Action *outcomes* involve a considerable luck factor. Any system that relies 
on consequences to judge actions has to either find a way to disregard the 
luck factor, or else embrace the idea that it is better to be lucky than 
well-intentioned - in fact, it's hard to find any room for crediting 
intentions at all, if they don't end up producing the right results.

Even J.S. Mill acknowledged that this was a problem. His answer was to draw a sort of distinction between "motive" and "intention"; so he was able to say
that a Utilitarian ignores "motives" in moral judgment, but not 
"intentions". Unfortunately his distinction ends up being a little too 
slippery to provide an adequate answer.

>>This isn't an objection that's really open to the utilitarian, because 
>>while it's true that if you're wrong you've just caused a great deal of 
>>harm, on the other hand if you abstain from acting and you're wrong about 
>>that, you have just condemned an even larger number of people who need 
>>transplants to death.
>>
>>There is no inherent virtue in refusing to act for a utilitarian; the 
>>results of inaction are just as substantial as the results of action.
>
>Again, not unique. 'Sins of omission' &etc

I am not the one who tried to use "But what if you're wrong?" as an 
objection; you basically implied that you wouldn't hack up the homeless 
people, despite this being what your ethical system seems to require, on the 
grounds that you might be wrong and, if so, it would be better not to act. The point
here is just to show that for a committed utilitarian this is not an 
allowable excuse.

The deontological ethicist won't hack up the homeless for organs because she 
follows rules which respect the rights of individuals. She does not need to 
appeal to uncertainty about the outcome.

>I've held that in practice we use triage- like governments prioritizing 
>aid, seeking the maximum utility for buck.  So it is little surprise if in 
>our everyday life  it changes the actual actions only a little, and is more 
>a paradigm shift.  It is like replacing Newton's physics with Einsteins'- 
>the edge cases vary considerably, but in more mundane realms, they accord 
>pretty closely.

Well, it's an interesting idea, and one that has been argued: that collective actions (the actions of organizations, governments, etc.) would be best ruled
by a utilitarian standard, while individual actions continue to be governed 
by rules. Utilitarian ethics for governments, deontological ethics for 
people. I don't really have a position on that. I gather that you wouldn't 
accept it, though, because it would imply that you personally shouldn't try 
to be a utilitarian.




