(urth) 5HC : Skinner, Turing and happiness, and even more Thread Necromancy
marudubshinki at gmail.com
Sun Mar 20 08:49:15 PST 2005
> I don't even remember the original message being quoted here.
Yeah... Sorry about that. Sometimes it takes me a while to work up the
will to reply.
>> My own brand of utilitarianism would say that following the
>> categorical imperative will increase happiness at large, and
>> eventually result in large gains for myself in addition. 'Do well by
>> doing good' in a long run sense. That, and it is rationally
>> desirable for one's beliefs to at the very least be consistent.
> There is a fundamental tension you are missing between *any* kind of
> view which acknowledges a "categorical imperative" (a system based on
> a particular *duty*) and *any* system like utilitarianism which relies
> on *results* as a justification.
I haven't crossed that line: notice I justified the imperative as a
heuristic, a rule which more often than not will work better than random
choice; it is justified as something which will lead to good. Obviously,
if I have overriding data, I'll do something which seems more likely to
help. I haven't adopted the 'duty' part of the imperative. I don't like
duty systems: they are not amenable to rational thought, and they can
lead to results diametrically opposed to my own ("Recruits! It is your
solemn patriotic duty to exterminate all life on this god-forsaken
globe! Why? Your duty is to follow orders, not to ask why! Fall out!" etc.)
> With utilitarianism, the ends ultimately justify the means. So if, on
> any given specific occasion, it will produce a better result to
> disobey the "categorical imperative", then there is no question - you
> disobey the imperative. In such a situation the "rule" is purely
> superfluous, because you obey it or disobey it as the occasion demands.
> There is an attempt to reconcile these two points of view, called
> "rule utilitarianism", which you could probably google to good effect,
> and I suspect you would view it favorably. The problem is that rule
> utilitarianism walks a rather tenuous line, and it tends to collapse
> completely into either pure act utilitarianism or else pure duty-based
> ethics if you really start running it through its paces in thought
> experiments.
I know of rule utilitarianism, and while I am deeply sympathetic to its
aim of formulating a utilitarianism suitable for fallible or even
malicious humans, I do not think it has found the right mix of rules,
methods and heuristics. But its goal of developing a utilitarianism that
only ratchets upward, never downward, is nevertheless, I feel, possible.
> This really is a debate of its own that is far too long to present
> here. Yes, there is something to your intuition. But when you think it
> out, the notion of "selfish" that would be required to absorb *all*
> altruistic action ends up being trivial to the point where it
> disappears as a useful category. To really bring this point home I
> would recommend an article called "Psychological Egoism" by Joel
> Feinberg. Unfortunately I can't find an electronic version of it, so
> you'd have to find a tree/ink-killing version.
That would be in /"Reason and Responsibility: Readings in Some Basic
Problems of Philosophy"/
> "Now we would have to say that all actions are selfish; but, in
> addition, we would want to say that there are two different kinds of
> selfish actions, those which regard the interests of others and those
> which disregard the interests of others... After a while, we might even
> invent two new words, perhaps 'selfitic' and 'unselfitic,' to
> distinguish the two important classes of 'selfish' actions. Then we
> would be right back where we started..." -- Joel Feinberg
Not all actions are selfish: people make mistakes sometimes.
Man, there has got to be a catchier, easier-to-spell name for utilitarianism.