Friday, January 31, 2014

Consequentialism and doing what is very likely wrong

Consider a version of consequentialism on which the right thing to do is the one that has the best consequences. Now suppose you're captured by an eccentric evil dictator who always tells the truth. She informs you there are ten innocent prisoners and there is a game you can play.

  • If you refuse to play, the prisoners will all be released.
  • If you play, the number of hairs on your head will be quickly counted by a machine, and if that number is divisible by 50, all the prisoners will be tortured to death. If that number is not divisible by 50, they will be released and one of them will be given a tasty and nutritious muffin as well, which muffin will otherwise go to waste.
Now it is very probable that the number of hairs on your head is not divisible by 50. And if it's not divisible by 50, then by the above consequentialism, you should play the game: saving ten lives and providing one of the ten with a muffin is a better consequence than merely saving ten lives. So if you subscribe to the above consequentialism, you will think that playing is very likely right and refusing to play is very likely wrong. But you clearly shouldn't play: the risk is too high (and you can put that in expected utility terms: a 1/50 probability of ten people being tortured to death is much worse than a 49/50 probability of an extra muffin for somebody). So it seems that you should do what is very likely wrong.
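To put numbers on that parenthetical, here is a minimal worked calculation with purely illustrative utility values (my assumption, not anything fixed by the case): each torture death counts as −1000 units, the muffin as +1, and release by itself as 0.

```latex
% Illustrative utilities (assumed for the sake of the example):
% each torture death = -1000, the muffin = +1, release alone = 0.
\[
E[U(\text{play})] = \frac{1}{50}\,(10 \times -1000) + \frac{49}{50}\,(1)
                  = -200 + 0.98 = -199.02
\]
\[
E[U(\text{refuse})] = 0
\]
% Refusing maximizes expected utility, even though playing is very
% probably (a 49/50 chance) the act with the better actual consequences.
```

Any remotely sane assignment of utilities will give the same verdict: the gap between the torture outcome and the muffin outcome is so large that no 49/50 chance of a muffin can compensate for a 1/50 chance of the torture.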

So the consequentialist had better not say that the right thing to do is the one that has the best consequences. She would do better to say that the right thing to do is the one that has the best expected consequences. But I think that is a significant concession to make. The claim that you should act so as to produce the best consequences has a very pleasing simplicity to it. In its simplicity, it is a lovely philosophical theory (even though it leads to morally abhorrent conclusions). But once we say that you should maximize expected utility, we lose that elegant simplicity, and we are left wondering why we should maximize expected utility rather than do something more risk-averse.

But even putting risk to one side, we should wonder why expected utility matters so much, morally speaking. The best story about why expected utility matters has to do with long-run consequences and the law of large numbers. But that story, first, tells us nothing about intrinsically one-shot situations. And, second, that justification of expected utility maximization is essentially a rule-utilitarian style of argument: it is the policy, not the particular act, that is being evaluated. Thus, anyone impressed by this line of thought should be a rule rather than an act consequentialist. And rule consequentialism has really serious theoretical problems.

1 comment:

ockraz said...

It's an interesting thought experiment. My intuitive response, if my task were to defend act utilitarianism, would be to argue that a 'large numbers' approach to calculating the utility value of a situation where I only know the probabilities of different outcomes doesn't mean that I'm adopting a 'rule based' approach. Rather, it's merely one possible way to approach how probability factors into the calculation of utility. To the extent that it's using a rule, it's not using a rule about how one should act, but a rule in the same sense as the formula for calculating the surface area of a sphere.