7 Comments
Daniel Greco

This is basically how Harsanyi argues for utilitarianism. (Veil of ignorance pre-Rawls, but the agent making the decision gets to think probabilistically, and is trying to maximize utility for himself without knowing who he'll be.)

Joel da Silva

That was my reaction, too.

It should be noted, though, that Harsanyi later admitted his argument doesn't work as he intended. Originally, he argued roughly this:

In a scenario with no knowledge of probabilities, Bayesian probability theory says it's rational to make an equiprobability assumption, which makes it rational to use the maximize-expected-utility decision rule, which in turn makes it rational to choose average utilitarianism.

However, he later realized that Bayesian probability theory only says it's rational to make *an* equiprobability assumption, not the specific equiprobability assumption required to make the argument work - i.e., the assumption that you have an equal chance of being any particular person. (Alternatives include the assumption that you have an equal chance of being in a group comprising 10% of the population as in the remaining 90%, an equal chance of being in the 40% as in the 60%, etc.) Without that specific equiprobability assumption, you don't get average utilitarianism.
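A quick illustration of the point (this is not Harsanyi's own formalism; the two societies and their utility numbers are made up purely for the example): under the "equal chance of being anybody" prior, expected utility just *is* average utility, but under a different, equally "equiprobable" prior over *groups*, the ranking of outcomes can change.

```python
# Illustrative only: two hypothetical societies, each with a worse-off
# minority and a better-off majority. Utility numbers are invented.
society_a = [1] * 10 + [10] * 90   # 10% at utility 1, 90% at utility 10
society_b = [5] * 40 + [6] * 60    # 40% at utility 5, 60% at utility 6

def expected_utility(population, group_weights=None):
    """Expected utility for an agent behind the veil of ignorance.

    With no group_weights: equal chance of being *anybody* (Harsanyi's
    original assumption), which reduces to the population average.
    With group_weights (list of (members, probability) pairs): a given
    chance of landing in each group, then a uniform chance within it.
    """
    if group_weights is None:
        return sum(population) / len(population)
    return sum(w * (sum(members) / len(members))
               for (members, w) in group_weights)

# Equal chance of being anybody: expected utility = average utility.
print(expected_utility(society_a))  # 9.1
print(expected_utility(society_b))  # 5.6 -> A beats B (average utilitarianism)

# Alternative equiprobability assumption: a 50/50 chance of being in the
# worse-off vs. better-off group, regardless of the groups' sizes.
a_groups = [([1] * 10, 0.5), ([10] * 90, 0.5)]
b_groups = [([5] * 40, 0.5), ([6] * 60, 0.5)]
print(expected_utility(society_a, a_groups))  # 5.5
print(expected_utility(society_b, b_groups))  # 5.5 -> A no longer beats B
```

Both priors are "equiprobability assumptions" in some sense, but only the first delivers average utilitarianism.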

Basically, he was forced to admit that, like Rawls, he couldn’t get to his preferred conclusion via decision-theoretic premises alone.

metachirality

I think in the particular scenario outlined here, there's no reason to posit unequal probabilities.

Corsaren

Where did he argue this? I kinda feel like this is actually a strength rather than a weakness of the theory. Maybe I'm interpreting this wrong, but it sounds like this would suggest that two individuals might have legitimate differences in ethical positions because they have differently weighted views of who is in their circle and/or different levels of risk tolerance. That actually does sound like the sort of disagreement that ought to warrant different ethical conclusions.

Rafael Ruiz

ok but why is your wife called David

Moralla W. Within

That’s the name Taurek used for the example

Richard Y Chappell

> "goodness of states of affairs is just an approximate way of talking about our obligations."

I'd be curious to hear your take on my post 'Deontology and Preferability': https://www.goodthoughts.blog/p/deontology-and-preferability

A key passage:

"It would be absurd to deny that to there can be truths about preferability. We clearly should prefer that innocent people not suffer, all else equal. Compare a possible future in which a child gets struck by lightning with an alternative in which they don’t. Nobody has acted wrongly in either case. But we clearly ought to hope and prefer that the child not be struck by lightning (all else equal). This is a datum of common sense, and any theorist who denies it is in the grip of a theory."

I might add: we clearly should be *more upset* -- because more moral preferences have been thwarted -- if a million people die in a natural disaster than if just one does. If we aren't yet sure which of the two possibilities is actual, we should *hope* that only one person died. And so on. This is not merely to say that we would be obliged to prioritize saving the million if we could (though that is, of course, *also* true). It is more fundamentally to say something about what *attitudes* of care and concern (desire, preference, hope, etc.) are morally fitting.
