So, I’ve finally gotten around to reading John Taurek’s famous 1977 article “Should the Numbers Count?”, in which he argues that the answer is no: if you have the choice between saving one person and saving fifty, it would be reasonable to just flip a coin to decide who to save. In particular, even if all else is equal, there is no obligation to save the greater number.
Taurek is quite a mystery of a man. This is where a writer would usually say some vague details about his life to emphasize what we don’t know, but I don’t even have the vague details. He didn’t publish any other articles, and from some googling the most I can gather is that he is more likely than the average philosopher to have gotten his PhD from UCLA. It seems like he just wrote this paper and left no other trace of his existence. My running hypothesis is that Taurek does not exist and never existed; rather, the paper was immaculately conceived by God and submitted to Philosophy and Public Affairs in order to help us in our struggle against the consequentialists. Some may object that the paper, while interesting, would have been of far higher quality if written by God—but this is just the usual problem of evil, and the usual responses apply here. Nevertheless, for simplicity, I will stipulate “Taurek” to refer to the author of “Should the Numbers Count?”, whoever that may be.
The paper starts off kinda weird. Taurek imagines a case where you have a supply of a drug: David needs all of it to survive, while each of five other people needs only a fifth of it. He basically argues that even the slightest ground for preferring to save David (say, he’s just some guy I “know and like”) would render it permissible to save David. It would be weird if an obligation to save the five could be overturned by such ludicrously weak grounds for preference. So, according to Taurek, both options are permissible even in the case where all else is equal.
Sure, the numbers-counter could explain the permissibility of saving David in some cases. If David is my wife, then it seems like I’ve undertaken some special obligation (or at least prerogative) to weight his interests more heavily than those of strangers. But in the case where David is just some guy I know and like, that explanation doesn’t seem plausible, according to Taurek. I don’t really think Taurek hits the mark here. If David is my actual friend, a word Taurek sometimes uses, then the “special obligation” explanation seems plausible. If he is literally just some guy I happen to know, e.g. a guy I see at the bus stop and occasionally make small talk with, then saying it’s okay to save him seems just as implausible as saying it’s okay in the case where all else is equal. What, I’m preferring to save this guy just because he uses the same bus stop as me, which any of the other five just as easily could have done? I don’t think there are any cases where genuinely weak grounds for preference really seem to render it permissible to save David, insofar as one thinks it’s wrong to save him in the case where all else is equal.
The paper gets stronger when Taurek focuses less on comparisons between cases and instead gives the reader a feel for how weird it is to think goodness aggregates across individuals. He writes:
I cannot but think of the situation in this way. For each of these six persons it is no doubt a terrible thing to die. Each faces the loss of something among the things he values most. His loss means something to me only, or chiefly, because of what it means to him. It is the loss to the individual that matters to me, not the loss of the individual. But should any one of these five lose his life, his loss is no greater a loss to him because, as it happens, four others (or forty-nine others) lose theirs as well. And neither he nor anyone else loses anything of greater value to him than does David, should David lose his life. Five individuals each losing his life does not add up to anyone's experiencing a loss five times greater than the loss suffered by any one of the five.1
I think this line of thinking is very plausible, and it’s what stands behind the intuition (other than scope insensitivity) that e.g. it would be better to save one man from fifty years of brutal torture than it would be to save 3^^^3 people from the minor discomfort of getting a dust speck in their eyes (note: 3^^^3 is a very big number, way bigger than even 10^100). You could imagine the 3^^^3 people getting together and agreeing to brave a dust speck each in order to save the man from torture. Sure, one of these individuals may think, 3^^^3 is a lot of people; but I’m only deciding whether I get a dust speck, and whether anyone else joins this agreement is their prerogative, and I need not care on their behalf about whether to join. So all 3^^^3 people think, and so the man is spared the torture. The 3^^^3 people do not constitute some super-organism who experiences something 3^^^3 times worse than a dust speck in the eye. From each individual’s perspective, they are merely paying a trivial cost to prevent a terrible fate for one man.
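(An aside, in case the caret-stacking is unfamiliar: “3^^^3” is ASCII shorthand for Knuth’s up-arrow notation, so a rough sketch of how the definition unwinds is

$$
3\uparrow\uparrow\uparrow 3 \;=\; 3\uparrow\uparrow(3\uparrow\uparrow 3) \;=\; 3\uparrow\uparrow\big(3^{3^{3}}\big) \;=\; 3\uparrow\uparrow 7{,}625{,}597{,}484{,}987,
$$

i.e., a power tower of threes roughly 7.6 trillion levels tall, already far beyond, say, the number of atoms in the observable universe.)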
This is basically the same as the criticism of utilitarianism given by Rawls, namely, that the inference from a number of smaller goods for individual people to the existence of a larger, aggregate good ignores the “separateness of persons.” Rawls writes:
It is this [impartial] spectator [and not the individuals] who is conceived as carrying out the required organization of the desires of all persons into one coherent system of desire; it is by this construction that many persons are fused into one. … The nature of the decision made by the ideal legislator is not, therefore, materially different from that of an entrepreneur deciding how to maximize his profit by producing this or that commodity… . In each case there is a single person whose system of desires determines the best allocation of limited means. … This view of social cooperation is the consequence of extending to society the principle of choice for one man, and then, to make this extension work, conflating all persons into one through the imaginative acts of the impartial sympathetic spectator. Utilitarianism does not take seriously the distinction between persons.2
Similarly, in “The Relational Nature of the Good,” Korsgaard writes:
I believe that there are good and bad states of affairs because there exist in the world beings for whom things can be good or bad in a specific way. The beings in question are the ones who are sentient, or conscious – roughly speaking, the animals. The thesis is unsurprising because it is a thesis that is also held by some philosophers who defend a very different philosophical outlook from my own – namely, the hedonistic utilitarians. In fact I think that the reason why hedonism is so perennially tempting is that the idea that the good is pleasure captures, or anyway wants to capture, the relational nature of the good. But hedonistic utilitarians promptly lose this advantage by making pleasure intrinsically rather than relationally good after all, in order to make the aggregation of goods across the boundaries between persons (or animals) possible.3
I think, as far as goodness of states of affairs goes, all this thinking from Taurek, Rawls, and Korsgaard is basically correct (indeed, I would be ostracized by my department, and perhaps even face worse punishment, were I to disagree). I think goodness of states of affairs is just an indirect and approximate way of talking about the ways we’re obligated to affect the world. We can think of lots of reasons why we’re obligated to create states of affairs where more rather than fewer people are doing well, but the utilitarian is committed to a specific explanation for this obligation: that N people suffering some evil aggregates into a state of affairs whose badness is N times that evil, and that this badness of the state of affairs explains one’s obligations. To my knowledge, your typical utilitarian doesn’t really address the force behind the skepticism of this reasoning.4
Yet, you read the title of this post: despite being anti-aggregation, I do think the numbers count. To see why, let us imagine again Taurek’s case, where you can save David or five other people, and all else is equal. Let us ask: what policy would the six people want you to adopt before finding out who the one person is who requires the whole dose of the drug?5 Certainly, they would all agree, the preferable policy would be to save the five, as that gives each a 5/6 chance of living as opposed to a 1/6 chance (or a mere 1/2 chance if you flip a coin). But, then, why not act on this policy even after you find out that David is the one who requires the full dose? Does the mere fact that they didn’t get to empirically agree on a policy beforehand make a morally relevant difference? I don’t think so. People should act on whatever policy is rational to adopt, even if they didn’t get a chance to adopt it before finding themselves in the relevant case. Maybe empirical consent would matter if the drug were David’s property, but in the case being discussed, it’s literally just a question of who to save, both options (according to Taurek) being otherwise permissible.6
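Spelled out (a small back-of-the-envelope calculation, assuming each of the six is equally likely to turn out to be the one who needs the full dose):

$$
\begin{aligned}
P(\text{I survive} \mid \text{save the five}) &= \tfrac{1}{6}\cdot 0 + \tfrac{5}{6}\cdot 1 = \tfrac{5}{6},\\[2pt]
P(\text{I survive} \mid \text{flip a coin}) &= \tfrac{1}{6}\cdot \tfrac{1}{2} + \tfrac{5}{6}\cdot \tfrac{1}{2} = \tfrac{1}{2},\\[2pt]
P(\text{I survive} \mid \text{always save the one}) &= \tfrac{1}{6}\cdot 1 + \tfrac{5}{6}\cdot 0 = \tfrac{1}{6}.
\end{aligned}
$$

So long as each of the six would rather live than die, the save-the-greater-number policy is the one each would pick from behind this veil.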
More generally, all else equal, we should care about doing good for a greater number. Even from the perspective of the individual agents, it would be preferable for others to act on the policy of helping more rather than fewer, since this gives each individual a greater probability of being helped.8 I don’t think this is an ad hoc way of recovering intuitions about numbers; my position has always been that we should act on laws that we would will to be universal laws, and saving the five over David (when all else is equal) is the clearly preferable universal law.
And what if David is my wife? Well, the question is whether we’d all most prefer to adopt a policy that lets each of us save his own wife, even at the cost of a slightly greater risk of her dying (in the case where she is one of the five, and the one is some other guy’s wife). I don’t need to get into details for the purposes of this post, so suffice it to say, the policy that lets you save your wife seems pretty reasonable, and so saving David is permissible in the case that he is my wife.
And what if David is the pleasant small-talker at the bus stop? Maybe I’m wrong about the second-order effects of a policy that allows saving him, but it seems like a bad policy. All sorts of goods result from people being able to give special care to their spouses, whereas I do not react in horror at the possibility of finding myself in David’s shoes and being “betrayed” by my beloved small-talker, who saves the five instead of me. It seems like the numbers, here, are the only relevant consideration with regard to which policy to adopt.
I will leave the case of the 3^^^3 dust specks as an exercise for the reader.
Still, the reason you should help more instead of fewer is not because there’s a bunch of “intrinsic value” in the world that globs together to form an even bigger intrinsic value that you’re supposed to care about even more. Good is still good-for, and goodness of states of affairs is just an approximate way of talking about our obligations. This is why I don’t feel like the emotional appeals utilitarians on here use against non-consequentialism are very forceful. “You would save the five over David, but you wouldn’t steal the drug from him to save the five? You care about some magical metaphysical relation between a person and their ‘property’ more than people’s lives?” No, the case gives no evidence as to how much I value people’s lives; an innocent individual dies in either case. Nobody is having a five-times-worse outcome inflicted upon them by my choice not to steal. I do care about property rights more than I care about imaginary aggregations of goodness, though.
So, in summary: while I do not think goodness aggregates across individuals, we still have a general obligation to help more rather than fewer, for derivative reasons. Because there are many cases where only the numbers matter (e.g. in deciding which charity to donate to7), it makes sense to talk about goodness as if it aggregates, as long as we do not carry that figurative way of speaking into cases where things other than the chance of being helped matter to the affected individuals.
p. 307
A Theory of Justice, A27/B24.
p. 22
It is, certainly, possible to have a utilitarian view that does not rely on aggregation. John Harsanyi gave a decision-theoretic argument that any social welfare function meeting plausible constraints will just be average utility.
In the interest of not being a plagiarist, I should say that this particular framing is salient to me due to a conversation with Daniel Muñoz, Katie Creel, and one other person who I am forgetting, though I think I’ve had this view on aggregation for a while. Also, I regret that I write this not having read Muñoz’ work on the subject, which I am sure is very good.
Taurek sort of anticipates this consideration, under somewhat different circumstances (pp. 312-313). He seems to think that the empirical adoption of a policy matters, yet for some reason he does not apply this to the case of David and the five. And he mainly considers the rationality of adopting a policy after one already knows one is in the minority.
For the case of charity, I should note also that ineffective charities generally do not merely help a lesser number of people; they also give less help to each individual.
This is basically how Harsanyi argues for utilitarianism. (Veil of ignorance pre-Rawls, but the agent making the decision gets to think probabilistically, and is trying to maximize utility for himself without knowing who he'll be.)
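Schematically (my gloss, not a quotation of Harsanyi): if you have an equal 1/n chance of turning out to be each of the n people, then maximizing your own expected utility just is maximizing average utility:

$$
\mathbb{E}[u] \;=\; \sum_{i=1}^{n} \tfrac{1}{n}\, u_i \;=\; \tfrac{1}{n}\sum_{i=1}^{n} u_i .
$$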
ok but why is your wife called David