How to be right about ethics when it matters
Surprisingly, the answer is not "be good at philosophy"
Here’s your problem: you’re a fallible human, but you really want to be a good person, or at least, if that’s not attainable, you don’t want to be doing things that are catastrophically evil. And this is going to be hard, because history gives us plenty of examples of people, people who said they wanted to be good, who did catastrophically evil things without a second thought. The moral status of what they did wasn’t even on their radar. I’m no anthropologist, but I assume that if you went back a few thousand years in certain societies and told people it was horrifically bad to marry twelve-year-olds, you’d have a hard time being taken seriously. Same with slavery, stoning homosexuals, and so on. Okay, so we have our goal: don’t be those people. We might have a hard time getting nuanced moral issues right, but for now our goal is to just not be catastrophically evil.
Our problem now is to come up with a strategy we can apply so as not to be horribly morally mistaken. We want to be risk-averse: in large part we want to minimize the probability that we are horrendously mistaken, and maybe once we achieve that we can start accepting that we might make some smaller moral mistakes in our rigorous pursuit of truth. But this is a hard problem. We need to come up with a strategy such that, if we had ended up in the position of a would-be slaveowner in ancient Greece, we would choose not to own slaves. If your strategy does not have that output, you fail—you will, today, do things just as bad as owning slaves, if such things exist. We can’t settle for such failure.
First, here is a very bad strategy. The very bad strategy, one adopted by most of my colleagues, is: “Examine the arguments on both sides of the issue, then believe one side or the other or suspend judgment depending on the arguments. If the most plausible arguments say what you’re doing is okay, then feel free to keep doing it.” It should be clear why this strategy is bad. Philosophy is famous for never getting consensus; no matter the issue, just as long as neither view is downright crazy (from the perspective of the philosophers living at the time), you’ll find philosophers relatively divided and providing strong arguments for either side. So conditional on finding yourself in a society where something catastrophically immoral is considered normal by many, the “Go with your evaluation of the strongest argument” strategy yields a pretty high chance of owning slaves, marrying 12-year-olds, etc. People had sophisticated arguments for slavery; they were pretty wrong, but probably not more wrong than e.g. epistemic externalists, if we just go by the intellectual plausibility of the arguments. “Do philosophy, and refrain from actions when the best arguments say you should” is a strategy with an extremely high rate of catastrophic error.
Now, here is a good strategy, probably the best one: just don’t do things that cause massive harm to others for relatively trivial benefits to oneself. Suffering and dying are bad; so don’t kill or inflict huge amounts of suffering on others when you can avoid it. Being a slave is terrible: your entire life and autonomy are taken from you, all so that you can do some work for someone else. So don’t enslave people. Being a 12-year-old married off to some man with complete control over you would be awful, all so that…the guy gets to marry someone? So don’t marry 12-year-olds. If someone gives you arguments in favor of these things, you should think “Wow, people have come up with some sophisticated arguments on both sides. This seems to be an area of high uncertainty, and if I just go by the arguments there’s too much of a chance I’ll do something really evil. So I’ll just do the obvious thing and not inflict massive harm on others just to make my life more comfortable.” Don’t get me wrong, I don’t think avoiding suffering and death is the be-all and end-all of ethics; but for now our task is just to avoid catastrophic failure, and we can figure out the hard stuff later.
If the above strategy already sounds appealing to you (and I imagine it won’t to most), then you can stop reading right now; but I ask that you accept the obvious conclusion of that strategy, namely, that you should go vegan. “Factory farming” inflicts incomprehensible amounts of harm on others. Even in the best case, someone who wants to live gets their throat slit so they can be a meal. It’s not that hard to have different meals, compared to what’s at stake. “Oh, but this is different, they’re animals”—stop, you’re not applying the Best Strategy anymore. Everyone thought there was a reason why their case was different. “But the animal wouldn’t have existed in the first place if they weren’t raised for food”—a beautiful argument, about as beautiful as the ones for owning slaves and marrying twelve-year-olds. If you apply the Best Strategy, you won’t eat animals or own slaves or marry twelve-year-olds. If you give the above responses, then there’s a pretty good chance you’d own slaves and marry twelve-year-olds conditional on your society finding that normal. If your goal is to be a conscientious individual who has good arguments in support of what they do, then continue to think through the above arguments and don’t change your behavior until you’ve been convinced. If, in contrast, your goal is to not own slaves and not marry twelve-year-olds, then for the love of God do not make your actions contingent upon your evaluations of the arguments. Just apply the Best Strategy to decide how to act in the cases where it gives a clear verdict. I am not telling you to believe things for simple reasons or just because they seem obvious—beliefs are another matter, and the usual methods of philosophy are probably best for most people. But if you’re thinking about how to act in cases where you’d antecedently expect the usual philosophical method to yield a high chance of concluding you can act in a way that is in fact catastrophically evil, you need to be risk-averse.
But if you were antecedently disposed to following the Best Strategy, you’d already be vegan, and so presumably my average non-vegan reader already has reasons to not apply it in this case. Maybe it just seems too weird to change your life to act in a way that you do not assent to in your philosophical thinking. This brings us to the Second-Best Strategy, which has a higher rate of catastrophic error than the Best Strategy, but still a much lower error rate than the default, and it has the advantage of greater alignment between your intellect and will.
The strategy is to learn to notice the difference between actual arguments and stupid rationalizations. No matter how smart you are, you’re going to make stupid rationalizations for things in some cases. Even I still do this not infrequently. And your experience probably confirms that, when a person is rationalizing, just explaining to them why their arguments aren’t plausible usually doesn’t snap them into clarity; they just explain away what you say with more rationalizations. Someone who rationalizes really does believe what seems true to them; just looking at more arguments will not usually get them out of the cycle of rationalization, since the same things that make them rationalize away evidence against their view also make them rationalize away evidence for why their rationalizations don’t work. Our duty as people who want to be right is not to follow whatever locally seems most plausible, since we are systematically biased in such an endeavor. Rather, we need to form ourselves so that what seems most plausible will, in the future, be more likely to be correct.
So: train yourself to notice when an argument you give is a stupid rationalization. “Slavery has economic benefits, and some people are natural slaves”—is this something that would occur to someone who doesn’t already believe slavery is okay? Can you imagine a man in a slavery-free society saying “Hey, I have an idea: it seems like some people are natural slaves, and we can enslave them for economic benefits” and not looking insane? No: the arguments given are ones people use to justify their behavior, not the kinds of arguments people give when they are looking for new truths to believe.
“The animals wouldn’t exist if we didn’t raise them to become food.” Such a thought would never occur to someone for whom eating animals is not a foregone conclusion. There is no worldview from which “It’s okay to bring someone into existence so you can slit their throat and eat them, just as long as they wouldn’t have existed otherwise” seems like a thought someone would think in order to discover new truths, except a worldview that is antecedently committed to slitting the throats of those who wouldn’t otherwise exist. No one would say such things in the case of humans: “We should breed some people so that we can harvest their organs later, so long as their lives are still worth living and they wouldn’t exist otherwise.”
“But,” you say, “humans and animals are different. I have arguments for why we should be deontologists regarding humans and consequentialists regarding nonhuman animals.” Stop. You’re going to own slaves and marry twelve-year-olds. I promise that your arguments for why it’s not okay to harvest the organs of those who wouldn’t otherwise exist aren’t as strong as you think they are. Do you think, if you lived in a consequentialist world, your arguments would seem remotely plausible? That if tons of lives were saved by the organ-harvesting, and those harvested still had lives worth living and wouldn’t exist otherwise, you’d be able to make your reasoning sound serious to this world’s philosophers? The way philosophical thinking actually works for most people is that you find certain claims antecedently plausible and work backwards to theories whose plausibility derives largely from the conclusions they justify. “Because of the social contract, because of the nature of autonomy” are not considerations you would use against organ-harvesting if you did not already believe the conclusion. Now, it’s fine to an extent to build a theory out of the data you want to account for, but insofar as that’s what you’re doing, you can no longer claim the theory provides evidence that the data are what they are. So, for the average person who holds “deontology for humans, consequentialism for animals,” whatever arguments you have for the former that don’t apply to the latter can hardly explain why you’d be against human organ farming in a world where that’s normal. You should be seriously skeptical that the reasoning you provide is actually why you have the conclusions that you do, rather than just a rationalization of a foregone conclusion.
It’s my serious contention that you’d have to be stupid to really buy any of the arguments for eating animals, that people who make these arguments are not stupid, and that they would easily realize they’re only offering rationalizations if they actually prioritized avoiding catastrophic error over merely being a person who has arguments for what they do. This Second-Best Strategy still has a high error rate, since it is a difficult matter to figure out when we’re rationalizing. But please: no matter how you go about thinking about these matters, you need to keep in mind what your strategy would output in a society that owns slaves and marries twelve-year-olds. I promise you will find no strategy that forbids you from being a slaveowner but lets you slit others’ throats if they’re dumb enough and different enough and you want to eat them. Whatever argument you provide, really think: how sure am I that this line of reasoning is correct? Would I really not find an equally plausible argument in favor of slavery, assuming I wanted to be a slaveowner? The answers are “not comfortingly sure” and “no.” Were you raised in a vegan society, you would not be among the first to discover that it’s actually okay to slit others’ throats if they wouldn’t otherwise exist (for some values of “others”). If someone made that argument, you’d probably reply with arguments against them as strong as your actual ones in the case of doing the same with humans. So, your reasoning is not very sensitive to the actual truth of the matter, and you should be pretty worried that the foregone conclusion you’re justifying just happens to be the one that involves inflicting massive amounts of suffering and death on others, just so that you can eat them.


I admire the intent here. But the proposed method of sidelining philosophical argument in favor of moral risk-aversion presupposes what it ought to examine. Treating any dissent as mere rationalization is a move that immunizes itself from challenge and frames disagreement as moral blindness by default.
The bigger concern is this: had we adopted the same epistemic posture two thousand years ago, deferring to strong moral intuitions and treating conceptual analysis with suspicion, we’d still be defending child marriage, stoning, and slavery, all under the banner of minimizing harm as it was then conceived. The moral breakthroughs invoked as cautionary tales weren’t achieved by avoiding argument. They were achieved by allowing it; by doing the hard work of prying open the categories of personhood, dignity, and rights using reason.
Aristotle defended slavery not because he fell victim to too much reason, but because he failed to reason universally. His error was cultural capture, reinforced by deference to prevailing assumptions. It was reason (finally given freedom) that dismantled those assumptions. A slow, fallible, and essential process.
If we guard against future catastrophe by pathologizing disagreement and collapsing complex distinctions into emotionally salient analogies, we risk creating the very thing we claim to oppose: a moral consensus that feels secure only because it can no longer be questioned.
Do you really think adopting the strategy you're suggesting would preclude owning slaves and marrying 12-year-olds? You allude to Aristotle's ideas about slavery, but the way I understood it, plenty of ancient Greek slave owners would have thought they were complying with the strategy. Same goes for marrying 12-year-olds. In a world where 12-year-old marriage is normal, it doesn't strike people as massively harmful to the child brides.