33 Comments
Daniel Greco:

I'm very sympathetic to your claim that the intuitionist isn't really saying anything the Humean can't, but I suppose as a big fan of Hume, I would be.

I know you didn't attempt to give the grounding of the moral law without taking anything for granted in this post, but I still want to ask how that could be possible. Not in detail, but just how any kind of argument, in ethics or anywhere else, could proceed without taking anything for granted. Even in math you have foundational disputes about logic. Intuitionists and constructivists have lost those disputes as a matter of sociological fact (all to the good I say) but it's not as if their positions were proven false on the basis of zero assumptions.

And math would be the best hope for such arguments. Descartes tried to prove the existence of the external world on the basis of zero assumptions, and almost immediately after setting such high standards for himself he flagrantly violates them; I for one am perfectly capable of doubting that ideas can have no more objective reality than their causes have formal reality.

I want to say something like the following: any time you take one step in an argument to follow from previous ones you're taking for granted something that is in principle contestable. So I want to hear more about what this kind of assumption free argumentation could look like.

Flo Bacus:

I think Descartes is the perfect example, albeit not his argument for the existence of God, but rather the cogito (noting, however, that it doesn't prove the existence of an individual thinking substance, but rather a thinner sense of the I). The naive way of reading the argument would make it an empirical one--I perceive myself thinking, from which it follows I exist. But I think even relying on introspection would be doing too much work. Descartes' reasoning (if I recall correctly) is that if I even consider the possibility of being mistaken in judging I exist, that still implies I exist. So, once I even enter the position of deliberating about whether I exist, I already have the proof I need.

Proving I have a rational will (from which the authority of the Moral Law supposedly follows) is a *lot* harder than the cogito, but I think it's essentially the same in structure. In deliberating about whether to act as a free agent does, I already commit myself to regarding myself as a free agent (could the commitment be to something false? I don't know if that's coherent in this case, but in either case you still commit yourself to the authority of the Moral Law). None of that should be convincing, but what I can argue for within the limitations of Substack is that Descartes' cogito provides a possibility proof.

Now, I'm no magician, and if what I'm saying I'm doing seems like magic, my work in addressing that will be less "Yes, actually, I can do the magic" and more "Here's why what I'm talking about counts as proving something with zero premises."

You say: "any time you take one step in an argument to follow from previous ones you're taking for granted something that is in principle contestable." By "step" you can mean two things. If you mean "inference," then I reply that my position is that an inference can proceed from any number of premises, including zero. If by "step" you rather mean a part of my thoughts that occurs in time--in which case, any complicated reasoning will indeed have to start with a first step--then I reply that the temporal order of thoughts does not necessarily reflect the structure of justification of the conclusion. In the case of the cogito, "If I'm mistaken, I exist" is not a premise among others from which I derive "I exist" (for "either I am mistaken or am correct that I exist" would clearly presuppose I exist). Rather, it is an explanation for the higher-order fact that any agent who simply concludes they exist counts as having rationally derived it from zero premises.
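
A stock illustration from elementary logic, just to make the "zero premises" idea concrete (the notation is standard natural deduction, nothing special to my view): one proves the theorem $p \rightarrow p$ by assuming $p$ and then discharging that assumption,

\[
\dfrac{[p]^{1}}{p \rightarrow p}\ (\rightarrow\!\text{I},\,1)
\qquad\text{so}\qquad
\emptyset \vdash p \rightarrow p .
\]

The derivation takes steps in time, but the set of premises it rests on is empty.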

Daniel Greco:

Here's why I think Descartes should be a scary analogy. I agree there's a plausible case that the cogito satisfies this constraint. But it's when he tries to start using it as a basis for further argument--anything he clearly and distinctly perceives must be true, and that turns out to be a lot--that he immediately starts overstepping. If he really had held the rest of his reasoning to the standard the cogito arguably meets, I think it's hard to deny that he couldn't have gotten anywhere.

I suppose I have to wait for the apparent magic, but that's certainly my reaction when I read contemporary Kantians. I loved The Sources of Normativity, but in various places when I read it I'm told I must be presupposing certain things in deliberating about what to do, and all I can say is that I don't *think* I'm presupposing them, and I can imagine someone who definitely doesn't presuppose them (a schmagent, in Enoch's sense) who's doing something that looks very similar to deliberation. So rather than being given a zero-premise argument that deliberation must reach certain results, I'm given a zero-premise stipulation that an activity only counts as "deliberation" if it reaches those results, and a multi-premise argument that deliberation should be "deliberation."

Flo Bacus:

I agree with your general skepticism about clear & distinct perceptions, and I agree the rest of the Meditations was extremely dubious. But I don't think his having a clear and distinct perception is what makes the cogito argument work; rather, I think it's the fact that even deliberating about the question of whether I exist is sufficient to establish that I exist, and so an agent who simply concludes "I exist" is performing an infallible inference. I think *this* feature is what is shared by our knowledge of our rationality.

If you don't like waiting, I can send you my second-year paper which is largely about this matter, though absolutely feel free not to take me up on that.

I agree with the critique of Korsgaard; I agree with her conclusions, though I don't think she (or Kant) did all the necessary work to establish them. IMO there are two Big Steps that require further explanation: first, showing the necessity of "reflective distance," i.e. of not taking any practical claims for granted; and second, showing that once you abstract from everything, you actually get a law you're committed to that actually rules some actions out (let alone the putatively immoral ones). On the second point, Kant himself usually skirted the issue, saying "well there's form and matter, and the moral law can't be matter, so it's form, and that's universalizability."

Daniel Muñoz:

One of the most substantive pieces of philosophy I’ve seen on Substack in a good while! Enjoyed this, even though I’m an intuitionist (and reject reflective equilibrium).

A couple of comments:

1) despite Hume’s “slave of the passions” line in T 2.3.3, I’ve come to agree with Setiya and Sayre-McCord that Hume actually *does* believe in a role for reason in morality. He just doesn’t call it “reason.” (And while he does have a role for sentiment, so does Kant—Achtung. So that doesn’t set them apart.)

2) it seems to me that a lot of your argument rests on the idea that, if X matters, it must matter for all rational agents as such. That seems debatable. Some very subtle mathematical evidence might give Ramanujan a reason to conclude that P even though it doesn’t give me any reason to conclude anything, since I can’t appreciate it. Similarly, in your Bob/Alice example, it might be that utility matters, though not for Bob, who fails to appreciate it. In Dancy’s terms, normative reasons might have enabling conditions, without that undermining their status as reasons.

Anyway, got a bunch more thoughts but I’ll leave it at that. Thanks for the very thought-provoking post!

Flo Bacus:

Thanks for the thoughts Daniel!

1. I guess I need to read that stuff on Hume. In my own reading of the Treatise I didn't really find anything that contradicted his initial statements about practical reason. Sure, reason plays a role in action/morality, but only insofar as our passions are such that theoretical judgments can affect what actions they recommend. It still seems to me that Kant thinks there are rationally required non-instrumental ends and Hume doesn't, but maybe that's what's at issue in Setiya and Sayre-McCord's reading.

2. I do not think rationally mandatory considerations have to empirically matter to all agents, except insofar as being committed to something mattering counts as something mattering to you. And I think the case of Ramanujan exactly parallels what I say about morality. His intuitions are, empirically, good at tracking mathematical truths, which is why they give him reason to believe things; he just has empirical evidence that isn't feasibly accessible to me. Nevertheless, our calling Ramanujan's intuitions "good reasons [for him] to believe mathematical statements" is dependent on the statements in question being provable. Likewise, if it is true that Alice has good intuitions about ethics, I claim that is dependent on it being possible to prove utilitarianism is true without taking intuitions for granted.

Regarding the last sentence of 2, I'd need to read the Dancy, but my initial thought is that (i) the motivational considerations from OP mean that whatever Alice appreciates is no reason for Bob, and (ii) reasons that only apply to you if you antecedently appreciate them are a basis for preference, not obligation or requirement.

Lastly, you're an intuitionist who doesn't like RE? Is that for particularist reasons, or do you have a Grand First Principle like our forefathers?

Daniel Muñoz:

Hah! I wish I had a GFP, but in fact it’s particularist reasons that make me wary of reflective equilibrium. For one thing I’m not convinced there’s an equilibrium (though maybe we’ll find it someday). And for another I tend to treat intuition as lexically prior to principles (except insofar as principles help me clarify or focus my intuitions).

Thanks for the very generous reply. And again, great post!!

Roko Maria:

An objection that you probably have seen before/ have a response to but is stumping me:

Doesn’t this conceptualization of “ought implies can”, if we accept determinism, imply that all actions taken are ethical? If we couldn’t have done anything other than what we did do, then by definition everything we did do (the only thing we could) is what we ought to have done. Therefore it is self-contradictory to say that someone did something morally wrong.

The boring rebuttal is “I’m not a determinist” but I’m curious if there is a determinist/Kantian rebuttal.

Flo Bacus:

I have this weird talent I can't control where I talk in a way that makes people's natural response something addressed in my dissertation, so I get to say "I'll have a chapter on this!"

Put briefly: first, as a minor point, if it were the case that there was only one thing I "can" do, then there would actually be nothing I can do, i.e. the thing that happens would not be a choice of mine. Anyway, onto your important question. I think even if we accept determinism--indeed, even if every fact about what happens is metaphysically necessary--there may still be multiple options up to us (unless you're paralyzed, constrained, etc.). Suppose determinism is true, so God can predict from the state of the world a million years ago that I'll raise my arm. Now I think about raising my arm, and I choose to do so, as was physically inevitable. I think this is still a voluntary action; I could have not raised my arm, but I chose not to take that option. It just *also* happens that past events necessitate this whole state of affairs of me-voluntarily-foregoing-that-option. This is okay, because humans are physical beings, so whatever physical state of affairs humans-choosing-things-voluntarily amounts to, it is a state of affairs that might be caused.

On my view, the most significant requirement for free will is that there is a causal dependence of the relevant state of affairs on my choice (where "choice" is a thing that happens in my head e.g. when I'm done deliberating). That causal dependence can remain intact, even if the choice is, itself, caused. So necessity/determinism of choice doesn't undermine freedom by itself.

Roko Maria:

But what’s the difference between your evaluation that you have free will and your evaluation that the trolley-problem guy couldn’t have chosen any differently? Your choice to raise your arm and his choice to be a non-puller seem equally predetermined, yet one is subject to moral judgement whereas the other is “the only thing he could have done”.

Mark Young:

Your reasoning is a bit off. If "can" implied "ought," then doing the only thing we can do would entail doing the thing we ought to do. But from "ought implies can" we get only that nothing we do is unethical. The difference is that under this scheme it is possible that the one thing we can do is not a thing we ought to do; it's just also not a thing that we ought not to do. (This is the position of "hard" determinists.)
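
Put semi-formally (just shorthand of my own, with $O$ for "ought" and $\Diamond$ for "can"; this notation isn't from the post):

\[
\text{ought implies can:}\quad O\varphi \rightarrow \Diamond\varphi \quad\text{(equivalently, } \neg\Diamond\varphi \rightarrow \neg O\varphi\text{)}
\]
\[
\text{can implies ought:}\quad \Diamond\varphi \rightarrow O\varphi
\]

Under hard determinism, for any alternative act $b$ that I do not perform we have $\neg\Diamond b$, so the first principle gives $\neg O b$: I was not obligated to do $b$, and omitting it was not wrong. To conclude that the act I did perform was the one I ought to have performed, you would need the second principle, which nobody here is asserting.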

But if you look at the explanation given for "ought implies can" you see that it's not quite that, either.

> Why does ought imply can? Simple: one cannot rationally judge that an agent ought to do something they judge they cannot do.

and, in a footnote:

> I conclude that ought implies can because an agent who judges they ought to do something is committed to judging they can do it.

That is, commitment to the claim that you ought to do something rationally requires commitment to the claim that it is something you can do. If you are committed to the claim that you cannot give $1 million to shrimp welfare, then you are committed to the claim that it's not the case that you ought to give $1 million to shrimp welfare.

Note that this connection between the commitments doesn't require you to be correct about either of them. Thus the counter-claim "I can't" is a valid defence against the claim "you ought to." If ought did not imply can, then that'd be a non-sequitur: "What's whether you /can/ do it got to do with whether you /ought/ to?"

When it comes to "the unavoidable question of what to do," determinism doesn't answer the question for us. Yes, it entails that what we are going to do is entailed by the things that have come before, but it doesn't tell us everything that came before, let alone what that entails that we are about to do. Even in a deterministic universe we cannot avoid the question of what to do. To answer the question we need to look to ourselves and our circumstances and try to figure out what action best meets our desires without violating our moral commitments. If we don't, we are likely to end up miserable or dead.

Of course we might still end up miserable or dead from a perfectly reasonable answer to the question of what to do. That's because, even in a metaphysically determined universe, the future is not determined by our knowledge of the past and present. From the point of view of someone doing the reasoning, there are things that it seems we might be able to do and things that it seems like we cannot do. The ones we judge that we cannot do get discarded, and everything else counts as something that (AFAWCT) we can do. Among those, there may be one that I judge I ought to do. Having judged that I ought to do it, I either do it or judge myself harshly for not doing it. Similarly for you and what I judged that you ought to do.

And THAT is what was entailed by the deterministic universe. (If the universe is deterministic, which I doubt.) In spite of being determined, our decisions do reveal something about our character.

Hard determinists hold that the word "can" refers only to those actions that get done. Reflective equilibrium based on my intuitions says that the word "can" used in a moral context refers to those actions that are not ruled out by our knowledge. Of course, being based on RE, that imposes no obligation on you.

Jesse:

I found this pretty compelling. I’ve been thinking about this from a more mechanistic/architectural angle but see plenty of overlap. It seems that a lot of these disagreements over justification (intuitions vs. foundations vs. constructivism, etc.) might be downstream of what kind of constraint we’re trying to account for.

I think there’s a difference between a principle that needs grounding and a constraint that doesn’t allow you to act otherwise. The latter feels like it bypasses the need for justification entirely. As in, it’s not justified, it’s enacted. It might originate in something Humean (emotion, reinforcement, salience), but once it’s consolidated into your deliberative structure, it acts like a Kantian limit: fast, rigid, identity-defining.

So maybe the question isn’t whether you can justify moral law from nothing, but whether some forms of action-restriction become so deep that asking for a justification no longer makes sense from the agent’s perspective. Not because it’s irrational, but because it’s already doing the work that reason would otherwise be asked to do.

Richard Y Chappell:

This is a great challenge! (Probably for reasons externalists in general, not just intuitionists.)

> “Think again, and form correct beliefs this time” isn’t an option available to him; the problem isn’t with his thinking, but rather circumstances beyond his control.

There's a third option: the problem is with Bob's sentiments: he cares about the wrong things. He doesn't care enough about the five people whose lives he could save by pulling the switch. And he cares too much about standing in certain relational properties like "causing death" (to the one), or whatever it is that his non-consequentialism assigns ground-level significance to in this situation.

The significance of "ought implies can" is that failure to do something infeasible (e.g. curing cancer) doesn't reflect poorly on an agent's quality of will: "of course I'd choose to cure your cancer *if I could*!" But having a bad will (or bad moral priorities) *does* reflect poorly on the agent, even if he doesn't realize that his will and priorities are bad, and there's no chain of reasoning "internally available" from his current perspective that could lead him to appreciate his mistake.

People can become *impervious to rational correction*, after all, but that isn't the same thing as being rationally justified. And if some ends (and hence actions) are better justified than others, that seems a suitably important sense of "ought" for ethics to be concerned with, even if it doesn't have the extra features you've stipulated to be necessary for your sense of "obligation".

Gabriel Gottlieb:

Love this, independently of the fact that you quoted my guy!

I've not read deeply in the literature on reflective equilibrium, but I'm wondering how both you and defenders of reflective equilibrium might respond to Rawls's views about political theory as a project of reconciliation (outlined in "Justice as Fairness: A Restatement"). One might think, as I think you show in the case of Alice and Bob, that achieving reflective equilibrium just is about arriving at moral equilibrium relative to one's culture (broadly conceived) and its values. For instance, when thinking about the use of reflective equilibrium in Rawls's late political philosophy, Rawls is invested in considering matters from within, to put it simply, a liberal democratic society where certain values are, more or less, already well entrenched. The practice of reflective equilibrium then serves the project of reconciliation, that is, bringing us to a sense of self-understanding about what our values and commitments really entail. If you take this point about reconciliation seriously, one might think that the mistake of philosophers employing the methodology is that there is a kind of normative slip (or whatever you want to call it), where rather than appreciating that they are actually engaged in a project of reconciliation, they slip into a view where they think they are grounding moral obligations one ought to follow. To put the point even more in the terms of Hegel, philosophy when conceived as a project of reconciliation does not, as he puts it in "The Philosophy of Right," issue instructions--his project there (at least on a common reading) is not about grounding, for instance, a normative principle of right that will issue moral or political instructions about how one ought to live.

Flo Bacus:

Thanks :) first off though, Fichte is MY guy and you CAN'T HAVE HIM.

Regarding the question: it's been *so* long since I've read any of JAF, and I haven't even read Political Liberalism; it's only ATOJ that I feel I have any competence on (and even that was a while ago). But what you say is kinda like what I took myself to be getting at in the first footnote--RE is justified in Rawls' case because he's talking about accomplishing a specific goal in a specific context, so it makes sense to take stuff from outside for granted. But I *do* agree with the intuitionist that when it comes to ethics as a whole, we want to find a basis for our obligations, indeed our specific ones. (And I strongly disagree with Hegel on this--I think philosophy is for telling us what to think and what to do).

Timothy Johnson:

I don't pretend to follow all the philosophical arguments here, but as someone with a math background, my understanding is that mathematicians do sometimes fall back on reflective equilibrium.

This is, as far as I can tell, the only possible course of action when it comes to which mathematical axioms to accept. For example, the usual quip about the Axiom of Choice is: "The Axiom of Choice is obviously true, the Well-Ordering Principle is obviously false, and who can tell about Zorn’s Lemma?"

The joke, of course, is that within the standard ZF system of axioms, all three of these are logically equivalent.
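
For anyone who hasn't run into them, here are the usual statements, paraphrased a bit loosely (these are the standard textbook formulations, nothing exotic):

\[
\textbf{Choice:}\ \text{every family } (A_i)_{i \in I} \text{ of nonempty sets admits a choice function } f \text{ with } f(i) \in A_i \text{ for each } i \in I.
\]
\[
\textbf{Well-Ordering:}\ \text{every set can be well-ordered.}
\]
\[
\textbf{Zorn:}\ \text{a nonempty partially ordered set in which every chain has an upper bound contains a maximal element.}
\]

Over ZF, each of these implies the other two.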

Most mathematicians do choose to use the Axiom of Choice, while perhaps feeling slightly uneasy about some of its implications. A few choose to develop alternative theories that avoid it.

But, unless we somehow discover an inconsistency that results from using the Axiom of Choice (which I believe is highly unlikely), neither side has any method to prove that the other side is wrong. They simply have to accept that they have different intuitions.

Mark Young:

I think your example fits with the OP point. The mathematicians have conflicting intuitions, but don't hold that any mathematical obligation follows from their respective views. Each of them takes a stance on the axiom of choice and accepts/rejects proofs dependent on it. None of them accuse their opponents of being bad mathematicians based merely on that stance.

Not-Toby:

I've been spending this year getting more into reading all the philosophy I've wanted to since high school. I guess it must be a sign that I have not gotten particularly deep into it that I never ran into this stuff before substack. It seems weirdly lazy to me and I really don't get how one can be satisfied with it.

Flo Bacus:

Thanks! Glad you found the post interesting.

Since I'm a detractor, if intuitionism seems lazy from my post, you'd be reasonable to be skeptical of my portrayal. Especially because I was mainly going off my general impressions from reading lots of stuff and speaking to many people about these matters, as opposed to responding to some specific work.

If you want to read pro-intuitionism stuff, Ross' "The Right and the Good" is a classic. For contemporary stuff, Thomas Scanlon (in Being Realistic about Reasons), Russ Shafer-Landau (in Moral Realism: A Defense), and David Enoch (in Taking Morality Seriously) are major proponents.

Not-Toby:

Scanlon's been on my to-read list as social contract thinking appeals to my intuitions (lol). I'm currently reading Gauthier's Morals by Agreement, which aligns a bit better with my thinking about what analytic philosophy was going to look like lol. I'll make sure to check out the others you mention!

Not-Toby:

(ofc now that I've finished your piece, Gauthier accepting Hume on preferences shows he's not drilling *all* the way down!)

Edokwin:

Pretty good, I must begrudgingly admit. Putting that grad school Ed to work here.

Jeremy Khoo:

Thanks for posting this — I enjoyed the chance to revisit some of these issues which have cropped up in my own research.

You write

> the question of what to do and the question of what I morally ought to do are the same. Moral judgments are ways of articulating my decisions for how to live, not merely my judgments regarding how I ought to live.

I think this setup eclipses the distinction between what there is sufficient (or decisive) reason to do, and what there is sufficient moral reason to do. That is, I think that what you call moral obligation is what others have called rational obligation, obligation simpliciter, or the ought of most reason, or the deliberative ought. (Let’s ignore whatever differences there may be between these things.) Of course you are entitled to stipulate how you want to use the term ‘moral obligation’, but this leaves open the question of whether our “moral obligations” (in your sense) will include things like ‘One ought not steal things (even if there is no chance of being arrested).’

I strongly endorse the claim that

(R) You cannot be morally obligated by any principle that you cannot come to be rationally convinced of.

To me, (R) is a version of moral rationalism (i.e. that moral obligations are rational obligations), and an element of both neo-Kantian (e.g. Korsgaard’s) and neo-Humean (e.g. Michael Smith’s) theories of action and reasons.

However, I don’t think this is the principle which interests you. (Is that correct?) That principle seems instead to be

(R*) You cannot be rationally obligated, or obligated simpliciter, etc. by any principle that you cannot come to be rationally convinced of.

I also endorse this principle. I take it to be a version of what Kiesewetter calls

The Principle of Decisive Reasons (PDR): Necessarily, if A has decisive reason to φ, then A has sufficient reason to believe that she herself has decisive reason to φ.

My view is that (R) leads to moral error theory, or at least moral skepticism — in the sense that our obligations don’t include ones like ‘One ought not steal things’. (I think this is a concern whether we are talking about (R) or (R*), since it seems to me that (R) follows from (R*) or PDR.) That is, whether or not we call them “moral obligations” as you do, it will turn out that our obligations don’t have moral content.

My thinking here is based on Richard Joyce’s argument (in The Myth of Morality) that the Smithian version of moral rationalism implies that there aren’t any moral obligations after all. To give a very compressed summary, the idea is that even under ideal deliberative conditions, fully rational agents won’t necessarily agree that we are obligated, in such-and-such circumstances, to do one particular thing (e.g. not steal), which is to say that there may not be anything we are exceptionlessly obligated to do under those circumstances. In Kiesewetterian terms, this is to say that there won’t be one action φ, like not stealing, that any rational agent in some fixed set of circumstances has sufficient reason to believe she has decisive reason to do. Since it’s in the nature of moral obligations that they apply exceptionlessly to all moral agents (holding fixed the circumstances), this means that there aren’t any moral obligations.

(I should mention that Michael himself was once sympathetic to but now rejects the claim that neo-Humeanism leads to error theory, and has explained his position in later articles (e.g. "A constitutive theory of reasons: its promise and parts" and "Beyond the error theory").)

Since you believe not only that we have “moral obligations” in your sense (I agree), but that they include things like “One ought not steal” (I don’t agree, don't @ me), it would be interesting to know where you get off the train here.

Flo Bacus:

Thanks for the comment Jeremy! Super helpful. We've met before, right?

So yeah, I'm a moral rationalist. The definition of moral obligation given, however, doesn't collapse moral and non-moral reasons. I don't like talking in terms of reasons, but the important point is that not every all-things-considered judgment about what to do is a moral judgment. It's only if *desire-independent* motives single out an action that it's judged to be morally obligatory. From among my morally permissible options, I'll pick the one that satisfies my desires best, and then we have an all-things-considered judgment. (There is a sense in which moral reasons weigh against non-moral reasons, but my view is that's just a rough approximation useful for beings like us who think in terms of vague generalities, and that moral permission and obligation are fundamental.)

As far as I can tell, PDR seems weaker (or at least, too weak to be sufficient for our purposes), since the intuitionist will just say "Bob does have reason to believe he ought to pull the lever, his deliberation is just defective so he's not responsive to that reason" (though I'm sure this is the sort of thing the author anticipates). I think the principle becomes the same as what I said if we replace "having sufficient reason to believe" with "being internally committed to." For me, when we're talking about moral obligations, those are the same thing, but I figure that matter is part of what's at issue with the intuitionist.

Where I get off the train with the error theory stuff: I don't have an ideal agent account, but I figure you can replace ideal agents or sufficient reason for belief with whatever my criterion is and it'll work the same. I figure the general point is "These moral rationalists have a super strong condition on what can be an obligation, and in fact obligations will satisfy that condition, so we have no obligations." My response is just what I said with the Fichte quote in the OP; it is reasonable to be skeptical, and the only way I can respond is by showing the Categorical Imperative meets the relevant condition. I just have to prove that every agent is internally committed to the CI from activity they undertake in virtue of which they are rational agents in the first place. Hard, but it is what the concept of moral obligation demands, and if it doesn't work I'll be an error theorist.

The Purple Turtle:

A very interesting post. I am curious; do you accept cases of akrasia? In your view, can one recognize the moral obligation to donate to shrimp welfare, then fail to do so (maybe because they really want to put that money toward buying an expensive watch instead and succumb to temptation)?

Flo Bacus:

I have a post about akrasia coming tomorrow! I think it’s possible in principle, but I’m not convinced it really happens

Both Sides Brigade:

I think this is a really interesting piece and I'm glad you're on here writing more long-form stuff! I guess I am curious, though, how an analysis like this avoids a roughly parallel conclusion regarding people who form bizarre or obviously false non-practical beliefs on the basis of a dysfunctional intellectual process: motivated reasoning, inconsistent or arbitrary epistemic standards, etc. I'm sure there are some hardcore Trump supporters, for example, who have "reasoned" themselves into such a hermetic bubble of mutually reinforcing beliefs through (what was at least at one time) culpable ignorance or other vices, such that they are no longer able from a psychological perspective to form correct views about some particular new scandal Trump is involved in. It seems to me they're still obligated in a meaningful sense to believe what is obviously true, and blameworthy if they don't, yet "Think again, and form correct beliefs this time" isn't really available either. Or am I wrong to think that's an extra bullet to bite, and you'd also affirm they wouldn't be obligated to believe obviously true things in an epistemic sense if this sort of analysis were true in that realm too?

Silas Abrahamsen:

Really enjoyed this! Excited to read more about giving substantive content to morality

Jack Thompson:

"Why does ought imply can? Simple: one cannot rationally judge that an agent ought to do something they judge they cannot do. Specifically, if I judge I ought to do something, then I settle on doing it (with a desire-independent motivation, but that assumption is not needed here)"

I don't understand this argument—in that it really doesn't seem like an argument at all! Why is it that if you judge you ought to do something, then you *necessarily* settle on doing it? Perhaps there is a perfectly good explanation, but you don't offer any. It feels like you are just asserting what you intend to prove.

For instance, it seems well within the stipulation "obligation is a question of what to do" to believe that you are obligated to do a thing if it would be worse if you didn't do it. That definition of ought does not imply can. How do you show that this is incorrect within a stipulative definition of ought, but *without* stipulating that ought implies can? What exactly is being stipulated?

I find it equally probable that my confusion is a product of my misunderstanding as it is a product of you being wrong, so any guidance you can offer here is appreciated. :)

Allan Olley:

"Ought implies can" has never sat right with me, but it's probably just a question of emphasis and my quibbling and misunderstanding. For example, it seems to me "ought" implies you should at least try, and verify that it really is impossible. There seem to be any number of cases where you would not know whether you can without a good old college try, but that is probably more a practical question. I still yearn for a more felicitous formulation of "ought implies can".

More generally, I am not at all well read on the subject, but your point, if I understand it correctly, about the requirement that a universal justification of ethics be free of these sorts of particular intuitions reminds me of some thought I have read, although that was on the utilitarian/consequentialist side, and also perhaps of another, more Kantian take on "Why be moral" I happened across. So I don't imagine you are somehow too alone in the wilderness on some of these requirements.

In general, different axiomatizations of the same system of inference might look very different (at least it can be so in mathematical disciplines). I find it plausible someone could formulate a system of ethics from a set of principles of reasoning plus something that looks like a particular fact, and it would turn out to be equivalent to another system consisting only of principles of reasoning with zero intuitions (facts?). Although this is just me waving my hands wildly at this point, so it may not mean much.

apexrose:

As someone who is not only "not against intuitionism" but tends to dive head-first into intuition, I truly can't say I find much disagreement here.

In fact, let intuition be an ultimate justification of moral principles... if you manage to prove it through a characterization on our terms. Life is not GTA and 'intuition' is not a cheat code you enter to ultimately justify anything. Intuition needs to bring new explanatory power through integration of relationships previously unknown.

Slapping a label with 'intuition' written on it is no intuition, or at least is never proven such. If your intuition is substantial, then sit with it for years on end if need be. Sit with the unintelligible but profound until life delivers you an intelligible characterization of the intuition. And bam, you got utility. One camp might argue it's not pure intuition if it's characterized on our terms. The other might argue the intelligible characterization proves there was never need for intuition in the first place. You know one can't exist or be proven to exist without the other. The taller the tree, the deeper the roots. Root growth and shoot growth are on a seamless feedback loop where the roots feed the shoot with water and minerals and the shoot feeds the roots through photosynthesis. Intuition is the tree that receives the sun.

For years I have sat with an intuition heartbreaking as it was at times. And still is. For all my attempts at characterization no characterization has been achieved. Still, my intuition overpowers the ever-creeping doubts characterization is impossible. And I am up against a reality which deems that nothing will come of it if it doesn't reach the point of intelligible characterization. Without intelligibility an intuition is in a state of vacuum.

Being a philosopher doesn't grant one the privilege of soliloquizing. It's an insult on the entire field as it sets the underlying assumption that philosophy's detached from reality.

yakiimo:

This is great! I agree that intuitionism fails to vindicate moral normativity, though I think that constitutivism also fails to do that, for Enoch-like reasons. Constitutivism ultimately doesn't lead to principles that all agents can rationally adopt, because its logic is at various points escapable.

Korsgaard tries to block that conclusion by appealing to the psychological inescapability of normativity, e.g. in her discussion of the contrast between the first-person deliberative perspective and the third-person scientific perspective--"if you think reasons and values are unreal, go and make a choice, and you will change your mind"--but I don't think she succeeds. If constitutivist logic were truly psychologically inescapable in the strong sense that I think is required, then constitutivists wouldn't even have to argue for their conclusions--we would all already believe them.
