Deflationism about knowledge
My contribution to Gettierology
As the legend goes, back in the day, everyone thought that knowledge was just justified true belief. To know something you have to believe it, and it has to be true—a person who thinks a Democrat will win in 2028 cannot be said to know such a thing unless, at the very least, they turn out to be right. But merely being correct isn’t sufficient for knowledge, either. If I randomly guess that Bob owns an even number of books, and it turns out he does, that’s not knowledge. If I believe he owns an even number of books because I counted them, then that is knowledge. So knowledge is a true belief that one has sufficient justification for. For thousands of years, philosophers would be executed if they ever doubted that knowledge is justified true belief.
Then, a fearless renegade named Gettier came along. With no prior publications, he gave us a three-page paper showing that knowledge isn’t justified true belief. The point is that even if you have a justified belief, it can still end up being true by accident, and in such a case it doesn’t get to be knowledge. For example, suppose my extremely reliable friend tells me that it’s raining in Boston. I believe him, and that’s a justified belief. I then, for whatever reason, infer that either it is raining or it rained on this day one year ago, which is also a justified belief, since obviously-valid inferences preserve justification. But say this is one of the rare instances where my friend is wrong. As it turns out, though unbeknownst to me, it did rain a year ago, so my belief is true. It’s also justified, so I have a justified true belief. But I cannot be said to know that either it’s raining or it rained a year ago, since despite being justified, my belief is correct by accident. Thus, knowledge cannot be analyzed simply as justified true belief.1
So Gettierology was born. Philosophers, in a frenzy, wanted to find the elusive fourth condition that would be utterly immune to counterexamples. Perhaps the fourth condition is that, in addition to being justified true belief, knowledge can’t be inferred from false premises—that’s certainly what goes wrong in Gettier’s cases. But not all knowledge is gained by inference, and Gettier cases arise just as well for non-inferential knowledge. Or maybe the fourth condition is that one’s belief is caused (perhaps in some special way) by the fact believed. But in our case above, we can imagine my friend’s erroneous testimony is somehow caused by the fact that it rained in Boston a year ago. And so on.2 A new fourth condition is given, new counterexamples arise, and nothing seems to do the trick. This decades-long struggle is, I speculate, a large reason why philosophers today are less interested in giving necessary and sufficient conditions for things.
Now, like my hubristic predecessors, I’m pretty sure I have a solution to the Gettier problem.3 However, as I’ve said before, I’m no miracle-worker. There are principled reasons why similar sorts of counterexamples arise for various proposals regarding the analysis of knowledge, and I have no intention of repeating my predecessors’ mistakes.4 I offer no fourth condition, nor any modification of the “justified true belief” account of knowledge.
Three features of my view render it non-miraculous:
1. This is a deflationary view. Just as saying “‘Grass is green’ is true iff grass is green” does not help you determine what is true or false, so my account will not help you do the hard work of figuring out who knows what. Nevertheless, deflationary accounts, when correct, are still important when one has an intrinsic interest in the concept in question. (I’ll also note that a lot of the dialectic about deflationism about truth could be repeated here, and I’ll try not to go into enough detail to take on huge commitments in that discussion. The informed reader is free to substitute my account into whatever their favorite thing to say about truth-deflationism is.)

2. This is an expressivist view.5 I am not laying out an account of the form “S knows p iff S’s situation has such-and-such properties.” Rather, I say we should understand the concept by uncovering what an agent is doing or committing themselves to when they say someone knows something.6 This is enough to tell us when you should say someone knows something, what you can deduce from knowledge-claims, and so on.

3. I’m not sure my account is significantly different from Mark Schroeder’s analysis of knowledge as belief for objectively and subjectively sufficient reasons (and I haven’t had the time to read the paper fully). His view seems correct, and my only dissatisfaction is that the concept of “objective reason for belief” seems like a black box (as it does in other cases). I’d predict that “objective reason” is just the semantic shadow of an assessor’s own endorsement of some reasons, such that my account and Schroeder’s will come out equivalent.
1. The account
What, then, is my account? What attitude does one express when they say a person S knows that p? I say: when you judge that a person S knows p, you are expressing the attitude of believing p for the same reasons S believes p.7 For example, you and I both see that it’s raining. I say “You know it’s raining.” I express not only that I share your belief, but that I believe it for the same reasons you do, viz. perception. If your eyes were closed and you just made a lucky guess, I would no longer say you know it’s raining, because I do not believe it’s raining on the basis of your silly reasons.
There’s a lot I need to say for this to make sense. Let’s list the obvious objections:
1. Sometimes I believe p for different reasons than others who know p. Maybe I believe p on the basis of direct observation, and you on the basis of testimony. Yet I still ascribe knowledge to both of us.

2. Sometimes I say others know things when I don’t even know what proposition I’m saying they know. E.g. Grothendieck states some category-theoretic fact, though I didn’t hear him. I can still say “Whatever he said, Grothendieck knows it’s true.”

3. Sometimes I say others know things when I don’t even know their reasons for belief. As in the previous example, I may well not know why Grothendieck believes what he does. How, then, could Grothendieck’s reason for belief possibly also be my reason for belief?
I’ll address these objections in order.
Regarding the first: I want to distinguish between having a certain reason for believing p and forming a belief that p for a certain reason. This distinction may sound artificial, since when we talk about reasons for belief we usually mean reasons for forming a belief. Yet it is a valid and necessary distinction nonetheless.
Consider an example: Alice and Bob both believe Smith committed a robbery. Alice, because she saw a video of him doing it, and Bob, because he heard a recording of Smith’s confession. They believe Smith is guilty for different reasons. Now suppose Alice then hears the confession herself, and Bob sees the video for himself. So, now, Alice and Bob are in the exact same epistemic state. Do Alice and Bob still believe Smith is guilty for different reasons? In one sense, yes; their beliefs are the same ones they formed earlier, and they formed them for different reasons. In another sense, no; when Alice heard the confession, she obtained new and entirely sufficient grounds for her belief, and now it stands on equal footing with the video. Similarly for Bob. It is one’s reasons for believing that I want to focus on, not one’s reasons for originally forming the belief.
Suppose Alice hears the tape, but Bob doesn’t see the video. Alice will still say Bob knows Smith is guilty, because Bob’s reason for belief is still a reason for which Alice believes Smith is guilty, a reason which is individually sufficient. Likewise in the case where I know p by observation and you by testimony. Were I to learn you believe p as a result of testimony, I would then adopt the testimony as an independent sufficient reason for my belief, just as Alice adopted hearing the confession as an independent sufficient basis for her belief. I then express my believing p on the basis of the testimony you heard (and which I take to be your reason for belief) by saying you know p.8
Now, in the normal case of belief, we have a lot of independently sufficient reasons for belief. Indeed, in cases where we have a lot of independent reasons, we might even expect that some of them aren’t good reasons. Yet we maintain the belief, because it’s unlikely all the reasons are bad. Strictly speaking, I should articulate my account by saying that ascribing knowledge that p to S expresses believing p for some sufficient reason that S has for believing p.
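To fix ideas, here is that refined account put schematically. The notation is mine and purely illustrative: $\mathrm{SR}_S(r, p)$ says that $r$ is a sufficient reason $S$ has for believing $p$, and $\mathrm{B}^{r}_{A}(p)$ says that the ascriber $A$ believes $p$ on the basis of $r$.

$$A \text{ asserts ``}S \text{ knows } p\text{''} \;\leadsto\; A \text{ expresses } \exists r\,\big(\mathrm{SR}_S(r, p) \wedge \mathrm{B}^{r}_{A}(p)\big).$$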
The second and third objections concern ascriptions of knowledge when the speaker doesn’t know what proposition or what reasons they’re assenting to. Grothendieck makes a mathematical assertion p that is unintelligible to me, and his reasons are equally unintelligible. Yet I say that, whatever he said, he knows it. My account says this means I’m expressing believing p for the same reasons Grothendieck does. How could that possibly make sense?
This issue is parallel to an issue deflationary accounts of truth have to address. Deflationists, roughly, say that truth isn’t a substantive property of propositions; rather, to say “p is true” is just to say that p. But we ascribe truth to propositions we don’t know ourselves; I may say “Everything Bob believes is true” without having any idea what Bob believes.
The short story: there are many deflationary views with many ways of understanding “Everything Bob believes is true.” I predict that, whatever your favorite thing to say about this puzzle is, you can say the same with my account of knowledge. Here, I just sketch my favorite way to understand what’s going on.
What am I doing when I assert that everything Bob believes is true, if not ascribing a property to each of Bob’s beliefs? I think what I’m doing in such a case is forming a set of commitments built up out of the unproblematic kind of truth ascription. Usually, to say p is true is just to say that p. If I don’t know the relevant p—e.g. “What Bob just said is true”—then what I’m doing is committing to treat what Bob said as I treat the unproblematic case. In particular, if I learn what Bob actually said, I will then treat my ascription of truth as I do others. Ultimately, what this amounts to is as follows: when I say “What Bob just said is true,” I form a commitment such that, if I learn that what Bob said is p, then I will either come to believe p or else retract my belief that what Bob just said is true. Similarly, if I say that everything Bob believes is true, then I form a commitment to believe p upon learning Bob believes p, or else abandon my claim that everything Bob believes is true.
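Schematically, in the same spirit (again my own shorthand, not a semantic analysis), the blanket ascription amounts to a standing conditional commitment rather than a predication:

$$\text{``Everything Bob believes is true''} \;\leadsto\; \text{for each } p:\ \text{upon learning } \mathrm{B}_{\mathrm{Bob}}(p),\ \text{believe } p \ \text{or retract the ascription.}$$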
Some may be dissatisfied by this picture, since I only say what commitments I form when I ascribe truth, as opposed to saying what the ascription of truth asserts in itself. For my part, I think that once we identify where a claim lands in the web of commitments, we’ve done all we need to do to understand what it’s saying. If you disagree, then I will not be offended at your having gotten off the boat.
I understand ascriptions of knowledge similarly. When I say Grothendieck knows p, for some belief of his whose content and reasons I do not comprehend, I am likewise just committing myself to believing p upon learning what Grothendieck said, and to adopting Grothendieck’s reasons for belief as my own upon learning what those are, or else to abandoning my claim that Grothendieck knows p.
For the general case, we need to be careful. In saying S knows p, I merely take there to be some sufficient reasons for S’s believing p, reasons which I would also adopt as sufficient. So, in saying S knows p, I am not committed to believing p because q upon learning S believes p because q. For I might think q is a bad reason to believe p, but expect that S has other, good reasons to believe p. Strictly speaking, in saying S knows p, I am committed to believing p for at least one of the reasons S has upon learning what all those reasons are, or else abandoning my knowledge ascription.9
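In the shorthand from above, the blind knowledge ascription then carries this commitment, quantifying over S’s reasons rather than fixing any particular one:

$$\text{``}S \text{ knows } p\text{''} \;\leadsto\; \text{upon learning that } S\text{'s sufficient reasons for } p \text{ are } r_1, \dots, r_n:\ \text{have } \mathrm{B}^{r_i}_{A}(p) \text{ for some } i,\ \text{or retract.}$$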
For the rest of this post, I implicitly stick to cases in which the speaker knows the proposition in question and the subject’s reasons for believing it. I think my arguments can be extended to the general case; it would just be messier.
2. Explaining the core features of knowledge
Again, this is an expressivist account, so I am not speaking at the ground-level. In showing that knowing p requires that p is true, for instance, I will not make an argument of the form “Suppose S knows p. [stuff]. Therefore, p.” For I have no claim about the ground-level conditions for knowledge, but instead an account of what people are doing when they ascribe knowledge. Thus, for example, I argue that taking someone to know p requires taking p to be true, i.e. that p can be inferred from [S knows p].
Justification, truth, and belief are standardly taken to be individually necessary (but jointly insufficient) conditions for knowledge. Truth and belief are built into the account, so I get no explanatory brownie points on that front. Saying S knows p expresses believing p for some sufficient reason S has for believing p; so if I say S knows p, then I believe p, and I take S to believe p. If I ascribe knowledge without thinking the content of knowledge is true, or without thinking the agent believes that content, then I am being inconsistent.
Knowledge requires justification as an objective condition because belief requires justification as a rational condition. I say Bob knows p. I thus believe p for some sufficient reasons that Bob also believes p for. If I thought Bob’s belief were unjustified, that would mean I believe p for reasons I don’t take to justify the belief, which would be irrational. Thus, in saying Bob knows p, I am committed to Bob’s belief being justified.
We can also see why even a justified true belief isn’t knowledge if it relies ineliminably on false premises. Say Bob believes p, and his justification for p ineliminably involves appealing to q. Then every collection of sufficient reasons Bob has for believing p involves q. Thus, if I say Bob knows p, I believe p for some such set of sufficient reasons, and so I believe p in part on the basis of q. Thus, I am committed to q’s truth. Conversely, if I believe q to be false, then because Bob’s justification ineliminably relies on q, I must take Bob to not know p.
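Compressed into steps (my reconstruction of the argument just given):

1. Every set of sufficient reasons Bob has for $p$ includes $q$ (ineliminability).
2. Saying “Bob knows $p$” expresses believing $p$ for some such set (the account).
3. So the ascriber believes $p$ partly on the basis of $q$, and is thereby committed to $q$’s truth.
4. Contrapositively, an ascriber who takes $q$ to be false must deny that Bob knows $p$.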
Additionally, we can explain the behavior of knowledge from the first-person perspective. If I believe it’s raining, then I take myself to know it’s raining. Well, duh: if I believe p, then surely I believe p for some sufficient reason for which I believe p, and so I should judge I know p.
We can also explain the truth of a restricted version of the KK principle, namely that if someone knows p, then they are in a position to know that they know p. Speaking generally, people are in a position to know things they can rationally infer from what they know. The previous paragraph shows one can rationally infer [I know p] from p. Thus, if one knows p, they are in a position to know that they know p.
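As a derivation sketch (the numbering and compression are mine):

1. Suppose $S$ knows $p$.
2. One can rationally infer [I know $p$] from $p$ (by the previous paragraph).
3. One is in a position to know whatever one can rationally infer from what one knows.
4. Therefore, $S$ is in a position to know that $S$ knows $p$.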
As a final point, I’ll discuss the fact that knowledge seems to have varying standards in varying contexts (a topic very close to my heart). In a normal context, one would say I know that the appendix is on a person’s right side. If it turned out I had to perform an emergency appendectomy in the Antarctic, then we might no longer say I really know which side the appendix is on; sure, I’d guess right over left, but I should check before making the incision, since my memory is quite fallible and a mistake would be a disaster. Philosophers have put forth various semantic theories about how this variance in standards works; read this paper if you want to know why the obvious views are wrong.
It appears to me that the variance in the evidential standard that a belief has to meet to be knowledge is due to a similar variance in the standard for us to call an epistemic state of our own a belief. Look, I’m a Bayesian; at the end of the day all I care about are subjective probabilities, and if we were smart, we’d hardly need to talk about anything else. But we are not smart, and so we need to think in more coarse-grained terms that will hopefully allow us to approximate perfect, Bayesian reasoning. For example, we must talk about what we believe without qualification, and about what truths we know.
My credence that the appendix is on the right side is, let’s say, 0.95. In an ordinary context, I’m confident enough to simply unqualifiedly assert the appendix is on the right side, and we say I believe it’s on the right side. If I have to perform an emergency appendectomy in the Antarctic, though, the cost of error is so high that my credence is no longer high enough to justify unqualified assertion. In such a context, I would not simply say I “believe” the appendix is on the right side; rather, I’d say “That’s my impression” or “I’m pretty sure.” Nothing about my epistemic state (that is, my credences) changes; all that changes is the best way to coarse-grain epistemic states into talk about unqualified belief and such.
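As a toy model (every number besides my 0.95 credence is invented for illustration): suppose unqualified assertion is licensed only when one’s credence clears a context-sensitive threshold,

$$\text{assert } p \text{ outright} \iff \Pr(p) \ge t(\text{context}),$$

with, say, $t \approx 0.9$ in ordinary conversation and $t \approx 0.999$ when an incision is about to be made. A credence of $0.95$ clears the first threshold but not the second, so the very same credal state gets coarse-grained as “belief” in one context and as merely “being pretty sure” in the other.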
It stands to reason that, if the stakes are high or skeptical scenarios are unusually salient, the reasons that justify what I would call a “belief” in an ordinary context do not justify what I call a “belief” now. A person in the same stringent context would no longer “judge” that the appendix is on the right side for the same reasons I do, since the standards for unqualified judgment are unusually high. For those keen on the contextualism/SSI/assessment sensitivity dialectic, my view explains why assessment sensitivity appears to be the best option; in assessing a knowledge claim, I think about whether I am to believe p for the same reasons the believer does, and so the standards that bear on my assessment will be those relevant to my unqualifiedly believing something.
3. Explaining the classic cases
The classic Gettier cases like the one given in the introduction can be straightforwardly explained as in the previous section, since they involve inferences from false premises. Smith justifiedly believes Jones will get the job, and he knows Jones to have ten coins in his pocket, and so Smith infers that the man who gets the job has ten coins in his pocket. It turns out Smith got the job, and by coincidence, Smith also has ten coins in his pocket. So Smith’s inferred belief was justified and true, yet not knowledge. The reason we do not ascribe knowledge to Smith is that we do not believe that the man who gets the job has ten coins in his pocket for the same reason Smith did, viz. on the supposed basis that Jones will get the job.
In general, the kinds of features that undermine a justified true belief’s status as knowledge are the features that, if you knew about them, would prevent you from having the relevant belief for the same reasons the believer does. This phenomenon transfers to cases not involving inference.
Consider the famous case of Fake Barn County. Bob is driving down a stretch of road that, for some reason, has a bunch of very convincing facades of barns surrounding it. He looks to the side of the road and says “Here is a barn.” As it turns out, what he saw was one of the very few real barns, by sheer luck. Intuitively, Bob does not know that what is before him is a barn; he just got lucky, and he could have easily said the same about one of the facades.
Bob doesn’t believe there’s a barn for no reason; spell his reasons out however you like. A straightforward thing to say would be that he believes there’s a barn because there appears to be one (which is not to say this describes an inference he performs). An onlooker aware of the situation would not judge that there is a barn on the basis that there appears to be one, or however else we spell out Bob’s reasons. Thus, the onlooker would not judge Bob knows there’s a barn.
I will leave the other classic examples as an exercise for the reader. I predict it’s all basically the same story as above.10
4. Epistemology is not about knowledge
I hope my strategy is obvious by now: the account I give is maximally noncommittal about the details, though I do think the explanations it gives for various features of knowledge are substantive, and that accepting this account (if correct) really clears a lot of epistemology up. Yet, even if correct, the account will not answer the profoundest questions we have about the nature of knowledge, the questions that occupied the ancients and all that.
Can we know whether there’s an external world? Well, the answer just depends on whether you think the evidence we have is rationally sufficient for believing there’s an external world. Must knowledge bottom out in inferences from indubitable, self-evident truths? That depends on whether you think we ought to only believe things that follow from such truths.
In making a judgment about knowledge, really, we’re just voicing commitments we have about which reasons are rationally sufficient for which beliefs. For this reason, I don’t think the conditions for knowledge make for an interesting question by themselves, and I don’t think that’s what epistemology should be about. Rather, what is of importance is epistemic normativity, the first-person question of what to believe. Addressing the hard questions about knowledge requires us to do the work of figuring out what our rational requirements are, and we will not get those answers merely by analyzing a concept.11
Beginners to Gettierology often reply that it turns out my belief isn’t justified after all, since my friend was mistaken. But if you think knowledge can be fallible—that I can know things in cases where my evidence doesn’t make it literally impossible that I’m wrong—then surely there can be justified false beliefs, and if you infer an accidentally-true thing from the false thing, that latter belief is justified. If you think knowledge has to be infallible, then yeah, it will be true that all ~zero cases of people knowing things are cases where they have a justified true belief.
As is usual, I don’t know if I’m the first person to have this view, and it would be very unsurprising if I were not. Such is the nature of being correct.
Actually, Zagzebski’s argument for the inescapability of Gettier cases doesn’t work in full generality. Even if our fourth condition doesn’t entail truth, it doesn’t follow that we can “make the belief false” (p. 69) without ipso facto making the fourth condition not satisfied, since our fourth condition might depend on the truth of the belief at nearby worlds. Consider e.g. the safety condition, which requires that the belief is true in nearby worlds where the agent has the belief. This condition entails truth, so it’s the kind of analysis Zagzebski isn’t talking about, but there’s no reason a priori to rule out that there could be a natural condition that is similarly modally-loaded without entailing truth.
If you like assessment-sensitivity, you can read this as that kind of account. I’m not convinced there’s really an important difference between expressivist and assessment-sensitive accounts of things.
For the LessWrong types reading this, I could say: knowledge is not merely a property of the territory, but also of the map.
I must take some liberty in my use of the word “reason,” which I take to include not only one’s premises and the like, but also everything else that goes into forming the belief. I may derive Q from P; perhaps someone else also derives Q from P, but in an incompetent manner, so I say they don’t know Q. In this case, I want to include among their “reasons” their process of deduction, though ordinarily it’s important to not count rules of inference as additional reasons, lest we have an infinite regress.
Also, implicit in my account (if read de dicto) is that the ascriber themselves takes the relevant reasons to be ones S has for believing p. If I believe p for the same reasons Bob does by mere coincidence and I know nothing of Bob’s situation, I do not thereby believe he knows p.
Perhaps I do not believe the testimony to be reliable enough to accept as an independent sufficient basis for my belief. But in that case, I should not ascribe knowledge to you.
For sure, it might often be unrealistic for me to learn all of a person’s sufficient reasons for believing something. That doesn’t matter. In many cases, we’re justified in thinking a person’s belief stands or falls with a few salient reasons, or we can conclude by induction that a person has no sufficient reason for their belief that we would want to be committed to.
There are much more complicated cases to be discussed, e.g. cases of knowledge being undermined by misleading evidence you don’t have, and stuff involving meta-meta-meta-defeaters and what have you. I refer the reader back to footnote 7.
Schroeder, it seems, came to a similar conclusion in his paper.

