(Do not read this post if you are a professional philosopher who is currently reviewing a paper on the philosophy of action. Also, please hurry up.)
I would speculate that, if one is to be an exceptional philosopher, one must either start out by studying mathematics or be so ridiculously lucky as to be born with the skills one would have gotten from the former. The reason is that mathematics teaches you what it’s like to prove something substantive. Mathematicians deal in logical truths—what they get paid to do is show that certain theorems follow from the axioms and definitions.1 Yet, somehow, some of these logical truths are substantive, in a way that the statement “p or not p” is not. That there are infinitely many primes does not feel like a mere restatement of the information contained in the axioms of arithmetic plus the definition of a prime number.
If you do a lot of math proofs, even basic ones, once you’re good, you might make a mistake and then just get the feeling, “Wait. This is not an argument that anything substantive would follow from,” even before you identify where your mistake was. If you are unlucky enough to study enough math to develop these sorts of feelings and then for some God-forsaken reason go into philosophy, then you are Permanently Cursed. Once you stop reading the Greats—namely, Hume and Kant—and venture into contemporary analytic philosophy, your job becomes worrying about Lovers of Wisdom who make arguments without regard for these feelings they ought to have. I recently complained about Donald Davidson in this respect. A repeating pattern for him is that he technically provides a solution to a problem with some relatively scholastic distinction, leaving the reader of refined taste with the (correct) feeling that all he did was move words around in a way that doesn’t solve the real problem.
In mathematics, you get to see that substantive analyses of concepts are possible, without even having to stop and worry about what Moore says on the matter. Intuitively, what is a continuous function? People generally say it is one whose graph you can draw without picking your pencil up—no sudden jumps in value, just a nice continuous curve. What about the function g that outputs x² for x less than 0 and x² + 1 for x greater than or equal to 0? There’s certainly a discontinuity at 0, but it seems like the function is continuous with only that exception, i.e., g is continuous at every point except 0. What does that mean? The only time you have to pick your pencil up is at 0? But what does it mean in general to say that a function is continuous or not at a point? Perhaps it means that the point lies in a region on the graph that you can draw without picking your pencil up. So, should we conclude that if a function f is continuous at x, then there is some region of nonzero size containing x that f is continuous on? How do we even prove anything about continuous functions? Do we appeal to intuitions about what you can do without picking a pencil up? Should we fund empirical studies to investigate drawing without picking a pencil up?
Fortunately, as it turns out, mathematics does not force us to think about the nature of pencils. We can give an analysis of continuity just in terms of facts about real numbers. A function f (which takes in a real number and spits out a real number) is said to be continuous at x provided that, for all ε > 0, there exists a δ > 0 such that, for all y, if |x - y| < δ, then |f(x) - f(y)| < ε. The intuition behind this definition is that if f is continuous at x, then f(y) gets closer and closer to f(x) as y gets closer and closer to x. More formally, for any possible distance ε, we can pick another distance δ so that we can guarantee that f(y) is within ε of f(x) by having y be no further than δ away from x. I.e., we can guarantee that the output gets sufficiently close to f(x) by having the input be sufficiently close to x. Our function g above is not continuous at x = 0, because we may merely pick ε to be 0.5 (or, indeed, anything less than 1), and then we can never guarantee that g(y) will be within 0.5 of g(0) = 1 simply by making y close to 0—for even putting a tiny negative number into g will give us something close to 0, which is not within 0.5 of 1. Imagine you have a continuous function f, but your worst enemy is skeptical that your function is continuous at x. He will pick any ε he wants, and laugh at you as he says you cannot guarantee the output gets within ε of f(x), and your job is to prove him wrong by finding a δ such that whenever the input is within δ of x, the output will be within ε of f(x).
Do not proceed until you understand the above paragraph and can show that f(x) = 2x is continuous at every point. Because by the time you finish this sentence you will understand what functions and real numbers are, you have everything you need to understand the above definition right before you and inside of your head, and correctly reading Kant and Hume requires you to spontaneously think thoughts at least as difficult as proving that f(x) = 2x is continuous everywhere, and with the same precision. Some who study the “humanities” would judge me for never having read the Iliad. I judge you for not understanding the ε-δ definition of continuity.
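If you’d like to see the enemy’s game played out concretely, here is a short numerical sketch (my illustration, and of course no substitute for a proof) of how g from above loses at x = 0: once ε = 0.5 is fixed, every candidate δ is defeated by a tiny negative input.

```python
# Numerical sketch: g fails the epsilon-delta test at 0.
# g(x) = x^2 for x < 0, and x^2 + 1 for x >= 0, as defined above.

def g(x):
    return x * x if x < 0 else x * x + 1

eps = 0.5  # the enemy's epsilon; anything less than 1 would also work

# However small a delta we propose, a tiny negative input defeats it:
for delta in [0.1, 0.01, 0.001, 1e-9]:
    y = -delta / 2                 # within delta of 0, but negative
    assert abs(y - 0) < delta      # y is a legal input for this delta
    assert abs(g(y) - g(0)) > eps  # yet g(y) is not within eps of g(0) = 1

print("no delta rescues continuity at 0 for eps =", eps)
```

The loop is only there to dramatize the quantifier order: the enemy fixes ε first, and no δ offered afterwards keeps the output inside the ε-band around g(0) = 1.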
It is a consequence of our definition that just because f is continuous at x, it does not follow that f is continuous on some region containing x. Sure, the conclusion holds for our function g above, since if g is continuous at x then x is not equal to 0 and we can pick any region small enough to contain x but not 0. But consider h, the so-called “popcorn function,” which works as follows. For any irrational number x, we say h(x) = 0. If, however, x is rational, we write it as p/q in lowest terms and with q positive, and then stipulate that h(x) = 1/q. So h(π) = 0, h(2/3) = 1/3, and h(0.4) = 1/5. At what points, if any, is h continuous? You can’t draw any part of h’s graph, with or without picking your pencil up—any region of the graph will just be an infinitely detailed scatter of points. As it turns out, h is continuous at every irrational point, and discontinuous at every rational point. The latter claim is straightforward—every interval on the real number line has infinitely many irrational numbers, so on any interval h will have outputs equal to 0, so the outputs of h can never be guaranteed to be within less than 1/q of h(p/q) = 1/q, no matter how close x gets to p/q. The former claim is subtler: h is continuous at every irrational point because, to get a rational number sufficiently close to a given irrational number, the denominator has to be large—the smaller a distance you want from the irrational number, the larger the denominator has to be. Rational numbers with a denominator up to 100 can only get so close to π. This means that, once you focus on a very narrow region around π, you’ll only see rational numbers p/q such that q is very big (in addition to the irrational numbers you’ll see). The outputs 1/q of such inputs will, then, be very small.
And since all the irrational numbers output 0, this means that if you focus on a very small region around π, you’ll only see values of h which are very close to h(π) = 0, and you can make these outputs as close as you want to h(π) by getting closer and closer to π. Importantly, also, there are infinitely many rationals in any interval—so even though h is continuous at π, there will be no region around π small enough that h is continuous on every point in that region.
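For readers who want to poke at h themselves, here is a small Python sketch (my illustration, not anything from the text) that evaluates h at exact rationals and spot-checks the claim that denominators up to 100 can only get so close to π:

```python
# The popcorn function at rational points, using exact arithmetic.
# Fraction reduces p/q to lowest terms with a positive denominator,
# which is exactly the normal form the definition of h asks for.
from fractions import Fraction
import math

def h(x: Fraction) -> Fraction:
    return Fraction(1, x.denominator)

print(h(Fraction(2, 3)))   # 1/3, matching the example in the text
print(h(Fraction(2, 5)))   # 1/5, since 0.4 = 2/5 in lowest terms

# Denominators up to 100 can only get so close to pi:
# for each q, round(pi*q)/q is the closest fraction with denominator q.
gap = min(abs(round(math.pi * q) / q - math.pi) for q in range(1, 101))
print(gap)  # on the order of 1e-4: small denominators keep their distance
```

Floats can’t hold irrational inputs, so the sketch only evaluates h where it is nonzero; the `gap` computation is the part that illustrates the continuity-at-irrationals mechanism.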
You can do literally whatever you want with the ε-δ definition of continuity. You can prove all polynomials are continuous, you can prove that the sum and product of continuous functions are continuous, that the quotient of continuous functions is continuous wherever the denominator is nonzero, and so on. How can you argue that, if you can draw two graphs without picking your pencil up, then you can also draw their sum without picking your pencil up?
Pretheoretically, you never would have suspected that the popcorn function h is continuous anywhere. A Lover of Wisdom might have argued that it follows from obvious intuitions about continuity that if a function is continuous at x, then it is continuous on some region containing x, and any analysis of continuity which fails to capture that result must be discarded unless there is some other more important intuition about continuity it captures better than other accounts. But once you do real analysis, you see that the ε-δ definition captures paradigm cases and also lets you do everything you care about, and importantly, discarding the pencil definition does not end up requiring discarding anything you care about. If you still care about graphs you can draw, you can spend your time thinking about how to do that in terms of the ε-δ definition. These are all things one can figure out without thinking about one’s intuitions as such for more than a moment.
Most importantly, once you see the ε-δ definition, it makes sense to say that h is continuous at irrational points. This does not consist in “revising your intuitions” in light of an analysis that has some “theoretical virtues.” It consists in the fact that, once you have the ε-δ definition, there’s nothing left to care about. The answer to “But can you draw the graph around a point at which the function is continuous without picking your pencil up?” is not “I bite the bullet there,” but rather, “That literally does not matter even a little bit. I’ve done everything I need to do without saying anything about pencils. If you are worried about that, and if your worry matters, then something in my theory will capture what you say and give you the answers you want. But your worries, as such, are none of my concern.”
Anyway, we’re 1900 words in, and I led you to believe we’d be talking about the philosophy of action, so let’s move on. But let no one ignorant of real analysis enter here.
Okay, so the way I usually like to bring this up is with a case I originally got from Ned Hall. Imagine a scientist with a belief-detector; he can type in a proposition, like “It is raining,” then scan a guy, and the device will tell the scientist whether the guy believes it’s raining or not. Now, you get to take part in the following experiment. The scientist explains to you that, in ten minutes, he will either give you a $1,000 prize or not. The way he’ll do this is, in ten minutes, he will scan you to determine whether you believe p = “I will receive $1,000 today.” If the detector says you believe it, he gives you the $1,000 prize. If you do not believe it,2 he will give you nothing. Assume the following: (i) you have strong evidence that the belief detector works and that the scientist is being honest (perhaps you have various friends who participated in this experiment and they all say they got a reward depending on what they seemed to believe beforehand); (ii) on this day, as on most days if you are not financially fortunate, you would have otherwise believed you will not get $1,000, the only difference being you might get it from this experiment.
The important thing to note is that when you think about whether you’ll get $1,000, no matter what belief you form (if any), you will have sufficient evidence for your belief. If you believe you’ll get the $1,000, then the belief detector will say you believe p, and the scientist will give you $1,000. In other words, the fact that you believe you’ll get the prize is sufficient evidence to believe that you really will get the prize. And likewise if you end up believing you won’t get the prize.
So, are you going to get $1,000 in ten minutes? I know I will.
But let us move on to important questions. Suppose, unbeknownst to you, a guy with a belief detector follows you around, often scanning you to see what you believe. One day, you expect to get your paycheck—you believe “I’ll get $2,000 today.” Unfortunately, your paycheck is a day late, but when you look, you see $2,000 deposited into your bank account from an unknown source. One day, you have a bad day, one where just everything goes wrong. On your way home from work, you really expect that traffic will be particularly awful. Lo and behold, there’s a huge backup, resulting from a strange act of vandalism on the road. Another day, you suspect a package you ordered has arrived, and as it turns out, it has—and this is the third time that has happened.
Each of these events seems ordinary on its own, but you have a weird feeling. Things are going as you expect way too often. Not jumping to any conclusions, you try to test your hypothesis. Though you feel foolish to admit this, the sheer quantity of these events makes it easy for you to believe that whatever you believe will happen (as long as it’s not more spectacular than what you’ve observed already) will in fact happen. So, by flipping coins, you come up with a random number, and decide to believe that $2564.21 will be deposited into your bank account. And that very amount is indeed deposited.3 This proves your suspicions.
Continuing to do the obvious thing with this newly-discovered power would probably land you in prison, but you keep trying it on more trivial matters. Your favorite meal is delivered to your doorstep right when you get home, traffic is manageable (honestly, it’s weird how few cars are on the road—do those people not need to drive to and from work?), your boss gets fired. It doesn’t take long for the process to become automatic—it would be nice if the book you’ve wanted appeared on your doorstep, and so it will. Wouldn’t it no longer feel like you’re just getting yourself to believe something that coincidentally turns out to happen, but rather like you’re willing these things to happen? This would be good, and so it shall happen. Of course, you still do believe the thing will happen, just like how you believe you’ll brush your teeth tomorrow. Try to imagine a case where you really do have the power to simply will these things to happen. Wouldn’t that case seem just like the scenario I’ve described?
The above, I think, is just an extreme example of what all agency is. Such is David Velleman’s view from Practical Reflection and The Possibility of Practical Reason.4 In the actual world, the only causal link between my mental states and the world comes from the connection between my brain and my nerves, and ultimately my limbs, lips, etc. But nothing fundamental to the nature of action has anything to do with nerves. People could act and know they were acting—and they could have even done correct philosophy of action—before they knew what nerves are. In the example above, you simply act, you simply will that the book shall appear on your doorstep, and the stranger with his belief detector merely plays the role your nerves do in the actual world.
The reason we can have agency in the belief-detector case is that the potential belief in question is a self-fulfilling belief: a belief that would cause its content (the thing believed) to be true. Now, not every case of a self-fulfilling belief gets to be agency. I might fail an interview because I believe I’ll fail it, because the latter belief causes me to be nervous and perform poorly. For one, you have to know the belief in question is a self-fulfilling one: if my belief that p just causes p to be true by accident, I can hardly say I voluntarily brought it about that p.
But even if I know a belief is self-fulfilling, e.g. I know I’ll pass the interview unless my own belief that I’ll fail gets in the way, we might still hesitate to say it’s up to me whether I fail or not. Maybe the mere thought of my failing causes me to be nervous, and then I get so nervous that I realize I’ll fail, and the power of my nervousness prevents me from getting out of that cycle. For a self-fulfilling belief to count as an action, we require that, in addition, the agent is psychologically constituted such that they will form the belief or not depending on how practical considerations bear on the case. If I believe I’ll fail the interview because the power of my nervousness gets me into a vicious cycle, I am not doing anything voluntarily. If, in contrast, I have greater self-control, and therefore believe I’ll pass the interview because it would be good to pass it (in knowing that having this belief is sufficient evidence that I will in fact pass), then we can say I am simply willing that I pass the interview.
So, in sum, for my forming a belief that p to count as an action, we require that:
1. There is a dependence of whether p obtains on whether I believe p or not
2. I have knowledge of this dependence
3. I am psychologically constituted such that, knowing 1 obtains, I will believe p or not depending on whether it would be good for me to bring it about that p
Depending on whether I can get away with it, I may also add:
4. I know all these conditions to obtain, including 4
I will not dwell on how beautiful this analysis5 is and how it explains every feature of action we could possibly want to explain.
“You’re saying that, when you act, you’re just forming a belief? Weird. Actions don’t seem like belief-formations.” An action is not a mere belief-formation; it is, rather, a belief-formation in conjunction with the above conditions being satisfied. I would prefer to say that forming a belief is a part of making a choice, the other parts being the satisfaction of the four conditions.6
“But it doesn’t seem to me that when I do something, I form a belief that causes its content to obtain.” Conditional on my account being correct, things would be expected to seem to you exactly as they in fact do. Something being a correct analysis of X does not imply that occurrences of X will seem like occurrences of the terms in the analysis.
“Forming a belief because you want its content to obtain is irrational. Beliefs have to be supported by the evidence. I agree that in the original belief-detector case, after you form the belief that you’ll get $1,000 it will be evidentially supported, but that doesn’t mean forming the belief in the first place is rational, since when you form the belief, you don’t have the evidence yet.” Three things to say to this. First, that is loser talk.
Second, the moment you form the belief is the same moment you have the evidence. What the objection comes down to is which of these two conditions rationality requires: (i) that you never have a belief that is not supported by sufficient evidence; or, (ii) that any belief you form has sufficient evidence independently of your forming the belief. What cases distinguish between these two views? Well, the belief-detector case, and highly idealized versions of the interview case, etc. Strange cases, that is to say. Couldn’t it be that we act without understanding the true nature of action, so we are not consciously aware of the role belief plays in action, and so our paradigm cases of belief are ones where beliefs don’t cause their contents to come about? Couldn’t it be that any preference for (ii) over (i) is just an intuition about normal cases that overgeneralizes? If belief-detector cases were common, and everyone just believed that the thing they want would happen (whenever they knew their belief to be self-fulfilling), it would not occur to you to prefer (ii) over (i). In the actual world, our intuitions are exactly what you’d expect conditional on my account being correct, given that people understand they act without understanding the correct analysis of action (whatever that analysis may be), and thus do not form intuitions that reliably distinguish between (i) and (ii)’s truth.
Third, as it turns out, even if you prefer (ii) to (i), the beliefs formed on my account satisfy (ii). I know the kind of being I am; I know, in particular, that I’ll believe whatever is convenient in the original belief-detector case, as long as the belief would thereafter be evidentially justified. I want money, and so I’d believe I’ll get the money. If I decide that wealth actually corrupts, then I’ll believe I won’t get the money. I know this right now. All you have to add for me to get the $1,000 is to actually put me in the case. Since I know all this about my dispositions ahead of time, this means that the moment I recognize it as good to get $1,000, I will already have sufficient evidence that I’ll get the $1,000, because I know ahead of time that I’m the kind of being who will believe I’ll get the $1,000 under the conditions described. I know all this even before I form the belief. So I don’t need to wait until I form the belief for me to have sufficient evidence.
Of course, in real life, all these steps will happen in an instant—I do not think through the kind of being I am in order to form an intention. I just see I should do something, and then I do it. But what matters is not the temporal order of reasoning, since we often reason automatically and without a series of conscious thoughts, but rather the structure of the justification for my belief. And, in this case, I have all the evidence I need beforehand, even before I form the belief, and whether I form the belief is in fact sensitive to whether I have that evidence.
Sure, in general, there’s the issue of choosing the best axioms, which is why mathematical truths are not themselves logical truths. I only did my undergrad in math, but I will hazard to assert that results about which axioms are “actually true” are not the kinds of things they usually publish; rather, if they want to work with an alternative system, they will publish results about what logically follows from the relevant axioms. Mathematicians, correct me if I am wrong in asserting that your job is to figure out logical truths about what theorems follow from which axioms.
Consequently, if you do not have a belief either way, you get nothing.
The mysterious stranger did not have a difficult time finding this number—he used a binary search. Or we may imagine he has an advanced belief detector, which does not merely tell you whether someone believes p, but also answers questions such as “What dollar amount does this person believe they’ll get?”
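To make the footnote concrete, here is a Python sketch of the binary search; `believes_at_most` is a hypothetical stand-in for asking the basic detector “does this person believe the amount is at most x cents?”.

```python
# Recovering a believed dollar amount from yes/no detector queries.
# `believes_at_most` is a hypothetical detector interface: it answers
# whether the subject believes the amount is at most x (in cents).

def find_believed_amount(believes_at_most, lo=0, hi=10**6):
    # Invariant: the believed amount lies in [lo, hi].
    while lo < hi:
        mid = (lo + hi) // 2
        if believes_at_most(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo

# Simulated subject who believes they'll receive $2564.21 (256421 cents):
believed = 256421
amount = find_believed_amount(lambda x: believed <= x)
print(amount)  # 256421, found in about 20 queries
```

Each query halves the candidate range, so a million possible amounts take only about 20 scans.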
I am too conceited to be able to say this without mentioning that I independently came up with (a better version of) this view when Ned Hall told me about his belief-detector case.
I am uncertain, actually, that these conditions count as an analysis, since an analysis requires that the terms used are not in turn defined in terms of the thing to be analyzed. Here, it seems plausible that “practical considerations” or “good” will need to be defined in terms of action. I would, rather, say that these conditions are a part of a functional analysis of action and practical reason as a whole.
Actually, actions only come with beliefs in relatively simple cases. I might e.g. have a nerve disorder where my arm only moves 10% of the time when I will it to move. I still intentionally move my arm in the cases where I succeed. But take my word that the account given here can be generalized to cover degrees of belief.
In case any of you want to check your work:
Let x be arbitrary, and let ε > 0. Then we pick δ = ε/2. If |y-x| < δ, then |f(y)-f(x)| = |2y-2x| = 2|y-x| < 2δ = ε. Therefore f is continuous at x, and consequently at every point since x was arbitrary.
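And, if you distrust even your own algebra, here is a quick randomized spot-check (illustration only; the short proof above is what actually settles it) that δ = ε/2 works for f(x) = 2x:

```python
# Randomized spot-check of the delta = eps/2 choice for f(x) = 2x.
import random

def f(t):
    return 2 * t

random.seed(0)
for _ in range(1000):
    x = random.uniform(-100, 100)
    eps = random.uniform(0.001, 10)
    delta = eps / 2
    # pick y strictly within delta of x (the 0.999 keeps us off the boundary)
    y = x + random.uniform(-delta, delta) * 0.999
    assert abs(y - x) < delta
    assert abs(f(y) - f(x)) < eps  # |2y - 2x| = 2|y - x| < 2*delta = eps

print("delta = eps/2 passed all sampled points")
```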
It seems to me that a disanalogy between the mathematical analysis of continuity and this philosophical analysis of agency is that now mathematicians no longer care about common usage of the word “continuous”. The epsilon-delta definition *is* continuity now. Is this also the case for philosophical analyses? If they were shown to have clear divergences from common usage of the concepts being analyzed, would the analyses be discarded?