> I think the only reasons the EDT-inclined in academia reject FDT are (i) they haven’t heard of it, (ii) it was invented by Eliezer Yudkowsky, and (iii) it gives weird verdicts in e.g. Transparent Newcomb and MacAskill’s bomb case (see my linked post for discussion)
It could also be that:
- FDT requires counterlogical suppositions
- FDT depends on a notion of “algorithmic similarity” that has yet to be worked out
- FDT seems to assume a controversial “algorithmic agent” ontology (see here [*])
[*] https://www.lesswrong.com/posts/dmjvJwCjXWE2jFbRN/fdt-is-not-directly-comparable-to-cdt-and-edt
I do love how deliciously Kantian your view of FDT and theory of action is. Literally just adopt a maxim, loser. itsnotthathard.
I usually find Newcomb's Problem discussions pretty tiresome / contrived, but both of these posts were really good! Not boring! I'm still a bit skeptical about MacAskill's bomb case, though. I think I'm generally capable of pre-committing to actions and principles in a manner consistent with FDT, such that I could decide as a matter of course to be a true one-boxer even for the Transparent Newcomb.
But if I think about the bomb scenario, I don't think I could actually bring myself to open a box that I know has a bomb in it and will kill me. Even if I did "pre-commit" to opening the Left box, I know that in the 1 in a trillion trillion universe where Omega messes up and I end up in the situation described, I would under no circumstances open the Left box. It's just not an action that I'm capable of taking. So even if I choose to open the Left box ahead of time because doing so would save me $100, my conditional chance of failure in the described situation isn't merely higher; it's ~100% and I know it. Which of course means that Omega would predict** that I'll pick the Right box and so I'll end up in this situation regardless.
Basically, this feels like a situation where I really am incapable of credibly committing to a policy / future action even if I want to, and because the prediction can account for that lack of credibility, no amount of pretending or telling myself that I really am committing to it will do anything to change the result.
**Side note: This is only true if we modify the original bomb problem. As currently stated, it's not actually equivalent to the Transparent Newcomb, because if I do commit to opening the Left box and Omega predicts that I will open the Left box and leaves me a note saying as much, I will of course just open the Left box. So the fact that I know I would open the Right box if told there was a bomb in the Left box doesn't actually determine Omega's prediction. For my failure to follow FDT to matter, we'd instead have to say that Omega is specifically predicting what I would do in the scenario described (i.e., when told that there is a bomb in the Left box). I still don't think this is quite the same as Transparent Newcomb? But at least now the argument goes through.
I'm not quite seeing your point on them not being parallel--this is important, thank you for bringing it up.
As stated in MacAskill's case, Omega puts a bomb in Left iff he predicts you take Right. I took interpretive liberties here, though I probably should have flagged them, since as stated the case is paradoxical. If I'm a troll and I decide to take Left iff Omega put the bomb in (i.e. I take Right iff Omega predicted Left), and if Omega is able to foresee this, he won't be able to form a stable prediction. So I assumed Omega figures out whether you would take Right conditional on there being a bomb in Left, and puts a bomb in *at least* if that's the case. It seems like you need that for FDT to recommend taking Left, as MacAskill intended. If in fact I'm someone who will take Right if there's a bomb and Left otherwise (an anti-troll), then Omega could put a bomb in or not and still get a correct prediction, and the problem would be under-described. Is this just the same sort of point you were getting at?
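To make the troll/anti-troll point concrete, here's a quick brute-force sketch under my reading of the original setup (Omega puts a bomb in Left iff he predicts Right); the policy labels are mine, not MacAskill's:

```python
policies = {
    # what the agent does, as a function of whether Left contains a bomb
    "always Left":  lambda bomb_in_left: "Left",
    "always Right": lambda bomb_in_left: "Right",
    "troll":        lambda bomb_in_left: "Left" if bomb_in_left else "Right",
    "anti-troll":   lambda bomb_in_left: "Right" if bomb_in_left else "Left",
}

for name, act in policies.items():
    stable = []
    for prediction in ("Left", "Right"):
        bomb_in_left = (prediction == "Right")   # Omega's placement rule as stated
        if act(bomb_in_left) == prediction:      # does the prediction come true?
            stable.append(prediction)
    print(f"{name:12s} stable predictions: {stable}")

# always Left  -> ['Left']
# always Right -> ['Right']
# troll        -> []                  (no stable prediction exists)
# anti-troll   -> ['Left', 'Right']   (either prediction works: under-described)
```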
Yeah exactly! In the Transparent Newcomb, once I know Omega’s prediction, two-boxing is “always better” (under CDT/EDT), but for the bomb case (as I initially interpreted it) which box is better depends on what Omega predicted I’d pick. So yeah, it ends up underspecified because I can just pick conditional on Omega’s prediction (the anti-troll case you outline).
I think one of the commenters on the original LW thread pointed this out as well, and so the fix is just as you describe: treat the problem as Omega predicting specifically what you would do if there’s a bomb in the Left box, and then not putting a bomb in the Left box if you would blow yourself up, but putting a bomb in the Left box if you wouldn’t.
I do still think this feels a bit different from Transparent Newcomb even in the modified version? Because in Transparent Newcomb, if Omega predicts that you’ll one-box, CDT and EDT would have you two-box. But here, if Omega predicts that you would blow yourself up if the Left box has a bomb, then it won’t put the bomb in the Left box, and CDT and EDT still just say to take the Left box, which doesn’t violate Omega’s prediction at all. So CDT/EDT still pick boxes conditional on Omega’s prediction, but now at least LDT does say to pick Left no matter what. So it makes the point, but it still doesn’t seem totally isomorphic.
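Here's a small sketch of how I'm reading the modified version: Omega predicts the counterfactual "what would you do if Left contained a bomb?" and places a bomb in Left only if that answer is Right. The $100 cost of Right and the free Left follow the case as discussed above; the policy labels and the little helper are just mine for illustration.

```python
COST_RIGHT = 100

def outcome(counterfactual_if_bomb, actual_choice):
    """counterfactual_if_bomb: what you'd do facing a bomb in Left (what Omega predicts).
    actual_choice: what you actually do, given whether Left has a bomb."""
    bomb_in_left = (counterfactual_if_bomb == "Right")   # modified placement rule
    choice = actual_choice(bomb_in_left)
    dead = (choice == "Left" and bomb_in_left)
    cost = COST_RIGHT if choice == "Right" else 0
    return bomb_in_left, choice, dead, cost

# FDT/LDT-style agent: take Left no matter what.
print(outcome("Left", lambda bomb: "Left"))
# -> (False, 'Left', False, 0): no bomb placed, takes Left, lives, pays nothing.

# CDT/EDT-style agent: dodge the bomb if there is one, take Left otherwise.
print(outcome("Right", lambda bomb: "Right" if bomb else "Left"))
# -> (True, 'Right', False, 100): bomb placed, takes Right, lives, pays $100.
```

Neither agent's actual choice contradicts Omega's (counterfactual) prediction here, which is why this version doesn't feel isomorphic to Transparent Newcomb, even though the FDT-style agent still comes out $100 ahead.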
Love this article! Like Olivia, I'm very into this particular topic. I have so, so much to say about this, but for fear of overwhelming you with a wall of text, I'll bring up only a few questions and glosses.
Firstly: do you think necessity is a desideratum for a choice procedure (or whatever the technical name is for the category that naive choosing, sophisticated choosing, and resolute choosing belong to)? I ask because the remark beginning with "Maybe if you're lucky..." seems to imply that your ideal choice procedure is contingent on your own capabilities as an agent. Namely: if you're capable of following through on resolutions, you should be an RCer, and if you're not, you should be an SCer. But surely we can ask what that judgement is in virtue of? In particular, this consideration about your capabilities as an agent seems very SC-flavored. In fact, bare SC itself would recommend this exact move: "If making resolutions will cause your future self to follow through with those resolutions, then make those resolutions." In light of this, it feels hard to escape the idea that SC is just completely prior to RC.
Secondly: doesn't RC insufficiently account for tactics? Say you resolved to plant a tree in your hometown thirty years from today. If you *just* made the resolution today, with no causal measures in place, you would almost certainly fail to follow through with it -- at the very least, you'd have probably forgotten about your resolution by then! It would seem that you need *some* sort of measure to work around your future self and ensure that your resolution is actually enacted. (Do you maybe think that there's a meaningful difference between 'working around your future self's preferences' and 'working around your future self's capabilities'?)
Thanks again for writing an article on one of my favorite pet topics. Cheers!
On your point about valuing rationality in itself versus rationality being an instrument for gaining stuff: a parallel I've thought about (because my way of thinking about Newcomb's problem has both parallels to and differences from FDT) is the Church-Turing demonstration that there are numbers uncomputable by any general method (procedure).
If one inherently valued general computability over accumulating mathematical truths, one could just claim that uncomputable numbers are not numbers, and so there are no uncomputable numbers or truths associated with them. Hurray, a general method to grind through to any mathematical truth is possible!
Take the classic example of the halting problem: say you want a general algorithm that outputs 1 for an arbitrary program that will halt and 0 for one that won't. The problem is that for any such general algorithm there will be deviant programs for which it outputs 0 even though the program halts, or 1 even though the program does not halt. The correct answers for these deviants are not computable by a general method (a single algorithm).
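For what it's worth, here's the textbook diagonal construction as a minimal sketch; `halts` is just a hypothetical placeholder for whatever the proposed general method is, not a real library function:

```python
def halts(program, program_input):
    """Hypothetical general method: True if program(program_input) halts, else False."""
    raise NotImplementedError  # no correct, always-terminating implementation can exist

def deviant(program):
    """Do the opposite of whatever the general method predicts about (program, program)."""
    if halts(program, program):
        while True:   # the method said "halts", so loop forever
            pass
    else:
        return        # the method said "loops forever", so halt immediately

# Feeding deviant its own source defeats the method: whatever halts(deviant, deviant)
# answers, deviant(deviant) does the opposite, so the 0/1 verdict is wrong for this input.
```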
However, it is still usually determinate whether the deviant algorithms will halt or not. You could use special methods to determine this in advance in many if not all cases. But if we just say "No, we can't consider those correct; the only proper numbers are computable by a general method," we abandon all that for the sake of propping up the general method. Those special methods, and the results about whether a given deviant halts or not, could be very useful. This does not seem rational, or even good mathematics. To commit to the general method come what may is not a good commitment. Boo, you've just consigned a bunch of perfectly good mathematical truths to limbo!
Likewise, as you suggest, to commit to some canonical procedure in decision theory come what may, so that you are constrained to choose means that foreseeably tend toward an unproductive end, is to subordinate the desired outcome to the means you usually employ.
It doesn't make any more sense to subordinate "rationally preferable" to a general procedure for how to behave than it does to subordinate "number" or "mathematical truth" to "can be computed by a general algorithm."
A lot of this seems to come down to something dangerously close to semantics. For the wine problem, if you describe it as one glass of wine not altering your preferences but instead impairing your rational faculty, then I think you'd admit that even as a resolute chooser you would not choose/plan to drink the first glass of wine, because the one-glass-drunk person would not be you but someone else (in some relevant sense), and it would be appropriate to evaluate the contingencies in the sophisticated chooser model. For example, presumably you do not argue that as a resolute chooser you can raise your arm tomorrow even if someone drugs you and renders you unconscious before the appointed hour.
One glass of wine sounds like it impairs your arithmetic faculty (say, you make more arithmetic mistakes calculating your bar bill under the influence of one glass), but maybe you could describe it as making you prefer a different arithmetic. So your arithmetic is not wrong; it was just a different arithmetic. You could say this, but I don't think many would choose this description of the situation. The system of arithmetic you would have to invent to make those moves "right" is unwieldy and not very useful (the bartender is not going to agree with your method of calculation even if you can show an axiomatization of this alternate arithmetic).
I'm not seeing the reason to describe the uninhibited state you are in after one glass as having different preferences; it really sounds like a state of impaired rationality. It sounds like few would agree with the utility calculations the one-glass-drunk person would make, not that they have different payoff numbers for them.
I think it is a tricky and somewhat mysterious question whether humans in general (or one particular person) are resolute choosers, and even to what extent. Even you seem to admit that temporally distant actions are differently subject to choice, since you admit they have a higher failure rate. You seem to say that merely a different rate is not a difference in kind, but that seems unlikely: even if we are not Hegelian dialecticians, I think we sometimes acknowledge that a difference in quantity becomes a difference in quality; a 50% failure rate is qualitatively different from a 100% failure rate.
I'd agree that many of the choices we make are not best understood as the outcome of a mental process occurring just before and contiguous with the act. If I see a full red wine glass falling towards a white carpet and reach out and catch it, it is very unlikely that this is because I engaged in a long deliberative process about whether to try to catch it and how to go about doing it. Rather, at earlier times I've trained my character and my hands to react in that way, probably not by thinking about and handling wine glasses (unless I trained to be a bartender or the like) but, say, through playing catch with my dad and learning to enjoy the aesthetic of a nice clean white carpet free of red wine stains. I'd still say it was voluntary and my act.
So clearly some acts are conditioned by very different kinds of choices, and the key parts of those choices have very different temporal proximity to the acts (some occur shortly before, others long before). Sometimes the very same sort of act is conditioned by very different choices with very different temporal relations.
Sometimes when I mark my X on a ballot in an election, I decide which candidate's name to mark the X next to while staring at the ballot, pencil in hand, with a long process of back-and-forth remembering and calculation in the booth. Sometimes I mark my X quickly, almost reflexively, having made my decision not based on careful conscious calculation but on vibes, having judged which was the best candidate weeks earlier.
Also some acts are of very different kinds. Choosing to have chocolate ice cream by uttering "I'll have chocolate" is a very different act from choosing to be good at catching balls by practicing for many hours in the evenings over many weeks.
All this goes not just to personal identity but to the free will/moral responsibility/determinism debate. It seems clear from their arguments that some incompatibilists reason something like this: the catcher can't choose to catch or drop the pop fly; rather, all the training and experience before that determines the catcher's skill and chance of catching the ball, and so on back (every time they trained, the "choice" to train was dependent on opportunities and past acts beyond the catcher's control as well). Therefore there are no choices, because a choice would have to be some immediately contiguous mental act that solely or chiefly determined the outcome, ignoring all precursors, and there are no such events. They also seem to have a problem with personal identity, since they will say things like "the catching of the ball was chosen by the initial conditions of the universe," but it seems really easy to identify the catcher as different from the initial conditions of the universe, and as the relevant chooser. Anyway, it seems like a really bad and unconvincing definition of choice has been selected, like stipulating that the person who has had one glass of wine has different preferences rather than saying they are just less capable of rational thought/calculation.
In terms of whether you need an explicit plan under FDT: my sense would be that under FDT a plan is just whatever can be modeled as a program/algorithm of the Turing machine you are using to model the subject's thought/decision process, one that leads to an attempt to carry out a series of steps in a given situation. If you have a reaction of the form that can be modeled as the conditional branch instruction "if I see a glass of wine falling, then I catch it," then you have a "plan" to catch glasses of wine in those circumstances. Likewise for more complicated sets of instructions (algorithms) that lead to a clear if more complex conditional dependency. Whether stated or unstated, it would still be a plan. A choice is when something happens analogous to the Turing machine calculating (or "judging") that certain data or instructions should be added to (or removed from) the machine's tape.
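Here's a minimal sketch of what I mean, on my own framing (the names are illustrative, not anything from the post): a "plan" is just any observation-to-action mapping in the program modeling the agent, whether or not it was ever explicitly articulated.

```python
from typing import Callable

Observation = str
Action = str
Plan = Callable[[Observation], Action]

def catch_reflex(obs: Observation) -> Action:
    # The unstated, trained reflex, modeled as a conditional branch:
    # "if I see a glass of wine falling, then I catch it."
    return "catch it" if obs == "glass of wine falling" else "do nothing"

def stated_resolution(obs: Observation) -> Action:
    # An explicitly adopted resolution has exactly the same form.
    return "decline" if obs == "offered a first glass of wine" else "do nothing"

# Both count as "plans" in this sense: conditional dependencies of action on
# situation, regardless of whether they were ever consciously articulated.
print(catch_reflex("glass of wine falling"))               # catch it
print(stated_resolution("offered a first glass of wine"))  # decline
```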
If Omega in Newcomb's problem is predicting based on the mental states modeled as a Turing machine by FDT, then the fact that you can have the Newcomb's problem situation at all (that there is any prediction better than chance) is an admission that the subject has a plan in the FDT sense. Are all Newcomb-Problem-type situations one can state possible? I'd say certainly not. Even for all actually possible Newcomb-Problem-type situations, do we really always have a determinate reaction to every situation implicit in the structure of our mind alone? Seems unlikely to me, so I don't think FDT is going to be the right analysis in every case.
I described the wine case as I did, and as far as I can tell what I said still goes through *given* that that's the case I was talking about. That there is another way of describing things on which I would have to say different things does not mean that my argument doesn't go through as I stated it. The sophisticated chooser holds that you should have 0 glasses, even if we describe the case as a change in non-instrumental preferences which leaves you instrumentally rational. The resolute chooser disagrees. Moreover, resolute choice does not mean you have to recommend drinking one glass in the case where you anticipate future irrationality, or in any case where the causal link between your adopting a plan and your carrying out the plan is severed. So the crux is what to do in the case as I described it, not your description, which is one where the resolute chooser and sophisticated chooser have the same verdict.
Most importantly, anything said about the wine case that depends on issues regarding changes in preference or anticipated irrationality will not affect the analogous discussion of Newcomb+Commitment, where there is no change in preference and no anticipated irrationality. If we imagine the same case but where you might be irrational after Omega scans you, that's a different decision problem, and depending on the details FDT might recommend paying to commit (since the world where FDT recommends one-boxing without commitment would not necessarily be a world where the FDT agent actually ends up one-boxing). So, likewise, that would not be the case to look at if you want to adjudicate between FDT and CDT.
Newcomb's problem as classically stated (and as you stated it) makes no explicit assumptions about how Omega will make his predictions (only that it is a prediction and not time travel or backward causation, etc.). There is no condition that Omega can only use information from the scan, or only information about the conscious thought processes of the person scanned. If you want to consider the subset of cases where Omega achieves his prescience in that way, you should explicitly state that as an assumption. Note that this changes a lot of things in your discussion of the question. For example, it means that if a third party interferes in some way, Omega won't be able to predict it, so it would be useless to ask a third party to restrain you from two-boxing, since by that assumption Omega can't predict what they will do. And even if we constrain Omega not to initiate any change in your state of mind, if we allow that he can predict outside (third-party, etc.) interference, then he may have predicted that your state of mind will be changed after the scan by some outside force, and so the accuracy of his prediction may depend not on your decision-making process but on that outside force.
Making all these constraining assumptions explicit would, I think, change the analysis decision theorists make of these cases. On my analysis, the divergence between different kinds of DT in these sorts of cases depends in part on the assumptions they are making about how the situation is constituted (how Omega is constrained, etc.).
I'm sorry if I come off as pushy or condescending. I just find it really fun and exciting to think through these things. I do find your discussions clarifying in many cases.