Discussion about this post

Jesse Clifton

> I think the only reasons the EDT-inclined in academia reject FDT are (i) they haven’t heard of it, (ii) it was invented by Eliezer Yudkowsky, and (iii) it gives weird verdicts in e.g. Transparent Newcomb and MacAskill’s bomb case (see my linked post for discussion)

It could also be that

- FDT requires counterlogical suppositions

- FDT depends on a notion of “algorithmic similarity” that has yet to be worked out

- FDT seems to assume a controversial “algorithmic agent” ontology (see here [*])

[*] https://www.lesswrong.com/posts/dmjvJwCjXWE2jFbRN/fdt-is-not-directly-comparable-to-cdt-and-edt

Corsaren

I do love how deliciously Kantian your view of FDT and theory of action is. Literally just adopt a maxim, loser. itsnotthathard.

I usually find Newcomb's Problem discussions pretty tiresome / contrived, but both of these posts were really good! Not boring! I'm still a bit skeptical about MacAskill's bomb case, though. I think I'm generally capable of pre-committing to actions and principles in a manner consistent with FDT, such that I could decide as a matter of course to be a true one-boxer even for the Transparent Newcomb.

But if I think about the bomb scenario, I don't think I could actually bring myself to open a box that I know has a bomb in it and will kill me. Even if I did "pre-commit" to opening the Left box, I know that in the 1-in-a-trillion-trillion universe where Omega messes up and I end up in the situation described, I would under no circumstances open the Left box. It's just not an action I'm capable of taking. So even if I choose to open the Left box ahead of time because doing so would save me $100, my conditional chance of failure in the described situation isn't merely higher; it's ~100% and I know it. Which of course means that Omega would predict** that I'll pick the Right box, and so I'll end up in this situation regardless.
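
For concreteness, here's a minimal sketch of the ex-ante expected-value comparison over policies that makes committing to Left attractive in the first place. The error rate and the $100 come from the problem statement; the dollar-equivalent disutility of dying is an illustrative assumption, not something fixed by the bomb case itself.

```python
# Rough sketch of the ex-ante expected-value comparison over policies in the
# bomb case. EPSILON and COST_RIGHT come from the problem statement; the
# dollar-equivalent disutility of dying is an illustrative assumption.

EPSILON = 1e-24          # predictor error rate: "1 in a trillion trillion"
COST_RIGHT = 100.0       # opening Right always costs $100
DEATH_DISUTILITY = 1e7   # hypothetical dollar value placed on not dying

def expected_utility(policy: str) -> float:
    """Expected utility of committing to a policy before the prediction is made."""
    if policy == "left":
        # With prob 1 - EPSILON the predictor foresees Left, so no bomb: pay nothing.
        # With prob EPSILON it errs, the bomb is in Left, and the policy still opens it.
        return (1 - EPSILON) * 0.0 + EPSILON * (-DEATH_DISUTILITY)
    return -COST_RIGHT   # opening Right: pay $100 and live, either way

print(expected_utility("left"))   # ~ -1e-17: tiny expected loss
print(expected_utility("right"))  # -100.0
# Committing to Left wins whenever DEATH_DISUTILITY < COST_RIGHT / EPSILON (1e26 here).
```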

Basically, this feels like a situation where I really am incapable of credibly committing to a policy / future action even if I want to, and because the prediction can account for that lack of credibility, no amount of pretending or telling myself that I really am committing to it will do anything to change the result.

**Side note: This is only true if we modify the original bomb problem. As currently stated, it's not actually equivalent to the Transparent Newcomb, because if I do commit to opening the Left box and Omega predicts that I will open the Left box and leaves me a note saying that it predicted as much, I will of course just open the Left box. So the fact that I know I would open the Right box if told there was a bomb in the Left box doesn't actually determine Omega's prediction. For my failure to follow FDT to matter, we'd have to instead say that Omega is specifically predicting what I would do in the scenario described (i.e., when told that there is a bomb in the Left box). I still don't think this is quite the same as the Transparent Newcomb? But at least now the argument goes through.
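
To make that distinction concrete, here's a toy model contrasting the two prediction rules: one where Omega predicts my actual action given whatever note it leaves, and one where it predicts my action conditional on being warned of a bomb in Left. The function and variable names are illustrative, not taken from MacAskill's statement of the problem.

```python
# Toy model of the two prediction rules discussed in the side note. The disposition
# below is the one described above: open Left unless warned of a bomb in Left.

def my_action(told_bomb_in_left: bool) -> str:
    """Open Left by default; open Right if warned there's a bomb in Left."""
    return "right" if told_bomb_in_left else "left"

# Rule B: Omega predicts my choice specifically in the "told there's a bomb" scenario.
prediction_b = my_action(told_bomb_in_left=True)
print("Rule B prediction:", prediction_b)          # "right" -> bomb gets placed in Left

# Rule A: Omega predicts my actual action, then places the bomb (and the note)
# consistently with that prediction. Check which predictions are self-consistent.
for predicted in ("left", "right"):
    bomb_in_left = (predicted == "right")          # bomb only if Omega expects Right
    actual = my_action(told_bomb_in_left=bomb_in_left)
    print(predicted, bomb_in_left, actual, actual == predicted)

# Both predictions come out self-consistent under Rule A, so this disposition alone
# doesn't pin down Omega's prediction; under Rule B the bomb scenario is forced.
```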

