Discussion about this post

Jack Thompson

Thank goodness someone else is writing about FDT on substack! Just subscribed, looking forward to more of your work :)

If Newcomb's problem feels like a cheat to you, requiring predictions of what you would do in a hypothetical, Joe Carlsmith recommended (on the 80,000 Hours podcast) thinking about twin prisoners' dilemmas as a more visceral and explicitly *fair* representation of the problem. I wrote about it here: https://jacktlab.substack.com/p/introduction-to-functional-decision

Tyler Seacrest

Great article! I just wanted to comment on a minor technical point, though it may be somewhat related to Kyle Star's more substantial comment. There may be a minor inconsistency in the "both boxes transparent" Newcomb problem or the bomb example. On a first reading, one might assume all three of the following:

1. Omega has a very high success rate no matter the agent's strategy.

2. Omega must divulge the guess before the agent decides.

3. The agent is free to pursue any strategy.

By (2) and (3), the agent could ensure that Omega is wrong, contradicting (1). So to make these hypotheticals well posed, I think you need to specify which of (1), (2), or (3) is not quite right. There are several options: maybe Omega merely maximizes its probability of success, achieving a very high rate only when possible; maybe Omega can choose whether or not to divulge its guess; or maybe the agent must use a strategy consistent with a high Omega success rate. Whichever is chosen, I don't think the article's analysis changes, which is why I consider this a minor point.
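The contradiction between (1), (2), and (3) can be made concrete with a toy simulation (my own illustration, not from the comment thread; the "defiant agent" strategy and all names here are hypothetical). If Omega must divulge its guess first and the agent is free to choose any strategy, a defiant agent simply does the opposite of whatever Omega predicted, driving Omega's success rate to zero:

```python
# Toy sketch of the (1)-(2)-(3) inconsistency: if Omega must reveal its
# prediction (assumption 2) and the agent may use any strategy
# (assumption 3), a "defiant" agent makes Omega wrong on every trial,
# contradicting Omega's supposedly high success rate (assumption 1).
import random

def defiant_agent(omega_guess: str) -> str:
    """Pick whichever action Omega did NOT predict."""
    return "two-box" if omega_guess == "one-box" else "one-box"

def omega_success_rate(trials: int = 1000) -> float:
    """Omega predicts, divulges its guess, then the agent responds."""
    correct = 0
    for _ in range(trials):
        guess = random.choice(["one-box", "two-box"])  # Omega's divulged guess
        action = defiant_agent(guess)
        correct += (action == guess)
    return correct / trials

print(omega_success_rate())  # 0.0 -- Omega cannot be a reliable predictor here
```

Whichever of the three assumptions is weakened, this defiant strategy is exactly what must be ruled out for the hypothetical to be coherent.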
