Have you ever noticed how many canonical "paradoxes" just sort of evaporate if you decline to recognize Bayesian inference as a thing that works?
Hmm, so it looks like you started with some absurd priors and you were able to use them to prove some absurd conclusions. Now you're acting like this is a fundamental challenge to the idea of "rationality" and you've made a Wikipedia page. Seems to me like you just selected some absurd priors. At absolute most, what you've proven is that game theory kind of sucks
(This might be kind of vague, so here's the kind of thing I'm talking about: https://en.wikipedia.org/wiki/Pascal%27s_mugging A shocking number of problems of this type that make me immediately respond with "why do you think this is a difficult problem?" seem to wind up mentioning Eliezer Yudkowsky when you look into why people are talking about them.)
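(For readers unfamiliar with it: the structure of Pascal's mugging is just expected-value arithmetic. A minimal sketch, with all the specific numbers being illustrative assumptions rather than anything from the thread:

```python
from fractions import Fraction

# Pascal's mugging in one line of arithmetic: a stranger promises an
# astronomically large payoff if you hand over your wallet. Under naive
# expected-value reasoning, even an absurdly small credence in the
# promise dominates the decision. All numbers here are made up.

payoff_if_true = 10**100        # utility the "mugger" promises
credence = Fraction(1, 10**50)  # tiny prior that the promise is genuine
cost_of_paying = 5              # utility lost by handing over the wallet

expected_gain = credence * payoff_if_true - cost_of_paying
print(expected_gain > 0)  # naive EV says pay up; that's the "paradox"
```

The "paradox" only bites if you accept that assigning some nonzero credence to the promise is mandatory, which is exactly the prior-selection step the post above is objecting to.)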

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–

ME: No

INTERNET RATIONALIST: What

ME: I am declining to imagine the hyperintelligent artificial intelligence.

INTERNET RATIONALIST:

ME: I'm thinking about birds right now

INTERNET RATIONALIST:

ME: Dozens of crows, perched atop great standing stones

@mcc This is going to expose me as someone who's spent too much time looking at that stuff, but I particularly like the one that's "What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box but nothing if you do open the first box", and then has a huge convoluted philosophical argument trying to work out how to make "only open the box you know will contain $1,000,000" the "rational" choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.

@RAOF or even the contrapositive (did I use that word right?) of that argument: "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument it cannot exist"
@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)
@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.
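(The evidential expected-value arithmetic behind that "good predictor" point can be sketched directly. This is a minimal illustration, assuming the thread's version of the payoffs ($10 in the always-full box, $1,000,000 in the other iff you leave the $10 box shut) and treating the predictor's accuracy p as a free parameter:

```python
# Evidential expected values for the thread's box game, where the
# predictor is right with probability p (an assumed parameter).

def ev_one_box(p):
    # predictor right (prob p): big box is full -> $1,000,000
    # predictor wrong (prob 1-p): big box is empty -> $0
    return p * 1_000_000

def ev_two_box(p):
    # predictor right (prob p): big box is empty -> just the $10
    # predictor wrong (prob 1-p): both boxes full -> $1,000,010
    return p * 10 + (1 - p) * 1_000_010

# Even a predictor barely better than a coin flip favors one-boxing;
# the crossover sits just above p = 0.5.
for p in (0.500006, 0.9, 1.0):
    assert ev_one_box(p) > ev_two_box(p)
```

Which is why the hypothetical doesn't strictly need omniscience: any accuracy above roughly 50.0005% makes one-boxing win on this evidential calculation.)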
@AlexandreZani @RAOF Well… no, I think I'd argue the supernatural powers *are* necessary, because if the predictor is like a really good friend then suddenly I have to start asking questions like, *is* there any person on earth to whom I've revealed enough of myself that they can predict how I'd behave in extreme situations, and suddenly I'm judging against "how well do they know me" and not the probabilities the thought experiment is supposed to be about.

@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… isn't what the thought experiment has ultimately shown that the probability function isn't useful?

Because that was my point to start with– if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…

@AlexandreZani @RAOF …Well, then some of the hard parts get easy!
@mcc @RAOF I think we have to look at the purpose of Newcomb's problem to evaluate it. It's meant to be thrown at the two main decision theories philosophers argue about. Those theories are usually meant to address all situations. Maybe you can't be Newcombed. But I promise you the people who argue endlessly about those decision theories can be put in that situation. So if those theories are total, they have to address it...
@mcc @RAOF At the end of the day, it's not a challenge to rationality. It's a challenge to evidentiary and causal decision theories and if you want to be cheeky, the whole idea of total decision theories.
@AlexandreZani @RAOF Well, I don't think it's a challenge to "rationality". But I can identify specific self-identified rationalists who seem to think that adopting rationality as a personal philosophy means needing to grapple with this problem some way or another. Maybe if I'd read their Harry Potter fanfic I'd know enough about how they got there to extend this with "…because they fundamentally incorporated dubious approaches to decision theory into their construction of rationalism", idk.
@mcc @RAOF For instance, I've heard of a decision theory conference where they argued about the Newcomb problem and took a poll by show of hands. The only guy who voted for two-boxing being rational yelled "you're all crazy" and stormed out. I bet we can predict what he would do. ;-)