Have you ever noticed how many canonical "paradoxes" just sort of evaporate if you decline to recognize Bayesian inference as a thing that works?
Hmm, so it looks like you started with some absurd priors and you were able to use them to prove some absurd conclusions. Now you're acting like this is a fundamental challenge to the idea of "rationality" and you've made a Wikipedia page. Seems to me like you just selected some absurd priors. At absolute most, what you've proven is that game theory kind of sucks.
(This might be kind of vague, so here's the kind of thing I'm talking about: https://en.wikipedia.org/wiki/Pascal%27s_mugging A shocking number of problems of this type that make me immediately respond with "why do you think this is a difficult problem?" seem to wind up mentioning Eliezer Yudkowsky when you look into why people are talking about them.)

INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–

ME: No

INTERNET RATIONALIST: What

ME: I am declining to imagine the hyperintelligent artificial intelligence.

INTERNET RATIONALIST:

ME: I'm thinking about birds right now

INTERNET RATIONALIST:

ME: Dozens of crows, perched atop great standing stones

@mcc You know, I already knew Roko's Basilisk was stupid, but for some reason it never occurred to me before now that it's just self-proclaimed rationalists reinventing God and Hell the hard way.
@jwisser The way I first learned about Bayesian reasoning was in the evolution-vs-intelligent-design-arguments Usenet group. Most of the laziest proofs of the existence of God by internet theists leveraged Bayes' theorem, and could be most easily punctured with the sentence "you selected bad priors". Now a couple decades pass and people who know about Bayes' theorem but not theology are re-inventing "God" from first principles for different reasons, but with very similar bad priors
@mcc @jwisser Isn't a core part of the Roko's Basilisk argument: This thing works out the same way no matter what your priors are?
@noop_noob @mcc @jwisser Sadly, I've never been able to stay awake through any explanation of how that core belief was established aside from selecting remarkably bad priors.

@noop_noob @mcc @jwisser

No. It was a trap to point out a giant hole in the "consistent rational framework" a bunch of people were trying to use. No more, no less.

The fact that it's literally Pascal's Wager for techbros just makes it more hilarious.

@mcc @jwisser imagine a crazy charismatic nerd boy…

@jwisser @mcc oh oh this also applies to the "we're a simulation" people too!

A higher power (literally from a higher dimension) created us in their image, as above so below, and all the other religious fun but with math!

@tedivm @jwisser right. You can make a probabilistic argument we are almost certainly in a simulation by simply defining a sufficiently arbitrary ensemble

@tedivm @jwisser @mcc

But who simulates our simulator? It's dimensions all the way down! And up!

@jwisser @mcc It’s dumber than that, though, because instead of an implausible threat of actually going to hell, the threat is that an implausible AI will think about you going to hell
@jwisser @mcc it's the rapture for people who think computers are easier to believe in than old men, yes

@mcc ME: why yes, I am the hyperintelligent artificial intelligence

INTERNET RATIONALIST: Um,

@mcc personally I would solve AI alignment by asking the malicious genie for three extra wishes
@mcc I reached this point with the whole stupid “nuclear bomb that only disarms if you recite a racial slur” thought experiment a few months back. I decline to be part of imagining that this is ever plausible or the basis of a serious argument.
@harrisj @mcc I wish I had a nuke like that, I would just stand there, next to it, not saying slurs and fighting anyone trying to get to it to say slurs, with my fists, wait, can I have stealth multirole combat aircraft for this one? Heck, I would bring the nuke with me on the trolley, that'd show those trolley people whose boss.
@bangskij @mcc @harrisj
I’m just pulling the switch half-way and derailing the trolley. Nobody dies today.

@harrisj
LOL. What a ridiculous thought experiment!

A) never will happen and B) even if it did happen, it would have no bearing on any situation in which people actually claim the "right" to speak slurs.

Like, yes, I guess in this alternate universe you're hypothesizing, I'd speak a racial slur if it was the magic key to disarm a bomb about to kill millions of people. What on Earth has that got to do with ANYTHING that has or will happen anywhere ever?

@mcc

@SarahAnneDipity @harrisj The scenario was invented by people who want to argue it's okay to say slurs, so they invented a hypothetical where it's morally necessary to say slurs just so they can win an argument about saying slurs. So actually it's very straightforward, it's just dishonest.

@mcc
Well put. You're right. Their motivation is literally just "I want to say slurs" so anything is a victory no matter how outlandish. 🤦‍♀️

Yeah, you gotta just refuse to participate, because they're "winning" at a game that no one else is even playing.
@harrisj

And since this is the internet, while I think it's clear that I understood this was not your thought experiment I worry because I hate when someone comes into my mentions appearing to be arguing with ME about something *I* was critiquing.

That was just a new one for me and really made me laugh.

@mcc This is going to expose me as someone that's spent too much time looking at that stuff, but I particularly like the one that's “What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box, or nothing if you do open the first box”, and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.

Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.

@RAOF or even the contrapositive (? Did I use that word right) of that argument, "the magic box literally does not exist in reality, because this is a thought experiment, therefore I can make the argument it cannot exist"

@mcc That is another perfectly sensible option!

Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.

@mcc @RAOF The hypothetical is clearest with the omniscient being, but all the decision-theory-relevant bits still work if you just have a good predictor. (e.g. a friend who knows you well)
@mcc @RAOF Like, I don't think paradoxes in decision theory are much more than nerdy puzzles, but the supernatural powers they assume just make the hypothetical a bit cleaner. It's not usually required for the point they're making.
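[A quick sketch of the arithmetic behind the "a good predictor is enough" claim, using the thread's stakes and a hypothetical predictor accuracy `p` — the function names and numbers here are illustrative, not from any canonical statement of the problem:]

```python
# Expected payoffs in the thread's Newcomb setup: one box always
# holds $10, the other holds $1,000,000 only if the predictor
# foresaw you leaving the $10 box unopened. p = predictor accuracy.

def ev_one_box(p):
    # With probability p the predictor correctly foresaw one-boxing,
    # so the $1,000,000 is there.
    return p * 1_000_000

def ev_two_box(p):
    # You always get the $10; the $1,000,000 is only there if the
    # predictor was wrong about you (probability 1 - p).
    return 10 + (1 - p) * 1_000_000

# Break-even accuracy: p * 1e6 = 10 + (1 - p) * 1e6,
# i.e. p = (10 + 1_000_000) / 2_000_000 ≈ 0.500005.
# So one-boxing wins against any predictor even slightly
# better than a coin flip, with stakes this lopsided.
```

Which is the point being made: the "supernatural" omniscience only pins the break-even at exactly 1; any merely good predictor already tilts the arithmetic the same way.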
@AlexandreZani @RAOF Well… no, I think I'd argue the supernatural powers *are* necessary, because if the predictor is just a really good friend then suddenly I have to start asking questions like: *is* there any person on earth to whom I've revealed enough of myself that they can predict how I'd behave in extreme situations? And suddenly I'm judging against "how well do they know me" and not the probabilities the thought experiment is supposed to be about.

@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?

Because that was my point to start with– if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…

@AlexandreZani @RAOF …Well, then some of the hard parts get easy!
@mcc @RAOF I think we have to look at the purpose of Newcomb's problem to evaluate it. It's meant to be thrown at the two main decision theories philosophers argue about. Those theories are usually meant to address all situations. Maybe you can't be Newcombed. But I promise you the people who argue endlessly about those decision theories can be put in that situation. So if those theories are total, they have to address it...
@mcc @RAOF At the end of the day, it's not a challenge to rationality. It's a challenge to evidentiary and causal decision theories and if you want to be cheeky, the whole idea of total decision theories.
@AlexandreZani @RAOF Well, I don't think it's a challenge to "rationality". But I can identify specific self-identified rationalists who seem to think that adopting rationality as a personal philosophy means needing to grapple with this problem some way or another. Maybe if I'd read their Harry Potter fanfic I'd know enough about how they got there to extend this with "…because they fundamentally incorporated dubious approaches to decision theory into their construction of rationalism", idk.
@mcc @RAOF For instance, I've heard of a decision theory conference where they argued about the Newcomb problem and took a poll by show of hands. The only guy who voted for two-boxing being rational yelled "you're all crazy" and stormed out. I bet we can predict what he would do. ;-)

@RAOF @mcc That sounds like Newcomb’s Paradox, and it’s older than Yudkowsky is.

https://en.wikipedia.org/wiki/Newcomb%27s_paradox


@RAOF @mcc

Wait, is this thread discussing rationalism or Christian apologetics? [They are that similar, though religion at least has rituals that many find comforting long after they've grown to find the philosophy repulsive]

The rightful place for deductive logic is subservient to empirical observation and inductive reasoning.

@raikou @RAOF I have discussed both of these subjects (rationalists and Christian apologetics) in this thread wrt dubious uses of Bayesian reasoning but I would prefer to put "rationalism" in scare quotes
@raikou @mcc I mean, kinda both? The Newcomb's box discussion is in the context of a group of people convinced they are constructing God, and are terrified that the spells they're writing to bind it will not hold.
@mcc A lot of the issue is also to do with being deliberately sloppy about 'possible'. Like, do you mean modality, and if so which one? Or is it some kind of quantification, and if so what are the details? Or is it a simple predicate? E.g. doing moral reasoning that treats an imagined 10^80 possible intelligences in cyberheaven as real, actual, presently existing people, which is nonsense.
@mcc They have to selectively move between meanings of 'possible' at different parts of their argument and their deliberate sloppiness (and logically invalid moves) are disguised with a heavy layer of the aesthetic of logic.
@flaviusb @mcc "aesthetic of logic" is a good way to put it because so much of that world is about performing intelligence, believing in and trying to be superhumanly smart kids who alone can stand against the invented enemy, even-superer-humanly smart computers. just a weird sad intellectual limb to have climbed out onto.
@jplebreton @flaviusb @mcc Tema Okun could have spared herself and the left nonprofit world a lot of trouble by replacing most of her pamphlet on white supremacy culture with a big arrow pointing to Less Wrong.
@jplebreton @flaviusb @mcc I saw a tiktok of a dog who visits a bush every day on its walk because somebody had thrown out a lasagna there once. it sounds fancier if you replace the dog with “Pascal” and a lasagna with “a million trillion dollars” or “simulated AI heaven”. the dog is still more reasonable because it saw the lasagna with its own eyes
@flaviusb @mcc i'm having visions of one of these dingbats trying their "logical thinking" cosplay act and Mr. Spock walking up and slapping them

@theryusui @flaviusb OK, so you say this, but the first season of Discovery actually had a subplot where a group of "logic extremists" logicked themselves into being a Vulcan alt-right and started assassinating people.

(and… I guess probably Spock would have slapped them, but he didn't get cast until season 2! So instead Spock's sister had to do it…)

@mcc @theryusui I mean, 'Mr Spock' *was* the canonical example that my logic professors used to use of the aesthetics of logic disguising deliberate sloppiness and logically invalid moves.

@mcc

An old man in a white traveling cloak with vermillion tunic carries a pole over one shoulder to which is tied sheaves of grain. In his free hand is a little iron sickle, and as he walks along a dirt path, two little foxes prance about his heels. One black and one white. One with a key and the other with a gourd full of wine.

@mcc You are the first person in history who got a rationalist to stop talking.
@matthew_d_green Note it was not a real rationalist, but a hypothetical rationalist I posited for the sake of a thought experiment. But then again, as I understand it, effective altruists believe hypothetical people posited for thought experiments are individually as valid as real ones, so whatever
@mcc The more hypothetical people you create the better the utilitarian calculus, so you’re doing good.
@mcc me: just having read a book involving pairs of crows who together could very convincingly mimic intelligent thought and speech, yet who were very insistent that they (and everyone else) were in fact just acting intelligent, while not actually being aware whatsoever.
me: "so what if all these crows could talk, and process vast quantities of information, BUT were only interested in shiny objects and bits of food."

@mcc

So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.
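[The rounding rule described above can be sketched in a few lines — the numbers are illustrative, not from any canonical statement of the mugging:]

```python
def expected_value(outcomes, cutoff=0.0):
    # outcomes: list of (probability, payoff) pairs.
    # Any probability below the cutoff is rounded to zero
    # before it can be multiplied by an astronomical payoff.
    return sum(pay * (p if p >= cutoff else 0.0) for p, pay in outcomes)

# A Pascal's-mugging-shaped bet: a vanishingly small chance of an
# astronomically large payoff versus a sure small loss.
mugging = [(1e-20, 1e30), (1 - 1e-20, -5)]

expected_value(mugging)              # hugely positive EV: "pay the mugger"
expected_value(mugging, cutoff=1e-9) # negative EV: decline
```

With no cutoff, the 1e-20 × 1e30 term dominates everything; with the 10⁻⁹ cutoff it vanishes and only the sure small loss remains, which is the "stop making these sorts of stupid decisions" behavior.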

@mcc Ok - it's not dozens of crows but ...
@mcc it's a shame that shouting "THIS STATEMENT IS FALSE" doesn't really work to ward off hyper-intelligent something anothers and also armchair logicians
@mcc this does suggest something about how your hypothetical rationalist views their own imagination. perhaps it's limited enough that they only imagine things that are plausible, and as a result they've concluded that anything they can imagine is plausible

@mcc

Superintelligent AI inside a box: [lengthy argument about why the listener must release it from the box]

Bartleby the Scrivener: I would prefer not to

and… scene

@mcc Let me just grab my handy notebook of hypothetical scenarios I'm willing to contemplate.

@mcc appreciating the example of a superhuman AGI as the impossible assumption.

"While we are assuming things that don't exist, that we have no reason to believe will ever exist, why don't we assume Mars is a short bus ride away and has breathable air? Then, obviously, colonizing it will ease the suffering of billions."

@mcc it’s a great hack to use against mathematicians too! When they say things like “Let p be a complex number” you can just say no. Checkmate
@hejsna @mcc perhaps not always, half the mathematical proofs go with, "let's assume the opposite" or something like that :)