INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–
ME: No
INTERNET RATIONALIST: What
ME: I am declining to imagine the hyperintelligent artificial intelligence.
INTERNET RATIONALIST:
ME: I'm thinking about birds right now
INTERNET RATIONALIST:
ME: Dozens of crows, perched atop great standing stones
No. It was a trap to point out a giant hole in the "consistent rational framework" a bunch of people were trying to use. No more, no less.
The fact that it's literally Pascal's Wager for techbros just makes it more hilarious.
@mcc ME: why yes, I am the hyperintelligent artificial intelligence
INTERNET RATIONALIST: Um,
@harrisj
LOL. What a ridiculous thought experiment!
A) never will happen and B) even if it did happen, it would have no bearing on any situation in which people actually claim the "right" to speak slurs.
Like, yes, I guess in this alternate universe you're hypothesizing, I'd speak a racial slur if it was the magic key to disarm a bomb about to kill millions of people. What on Earth has that got to do with ANYTHING that has or will happen anywhere ever?
And since this is the internet: while I think it's clear that I understood this was not *your* thought experiment, I worry, because I hate when someone comes into my mentions appearing to be arguing with ME about something *I* was critiquing.
That was just a new one for me and really made me laugh.
@mcc This is going to expose me as someone that's spent too much time looking at that stuff, but I particularly like the one that's “What if there was this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box or nothing if you do open the first box” and then has a huge convoluted philosophical argument trying to work out how to make “only open the box you know will contain $1,000,000” the “rational” choice.
Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.
@mcc That is another perfectly sensible option!
Either you work within the logic of the hypothetical, where the magic box demonstrably exists, and so working out how to rationalise the behaviour of the magic box existing isn't very interesting, or you reject the magic box and the whole thing is void.
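(For the curious: a quick back-of-the-envelope simulation of the two-box game as described above. The 99% predictor accuracy is an assumed figure, not from anyone's post; it's only there to show that one-boxing wins on observed payoffs, which is the whole "this thing observably happens" point.)

```python
import random

def newcomb_payoff(one_box: bool, predictor_accuracy: float = 0.99) -> int:
    """One round of the two-box game sketched above.

    The predictor puts $1,000,000 in the second box iff it predicts you will
    leave the $10 box unopened; it guesses right with probability
    predictor_accuracy (an assumed figure, purely for illustration).
    """
    predicted_one_box = one_box if random.random() < predictor_accuracy else not one_box
    million = 1_000_000 if predicted_one_box else 0
    # One-boxers take only the million-dollar box; two-boxers also grab the $10.
    return million if one_box else million + 10

trials = 100_000
print(sum(newcomb_payoff(True) for _ in range(trials)) / trials)   # roughly 990,000
print(sum(newcomb_payoff(False) for _ in range(trials)) / trials)  # roughly 10,010
```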
@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?
Because that was my point to start with– if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…
Wait, is this thread discussing rationalism or Christian apologetics? [They are that similar, though religion at least has rituals that many find comforting long after they've grown to find the philosophy repulsive]
The rightful place of deductive logic is in service to empirical observation and inductive reasoning.
@theryusui @flaviusb ok so you say this, but the first season of Discovery actually had a subplot where a group of "logic extremists" logicked themselves into being a Vulcan alt-right and started assassinating people.
(and… I guess probably Spock would have slapped them, but he didn't get cast until season 2! So instead Spock's sister had to do it…)
An old man in a white traveling cloak with vermillion tunic carries a pole over one shoulder to which is tied sheaves of grain. In his free hand is a little iron sickle, and as he walks along a dirt path, two little foxes prance about his heels. One black and one white. One with a key and the other with a gourd full of wine.
So I read^Wskimmed the Wikipedia page and it seems like if you round any probability below (say) 10**-9 to zero, you stop making these sorts of stupid decisions.
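(To make that concrete: a tiny sketch of the rounding rule. The mugger's numbers below are made up for illustration; the only point is that a cutoff on negligible probabilities changes the "rational" answer.)

```python
# Hypothetical numbers throughout; the only real ingredient is the cutoff.
THRESHOLD = 1e-9  # probabilities below this get rounded to zero

def expected_value(outcomes, threshold=0.0):
    """Sum p * payoff over (p, payoff) pairs, ignoring any p below threshold."""
    return sum(p * payoff for p, payoff in outcomes if p >= threshold)

# A Pascal's-mugging style offer: with (claimed) probability 1e-20 you lose
# 10**30 units of utility unless you hand over $5 now.
mugging = [(1e-20, -10**30)]

print(expected_value(mugging))             # -1e+10: "rationally" pay the mugger
print(expected_value(mugging, THRESHOLD))  # 0: ignore the mugger
```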
Superintelligent AI inside a box: [lengthy argument about why the listener must release it from the box]
Bartleby the Scrivener: I would prefer not to
and… scene
@mcc appreciating the example of a superhuman AGI as the impossible assumption.
"While we are assuming things that don't exist, that we have no reason to believe will ever exist, why don't we assume Mars is a short bus ride away and has breathable air? Then, obviously, colonizing it will ease the suffering of billions."