INTERNET RATIONALIST: Consider the following thought experiment. Imagine a hyperintelligent artificial intelligence–
ME: No
INTERNET RATIONALIST: What
ME: I am declining to imagine the hyperintelligent artificial intelligence.
INTERNET RATIONALIST:
ME: I'm thinking about birds right now
INTERNET RATIONALIST:
ME: Dozens of crows, perched atop great standing stones
@mcc This is going to expose me as someone who's spent too much time looking at that stuff, but I particularly like the one that goes “What if there were this magical superintelligence that leaves people with two boxes, one of which always contains $10, the other of which reliably, repeatedly, and observably contains either $1,000,000 if you don't open the first box or nothing if you do open it,” and then mounts a hugely convoluted philosophical argument trying to make “only open the box you know will contain $1,000,000” the “rational” choice.
Instead of the rather more obvious argument “this thing observably happens, therefore my assumption that it cannot is incorrect”.
@AlexandreZani @RAOF And if assigning the weights on the inputs to the probability function that comprises the thought experiment turns out to be a harder problem than executing the logic of the probability function itself, then… hasn't the thought experiment ultimately shown that the probability function isn't useful?
Because that was my point to start with: if we're allowed to bring "this entire methodology seems to be working kind of poorly" in as a possibility…