How prediction markets create harmful outcomes: a case study
people without disposable income are now excluded
The article does say/link:
I’ve previously talked about how it may not always be ethical to require people to bet on their beliefs, and talked about how the interests of rich people could bias certain prediction markets
As for the bullshit artist prevention: that also doesn't work.
In the footnote it does say:
This doesn’t work for very longterm bets, and it also wouldn’t convince everyone, since conspiracy theorists still exist. Still, I expect it to be helpful on average.
Although that likely still overestimates how much it would help.
@Collectivist @Soyweiser regarding the bullshit prevention/mitigation, it hinges on, "If every time a bullshit artist starts spouting bullshit people would go: “Okay, bet on it then”" which is obvious nonsense.
These people are already on record lying/being wrong *constantly*. The only way to believe anything this kind of person says in the first place is to eschew rationality.
I think the other big objection is that the value of the information you can get from a prediction market basically only approaches usability as the time to market close approaches zero. If you’re trying to predict whether an event is actually going to happen you usually want to know with enough of a time lead to actually do something about it, but at the same time that “do something about it” is going to impact the actual event being predicted and get “priced in.”
It’s that old business aphorism about making a metric into a target. Even if prediction markets were unambiguously useful as informational tools and didn’t have any of the incredibly obvious perverse incentives and power imbalances that they do, as soon as you try to actually use that information to do anything the market will start to change based on the perception of the market itself.

Like, if there’s a market on someone being assassinated, you need to factor in not only the chances of it happening on its own but also the chances of it happening given that a high likelihood from the prediction market will result in additional safety measures being deployed, or given that a small likelihood from the market may cause them to take on riskier public appearances or otherwise create more opportunities. If you don’t actually use the information for anything then it might be capturing something, but that something becomes wildly self-referential if the information is actually used in any way.
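The feedback loop above can be sketched as a toy fixed-point model (all functions and numbers here are hypothetical, just to illustrate the dynamic): once people act on the market’s price, the “correct” price is no longer the no-intervention probability but a fixed point of the price-to-intervention-to-outcome loop.

```python
# Toy sketch of a self-referential prediction market.
# All shapes and constants are made up for illustration.

def true_probability(intervention: float) -> float:
    """Chance the event actually happens, given how strongly people
    reacted to the market. Baseline 60% risk, reduced by countermeasures
    (hypothetical linear effect)."""
    baseline = 0.6
    return baseline * (1.0 - 0.8 * intervention)

def intervention_level(price: float) -> float:
    """How strongly people respond to the market's price: a higher price
    triggers more countermeasures (hypothetical response curve)."""
    return min(1.0, price)

def equilibrium_price(steps: int = 100) -> float:
    """Iterate price -> intervention -> new probability until it settles
    at a fixed point of the feedback loop."""
    p = 0.6  # start at the no-intervention probability
    for _ in range(steps):
        p = true_probability(intervention_level(p))
    return p

print(round(equilibrium_price(), 3))  # prints 0.405, well below the 0.6 baseline
```

The point of the sketch: the number the market settles on (about 0.41 here) is not the probability of the event “on its own” (0.6), and it only means anything relative to an assumed model of how observers will react to it.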
Conversely, people who may not look or sound like a traditional expert, but are good at making predictions
The weird rationalist assumption that being good at predictions is a standalone skill some people are just gifted with (see also the emphasis on superpredictors as a class that’s just waiting to come out of the woodwork but for the lack of sufficient monetary incentive) tends to come off a lot like an important part of the prediction market project being for rationalists to isolate the muad’dib gene.