I’m sure they could have found someone in the EA ecosystem to throw them money if it weren’t for the fundraising freeze. This looks like a case of Oxford killing the institute deliberately. The 2020 freeze predates the Bostrom email, and someone who was consulted by Oxford said the relationship had been dysfunctional for many years.
It’s not as if Oxford is hurting for money; they probably just decided FHI was too much of a pain to work with and was hurting the Oxford brand.
Comment by Sean_o_h - I can't imagine it helped in winning allies in Oxford, but relationship with Faculty/University was already highly dysfunctional. (I was consulted as part of a review re: FHI's position within Oxford and various options before said personal controversies).
I feel this makes it an unlikely Great Filter, though. Surely some aliens would be less stupid than humanity?
Or they could be on a planet with far smaller fossil fuel reserves, so they wouldn’t have the opportunity to kill themselves.
I feel really bad for the person behind the “notkilleveryonism” account. They’ve been completely taken in by AI doomerism and are clearly terrified by it. They’ll either spend their entire life terrified even as the predicted doom fails to appear, or realise at some point that they wasted years of their life and that their entire belief system is a lie.
False doomerism is really harming people, and that sucks.
The Future of Humanity Institute is the EA organisation at Oxford run by Nick Bostrom, who got in trouble for an old racist email and a subsequent bad apology. It is the one that is rumoured to be shutting down.
The Future of Life Institute is the EA organisation run by Max Tegmark, who got in trouble for offering to fund a neo-Nazi newspaper (he didn’t actually go through with it and claimed ignorance). It is the one that got the half-billion-dollar windfall.
I can’t imagine how you managed to conflate these two highly different institutions.
The committed Rationalists often point out the flaws in science as currently practiced: the p-hacking, the financial incentives, etc. Feeding them more data about where science goes awry will only make them more smug.
The real problem with the Rationalists is that they *think they can do better*: that knowing a few cognitive fallacies and logical tricks will make you better than the doctors at medicine, better than the quantum physicists at quantum physics, etc.
We need to explain that yes, science has its flaws, but it still shits all over pseudo-Bayesianism.