Superintelligence: The Idea That Eats Smart People

A very good 2016 post debunking #AI risk doomsday scenarios. Most points still hold, except that #LLMs show we are approaching #AGI and accelerating past human cognitive capability in yet another domain, language, which seems to be a general enough missing piece of cognition that it will change everything.

#Alignment has changed from a philosophical topic into an engineering challenge, and in practice it was quickly solved in a "good enough" fashion.

https://idlewords.com/talks/superintelligence.htm


My personal opinions are:
- #SimulationArgument is a philosophical sleight of hand, smoke and mirrors. The field of philosophy is full of these superficially convincing arguments that lead to absurd conclusions. They are generally based on a subtle deceit amplified a thousand-fold by logic.
- The paperclip-maximizer argument is a criticism of #capitalism, not of #AI.
- #Panpsychism is true, and simple cognition isn't rare or difficult in the universe.
- #QuantumComputation isn't necessary for cognition or #AGI.
- #EffectiveAltruists are trying to solve a non-linear moral control problem by looking at the wrong thing: the far future instead of the present. You can't assume #future trajectories conditioned on people behaving sensibly if you yourself are advocating for behaving irrationally. Your alternative model of #ethics undermines the very premise of the future you take for granted. Analogous to: https://xkcd.com/989/
(xkcd 989: "Cryogenics")