One of the weirdest things to me about the tech scene for AI is you regularly hear variants on "Super-human intelligence is coming next year and will permanently alter life as we know it. Prepare your business and finances accordingly." Setting aside what you think about the first half, how does the second half make any sense? Like... what would you do differently, other than move to the woods?

@ZachWeinersmith

"I am not in the equally unserious camp that generative AI does not have the potential to drastically change the world. It clearly does. When I saw the early demos of GPT-2, while I was still at university, I was half-convinced that they were faked somehow. I remember being wrong about that, and that is why I'm no longer as confident that I know what's going on.

However, I do have the technical background to understand the core tenets of the technology, and it seems that we are heading in one of three directions.

The first is that we have some sort of intelligence explosion, where AI recursively self-improves, and we're all harvested for our constituent atoms because a market algorithm works out that humans can be converted into gloobnar, a novel epoxy which is in great demand amongst the aliens the next galaxy over for fixing their equivalent of coffee machines. It may surprise some readers that I am open to the possibility of this happening, but I have always found the arguments reasonably sound. However, defending the planet is a whole other thing, and I am not even convinced it is possible. In any case, you will be surprised to note that I am not tremendously concerned with the company's bottom line in this scenario, so we won't pay it any more attention."

https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

I Will Fucking Piledrive You If You Mention AI Again — Ludicity