Sam Altman's response to Molotov cocktail incident

https://blog.samaltman.com/2279512

-

Here is a photo of my family. I love them more than anything. Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the...

Sam Altman

Can someone help me understand why OpenAI and Anthropic talk as if the future of humanity is controlled by them? We have very strong open-weight Chinese models, possibly only 6 months behind them; the genie is out of the bottle. Is 6 months of difference really that important? And they don't have good reasons to expect that 6-month gap to stay that way.

Am I missing something, or is this just their usual marketing? I'm not arguing about the importance of AI, but trying to understand why OpenAI and Anthropic in particular are so important.

Some people think there will be an exponential takeoff, which means that a 6 month lead effectively rounds up to infinity.

Is this belief grounded in some kind of derivation, or is it just a prima facie belief?

If it is grounded in a logical derivation, where can one find that derivation and inspect its premises?

It's an old idea, "the singularity": the machines become smart enough to improve themselves, and each improvement results in shorter (or more significant) improvement cycles. This leads to an exponential (or faster) growth rate.

It's been promised to be around the corner for decades.

https://en.wikipedia.org/wiki/Technological_singularity

Technological singularity - Wikipedia
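The feedback loop described above can be sketched as a toy model. This is my own minimal illustration, not a derivation from the thread: it assumes each improvement cycle multiplies capability by a fixed gain `g`, and that cycle duration shrinks in proportion to current capability. Under those premises, elapsed time forms a convergent geometric series, so capability diverges in finite time.

```python
def takeoff_time(g=2.0, t0=1.0, cycles=50):
    """Toy recursive self-improvement model (hypothetical assumptions).

    g: capability multiplier per improvement cycle
    t0: duration of the first cycle
    cycles: number of cycles to simulate
    Returns (total elapsed time, final capability).
    """
    capability, elapsed = 1.0, 0.0
    for _ in range(cycles):
        elapsed += t0 / capability  # smarter systems improve faster
        capability *= g             # each cycle multiplies capability
    return elapsed, capability

# Elapsed time is the geometric series t0 * (1 + 1/g + 1/g^2 + ...),
# which converges to t0 * g / (g - 1). With g=2 and t0=1, total time
# approaches 2.0 no matter how many cycles run: unbounded capability
# in bounded time, given these (entirely assumed) premises.
```

Whether the "takeoff" conclusion holds depends entirely on the assumed premises (constant gain per cycle, cycle time inversely proportional to capability), which is exactly what the question above is probing.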

Those are the people betting on a business model of “create Robot God and ask him for money.” Why pay attention to them?
It's a marketing strategy. If the model is almost certainly conscious and capable of ending the world if it so desired (even when it isn't), imagine how good it could be at building your dream SaaS!
It turns out there is literally no amount of being publicly right about a longshot bet sufficient for people to conclude you hold your beliefs because you think they are true.
Would any of the open-weight models from smaller labs exist if they couldn't distill from the SoTA models that are throwing billions of dollars of compute into pretraining?
I’ve been wondering the same. And I think pretty much all the impressive small-lab models were guilty of it, right? At least there are still larger players like DeepSeek and Mistral to provide a bit of diversity in the market.
Does it matter? The frontier models stole the whole internet, then the second-level models stole from them… It’s all theft.
Hard agree.

It is not about the US or the Chinese. It's about the "Elephant and Rider" mind everyone has. Once the Elephant has been injured or scared, what it does next is not easy to control, and whatever story the Rider makes up to maintain coherence becomes another layer of the deeper problem. If the story resonates, more Elephants get triggered. The social media/attention economy makes it even harder to calm things down.

Modern corporations are a failed experiment because they don't think Elephant injuries and fears are something they have to worry about.
If you compare the curriculum of a business school to a seminary's, the difference in how they think about fear and anxiety at the individual and group level, and what to do about it, is total. We are learning that, as unpredictability accelerates, it is very important to pay attention to hurt-and-repair mechanisms.

6 months is an incredible amount of time to control AGI or ASI by yourself. That lead would be insurmountable.