Sam Altman's response to Molotov cocktail incident
Unserious answer about a very serious event.
I don't believe a word of Sam's "I believe" section.
I don't think that's unpopular; the post is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.
> Working towards prosperity for everyone, empowering all people
> We have to get safety right
> AI has to be democratized; power cannot be too concentrated
None of these statements, IMO, reflect his actions over the past 5 years.
> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future
I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.
Just my opinion, but it comes off as very insincere.
To be clear, what happened is still awful and there's absolutely no justification for it.
Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.
If Graham says this guy will stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?
Who tf is dumb enough to not do it, though?
If I was non-tech and owned a business, and someone (reputable) offers to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?
>write text a little faster
You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.
10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".
I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".
It has worked for him, repeatedly.
OpenAI has also repeatedly and quietly lobbied against them.
You linked a vague PDF whose promised actions are:
> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through [email protected]; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.
Welcoming and organizing feedback!
A pilot!
Convening discussions!
This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.
Please don't fall for this stuff.
Incendiary and false headline aside, no sane person would suggest that a hardware store that sold an axe that was used by an axe murderer should be held liable unless that store knew what was about to unfold.
Unless AI companies knowingly participate in murder plots, they should not be liable.
Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?
Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.
> Incendiary and false headline aside
The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.
> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?
No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if Monsanto (now Bayer) has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see why it should be different when the thing causing harm is an AI, just because the developers pinky swear that it's safe.
People championing the absolution of billionaires who built a chatbot that can't spell "strawberry" and then claim it should be allowed to choose who lives and dies: not what I expected at the turn of the decade.
Beautiful.
> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.
Well that makes two of us. Character seems to mean nothing today.
> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.
For context his blog post seems to be a response to this deep-dive New Yorker article:
"Sam Altman May Control Our Future—Can He Be Trusted?"
https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...
Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.

He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me: it reported what could be justified as fact and went out of its way not to paint him or anyone else in a negative light, beyond listing events they presumably have solid sourcing on (if not, sue them; if so, stfu).
If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.
It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.
Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."
Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.
Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised? That they clearly had an agenda? That's called reporting. They called a hundred-plus named sources and the picture those sources independently painted was damning. Altman has a history of telling repeated, easily-checked lies, followed by fresh lies when caught in the first ones.
Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.
> Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised?
Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.
It's never OK to physically attack someone like this. Full stop.
Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth, and thus power, into fewer hands. Not more.
> why one person potentially being responsible for hundreds or thousands of deaths is acceptable
I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place? The scientist who found a breakthrough (who is it?)? The president of the United States who greenlights the strikes? The general who chooses the target (based on AI suggestions)? The missile designer? The manufacturer? The pilot who flew the plane?
I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" framing always irks me. Going as far as saying there is just one is even more ludicrous.
Ah the old 'everyone is responsible so nobody is responsible' canard.
I will give you a helpful rule of thumb: when in doubt the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.
I’m fine with holding them all accountable to varying degrees. For example, yes, ultimately the president is responsible, but so is the person who dropped bombs instead of refusing an illegal order; just like the street dealer, gang banger, trafficker, and cartel boss are all guilty of all of their various crimes.
What do you find difficult to understand about that?
The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.
There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.
I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.
> The entire purpose of government is to have a monopoly on violence.
... Isn't that rather against the spirit of the US' constitution? I can see it being a thought with other nations, but not this particular one.
> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.
Which kinda follows the spirit of English Common Law:
> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone
A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.
I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.
> There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.
Is this what we just saw with America attacking Iran?
If only that sentiment was reciprocal!
When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.
The ‘graduation day massacre of 2047’, ycombinator’s greatest tragedy…. The ceremony was interrupted by ‘Anti-AI’ + ‘Pro-Trump/Palestine Gaza Hotel & Casino’ protesters (who all refused to wear their anti COVID-47 plastic vampire teeth) and, with good cause, were massacred by the Cyber-Hot-Pinkertons
I forgot what I was typing this in response to, so I’m just going to stop and post lol
My assumption, based on many factors, is that this is precisely why blanket surveillance systems like Flock are being rolled out in preparation.
There are people in control who don't make 1-, 5-, or 10-year plans; they make 20-, 50-, 100-, and 500-year plans. And they know human nature quite well, which allows them, if not to predict, then at least to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.
It doesn’t need coordination to be organized and have the same incentives. Just like the wave of consolidation in media. Dario and Sam don’t need to talk to know what is in both their interest.
The concentration of wealth is at an all-time peak. The top 1% own more stocks than the other 99%. Nobody thinks about that hard enough. The callousness with which people's livelihoods, dignity, and safety are threatened is tremendous.
> Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth, and thus power, into fewer hands. Not more.
We should call it what it really is: the oligopolization of intellectual work. The capital barrier to entering this market is too high, and there can be no credible open source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing headfirst into this one-way door.