Sam Altman's response to Molotov cocktail incident

https://blog.samaltman.com/2279512

-

Here is a photo of my family. I love them more than anything. Images have power, I hope. Normally we try to be pretty private, but in this case I am sharing a photo in the...

Sam Altman

Unserious answer about a very serious event.

I don't believe a word of Sam's "I believe" section.

unpopular opinion but i think it's written quite well
Perhaps by ChatGPT
Yes, clearly not written with his own product.
If that's the case, why doesn't he trust his own product enough to write this?
He doesn't trust it for anything else either as far as I can tell. In an interview he's boasted about how he uses a paper notebook for everything all day.

I don't think that's unpopular, it is pretty well written. But the "I believe" section is extraordinarily hard to believe given Altman's history.

> Working towards prosperity for everyone, empowering all people

> We have to get safety right

> AI has to be democratized; power cannot be too concentrated

None of these statements, IMO, reflect his actions over the past 5 years.

> we urgently need a society-wide response to be resilient to new threats. This includes things like new policy to help navigate through a difficult economic transition in order to get to a much better future

I agree with this, but there is a near 0% chance of that happening anytime soon in the US. I think he probably is aware of this.

Just my opinion, but it comes off as very insincere.

To be clear, what happened is still awful and there's absolutely no justification for it.

it's "written well" but not at all a smart piece of writing. leading with a photo of a cute baby before engaging in an extended defense of one's own integrity is so obvious as to be insulting

Ha, I was giving an AI bootcamp to a room full of people and someone asked me my opinion of Altman. I hesitated for a second and replied that I would not trust Altman further than I could throw a rock about anything.

If Graham says this guy will always stop at nothing to get whatever he wants, which I absolutely believe, then why would you trust anything that comes out of the mouth of a person like that?

Who tf is dumb enough to pay for an AI bootcamp, genuinely curious. Whoever is selling AI bootcamps is just as much a scam artist as Sam.

Who tf is dumb enough to not do it, though?

If I were non-technical and owned a business, and someone (reputable) offered to teach me everything I need to get up to date with the most revolutionary technology of the decade (perhaps century?) for like ... 500 dollars? Why not?

It's neural-network autocomplete that helps you write text a little faster; chill with the "most revolutionary technology of the decade/century" talk. You're offending a lot of experts in way more important areas of research.

>write text a little faster

You might actually need to attend an AI bootcamp. This is not 2022's GPT, AI can deliver plenty of value for a business owner these days.

You're cooked if this is actually how you see AI in 2026.
That’s so shockingly ignorant/reductive that you shouldn’t be surprised when people start ignoring you in technical conversations.
Yeah, people learning new technology is terrible. /s
You don’t even know what is covered. It could be anything from how to prompt to how to create your own models from numpy primitives.

10 hours ago a post made the frontpage here [0] about how OpenAI is backing a law that "would limit liability for AI-enabled mass deaths or financial disasters". Now he's here saying he believes that "working towards prosperity for everyone, empowering all people, and advancing science and technology are moral obligations for [him]".

I know he doesn't believe a word of what he wrote in that post except, perhaps, that he cannot sleep and is pissed. I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

[0] https://news.ycombinator.com/item?id=47717587


I think it's good for CEOs of powerful companies to make statements about how they don't want too much personal power and it's important to ensure everyone does well, even and perhaps especially if there's reason to suspect they don't believe it. Saying it doesn't solve the problem, but it helps create a permission structure for the rest of us to get it to actually happen.

The reason he's saying that is because he doesn't want you to create that structure. He wants you to not create the laws or checks & balances on him because you "trust that he doesn't really want the power".

It has worked for him, repeatedly.

No, I don't think that's accurate. Altman has repeatedly and loudly demanded for these to be created, including a new detailed policy proposal just this month (https://cdn.openai.com/pdf/561e7512-253e-424b-9734-ef4098440...).

OpenAI has also repeatedly and quietly lobbied against them.

You linked a vague PDF whose promised actions are:

> To help sustain momentum, OpenAI is: (1) welcoming and organizing feedback through [email protected]; (2) establishing a pilot program of fellowships and focused research grants of up to $100,000 and up to $1 million in API credits for work that builds on these and related policy ideas; and (3) convening discussions at our new OpenAI Workshop opening in May in Washington, DC.

Welcoming and organizing feedback!

A pilot!

Convening discussions!

This "commitment" pales in comparison to the money they've spent lobbying against specific regulation that cedes power.

Please don't fall for this stuff.

Incendiary and false headline aside, no sane person would suggest that a hardware store should be held liable for selling an axe that was later used by an axe murderer, unless the store knew what was about to unfold.

Unless AI companies knowingly participate in murder plots, they should not be liable.

Is Microsoft liable for providing Notepad, a product which can be used to write detailed and specific mass murder plots?

Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?

Liability should depend on your participation in the event, of course. Otherwise you wouldn't be able to buy an axe, or a car, or use the internet at all. A closer analogy is ISPs not being liable for copyright infringement done by users, and subsequently not being required to police such activity for rights holders.

> Incendiary and false headline aside

The text of the bill literally starts with "Creates the A.I. Safety Act. Provides that a developer of a frontier AI model shall not be held liable for critical harms caused by the frontier model if (conditions)", and defines "critical harms" as "death or serious injury of 100 or more people or at least $1,000,000,000 of damages". The headline is, IMO, shockingly accurate.

> Is Toyota liable for selling someone a car that is later used for vehicular manslaughter?

No, but they are liable for selling a car with defective brakes, even if they don't know that the brakes are defective. And if the ex-Monsanto has to pay millions in compensation for causing cancer with a product that they tested to hell and back, then I don't see how that's different when the one causing cancer is an AI just because the developers pinky swear that it's safe.

People championing the absolution of billionaires who create a chatbot that can't spell "strawberry", then say it should be allowed to choose who lives and dies, wasn't what I expected at the turn of the decade.

Beautiful.

> I know I should be used to people openly lying with no consequence, but it still amazes me a bit.

Well that makes two of us. Character seems to mean nothing today.

> There was an incendiary article about me a few days ago. Someone said to me yesterday they thought it was coming at a time of great anxiety about AI and that it made things more dangerous for me.

For context his blog post seems to be a response to this deep-dive New Yorker article:

"Sam Altman May Control Our Future—Can He Be Trusted?"

https://www.newyorker.com/magazine/2026/04/13/sam-altman-may...

https://news.ycombinator.com/item?id=47659135

Ronan Farrow, one of the journalists who worked on this article, talked to Katie Couric on her YouTube channel about this. They worked on this across ~18 months. I thought this interview was illuminating.

Yes, it was good. It seems clear that Farrow and his co-author approached it in a methodical, fair-minded way.

https://www.youtube.com/watch?v=wr_sB1Hl0oM


He has to be talking about the New Yorker article, which wasn't incendiary at all. If anything, it seemed fully neutral to me: reporting what they could justify as facts while going out of their way not to paint him or anyone else in a specifically negative light, beyond a listing of events that they presumably have solid sourcing on (if not, sue them; if so, stfu).

If a neutral look at your actions seems incendiary to you, maybe you need to rethink your own life and actions.

It should go without saying I don't think people should be attempting to light other people's houses on fire regardless of how distasteful they find those people.

Wouldn't it be more correct to call the article "critical" and not "incendiary"? I looked it over and I don't remember seeing any calls to violence. Altman needs to remember that he holds an incredible amount of power in this moment. He and other current AI tech leaders are effectively sitting on the equivalent of a technological nuclear bomb. Anyone in their right mind would find that threatening.
"Critical" even feels strong. The article was essentially a collection of statements others have made about Sam.

Right, but the picture those statements painted collectively was not flattering. And that was certainly intended by the authors. Thus, critical, but not at all "incendiary."

Update: To clarify, my personal stance is that the critical tone was both intended by the authors and, in my opinion, appropriate given how much power Mr. Altman holds. If he has a history of behaving inconsistently, that deserves daylight.

Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised? That they clearly had an agenda? That's called reporting. They called a hundred-plus named sources and the picture those sources independently painted was damning. Altman has a history of telling repeated, easily-checked lies, followed by fresh lies when caught in the first ones.

Are you suggesting that they should have "both sides"-ed by reporting company PR and Sam-friendly sources and giving them equal weight? Sometimes the facts point in one direction.

> Are you arguing that because the authors knew the pattern they were documenting was unflattering, the piece is somehow compromised?

Uh, no? Lol, I'm on your side, bud. Put away the pitchfork. I thought it was a really good and fair article. I am not the adversary you're looking for.

It's never OK to physically attack someone like this. Full stop.

Separately, Sam's belief that "AI has to be democratized; power cannot be too concentrated" rings incredibly hollow. OpenAI has abandoned its open-source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.

Sam eagerly pursued DoD contracts to weaponize AI. And then lobbied for legislation to ensure OpenAI cannot be held accountable if people are killed due to their systems.
I find it interesting that Altman's fans seem to keep skipping past this fact. I'd love to hear their defense as to why one person potentially being responsible for hundreds or thousands of deaths is acceptable, but attacking that one person isn't. If violence is never the answer, they should be condemning Altman with even more vigor.
Yeah, it's kind of terrifying, how this incident seems to have faded from people's memories.

> why one person potentially being responsible for hundreds or thousands of deaths is acceptable

I am not sure who exactly that one person is. Is it Altman, who according to many people is not that knowledgeable in AI in the first place; the scientist who made the breakthrough (who is it?); the president of the United States who greenlights the strikes; the general who chooses the target (based on AI suggestions); the missile designer; the manufacturer; or the pilot who flew the plane?

I get the point about concentrating power in fewer hands, but the whole "all the problems of this world are caused by an extremely narrow set of individuals" framing always irks me. Going so far as to say there is just one is even more ludicrous.

Accountability sinks are good value, and wealthy people always make sure they have enough of them.

Ah the old 'everyone is responsible so nobody is responsible' canard.

I will give you a helpful rule of thumb: when in doubt, the guy with a bank account larger than the total lifetime income of hundreds of thousands of people is probably the one to blame.

I’m fine with holding them all accountable to varying degrees. For example, yes, ultimately the president is responsible, but so is the person who dropped bombs instead of refusing an illegal order; just like the street dealer, gang banger, trafficker, and cartel boss are all guilty of all of their various crimes.

What do you find difficult to understand about that?

The entire purpose of government is to have a monopoly on violence. Democracies give their government the power to decide when and against whom to deploy violence.

There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.

I'm not sure the next batch of schoolgirls getting bombed will particularly care whether the choice was made "democratically" or not.

I also won't particularly care about the distinction when AI is inevitably used to enact violence on the US population.

> The entire purpose of government is to have a monopoly on violence.

... Isn't that rather against the spirit of the US Constitution? I can see it being a thought with other nations, but not this particular one.

> A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.

Which kinda follows the spirit of English Common Law:

> The ... last auxiliary right of the subject ... is that of having arms for their defence, suitable to their condition and degree, and such as are allowed by law. Which is ... declared by ... statute, and is indeed a public allowance, under due restrictions, of the natural right of resistance and self-preservation, when the sanctions of society and laws are found insufficient to restrain the violence of oppression. - Sir William Blackstone

A "monopoly on violence" is exactly the thing our laws are supposed to protect us against. Because if a state has that, then they have a monopoly against all rights, because they alone can employ violence to curb those who do not subscribe to the state's ideology.

I'm pretty much a pacifist. I _like_ Australia's gun laws. But, a government's purpose is to protect their people. They are to be representative - or to be replaced. If they leave no other choice for that, then violence is the only answer left.

The above posts forgot the word "legitimate" before "monopoly": a state is defined as the entity that holds the legitimate monopoly on violence within a defined geographic area. A state can cease to have the legitimate monopoly before it ceases to have the monopoly.

> There is a real difference between giving a democratic government the tools to kill people vs attempting to kill people yourself. If you don’t believe this then you don’t believe in democracy.

Is this what we just saw with America attacking Iran?

This is a distinction without meaning. It makes no moral difference who dispenses justice, if said justice is justified.
Military power and attacks on private individuals are different things. It's perfectly consistent to be against attacks on private individuals while being in favor of building military weapons.
The bombed schoolgirls were "private individuals" in any reasonable meaning of "private individual".
There's thirty-some-odd million people in Ukraine who very much would like to get AI weapons before the Russians do. They're coming whether you want them or not.

If only that sentiment was reciprocal!

When the job losses hit in earnest and the vague handwaving about making it right all inevitably turns out to be hollow, those on top will be exceedingly comfortable using violence to keep the underclass in line. It has happened before and it will happen again.

The ‘graduation day massacre of 2047’, ycombinator’s greatest tragedy…. The ceremony was interrupted by ‘Anti-AI’ + ‘Pro-Trump/Palestine Gaza Hotel & Casino’ protesters (who all refused to wear their anti COVID-47 plastic vampire teeth) and, with good cause, were massacred by the Cyber-Hot-Pinkertons

I forgot what I was typing this in response to, so I’m just going to stop and post lol

My assumption, based on many factors, is that this is precisely why carpet-surveillance systems like Flock are being rolled out in preparation.

There are people in control who don't make 1, 5, or 10 year plans; they make 20, 50, 100, and 500 year plans. They know human nature quite well, which allows them, if not to predict, then at least to have an anxious understanding of what their plans will cause and what needs to be prepared for in advance.

The Flock systems are being installed by cities, not the feds. You make it seem like someone has some master plan. That doesn't make Flock any less dangerous, but it's not as organized as you make it seem.

It doesn’t need coordination to be organized and have the same incentives. Just like the wave of consolidation in media. Dario and Sam don’t need to talk to know what is in both their interest.

The concentration of wealth is at an all-time peak; the top 1% own more stock than the other 99%. Nobody thinks about that hard enough. The callousness with which people's livelihoods, dignity, and safety are threatened is tremendous.

Exactly this

> Separately; Sam's belief that "AI has to be democratized; power cannot be too concentrated." rings incredibly hollow. OpenAI has abandoned its open source roots. It is concentrating wealth - and thus power - into fewer hands. Not more.

We should call it what it really is: the oligopolization of intellectual work. The capital barrier to entering this market is too high, and there can be no credible open-source option to prevent a handful of companies from controlling a monster share of intellectual work in the short and medium term. Yet our profession just keeps rushing headfirst through this one-way door.