The expectation is for the Foundation to use its equity stake in the OpenAI Group to help fund philanthropic work. That will start with a $25 billion commitment to “health and curing diseases” and “AI resilience” to counteract some of the risks presented by the deployment of AI.
Paying yourself to promote your own product; promising to fix vague “risks” that make the product sound more powerful than it is, using “fixes” that won’t be measurable.
In other words, Sam is cutting a $25 billion check to himself.
AI companies are definitely aware of the real risks. It’s the imaginary ones ("what happens if AI becomes sentient and takes over the world?") that I imagine they’ll put that money towards.
Meanwhile they (intentionally) fail to implement even a simple cutoff switch for a child who’s expressing suicidal ideation. Most people with any programming knowledge could build a decent interception tool. All this talk about guardrails seems almost as fanciful.