OpenAI Execs Mass Quit as Company Removes Control From Non-Profit Board and Hands It to Sam Altman
Altman downplayed the major shakeup.
"Leadership changes are a natural part of companies
Is he just trying to tell us he is next?
/s
They always are and they know it.
Doesn’t matter at that level it’s all part of the game.
You know guys, I’m starting to think what we heard about Altman when he was removed a while ago might actually have been real.
/s
like the taste of their feet.
There’s an alternate timeline where the non-profit side of the company won, Altman the Conman was booted and exposed, and OpenAI kept developing machine learning in a way that actually benefits actual use cases.
AI-assisted cancer screenings, reviewed by a doctor, could be accurate enough to save so many lives and prevent so much suffering through early detection.
Instead, Altman turned a promising technology into a meme stock with a product released too early to ever fix properly.
That is a different kind of machine learning model, though.
You can’t just plug your pathology images into their multimodal generative models and expect them to pop out something usable.
And those image recognition models aren’t something OpenAI is currently working on, iirc.
I agree, but I’d also like to point out that the AI craze started with LLMs, and machine learning models have been around since long before OpenAI.
So if OpenAI had never released ChatGPT, AI wouldn’t have become synonymous with crypto in terms of false promises.
They are not ‘faulty’; they have been fed the wrong training data.
This is the most important aspect of any AI - it’s only as good as the training dataset is. If you don’t know the dataset, you know nothing about the AI.
That’s why every claim of ‘super-efficient AI’ needs to be investigated more deeply. But that goes against the line-goes-up principle, so don’t expect it to happen a lot.
Putting my tin foil hat on… Sam Altman knows the AI train might be slowing down soon.
The OpenAI brand is the most valuable part of the company right now, since the models from Google, Anthropic, etc. can match or beat ChatGPT, but they aren’t taking off coz they aren’t as cool as OpenAI.
The business model of training & running these models is not sustainable. If there is any money to be made, it is NOW, while speculation is at its peak. The nonprofit is just getting in the way.
This could be wishful thinking coz fuck corporate AI, but no one can deny AI is in a speculative bubble.
It honestly just never occurred to me that such a transformation was allowed/possible. A nonprofit seems to imply something charitable, though obviously that’s not the true meaning of it. Still, it would almost seem like the company benefits from the goodwill that comes with being a nonprofit but then gets to transform that goodwill into real gains when they drop the act and cease being a nonprofit.
I don’t really understand most of this shit though, so I’m probably missing some key component that makes it make a lot more sense.
A nonprofit seems to imply something charitable, though obviously that's not the true meaning of it
A lifetime of propaganda got people confused lol
Nonprofit merely means that their core income-generating activities are not subject to income tax.
While some nonprofits are charities, many are just shelters for rich people's bullshit behavior: foundations, lobby groups, propaganda orgs, political campaigns, etc.
Non profit == inflated costs
(Sometimes)
That is a good point, but I’d like to make the distinction that LLMs, or “generic models”, are a garbage concept, requiring power & water rivaling a small country’s to produce incorrect results.
Neural networks in general that can (cheaply) learn on their own for a specific task could be huge! But there’s no big money in that, since it’s not a consolidated general-purpose product tech bros can flog to average consumers.
To be fair, the article linked this idiotic one about OpenAI’s “thirsty” data centers, where they talk about water “consumption” of cooling cycles… which are typically closed-loop systems.
But even then, is the water truly consumed? Does it get contaminated with something like the cooling water of a nuclear power plant? Or does it just get warm and then either get pumped into a body of water somewhere or, ideally, get reused to heat homes?
There’s loads of problems with the energy consumption of AI, but I don’t think the water consumption is such a huge problem? Hopefully, anyway.
Does it get contaminated with something like the cooling water of a nuclear power plant?
This doesn’t happen unless the reactor was sabotaged. Cooling water that interacts with the core is always kept in a closed-loop system, for exactly this reason.