OpenAI co-founder makes spectacular return mere days after ousting, with the board that fired him mostly swept away
I mean, at first glance the non-profit board appears to have fired the CEO over paranoid-delusional beliefs: that this LLM is somehow a real AGI, and that we have already reached the point of a thinking, learning AI.
Either that's just delusions of grandeur on the board's part, or they didn't and don't understand what is really going on, which might be why they fired the CEO: for not truthfully informing the board what level OpenAI's AI is actually at. So the board was trying to rein in a beast that is merely a puppy, acting on information that was wrong.
Since I used the word "appears", I'm postulating based on how the company is controlled (the non-profit entity), as well as on past statements by board members such as Ilya Sutskever (now ex-board??), whose thinking has likely been influenced by his mentor Geoffrey Hinton, who said on 60 Minutes that AI is about to be "more intelligent than us". Beyond his scientific work in AI and his position as Chief Scientist of OpenAI, Ilya is known for some odd behavior around his commitment to AI safety, though I'm sure his beliefs come from the right place.
There's a lot more to this, for each board member and for Sam, but it makes me believe that a large information wall was erected, leading to a paranoid board.
The fact that the employees were able to exercise their de facto power in a crisis is good, but the fact that they don't have explicit power in the decision-making process is why this was able to happen in the first place.
There are no good kings, even if the best men were made kings, they would be inherently tainted by the position.
The fact that the employees were able to exercise their de facto power in a crisis is good
That's all that I'm saying.
If you've got issues with the whole concept of hierarchical power structures or there being such a thing as "leaders", that's a bit beyond the scope of this particular situation.
Actually, that's just self-interest. Both capitalism and socialism claim to benefit workers, but only socialism has been shown to do that to any extent. Capitalist hoarding and speculation is the primary driver of inflation and of things like the unaffordability of housing.
If you labor for a living, you aren’t a capitalist. You’re labor.
Genuinely confused by your first statement (in particular effective altruism). What does that have to do with the board?
Not an attack, just actually clueless.
Similarly confused, especially about how someone could actually make an assessment like that of the board when they're mostly faceless entities to the public.
I think they might be projecting an image of the kind of person who would want to stop AI onto anyone who even remotely does something similar to stopping AI.
an insane cult of effective altruism / longtermism / LessWrong
I’m out of the loop. What’s the problem with those things?
famously lack class consciousness
How much money do you suppose the average OpenAI employee makes? What class do you imagine they’re part of?
I'm sure the developers make the lower half of six figures, but they still have to sell their labor to survive, so they're still working class.
I’ve been an SF Bay Area software developer for almost thirty years, so I know them well. I consider us members of the professional–managerial class (PMC). We generally think we’re “above” the working class (we’re not), and so we seldom have any sense of solidarity with the rest of the working class (or even each other), and we think unionization is for those other people and not us.
When Hillary Clinton talked about the “basket of deplorables,” she was talking to her PMC donors & voters about the rest of the working class, and we eat that shit up. Most of my peers have still learned no lessons from her election defeat, preferring to blame debunked RussiaGate conspiracy theories.
I hate everything about this: the lack of transparency, the lack of communication, the chaotic back and forth. We don't know if the company is now in a better or worse position.
I know it leaves me feeling pretty sick and untrusting about it considering the importance and potential disruptiveness (perhaps extreme) of AI in the coming years.
For fuck’s sake. You want bad things to happen… so good things happen, later. Bad shit happening is the part that’s objectionable. Saying ‘but I want good things’ isn’t fucking relevant to why someone’s hassling you about this!
The bad shit you want to happen first is the only part that’s real!
You want bad things to happen
No, that's entirely you assuming things about my position. I don't want bad things to happen.
I actually like the chaoticness
Given the rumors he was fired based on undisclosed usage of some foreign data scraping company’s data, it ain’t looking good.
Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.