OpenAI co-founder makes spectacular return mere days after ousting, with the board that fired him mostly swept away

https://sopuli.xyz/post/6139715

I guess this will have to do as entertainment until GRRM finishes his damn book.
Any day now! I have a friend that got hyped up every time George published another chapter from WoW, but I just refuse to read any of them. I want a complete book. I’m not sure he’s got any idea of how to finish his own story.
I didn’t know he wrote for World of Warcraft
I know you’re joking, but it stands for Winds of Winter if anyone is confused.
He’ll never finish it.
Yeah that’s my feeling as well
It's okay, though, we'll have an AI that can do it soon enough.
Man what a clusterfuck. Things still don’t really add up based on public info. I’m sure this will be the end of any real attempts at safeguards, but with the board acting the way it did, I don’t know that there would’ve been even without him returning. You know the board fucked up hard when some SV tech bro looks like the good guy.

I mean, the non-profit board appears, at current glance, to have fired the CEO over their paranoid-delusional belief that this LLM is somehow a real AGI and that we are already at the point of a thinking, learning AI.

Either it was delusional grandeur on the part of the board, or they didn't and don't understand what is really going on, which might be why they fired the CEO: for not truly informing the board what level OpenAI's AI is actually at. So the board was trying to rein in a beast that is merely a puppy, working from information that was wrong.

Where are you getting this information?

As I used the word "appears", I am postulating based on how the company is controlled, via the non-profit entity, as well as certain statements board members have made in the past, such as Ilya Sutskever (now ex-board??), whose thoughts have likely been influenced by his mentor Geoffrey Hinton, who is quoted on 60 Minutes saying the AI is about to be "more intelligent than us". Beyond his scientific endeavors into AI and his position as Chief Scientist of OpenAI, Ilya is known for some odd behavior around his commitment to AI safety, though I'm sure his beliefs come from the right place.

There's a lot more to this, for each board member and Sam, but it makes me believe that a large information wall was erected, leading to a paranoid board.

OpenAI's Ilya Sutskever once burned wooden AI effigy: report

Ilya Sutskever, OpenAI's chief scientist who was central to Sam Altman's firing, burned an effigy to show his commitment to safe AI, The Atlantic reported.

Insider
Really? I thought it was because he supposedly raped his younger sister.
Could be, but words on Twitter and no lawsuit don't really equal getting ejected from your CEO position. If CEOs got ejected for stuff akin to that, there'd be no CEOs left.
Excuse me what
His sister accused him of some stuff a few years ago but nothing ever came of it. She apparently has credibility issues so I think the general view is the allegations are the delusions of a mentally ill person and/or a shakedown attempt.
I guess the entire workforce calling the board incompetent twats and threatening to quit was actually effective.
Sounds like they got together and forced their hand. Wonder if there’s a term for that?
Maybe some type of group or team. Or union. Nah that will never stick
What about his deal with Microsoft?
And the lord is back in his fiefdom
Because 95% of the people that worked for him demanded it.
Then he’s a popular lord
So what's the problem?
That you don’t see the problem
Explain, then. "It should be obvious" is not an explanation.

The fact that the employees were able to represent their de facto power in a crisis is good, but the fact that they don't have explicit power in the decision-making process is why this was able to happen in the first place.

There are no good kings, even if the best men were made kings, they would be inherently tainted by the position.

The fact that the employees were able to represent their de facto power in a crisis is good

That's all that I'm saying.

If you've got issues with the whole concept of hierarchical power structures or there being such a thing as "leaders", that's a bit beyond the scope of this particular situation.

Heck, you could even keep the hierarchy, but with no representation of the workers in leadership you lose a major perspective on the organization.
On the one hand, the board was an insane cult of effective altruism / longtermism / LessWrong, so fuck them. But on the other hand, this was a worker revolt for the capitalists, which I guess shouldn’t be surprising since tech workers famously lack class consciousness.
Effective altruism - Wikipedia

That's what happens when the wealth is shared with those who make it. Everyone becomes a capitalist.

Actually, that's just self-interest. Both capitalism and socialism claim to benefit workers, but only socialism has shown any sign of actually doing so. Capitalist hoarding and speculation is the primary driver of inflation and of things like the unaffordability of housing.

If you labor for a living, you aren’t a capitalist. You’re labor.

Nah. It’s more like the pusher man. Give them their first taste for free, and they’ll be a customer for life.

Genuinely confused by your first statement (in particular effective altruism). What does that have to do with the board?

Not an attack, just actually clueless.

Similarly confused, especially as to how someone could actually make an assessment like that of the board when they're mostly faceless entities to the public.

I think they might be projecting an image of the kind of person who would want to stop AI onto anyone who even remotely does something similar to stopping AI.

Several of the [former] board members are affiliated with the movement. EA is concerned with existential risk, AI being perceived as a big one. OpenAI's nonprofit was founded with the intent to perform AI research safely, and those members of the board still reflected that interest.

an insane cult of effective altruism / longtermism / LessWrong

I’m out of the loop. What’s the problem with those things?

It’s basically the paperclip maximizer combined with human arrogance/hubris. Just skim the criticism sections of the articles linked.
People are asking what is wrong with these cults. It's a lot to cover, so I won't try. People who listen to the podcasts Tech Won't Save Us or This Machine Kills will already be familiar with them. Here's an article relevant to the moment that talks about them a little: Pivot to AI: Replacing Sam Altman with a very small shell script
Tech Won't Save Us

famously lack class consciousness

How much money do you suppose the average OpenAI employee makes? What class do you imagine they’re part of?

I'd guess the developers make the lower half of six figures, but they still have to sell their labor to survive, so they're still working class.

I’ve been an SF Bay Area software developer for almost thirty years, so I know them well. I consider us members of the professional–managerial class (PMC). We generally think we’re “above” the working class (we’re not), and so we seldom have any sense of solidarity with the rest of the working class (or even each other), and we think unionization is for those other people and not us.

When Hillary Clinton talked about the “basket of deplorables,” she was talking to her PMC donors & voters about the rest of the working class, and we eat that shit up. Most of my peers have still learned no lessons from her election defeat, preferring to blame debunked RussiaGate conspiracy theories.

Fucking Kendall Roy on the OpenAI board or something
I maintain that this had something to do with a disagreement over which commercial applications are permissible for GPT-4, and that Sam Altman somewhere along the line negotiated a deal that allowed some actor to participate in one of the "forbidden applications" by proxy via a seemingly unrelated agreement. I'm talking Financial Forecasting (High Frequency Trading), Military, and Policing/Surveillance. Now that Sam's back and unfettered, I'm guessing we are going to see some of those applications come out into the light.
Why do you maintain this? None of the details that have come out so far have suggested this, or not that I have seen.

I hate everything about this: the lack of transparency, the lack of communication, the chaotic back and forth. We don't know if the company is now in a better position or a worse one.

I know it leaves me feeling pretty sick and untrusting about it considering the importance and potential disruptiveness (perhaps extreme) of AI in the coming years.

Same here. I like Sam Altman but if the board removed him for a good reason and he was reinstated because the employees want payouts, humanity could be in big trouble.
I actually like the chaoticness, because I don't like having one small group of people as the self-appointed and de-facto gatekeepers of AI for everyone else. This makes it clear to everyone why it's important to control your own AI resources.
I'm with you there, I just hope the general public comes to that realization.
Just like it did with climate change?
Accelerationism is human sacrifice. It only works if it does damage… and most of the time, it only does damage.
Not wanting a small group of self-appointed gatekeepers is not the same as accelerationism.
… the goal is not what makes it acceleration.
"Accelerationism" is a philosophical position. The goal is entirely what makes it accelerationism. Quit swapping words in each new comment.

For fuck’s sake. You want bad things to happen… so good things happen, later. Bad shit happening is the part that’s objectionable. Saying ‘but I want good things’ isn’t fucking relevant to why someone’s hassling you about this!

The bad shit you want to happen first is the only part that’s real!

You want bad things to happen

No, that's entirely you assuming things about my position. I don't want bad things to happen.

I actually like the chaoticness

Because it is having a good outcome: the disruption of a monopoly.
Do you have object permanence?

Given the rumors he was fired based on undisclosed usage of some foreign data scraping company’s data, it ain’t looking good.

Now that there’s big money involved, screw ethics. We don’t care how the training data was acquired.