I think the problem for xAI is that it can really only hire two types of researchers - people who are philosophically aligned with Elon, and people who are solely money-motivated (not a judgment). But frontier AI research is a field with a lot of top talent who have strong philosophical motivation for their work, and those philosophies are often completely at odds with Elon. OpenAI and Anthropic have philosophical niches that are much better at attracting the current cream of the crop, and I don't really see how xAI can compete with that.
It's worse than that. Elon is a notoriously bad employer, and the only people who put up with him were those who shared his vision. Pretty much the only people who will work for him now are second-rate researchers and people who think gooner AI and racism are a worthwhile mission.
There's some texture here. Elon's enriched pretty much everybody who's ever worked for or invested with him. He makes money for people throughout his orgs. Many ex-employees have said to me: "incredible opportunity, made great money, worked insanely hard, once is plenty".
My ex-Twitter employee coworkers beg to differ. They made plenty of money before Elon came around. Once he was in the company, one of them actually hired a personal attorney to confirm that he wasn’t going to be burned by the things Musk was asking him to do, before he finally decided it wasn’t worth it to work there anymore and left.
I think Musk is odious but I think there's a lot of complicating evidence to the story of what happened at Twitter. And: very smart people, like Dan Luu, were complaining about their culture long before Musk arrived.

Is there anything from Dan Luu you could point me to offhand about Twitter's culture? The only thing I recall was a blog post about technical issues, but that didn't seem to have much bearing on the culture.

My understanding is Twitter always had cultural issues, but it was not very different from other tech companies of the time, and what most of us would consider "directionally correct." I have it on pretty good authority from a very senior engineer who left before Elon took over (so no grudges other than, you know, "because Elon") that a lot of the things he said publicly about Twitter's technology were highly misleading or downright false. Like, IIRC, something about them not having CI/CD. Total lie.

I have no idea what Musk did or didn't say. I don't pay attention to him; I think he's odious. But he did cut more than half the entire workforce and the service works as well as it ever has, which is pretty damning. I'm not willing to tie myself into the pretzel required to explain how antebellum Twitter was well-managed given that.

There's some fraction of that workforce that supported projects intended to make Twitter a viable standalone business, which it probably no longer is. Back-office / line-of-business projects supporting advertisers, that sort of thing. But I don't think you can explain a RIF of Twitter's scale that way.

(I'll try to dig up the Luu post I'm thinking of.)

Many of the workforce he laid off were content moderators -- I've read it was a serious effort with a large number of people doing thankless work. There is now way more anti-Semitic content on X, more racial insults, etc.
Come on. No they weren't.

Well, not just content moderators: he gutted Trust and Safety and the content moderation function of the company, which is surprisingly large compared to the moderators themselves. Having worked peripherally with similar departments that had multiple teams, I can say that even though a lot of it comes down to human moderators, there is a ton of technology around the moderators, and even more in the pipeline that gets content to them in the first place.

Firstly, this is a Red Queen's race: as with security, new types of unwanted content, threats and risks keep arising as the information (and misinformation) landscape and the overall zeitgeist keep shifting. The work is never done, and the best you can do is build platforms and frameworks to streamline it. There is also a lot of fractal complexity everywhere.

E.g. there’s a ton of technology needed to support the moderators themselves. Infrastructure like review queues to enable them to rapidly handle content classified by type, risk level and priority. Like Jira but not Jira because it can’t scale to the number of queues and issues involved here. So you basically re-implement and maintain a Greenspun’s 10th rule version of Jira.
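
To make the queue point concrete, here's a toy sketch in Python. All the names, fields and risk levels are invented for illustration; the real thing is a distributed service with persistence and assignment logic, not an in-memory heap.

    # Toy sketch of one moderation review queue -- in practice there are
    # thousands of these, keyed by content type, region, risk level, etc.
    import heapq
    import itertools

    class ReviewQueue:
        def __init__(self):
            self._heap = []
            self._seq = itertools.count()  # FIFO tie-breaker for equal risk

        def enqueue(self, content_id, content_type, risk_level):
            # heapq is a min-heap, so negate risk to pop highest risk first
            heapq.heappush(self._heap, (-risk_level, next(self._seq),
                                        content_id, content_type))

        def next_for_review(self):
            _, _, content_id, content_type = heapq.heappop(self._heap)
            return content_id, content_type

    q = ReviewQueue()
    q.enqueue("tweet:123", "spam", risk_level=1)
    q.enqueue("tweet:456", "violent_threat", risk_level=3)
    print(q.next_for_review())  # the threat comes out first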

There is still a huge amount of invisible complexity beyond that. For instance, you need to manage how much of a certain type of content gets exposed to a given moderator because some types (CSAM, gore) lead to burnout and PTSD. You also need to blur these things.
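
A minimal sketch of the exposure-budget part, with made-up caps; real systems also rotate assignments, blur by default, and schedule wellness breaks:

    # Per-moderator daily exposure budgets -- the caps here are invented.
    from collections import defaultdict

    DAILY_EXPOSURE_CAP = {"csam": 20, "gore": 50, "spam": 500}

    class ExposureTracker:
        def __init__(self):
            # moderator_id -> content_type -> count; reset daily in real life
            self._seen = defaultdict(lambda: defaultdict(int))

        def can_assign(self, moderator_id, content_type):
            cap = DAILY_EXPOSURE_CAP.get(content_type, float("inf"))
            return self._seen[moderator_id][content_type] < cap

        def record(self, moderator_id, content_type):
            self._seen[moderator_id][content_type] += 1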

(Also the same type of content often gets reshared, so you need things like reverse image search to auto-filter that, because running the whole pipeline each time is expensive.)
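
Roughly what that auto-filtering looks like, sketched with the third-party imagehash package; the function, list and threshold are my inventions:

    # Near-duplicate filtering with perceptual hashes
    # (pip install pillow imagehash).
    from PIL import Image
    import imagehash

    known_bad = []  # (phash, prior_decision) pairs from earlier reviews

    def check_reshare(path, max_distance=4):
        h = imagehash.phash(Image.open(path))
        # linear scan for the sketch; at scale you'd index with a BK-tree
        for bad_hash, decision in known_bad:
            if h - bad_hash <= max_distance:  # Hamming distance between hashes
                return decision               # reuse old decision, skip pipeline
        return None                           # unseen image: run the full, expensive pipeline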

This of course necessitates a ton of machine learning, because risks keep shifting, and (pre-LLMs) each type required the entire ML lifecycle and related infra: collecting and cleaning data, building classifiers, deploying them, measuring how well they work, tuning them, and then replacing them when the bad actors eventually adapt.
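
One turn of that lifecycle, heavily compressed into a sketch; load_labeled_reports and deploy are hypothetical stand-ins for the actual data-collection and serving work:

    # One iteration of the pre-LLM classifier lifecycle, compressed.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import precision_score
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline

    texts, labels = load_labeled_reports()  # hypothetical: cleaned, human-labeled data
    X_train, X_test, y_train, y_test = train_test_split(texts, labels)

    clf = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    clf.fit(X_train, y_train)

    # "Seeing how well they work": gate deployment on offline metrics,
    # then monitor drift in production and retrain as bad actors adapt.
    if precision_score(y_test, clf.predict(X_test)) >= 0.95:
        deploy(clf)  # hypothetical serving step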

ML is also of course needed for bots, spam and scams, which keep evolving. Entirely different techniques here though.
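
For flavor, one of the simplest behavioral signals in that family, with an invented threshold; real systems combine dozens of these with graph features, device fingerprints and so on:

    # Posting-velocity check per account -- the numbers are made up.
    import time
    from collections import defaultdict, deque

    WINDOW_SECS, MAX_POSTS = 60, 30

    recent_posts = defaultdict(deque)  # account_id -> recent post timestamps

    def looks_like_bot(account_id):
        now = time.time()
        q = recent_posts[account_id]
        q.append(now)
        while q and now - q[0] > WINDOW_SECS:  # drop timestamps outside window
            q.popleft()
        return len(q) > MAX_POSTS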

Then there is all the infra needed to handle the fallout of moderation. Counting strikes against users, dealing with their complaints, handling escalations, each case with a long history of interactions that needs to be collated for quick evaluation. Easier said than done because of course the backend is not an RDBMS but a bunch of MongoDB-alikes because webscale.
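
The strike arithmetic itself is trivial (a sketch with invented thresholds and action names below); the hard part is collating each user's history out of those document stores fast enough to make a decision:

    # Strike accounting; thresholds and actions are invented placeholders.
    STRIKE_ACTIONS = [(3, "read_only_12h"), (5, "suspend_7d"), (7, "permaban")]

    def apply_strike(user_history, new_violation):
        # user_history: list of violation dicts collated from the backend
        user_history.append(new_violation)
        active = [v for v in user_history if not v.get("overturned_on_appeal")]
        for threshold, action in reversed(STRIKE_ACTIONS):
            if len(active) >= threshold:
                return action
        return "warn"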

And all of this is a signal for the ranking used in the feed, the main product, which keeps evolving, so there's a ton of "fire and motion" happening there. You introduce a new feature in the feed? You just introduced a dozen different abuse vectors.
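
The coupling can be as simple as folding an integrity penalty into each candidate's feed score; the field names and weighting here are invented:

    # Moderation output feeding ranking: borderline content gets shown
    # less rather than removed outright.
    def feed_score(candidate):
        base = candidate["engagement_score"]              # from the ranking model
        penalty = 1.0 - candidate.get("abuse_prob", 0.0)  # from T&S classifiers
        return base * penalty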

Then there are policy makers and the technology needed to support them. Policy is always shifting as the landscape shifts. This also includes regulations, which keep shifting too and carry legal requirements and mandatory-reporting regimes like NCMEC's. And this varies by jurisdiction -- not just by country, sometimes even by state.
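
Even the dumbest version of that per-jurisdiction dispatch is a big lookup table somebody has to keep current; the rules below are invented placeholders, not actual law:

    # Jurisdiction-dependent policy dispatch, invented placeholder rules.
    POLICY = {
        ("nazi_symbols", "DE"): "remove",      # stricter in some countries
        ("nazi_symbols", "US"): "label",
        ("csam", "*"): "remove_and_report",    # e.g. to NCMEC in the US
    }

    def action_for(content_type, jurisdiction):
        return (POLICY.get((content_type, jurisdiction))
                or POLICY.get((content_type, "*"), "needs_human_review"))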

(Funny story about NCMEC – it has an API to report CSAM, but I could not find it. So I googled something like “child porn API” and got a blank results page. Pretty sure I’m now on a list somewhere.)

I could go on and on. And I wasn't even working in this area, just supporting these teams! Admittedly, in our case I'd put the relevant headcount in the hundreds and not thousands, but our scale was also very different. For a company that is ENTIRELY about user-generated content at massive scale, up to national-level events like the Arab Spring -- even if there was a lot of bloat -- I would not be surprised to learn this function was the majority of the workforce.

And Elon killed pretty much all of this. And, well, we see the results every day.

I get that he shredded trust & safety, and that Twitter got way worse afterwards in that regard. But he fired more than half the workforce, and they were not mostly T&S people.

I dunno, most reports from the time (and a quick Google AI overview just now) mentioned the cuts largely focused on T&S and moderation teams. Even the ML teams he cut were reportedly working more on safety and integrity issues. Many who worked on "woke" issues were also cut, but the line between T&S and "woke" gets blurry quickly.

To be fair, this could be due to the bias in reporting, as media outlets may have had incentives for over-emphasizing the T&S angle.

I do not deny there was bloat. There was bloat in most tech firms at the time. But I don't think it was 80% bloat. My post was to explain how, even if T&S / moderation seems like a small function, it can require an unexpectedly large headcount -- probably even more for a pure-UGC company like Twitter -- and so could realistically account for the bulk of the cuts.

Come on. Zillions of developers have complained about getting RIF'd. It's not a mystery. I don't like Musk's Twitter. I don't like Musk. But pretending isn't getting us anywhere.
I'm not sure I follow. Assuming you mean the zillions of developers that got RIF'd at Twitter, do we know how many were bloat versus working on the T&S and related functions? I tend to believe the latter based on media reports and because that has clearly had an impact on the product.
It's OK if our premises are too far apart to hash this out. No, I don't think shredding T&S is one of the principal components of the giant Twitter RIF. Yes, T&S got killed; yes, that's bad. No, you can't explain how Musk manages to keep Twitter technically functioning as well as it does by pointing to T&S.