The Future of Everything is Lies, I Guess: Work

Thank you for this aphyr.

My one ask: people seem to put “CEOs” on a pedestal any time things come up, as if they’re an alien life form and oh no, they’re going to do something terrible. There are good company executives and shitty ones. You should try to start a company and see if you can be one of the better ones.

Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

An unintended side effect I’ve noticed is that it normalizes bad CEO behavior for those who consume a lot of “CEOs bad” grist (Reddit, Threads, even Hacker News). When someone, usually early in their career, takes a job with a bad CEO after years of reading “CEOs bad” content online, they can slip into learned helplessness: they don’t believe changing jobs would help, because social media has taught them that their CEO’s bad behavior is normal.

This has become a frequent topic in a rotational mentorship program where I volunteer: early-career folks join some toxic startup and stay because the internet told them all CEOs are like this. We have to shake them free of those ideas and get them to realize that there are good and bad companies out there, and that they have options.

> Class warfare generalizations have become the safe outlet for internet rage because going after CEOs and billionaires is the most “punching up” construction that is generally relatable.

Mainly because "CEOs and billionaires" have fucked us over time and again: with their lobbying and bribing, with their power grabs, with their consolidation of news, entertainment, streaming, and social media properties, with their participation in the military-industrial complex, with their censorship and partisanship, and with their rent seeking and worsening of their products...

The downvotes in the absence of any reply suggest there's a group of individuals who think your position is so correct it's functionally unassailable, but are offended you said it out loud.

> Early career folk join some toxic startup and stay because the internet told them all CEOs are like this.

I literally did this 12 years ago based on this reasoning; it's good you're trying to counter that with the next generation.

With that said, I do wish there was more discourse around systemic issues rather than the usual finger-pointing at rival social groups. Unfortunately, I feel like our language gets in the way: systemic issues are more abstract, but "bad people" are more visceral and easier to talk about.

“No war but class war” rings as true in 2026 as it did 40 years ago.

When companies do something terrible (and they do, all the time), who are you going to blame for it? It's not at all surprising that CEOs have earned the reputation they have.

I am, oddly enough, the chief executive officer of two (trivially small) tech companies.

Cheers. I think you're doing a good job and ruffling some feathers here! Your content has been great.

I highly recommend reading Marx. Your content touches on related Marxist topics like the 'Fetishism of Commodities' (Software as Witchcraft) and the Labor Theory of Value.

There's a copy of Das Kapital on the shelf behind me right now, though I don't count myself conversant enough to go super deep on class critique. Figured I'd point a few very vague fingers in that direction and let folks with more experience talk about it.

> people seem to put “CEOs” on a pedestal any time things come up, like they’re an alien life form

Might I suggest a viewing of the 2025 film "Bugonia"?

>My

And who are you? An account created for one post? There is a pattern of green accounts with usernames vaguely related to the subject matter of their comments.

I think I’ve seen this post shared almost every day for the past week or so.

The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve. You can find evidence for both. LLMs probably can't get another 10 times better. But then, almost literally at any minute, someone could come up with a new architecture that can be 10 times better with the same or fewer resources. LLMs strike me as still leaving a lot on the table.

If we're nearing the top of a sigmoid curve and are given 10-ish years at least to adapt, we probably can. Advancements in applying the AI will continue but we'll also grow a clearer understanding of what current AI can't do.

If we're still at the bottom of the curve and it doesn't slow down, then we're looking at the singularity. Which I would remind people in its original, and generally better, formulation is simply an observation that there comes a point where you can't predict past it at all. ("Rapture of the Nerds" is a very particular possible instance of the unpredictable future, it is not the concept of the "singularity" itself.) Who knows what will happen.
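A toy numeric sketch of why this question is hard to settle from the data we have (the growth rate and capacity here are arbitrary illustration values, not claims about AI): an exponential curve and a logistic (sigmoid) curve with the same early growth rate are nearly indistinguishable near the bottom, and only diverge once the sigmoid starts to saturate.

```python
import math

def exponential(t, r=0.5):
    """Pure exponential growth: no ceiling, ever."""
    return math.exp(r * t)

def logistic(t, r=0.5, K=1000.0):
    """Logistic (sigmoid) growth: same early rate r, but saturates at capacity K."""
    return K / (1.0 + (K - 1.0) * math.exp(-r * t))

# Early on, the two curves track each other closely (relative gap well under 2%)...
early_gap = abs(exponential(4) - logistic(4)) / exponential(4)

# ...but later they diverge wildly: the exponential keeps climbing while the
# sigmoid flattens out near K (relative gap over 90%).
late_gap = abs(exponential(20) - logistic(20)) / exponential(20)

print(f"relative gap at t=4:  {early_gap:.3f}")
print(f"relative gap at t=20: {late_gap:.3f}")
```

The point of the sketch is just that observations taken "at the bottom" are consistent with both trajectories, which is why evidence for each reading is easy to find right now.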

"given 10-ish years at least to adapt, we probably can"

Social media would like a word...

We can adapt by shutting down social media. We don't really need that. It's been pretty bad since before the AI wave took off.

We are at the bottom. It's just the start.

In AI terms, we are in the pre-Pentium 4 era.

And you have evidence as a basis for this very confident statement... where?

Intuition. It comes from spiritual awakening and being aware of your consciousness. Only time will prove what turns out to be right.

You worship the AI?

I see AI as having great utility, and we'll figure out ways to better it. If I had any power, I would run nuclear power plants to power AI datacenters and find other near-infinite sources of energy to create deeper and deeper AIs. This level of AI tech is in its infancy; that's evidently clear. People assume it will stall soon and won't go beyond a certain point. I don't believe this at all; I believe it will go much, much farther than this.

We aren’t anywhere near AGI. They’ve consumed the entirety of human knowledge and poisoned the well, and it still can’t help but tell you to walk to the car wash.

A peasant villager was sentient without a single book, film or song. You don’t need this much data to be sentient. They’re using a stupid method, and a better one will be discovered some day.

> The interesting question to me at the moment is whether we are still at the bottom of an exponential takeoff or nearing the top of a sigmoid curve.

Even using the models we have today, we have revolutionized VFX, video production, and graphic design.

Similarly, many senior software engineers are reporting 2-10x productivity increases.

These tools are some of the most useful of my career. I don't even think the general consumer public needs "AI" in their products. If we just create control surfaces for experts to harness the speedup and to shape and control the outcomes, we're going to be in a very good spot.

These alone will have ripple effects throughout the economy and innovation. We've barely begun to tap into the benefits we have already.

We don't even need new models.

Somewhere around 2005-2007, when people were wondering if the Internet was done, PG was fond of saying "It has decades to run. Social changes take longer than technical changes."

I think we're at a similar point with LLMs. The technical stuff is largely "done" - LLMs have closer to 10% than 10x headroom in how much they will technologically improve, we'll find ways to make them more efficient and burn fewer GPU cycles, the cost will come down as more entrants mature.

But the social changes are going to be vast. Expect huge amounts of AI slop and propaganda. Expect white-collar unemployment as execs realize that all their expensive employees can be replaced by an LLM, followed by white-collar business formation as customers realize that product quality went to shit when all the people were laid off. Expect the Internet as we loved it to disappear, if it hasn't already. Expect new products or networks to arise that are less open and so less vulnerable to the propagation of AI slop. Expect changes in the structure of governments. Mass media was a key element in the formation of the modern nation state, mass cheap fake media will likely lead to its fragmentation as any old Joe with a ChatGPT account can put out mass quantities of bullshit. Probably expect war as people compete to own the discourse.

You are very strong on the "slop" bias. Why?

In managing a large-to-enterprise-sized code base, I experience the opposite: I can guarantee a much more homogeneous quality across the code base.

It is the opposite of slop I am seeing. And that at a lower cost.

Today, I made a large and complex migration of all of our endpoints. It took the AI 30 minutes, including all the frontends using those endpoints. Works flawlessly; debt principal down.

Which company do you work at so we can avoid your migrated endpoints?

> Somewhere around 2005-2007, when people were wondering if the Internet was done

Literally who wondered that? Drives me nuts when people start off an argument with an obvious strawman. I remember the 2005-2007 period very well, and I don't remember a single person, at least in tech, thinking the Internet was done. Maybe some ragebait articles were written about it, but being knee-deep in web tech at that time, I remember the general feeling was that there was obviously tons left to do. E.g., we didn't necessarily know what form mobile would take, but it was obvious to most folks that the tech was extremely immature and that it would have a huge impact on the Internet as it progressed. That's just one example: social media was still in its nascent stages then, so it was obvious there would be a ton of work around that as well.

I really appreciate this series of posts, as it serves as a good summary of key points of the discourse around AIs, and links to the relevant articles etc. I find following all those discussions myself exhausting, so if I can find this all in one place and read it nicely grouped, that is very helpful.

I love the analogy of AI coding as witchcraft! It’s very accurate to how working with these tools feels: at one point I was forced to invoke a “litany against stubbing” in a loop to make Claude Code actually implement a Renode setup for some firmware. That worked really well.

It feels like Hexing the Technical Interview come to life ;)

Every day I sit down to build a product for my clients. I am a one-man shop _now_; before, I had people helping me. My mental state is not good.

A very odd thing happens when Claude or Codex completes code fast: I begin to think of all the other things needed to make the AI agent work better. I begin to worry about problems that other people used to help me with and think, "Can I do those too?" Problems like product design, DevOps work, etc. In trying, I get nerd-sniped by the velocity people seem to have — and these are respected devs, not just Twitter claims. And because I am so bad at "doing it all", the long hours I have to put in are causing my mental health to suffer. I miss the friends and colleagues I worked with.

I always struggled with coding before 2023, but I made ends meet, put food on the table, could work sane hours, and knew what I needed to do. Logically I should be happy that I don't have to grind on code — and some days I truly am — but that it would yield such a poor quality of life at such a high cost was not what I expected...