The Future of Everything is Lies, I Guess: New Jobs

I am personally of the opinion that ML will end up being 'normal technology', albeit incredibly transformative.

I think you can combine 'Incanters' and 'Process Engineers' into one: 'Users'. Jobs that carry accountability will involve directing, providing context to, and verifying the output of agents, much like how millions of workers today know basic computer skills and Microsoft Office.

In my opinion, how at-risk a job is in the LLM era comes down to:

1: How easy is it to construct RL loops to hillclimb on performance?

2: How easy is it to construct an LLM harness to perform the tasks?

3: How much of the job is a structured set of tasks vs. taking accountability? What's the consequence of a mistake? How much of it comes down to human relationships?

Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.
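
To make 1) and 2) concrete, here's a minimal sketch of the kind of harness I mean, one that hillclimbs on a sandboxed coding task. `call_llm` and `run_sandboxed_tests` are hypothetical stand-ins for a model API and a test-suite verifier, not any real library:

```python
# Toy hillclimbing harness for a sandboxed coding task.
# call_llm and run_sandboxed_tests are hypothetical stubs, not a real API.

def call_llm(prompt: str) -> str:
    """Stand-in for a model call; returns a candidate patch."""
    raise NotImplementedError

def run_sandboxed_tests(patch: str) -> float:
    """Stand-in for a verifier; returns the fraction of tests passing."""
    raise NotImplementedError

def hillclimb(task: str, attempts: int = 8) -> tuple[str, float]:
    """Sample candidate patches, keep the one the verifier scores highest."""
    best_patch, best_score = "", 0.0
    for _ in range(attempts):
        patch = call_llm(f"Write a patch that fixes these failing tests:\n{task}")
        score = run_sandboxed_tests(patch)  # cheap, automatic reward signal
        if score > best_score:
            best_patch, best_score = patch, score
    return best_patch, best_score
```

The automatic verifier is what makes 1) and 2) cheap to iterate on; 3) is precisely the part of the job with no such signal.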

On Model Trainers -- I'm not so convinced that RLHF puts the professional experts out of work, for a few reasons. Firstly, nearly all human data companies produce data that is somewhat contrived, almost by definition, since it comes from people grading outputs on a contracting platform; plus there seems to be no bound on how much data we can harvest in the world. Secondly, as I mentioned before, the bottleneck is both accountability and the model's ability to find fresh context without error.

In some sense, technology is "not normal" regardless.

If we think of the digitization tech revolution... the changes it made to the economy are hard to describe well, even now.

In the early days, it was going to turn banks from billion-dollar businesses into million-dollar ones. Universities would be able to eliminate most of their admin. Accounting and finances would be trivialized. Etc.

Earlier tech revolutions were unpredictable too... But at least retrospectively they made sense.

It's not that clear what the core activities of our economy even are. It's clear at the micro level, but as you zoom out it gets blurry.

Why is accountability needed? It's clearly needed in its context... but it's hard to understand how it aggregates.

Accountability is really a way to address liability. So long as people can sue and companies can pay out, or individuals can go to jail, there is always going to be a question of liability; and historically the courts have not looked kindly on those who throw their hands up in the air and say “I was just following orders from a human/entity”.

> and historically the courts have not looked

This is dependent on having a court system uncaptured by corruption. We're already seeing that large corporations in the "too big to fail" category fall outside of government control. And in countries where bribery/lobbying is legalized or ignored, they have the funds to capture the courts.

While this is true, this is somewhat mitigated by the fact that few sectors are truly monopolized and large corporations also sue each other.

A huge component of compulsory professional licensure (either by statute or de facto as a result of adjacent statute, like mandatory insurance + the requirements thereof) is that if you follow the rules set by (some entity deputized by) the government, the government will in return never leave you holding the bag. The government gains partial control, and the people under its control get partial protection.

"oh I'm sorry your hospital burned down mr plantiff but the electrician was following his professional rules so his liability is capped at <small number> you'll just have to eat this one"

I would wager that a solid half if not more of the economy exists under some sort of arrangement like that.

Right, but usually that also involves verifying that the electrician actually followed the professional rules, and if not, they have liability.

So the court checks if they were "just following orders"?

Sounds to me like following orders is in fact this magical thing that causes courts to direct liability away from the defendant.

Sort of like how one could be held liable for copyright infringement?

> Hence why I've been quite bullish on software engineering (but not coding). You can easily set up 1) and 2) on contrived or sandboxed coding tasks, but then 3) expands and dominates the rest of the role.

Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer? I've never seen anyone give a satisfactory answer to this. Especially the part about making mistakes. A lot of the defense of LLM shortcomings (e.g., generating crappy code) comes down to "well humans write bad code too." OK? Well, humans make mistakes too. Theoretically, an LLM software engineer will make far fewer mistakes than a human. So why should I prefer keeping you in the loop?

It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.

We're always so enamored by new and exciting technology that we fail to realize the people in charge are more than happy to completely bury us with it.

> It's why I just can't understand the mindset of software engineers who are giddy about the direction things are going. There really is nothing special about your expertise that an LLM can't achieve, theoretically.

They’re stupid or they’re already set up for success. The general idea seems to be that generalists are screwed and domain experts will be fine.

> domain experts will be fine

But I don't see how this holds up to even the slightest amount of scrutiny. We're literally training LLMs to BE domain experts.

I think these arguments tend to reach an impasse because one gravitates to one of two views:

1) My experiences with LLMs are so impressive that I consider their output to generally be better than what the typical developer would produce. People who can't see this have not gotten enough experience with the models I find so impressive, or are in denial about the devaluation of their skills.

2) My experiences with LLMs have been mundane. People who see them as transformative lack the expertise required to distinguish between mediocre and excellent code, leading them to deny there is a difference.

Not sure that's what I was getting at. People in camp 2 don't think an LLM can take over the job of a real software engineer.

It's people in camp 1 that I wonder about. They're convinced that LLMs can accomplish anything and understand a codebase better than anyone (and that may be the case!). However, they're simultaneously convinced that they'll still be needed to do the prompting because ???reasons???.

I was thinking today that I need to pivot to making and selling shovels, but then the other issue is whether anyone is going to need shovels in the future.

I was at 2) until the end of last year, then LLMs/agents/harnesses had a capability jump that didn't quite bring me to 1), but it was a big enough jump in that direction that I don't see why I shouldn't believe we'll get there soonish.

So now I tend to think a lot of people are in heavy denial in thinking that LLMs are going to stop getting better before they personally end up under the steamroller, but I'm not sure what this faith is based on.

I also think people tend to treat the "will LLMs replace <job>" question in too much of a binary manner. LLMs don't have to replace every last person who does a specific job to be wildly disruptive. If they replace 90% of the people who do a particular job by making the remaining 10% much more productive, that's still a cataclysmic amount of job displacement in economic terms.

Even if they replace just 10-30%, that's still a huge amount of displacement; for reference, the unemployment rate during the Great Depression was 25%.
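
As a toy calculation, under the (strong) simplifying assumption that total demand for the output stays fixed: if tooling makes the remaining workers k times as productive, the displaced share of the original headcount is 1 - 1/k.

```python
# Back-of-envelope displacement under fixed demand (illustrative numbers only):
# if the remaining workers become k times as productive, then 1 - 1/k of the
# original headcount is no longer needed.
for k in (1.4, 3.0, 10.0):
    print(f"{k:>4}x productivity -> {1 - 1/k:.0%} of the role displaced")
# 1.4x -> 29%, 3x -> 67%, 10x -> 90% (the "10% doing the work of all" case)
```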

An enormous amount of domain expertise is not legible to LLMs. Their dependence on obtaining knowledge through someone else's writing is a real limitation. A lot of human domain expertise is not acquired that way.

They still have a long way to go before they can master a domain from first principles, which constrains the mastery possible.

> Why can't LLMs and agents progress further to do this software engineering job better than an actual software engineer?

Because a machine can never take accountability. If a software engineer has spent the entire year directing AI with prompts that created weaker systems, then that person is on the chopping block, not the AI. Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.

> Because a machine can never take accountability.

A business leader can though.

> Compared to another software engineer who directed prompts to expand the system and generate extra revenue streams.

I think you're missing the point. Why can't an LLM advance sufficiently to be a REAL senior software engineer that a business person/product manager is prompting instead of YOU, a software engineer? Why are YOU specifically needed if an LLM can do a better job of it than you? I can't believe people are so naive as to not see what the endgame is: getting rid of those prima donna software engineers that the C-suite and managers have nothing but contempt for.

why would it be a manager? hire a cheap intern to be the scapegoat, if the job market is bad enough. no reason for liability to fall on the suits

That's how things work already in every workplace where there's any real danger. The company construes its policies and paper trail in bad faith so that the employees are always operating contrary to policy/training, and then when something happens, blame can be shifted onto them.

You can say this about every single role.

Why can't VCs feed your pitch deck into an AI and get a business they own 100%?

If the only thing you're paying for is compute time...

Some people are claiming it's about taste. Why can't an AI learn taste?

> A business leader can though.

If a 'business leader' is prompting out software through their agents, ensuring it works, maintaining it, and taking accountability... they're also a software engineer

These titles are mostly semantics

It's not about whether they make mistakes (they do! although the exact definition of a mistake is nuanced), but whether they can take accountability if the software fails and millions are lost or people die. A large part of the premium paid to software engineers is for taking accountability for their work. If a "business person" directs their agent to build some software and takes accountability -- congrats! They are also now a software engineer :)

The lines between a software engineer / business person / product / design and everything else will blur, because AI increases the individual person's leverage. I posit that there will be more 'software engineers' in this new world, but also more product people, more business people, more companies in general.

> I think you can combine 'Incanters' and 'Process Engineers' into one - 'Users'

I wanted to talk about this more but couldn't quite figure out how to phrase it, so I cut a fair bit: with "incanters" I'm trying to point at a sort of ... intuitive, more informal practitioner knowledge / metis, and contrast it with a more statistically rigorous approach in "statistical/process engineers". I expect a lot of people will fuse the two, but I'm trying to stake out some tentpoles here. Users integrate a continuum of approaches, including individual intuition, folklore, formal and informal texts, scientific papers, and rigorously designed harnesses & in-house experiments. Like farming--there's deep, intuitive knowledge of local climate and landraces, but also big industrial practice, and also research plots, and those different approaches inform (and override) each other in complex ways.

The problem with AI is that it isn't like any previous technology. There may be temporary jobs to fill in the gaps, but they won't be careers. The AI will do the process engineering and self-optimization. The prompt witchcraft is a good example: today it's totally unnecessary and doesn't actually increase performance, and they'll continue to make it easier to direct/steer the models.

We're literally trying to build an intelligence to replace us.

We?

The human species. "We" doesn't include everyone and doesn't necessarily imply the process happens through collaboration and planning (conspiracy). The race to automation is happening as expected: outside any group's control and bound by competition. Game theory suggests the end result is us being replaced, if we make it that far. "We" as a species are the ones making it happen.

Good one. "We" are not Demon Sam Altman, or that clown at Anthropic, or Google or Microsoft.

Loved that section about "meat shields". LLMs cannot be held accountable. Someone needs to be involved in decision making, with real stakes if those decisions are bad.

the name is very sticky too. I can't imagine not calling people taking the blame meat shields now

why can't the name be 'scapegoat'? Since that's what they are: the "real" responsibility rests on the owners, and they happily shed it through limited liability ownership of shares.

It just makes logical sense really; the human using the tool is in the end responsible.

Whether the tool is too powerful or ethical to use is an orthogonal discussion, in my opinion. Taken to the extreme, nuclear weapons still need someone to fire or drop them. (We should still have discussions on safety and ethics, always!)

Data & Society put out a paper on this role back in 2019 but used the term "moral crumple zones" since they were focusing on how to assign blame in autonomous vehicle crashes: https://www.researchgate.net/publication/351054898_Moral_Cru...

"Meat shields" has a nice physicality to it, though

Thank you for this--I remember reading this paper when it came out, but forgot it by the time I wrote this section. Will add a citation.

All plausible, but not very transformative. Like imagining that the new jobs enabled by the automobile include automobile maintenance, tire shops, and so on. Traveling nurses, motel operators, military tanks, DoorDash, suburban life, beer sales at NASCAR: those were all enabled by the car (and its larger sibling, the truck).

Still missing are the jobs and industries enabled by "AI" that are not themselves "AI".

I think the reason AI isn't going to replace CEOs, or anyone in the C-suite, is pretty obvious. They see themselves as the company. Everyone else is a resource. AI is here to replace resources, just like investing in a brand new lawn mower. For them, replacing an executive with AI is like saying you're going to marry a broom.

They're just a thin layer to be replaced last. They're arrogant enough to think they're the company, but ultimately the endgame is that all humans become economically insignificant compared to the automated economy.

https://www.theguardian.com/technology/2026/apr/13/meta-ai-m...

Meta creating AI version of Mark Zuckerberg so staff can talk to the boss

Digital clone being trained on his thoughts, tone and mannerisms to help workers feel connected

The Guardian

I think the more likely reason would be that legally someone needs to be in charge of the business.

That's true, too. I guess we will see if executive pay and credentials start going down. They could technically have AI make all the decisions while someone just plays the patsy.