I once worked at a company that sold industry-specific core-business software to deep-pocketed corps who couldn’t / wouldn’t / shouldn’t roll their own. I got into a discussion with my manager about whether our products were essentially — my words — a hoax.

Me: “Look, our products are riddled with bugs and holes. They’re nearly impossible to deploy, manage, and maintain. They frequently don’t even work •at all• on the putative release date, and we sell the mop-up as expensive ‘consulting.’”

1/

“How can it not be a hoax?!”

He said something that completely changed how I look at the workings of business:

“Paul, you are making the mistake of comparing our software to your ideal of what it •should• be. That’s not what these companies are doing. They’re comparing it to what they already have now. And what they have now is •terrible•.”

2/

He continued: “They’re doing business with Excel spreadsheets, or ancient mainframes, or in many cases still using pen and paper processes [this was the early 00s], and those processes are just wildly labor-intensive and error-ridden. They lose unimaginable amounts of money to this. For them to pay us a measly few million to get software that takes 18 months to get deployed and just barely working? That is a •huge• improvement for them.”

In short: our product sucked, but it wasn’t a hoax.

3/

There’s a weird disconnect about gen AI between the MBA crowd and the tech crowd: either it’s the magical make-money sauce CEOs can just pour on everything, or it’s fake and it’s all a hoax.

A lot of that is just gullibility and hype at play, huge amounts of investor money and wishful thinking desperately hoping to find huge payoffs in whiz-bang tech.

But: companies do actually deploy gen AI, and it sucks, and they •don’t stop•. Why?!

4/

I suspect that conversation long ago might shed some light on how companies are actually viewing gen AI right now. Behind all the flashy “iT cOuLD bE sKYnEt” nonsense, there’s something much more disappointingly cynical but rational: Gen AI sucks. They know it sucks. But in some cases, in some situations, viewed through certain bottom-line lenses, it sucks slightly less.

5/

So Megacorp’s new AI customer support tool describes features that don’t exist, or tells people to eat nails and glue, or is just •wrong•.

Guess what? Their hapless, undertrained, poverty-wage, treated-like-dirt humans who used to handle all the support didn’t actually help people either. Megacorp demanded throughput so high and incentivized ticket closure so much that their support staff were already leading people on wild goose chases, cussing them out, and/or quitting on the spot.

6/

Gen AI doesn’t cuss people out, doesn’t quit on the spot, and has extremely high throughput. It leads people on wild goose chases •far• more efficiently than the humans. And hell, sometimes, just by dumb luck, it’s actually right! Like…maybe more than half the time!

When your previous baseline is the self-made nightmare of late stage capitalism tech support, that is •amazing•.

7/

And you can control it (sort of)! And it protects you from liability (maybe)! And all it takes is money and environmental disaster!

Run that thought process across other activities where corps are deploying gen AI.

I suspect a lot of us, despite living in this modern corporate hellscape, still fail to understand just how profoundly •broken• the operations of big businesses truly are, how much they function on fakery and deception and nonsense.

So gen AI is fake? So what. So is business.

8/

I am hamming this up for cynical dramatic effect, but I do think there’s a serious thought here: so much activity within business delivers so little of actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win.

Trying to imagine the world through MBA goggles, it seems perfectly rational.

When people consider gen AI, I ask them to ask themselves: “Does it matter if it’s wrong?” Often, the answer is “no.”

9/

If you’ll indulge another industry story — sorry, this thread is going to get absurdly long — let me tell you about one of the worst clients I ever had:

Group of brothers. They’d made fuck-you money in marketing or something. They founded a startup with a human benefit angle, do some good for the world, yada yada.

Common now, but new-ish idea at the time: gamified online health & well-being platform that a company (or maybe insurer, whatever) offers to its employees.

10/

The big brilliant idea at the heart of the product they were building? The Life Score: a number that quantifies your overall well-being, a number that you can try to raise by doing healthy activities.

How exactly was this number to be calculated? Eh, details.

11/

They had this elaborate business plan: the market opportunity, the connections, the moving parts — and in the middle of this giant world-domination scheme, a giant hole. Just a black box (currently empty) labeled “magic number that makes people get healthier.”

The core feature of their product, the linchpin that would make the entire thing actually useful, was just a big-ass TBD.

12/

I was hired to implement, but quickly realized they had no idea what they wanted me to build. Worse: they hadn't hired any of the people (like, say, a health actuary or a behavioral psychologist) who would be remotely qualified to help them figure it out. The architect of their giant system was a chemical engineer of some kind who was trying to get into tech. Lots of big ideas about what it would •look like•, but nobody in sight had a clue how this thing would actually •work•. Zero R&D.

13/

No worries. Designers were cranking out UI! Marketers were…marketing! Turning the Life Score from vague founder notion to working system was a troublesome afterthought.

So…like a fool, I tried to help them suss it out. It turned out they •did• sort of have a notion:

1. Intake questionnaire about your lifestyle
2. Assign points to responses
3. System suggests healthy activities
4. Each activity adds points to your score if you do it

14/

And then, like a •damn• fool, I pointed out to them the gaping chasm between (2) and (4). Think about it: at the start, the score measures (however dubiously) the state of your health. But after you do some activities, the score measures how many activities you did.

The score •changes meaning• after intake. And it's designed to go up over time. Even if your health is getting worse.
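That bait-and-switch is easy to see in code. Here’s a minimal sketch of the mechanism as I’ve described it — every name here is hypothetical, since the real product never specified how the score would actually be computed (that was the whole problem):

```python
# Hypothetical sketch of the Life Score mechanism described above.
# Note there is no code path by which the score can ever go DOWN —
# after intake, it measures activities completed, not health.

class LifeScore:
    def __init__(self, intake_responses):
        # Steps 1–2: intake questionnaire, points assigned per response.
        # At this one moment, the score (however dubiously) reflects
        # the state of your health.
        self.score = sum(intake_responses.values())

    def complete_activity(self, points):
        # Step 4: each completed activity adds points. From here on,
        # the score only measures how many activities you logged.
        self.score += points

user = LifeScore({"sleep": 10, "diet": 5, "exercise": 0})
baseline = user.score          # a (rough) health snapshot
user.complete_activity(5)      # took a walk
user.complete_activity(5)      # logged a salad
assert user.score > baseline   # the line goes up, no matter what
```

Your actual health could be cratering the whole time; the number still climbs.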

And like an •utter• damn fool, I thought this was a flaw.

15/

It was only after the whole contract crashed and burned (they were, it turns out, truly awful people) that I realized that my earnest data-conscious questions were threatening their whole model.

Their product was there to make the “healthy” line go up. Not to actually make people healthy, no! Just to make the line go up.

It was an offer of plausible deniability: for users, for their employers, for everyone. We can all •pretend• we’re getting healthier! Folks will pay good money for that.

16/

Of •course• their whole business plan had a gaping hole at the center. That was the point! If that Life Score is •accurate•, if it actually describes the real-world state of a person’s health in any kind of meaningful way, that wrecks the whole thing.

Now, of course, there would be no Paul to ask them annoying questions about the integrity of their metrics. They’d just build it with gen AI.

17/

Would gen AI actually be a good way to help people get healthy with this product? No. But that was never the goal.

Would gen AI have been a good option for these rich people trying to get richer by building a giant hoax box that lets a bunch of parties plausibly claim improved employee health regardless of reality? Hell yes.

18/

Again, my gen AI question: Does it matter if it’s wrong?

I mean, in some situations, yes…right? Like, say, vehicles? that can kill people?

Tesla’s out there selling these self-crashing cars that are •clearly• not ready for prime time, and trap people inside with their unopenable-after-accident doors and burn them alive. And they’re •still• selling crap-tons of those things.

If it doesn’t matter to •them•, how many biz situations are there where “fake and dangerous” is 100% acceptable?

19/

Does it matter if it’s wrong?

In the nihilism of this current stage of capitalism, “no” sure looks like a winning bet.

/end

Because I’ve apparently driven some people to despair with this thread, some rays of hope:

First, note that the point of my very first example is that the product was •not• a hoax. It really did make things better for real people. Sometimes it can be hard for perfectionists like myself to accept that better is •better•. Sometimes we do actually build things that matter, even if they kind of stink relative to some nonexistent Platonic ideal. Take the win, Paul!

Ah, but the rest of the thread…

A/1

@inthehands

Extend this thought experiment to political campaign funding and tech billionaires.

Silicon Valley bought an election win for a set of GOP crooks because "innovation" has been redefined as:
1. Successful scams & frauds
2. Tax evasion
3. Corporate welfare & subsidies
4. Monopolies
5. Regulatory capture
6. Pollution & climate denial
7. Deregulation

Silicon Valley does not want saleable products that generate revenue.

They want Saudi cash. They want Russian oligarchs...

1/2

@inthehands spot on for so much of what seems to be going on right now - every last health or weight loss type app
@inthehands off topic to this thread, but damn Paul, you've been posting some amazing and on point thoughts and stories the past few days. Thanks for sharing!
@inthehands or as I usually put it, the bullshit machine looks awful nice to the folks that have made it with bullshit.

@inthehands

Gaaaaaaaaaaaaaaaaaaaaaaaaaa

@inthehands I'm just off to build a browser tweak to replace "ai" with "giant hoax box"
@inthehands, thanks for the interesting read! Made perfect sense to me, and I have indeed struggled until now to understand how so many allegedly business-savvy people could see the extent of GenAI hallucinations and still think "this is exactly what we need!"

@inthehands that was worth the long thread !

I'd add that for AI to be a valid business option it doesn't even need to be better than the existing solution, it just needs to be more profitable than doing nothing.

It's easier for a company to bet on a new project that mostly costs electricity and hardware and that can be stopped at any moment than on one that requires hiring people, and will need even more people if it succeeds.

@inthehands Jesus Christ, Paul.

BANG!
*thud*

@inthehands you are absolutely right, this happens in tech, and all business. It just might not be what happens with genAI though. GenAI might actually be the hoax us cynics think it is and be a total wipeout of hundreds of billions of dollars (and emissions and wrecked jobs.) The church of Altman looks a lot more like a hoax than not. Part of it "sucking less" is like a psychic.

@inthehands that resonates. A factor that deserves some more attention is that inside those large corps, most employees are completely cynical. They don't care about doing a good job anymore at all, they just care about not getting burnt and taking home the money. Most, not all, but that hardly matters.

This leads me to the conclusion that chatgpt would do better than most employees in large orgs at writing internal emails, for example. Because _nothing creative of use_ happens in there anyway.

@inthehands in a sane world, after you realized the whole health score was a hoax, you'd be able (or even required) to report them to some kind of institution that would send inspections and lawsuits their way.

We don't live in a sane world.

@inthehands This thread started out as incredibly deflating and ended up flatly horrifying.
@inthehands I once half in jest claimed that the real reason proprietary software companies don't disclose their source code is not because it is a 'valuable business secret' but because they are ashamed of it. The code quality, I mean. Those rare opportunities I have had to look at source have not changed my mind...

@inthehands David Graeber's "BS jobs" should be a high school textbook. Late Stage Capitalism is a religion, and his 2013 article was 95 theses nailed to its door.

Especially when you ponder the "real" jobs done entirely in support of BS: top floor of management consultants bringing in money, supported by IT, HR, accounts receivable, payroll, janitors, cafeteria staff, and their managers... And all the jobs with 2 hours of real work a week and 38 looking busy.

@inthehands my takeaway from this thread: if anyone suggests using ai for a task, ask them if it matters if it is wrong. If yes, dont use it, if no, why do you need it?
Edit:
I think the better question is: "is the degree of inconsistency/wrongness acceptable?"
E.g. calculating expected travel times; it will be wrong, but its ok and still very helpful if it is off by a minute or two

@inthehands For the second time within five minutes, I'm put in mind of The Marching Morons (1951):

https://archive.org/details/Galaxy_v02n01_1951-04/page/n129/mode/1up?view=theater

@inthehands gen AI does shitty work faster than I do shitty work, so it's an obvious value proposition to the C-suite.
@inthehands This is the kind of nonsense that gets people added to my blocklist with extreme prejudice.

@inthehands It was not until the last few years in my late 30's that I finally began to understand just how much of the "real world" works like this.

There is SO much left unsaid, because saying it out loud makes people very upset and uncomfortable, so the vast majority of people just... don't talk about it, or even reference it indirectly, ever. It's just too heavy to grapple with, so they don't, to catastrophic results. Even in the next post you've written about folx despairing after reading.

@inthehands well Goodhart’s law applies, even if it was a crappy metric to start with.
@thias
Absolutely. Getting to the place where Goodhart is the relevant problem was my foolish dream.
@inthehands oh Paul I felt so so so much empathy reading this. Because everyone who is an evidence bringer and data questioner ends up having this shocking moment at some point. When we realize that it is *what we are good at doing* and *what is good to do* that has made people who *asked us to do that thing* turn on us....how I recognize it
@grimalkina
What a kind and humane reply, Cat. Of course it would be you who infers the author’s emotional and psychological experience from their writing, who spots the human in the situation. I appreciate you so much for it.
@inthehands Oh, this absolutely describes my employer's "wellness plan". They used to literally withhold some of your salary unless you submitted a blood test proving you're not a smoker - after some of the wellness plan regulations got tossed around in the last few years, they introduced an alternative where you could instead choose to take a "tobacco cessation life coaching session." Which is as useless as it sounds.

@inthehands

This actually sounds like one of the vendors I have to deal with.

They're putting this database system together for us—but show no signs whatever of ever having dealt with basic database systems...?

@inthehands *insert Southpark meme here*
@Mela
For all the problems I have with that show, which are many, it really did capture this specific kind of cynicism very well.
@inthehands
Wow! What a great read! Like many others commenting, I'd never thought of it that way, but what you say resonates with me too:
"so much activity within business delivers so little of actual value to the world that replacing slow human nonsense crap with fast automated nonsense crap seems like a win." That changing the comparator - from the perfect to the crappy existing in real life - has so much explanatory power. Thanks for sharing these stories that say so much, alas about modern corporate life. 😐
@inthehands Interesting thing I don't remember the source of - the relationship of average salary to number of employees with an MBA is an inverse one. More MBA's = lower average salary!
@inthehands oh ouch you just hit me in (current job) better than Excel and (previous job) automated response substituting when it should be augmenting human support agents.

@inthehands There were customers who leaned hard into making the automated response system work really well in ways that helped everybody touching the system, and there were a few really perfect cases: "Your flight has been cancelled, click here to choose what to do about it."

Better than Excel means that the analysts aren't wrestling the data intake and reporting pipeline and can do some analysis and report on it.

@inthehands Agreed, and there's another level of fakery here that interests me. I suspect a bunch of the corporate "AI" projects are just taking advantage of the hype wave to rebuild something that needed rebuilding. That key people know the "AI" benefit is zero, but it's the only way to get the rest of the project done.
@inthehands
This sort of reminded me of this blog about how come we are so inefficient that we could give a decent standard of living to the *entire world* at a third of the price we pay for the current shitty one: http://www.cottica.net/2024/08/07/the-hidden-inefficiency-reflecting-on-modes-of-provisioning-in-new-economic-thinking/

@inthehands thanks now I’m even more cynical
@inthehands it's like Dwight vs the website

@inthehands Nailed it, right there

[Gen AI] leads people on wild goose chases •far• more efficiently than the humans.

@catsalad so many people so frequently confuse velocity for speed and miss that it's a vector.