Recently, I mused over what it would take, from my perspective, to significantly change my view that the tech industry's infatuation with non-intelligent "intelligence" is a net-negative for society.

https://pythonbynight.com/blog/what-does-it-take

Below are a few choice quotes from my post. (a sort of TLDR)

1/

#LLM #AI

The leaders of this technology are categorically unethical and detached from society, and I believe their leadership is taking us into a xenophobic future only fit for technocrats subsisting off of slave labor.

2/

Deceptive designs that profit off of anthropomorphism, and dark patterns used to gather private data should be outlawed. (This would have the added benefit of also crippling the predatory ad-tech industry.)

3/

I would need to see a transparent attempt to compensate the "humans in the loop" with salaries commensurate to the tasks that they are asked to perform, as well as benefits for any mental health strain or other risks associated with these tasks.

4/

Explicit regulation should prevent for-profit companies from proliferating their tools into the educational sector without any form of oversight.

5/

And one would hope that the tech companies that facilitate the generation of CSAM would be extremely eager to discourage, prevent, or disallow the creation of said content.

In the case of some companies, they're not only being passive about this; they are actively encouraging it.

6/

...existing in a world where tech companies have access to my content (or any content) and can swallow it up wholesale without explicit consent is utterly demoralizing.

7/

For those who think LLM tools will allow people to be more creative, since they'll have more power and resources at their disposal: I'd say just take a look at how they're being used now.

Create bland essays. Answer bland emails. Write bland README docs. Produce bland code.

There are no sharp edges.

8/

What about you?

What's the worst thing that would have to happen to convince you that buying into LLM usage is perhaps not all it's cracked up to be?

What if it deletes all your email? Or your hard drive?

Is it a self-serving metric, like how expensive the privilege of using the tool is? If the companies charged you $500 or $1,000 a month, would you say no?

Are there other factors that might sway you outside of how useful (or useless) it is to you?

You don't have to tell me... It's very unlikely that you'll change your own mind.

9/

I'm not swayed by an ethics that values an unknown future, no matter the cost. Technologists make poor visionaries. Their prophecy is not one I believe in, nor one that I would ever want fulfilled.

10/end

@pythonbynight I don’t need convincing. It’s always been clear to me that ‘AI’ is a net-negative for society. My stance on LLM usage could probably be milder if these tools were coming from better people than the grifters and douchebags we all know and hate. But this stuff is weaponised and being used in maliciously manipulative ways. So I’ll keep pushing against it.

@pythonbynight Very much this, I've been referring to our current society as a cyberbland dystopia. I think the blandness of it all is also why there's less resistance to it than I feel there should be, because it doesn't feel offensive enough, or rather, the offensiveness is covered up by just how bland and boring it all is.

@tante

@ainmosni Yeah. I have a sense (though no proof) that these systems are very appealing both to individuals who fit within a certain "average" (culturally or demographically) and to those who may feel outside the status quo and would like to blend in as much as possible.

In both cases, the user gets the sense that their usage elevates their usefulness/utility, and is perhaps drawn into a somewhat warped hero complex ("I can do so much more now that I could never do before...")

That's obviously a generalization, and there's likely lots of room for nuance there...

But these systems certainly do not celebrate the jagged edges... the things that make us unique/outsiders.

It will smooth away your bad grammar, your colloquialisms, tics, and differentiators. And if you try to coerce it to be "more like you," what you get back is just an amalgamation of something else...

@pythonbynight they mean, instead of typing "man putting his palm on his face stock picture" into Google they'd type "ChatGPT, please generate a picture of a man putting his palm on his face" so they can add their personal touch to their PowerPoints. See? Creativity!

@pythonbynight Personally, I think LLMs can boost productivity by assisting with those bland things, like writing a README or answering emails.
Maybe they can also boost "productivity" in hobbies, for example by giving you templates. For coding, they could be used as an alternative to a rubber duck. But that's pretty much it.

In the end, the AI companies will probably all be behind some paywall, after making people lazy enough by pushing AI into everything that some would actually pay. And at your workplace, the CEO will probably give you an AI to boost your productivity, so that the work someone does makes them more money, because productivity => more effective => more sales => more money for the CEO.

Overall, there are more negatives than positives, and the AI bubble just keeps pushing the dream of full automation until the bubble pops and the market crashes.
At this point, AI glazers are just the new crypto bros.

@pythonbynight This was worth the read.

Thank you.

@onepict What makes it worth writing is someone reading it!

So thank you as well!