Great news, everyone! At last, some real talk about AI hype. Someone finally mentioned LLMs one time too many, and the reckoning is upon us:

https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/

I Will Fucking Piledrive You If You Mention AI Again — Ludicity

@ludicity well said. Also, if I hear more about Gartner magic AI squares and positioning relative to them, I’ll be ill. Plus, I’m looking forward to the talk once/if it happens.
@ludicity I miss the old days when it was decided an app needed AI. A data scientist would make a simple linear regression, we'd add it to the app, and then, BOOM, singularity achieved. At least from an executive perspective. I always had a good laugh at them, but at least the ML did something. The only good thing I'll say about LLMs is that Copilot will write my unit tests. Which is why I am now arming myself for the robot revolution, as this will be considered a capital crime by them.

@richard_staackmann @ludicity

All hail the robot revolution!

(If you read this, I was always on your side.  ❤️ 🤖 )

@ludicity this was a fun read. Thank you for articulating many of the thoughts I was struggling to convey. It’s a weird time to see all of this happening
@skinnylatte Hah, I was mostly ranting, but hopefully some of the thoughts might be useful the next time the topic comes up.
@ludicity @skinnylatte if that was you ranting, I shudder to imagine the time when you get incoherently angry.
@ludicity extremely funny and well-written! I will send this to my colleagues in my company that's slamming the door on its foot repeatedly trying to get in on the AI craze (i am in hell)
@mehluv Thank you for the kind words!

@ludicity I loved reading this. It hits very hard, and in the right place. Thank you!

cc @lkanies

@ludicity good thing the "GenAI experience sharing session" and helpful brainstorming with my team and management is not tomorrow. I can't wait to hear about all the use cases we found. This article is wrongthink.
@ludicity At last, the informed and balanced critique that potential adopters of AI everywhere were waiting for (but didn't know it). Thank you.

@underlap @ludicity "If you liked that, you'll love this...":

Why ChatGPT is literally a bullshitter

https://link.springer.com/article/10.1007/s10676-024-09775-5

ChatGPT is bullshit - Ethics and Information Technology

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.

SpringerLink
@underlap The first time I've ever been called balanced, what a time to be alive. (Also thank you!)
@ludicity Good stuff and pretty much where I’ve been for a while now. The grift around LLMs is infuriating. With a few exceptions, I immediately distrust anyone who talks about “AI”.

@bjn @ludicity The grift-hopping is what gets me.

Hardly anyone notices these quantum blockchain AI grifts collapse, because the new grift is already there to distract people.

It's a bit like a virus that mutates, and occasionally it mutates to something that grows big, sucks in billions and billions of investor money and just gives it to somebody else without creating anything of worth.

What a weird system. Too bad the last two iterations are literally burning the planet in the process.

@ludicity My only quibble with it is that I’m quite sure you don’t need the last sentence in the opening section. Righteousness needs no forgiveness.

@ludicity Thanks for a refreshing & entertaining reality check on the AI grift.

I'm a grumpy old semi-competent Scrum-averse Database Guy (mmm, lovely schemas ...) grinding through the twilight years of my career in the bowels of the data mines, keeping Broken Old Shit running because nobody can afford to fix things properly, while waves of functionally pointless & financially/environmentally disastrous LLM slurry flood in around us.

So good to know we're not alone in our weary scepticism!

@ludicity Excellent post! I completely get your point: as someone who spent over a decade in data warehousing, seeing "enablers" suddenly pushing data lakes as a solution to everything (completely without ACID in the beginning) gives me similar feelings. The industry is poisoned with Gartner's blabber, and various kinds of techbros catch on quickly.

Thanks for writing this!
Someone get the burn cream, stat, ‘cause AI bros got roasted in the hot and spicy (well, unsure on hot but definitely ghost pepper spicy) take on the generative #AI hype train by @ludicity 🔥🌶️👏😂 https://ludic.mataroa.blog/blog/i-will-fucking-piledrive-you-if-you-mention-ai-again/
I Will Fucking Piledrive You If You Mention AI Again — Ludicity

@ludicity pure brilliance, thanks. I’m going to paraphrase your student cheating idea next time some upper management ignoramus suggests it in a faculty meeting.
@ludicity as someone in cybersecurity, I can say that you are correct that zero trust has meaning, but that meaning is not how people who develop products treat it.
@TindrasGrove Good to know! It's interesting because I really am not a sophisticated actor in the security space, but it's still quite obvious when some people are full of it. Although, of course, I'm sure slightly more savvy grifters sneak past my detectors.

@ludicity I think there’s some significant overlap in our fields (especially when it comes to who is actually using the not-snake-oil), so there’s some amount of transferability in BS detection skills.

Last week I went to a local data analytics conference, and the talk I got the most out of was from the one person who said "you don't need AI for any of this!!"

Jamie Gaskins (@[email protected])

@[email protected] I hate how, as soon as a word/phrase is taken seriously, its meaning is twisted.

Agile: I Can't Believe It's Not Waterfall™
DevOps: the people we throw our code over the wall to
SRE: wrong DevOps with new vocabulary (the definitions are the same, we just changed the names)
Monitoring: alerting
Alerting: posting to a Slack channel nobody's watching
TDD: there are tests in the repo
MVC: my app has 3 parts

zomglol

@jamie @ludicity yessss

The people who try to sell zero trust as a product, not an architectural philosophy, seem to mean SSO, but ✨fancy✨

@TindrasGrove @jamie I just spoke to my brother (red team supernerd) and asked him to explain ZT, as I got many, many emails about it and some disagreed with each other.

Within 30 seconds I said "Wait, so it's a philosophy, not a feature".

I literally just do databases and it's obvious, what the hell are all these dweebs learning?

@ludicity @TindrasGrove Databases definitely have fewer disagreements in definitions (and arbitrary definitions are pretty rare) because SQL is standardized, but they aren't immune to it, either.

For example, SERIALIZABLE transaction isolation means different things in Postgres and MySQL. And some of MySQL’s consistency guarantees are only truly guaranteed up to some level of write throughput to a given table. It’s wild out there.
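To make that concrete, here's a rough sketch of requesting the "same" isolation level in each dialect (the `accounts` table is a made-up example; the behaviour notes reflect how each engine documents SERIALIZABLE):

```sql
-- PostgreSQL: SERIALIZABLE uses Serializable Snapshot Isolation (SSI).
-- Conflicting concurrent transactions abort with a serialization failure
-- (SQLSTATE 40001) and are expected to be retried by the application.
BEGIN TRANSACTION ISOLATION LEVEL SERIALIZABLE;
SELECT balance FROM accounts WHERE id = 1;
COMMIT;

-- MySQL/InnoDB: SERIALIZABLE is lock-based. Plain SELECTs inside the
-- transaction are silently treated as shared-lock reads, so instead of
-- failing on conflict, transactions block on each other's locks.
SET SESSION TRANSACTION ISOLATION LEVEL SERIALIZABLE;
START TRANSACTION;
SELECT balance FROM accounts WHERE id = 1;
COMMIT;
```

Same keyword, same standard, meaningfully different runtime behaviour, which is roughly the point being made above.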

@jamie @TindrasGrove Hm, I should do some deep dives. I've been meaning to crack open The Art of PostgreSQL.

At least one email I received was from someone who was very, very confidently wrong about ZT, though.

@ludicity @TindrasGrove I have no doubt. Arbitrary definitions are rampant in security because almost nobody has sufficient experience to check them.
@ludicity @TindrasGrove More to your point, though, people often do what they’re incentivized to do. If using some terminology is better for them on a metric that they care about, they may use it even if it’s not accurate. That catches on because other people do the same and many care about the same metrics.

@ludicity @jamie YES!!

It’s really easy to tell who’s full of it because they try to sell ZT as a product, not as an architectural philosophy.

They *want* it to be a product, because it’s possible to “achieve” implementing a product. You can’t “achieve” a philosophy. You just improve your process, incrementally, for ever and ever.

@ludicity If you don't mind me being nitpicky (!): there's a typo in "What are are doing as a society?"

I never know whether to mention these or not, since not fixing them doesn't really affect readability or meaning but does cost attention to address 😅

@MHLoppy Fixed! I actually really appreciate these typo catches. Editing directly in Mataroa means that I don't have my normal checks; I just use the mark one eyeball. I'll come up with a better workflow sometime soon.

@ludicity

I think there’s another possibility that it seems you didn’t mention (I skimmed, sorry): namely, that generative models actually get worse over time as they consume more and more of their own excrement. The ensuing AI centipede of excremental progress will eventually collapse into a shit pile.

@TonyVladusich @ludicity
This means excellent job security for our cousins in India

There is a boom in data annotation driven by demand for training data cleaning services
https://m.economictimes.com/tech/technology/indian-gig-workers-toil-at-frontlines-of-ai-revolution/articleshow/109864213.cms

Indian gig workers toil at frontlines of AI revolution

Akash and Ikshita engage in online work. Gig workers in India train AI models through microtasks. India emerges as a data annotation hub with a potential $8.22 billion global market and a million-strong workforce by 2028.

Economic Times

@ludicity reads the title: oh, this is gonna be good

2 paragraphs in: yeah I'm enjoying this

@ludicity a truly magnificent post. 👏

Thank you.

@ludicity The LLM output of the articles sounds a bit passive-aggressive; can you adjust your prompt a bit for the blog post?

Ok, that joke was tasteless. What I literally hate is that EVERYTHING under the sun nowadays is called "AI". You know, things that 5–10 years ago would just go by "algorithm". Or "heuristic". Today, for marketing purposes, the same f%cking two if statements are already "AI".

@ludicity
Seriously:
“you outsource your decisionmaking to the thing that sometimes tells people to brew lethal toxins for their families to consume? What does that even mean?”

You might be on a good trail here.

Management in your average company has usually been outsourced to the people just clever enough to feed themselves in public without embarrassing the company.

@ludicity Sorry, I know that even a decade as an IT consultant gives too small a sample, but that's my conclusion.

A little more than a decade ago, I had just started at a company that makes toll systems, and as it happened, in my first week I had the privilege of listening to the two-hour "state of the company" speech given by the C-idiots.

@ludicity At the start, the big picture: e.g. observations like "video-based systems are the future, no complicated setup, no need for devices in the cars", bla bla.

In the second hour, more concrete measures and how they applied to the company: downsize the video camera engineering group by 90%, down to 2 engineers; we'll ask our competition to sell us their proprietary video tech on the cheap instead of continuing to develop it in-house.

@ludicity 🤷
Considering such great strategic vision, the next day nobody even questioned rumours that the CTO had problems navigating the office numbering on our floor. But hey, I'm sure he could compete at genius level with Elon.

@ludicity I'm just here like, I want to learn the basics of the systems because the math seems cool and I want to make a bot to play an old game poorly. :V

I really do hope the industry gets over itself soon, the amount of inertia this forest fire of a dump has is terrifying.

@ludicity I believe zero trust means that you don't have generic administration profiles that have access to the entire organization. You only give people what they need for their jobs and nothing else, essentially how permissions should have been since the start.
@ludicity I may not vibe with the style, but it’s a great article anyway. Thanks!

@ludicity "I don't actually know what 'zero-trust' architecture means, but I've heard stupid people say it enough that it's probably also a term that means something in theory but has been sullied beyond all use in day-to-day life."

Yeah, that's spot-on actually. Zero-trust used to mean "don't allow anybody to do something just because of their IP address", i.e. place zero trust in the network. Now it somehow means more VPNs. No, I don't know either.

@ludicity Thank you. I am not a software person at all, but this answers all my suspicions about AI, and tells me I was right.

@Erik_Buchanan I feel like being on Mastodon makes you an Honorary Technology Person.

(Also I just read the blurb for The Wire Noose and bought a copy 😁 )

@ludicity As the only local support for 150+ users, I'm fed up with users and managers trying to sell me "AI", and I'm getting angrier and more confrontational every time. Now I just ask them: "How much energy and water was wasted in what you just did?"

But lo and behold! This is the future!!

Fuck that noise

@ludicity > if you continue to try { thisBullshit(); } you are going to catch (theseHands)

My God. Yes. YEEESSSSSSSSSS

@ludicity Your rants are a beacon of sanity in a bleak world of technology circle jerking.
"..spending half of the planet's engineering efforts to add chatbot support to every application under the sun when half of the industry hasn't worked out how to test database backups regularly."
@ludicity that try/catch line was absolutely fucking majestic
@gsuberland THANK YOU. Out of 800K total hits, I think only 5-6 people mentioned it. I was so proud of myself; I don't even write JavaScript.