I wish we had more people writing more sophisticated concerns about the harms of AI.

"Slop" criticism is important because I think many of us feel we are being gaslit into believing that generative AI is currently creating quality creative output, while Al (henceforth Alfred) is overwhelmingly creating mediocre creative work.

Through every past winter, Alfred has survived in a more focused and refined form. Eliza was a toy chatbot of the 1960s, but what emerged from it was expanded investment in things like Natural Language Processing and Markov chains.

Markov chains have always been powerful, but computing applications of them advanced capabilities in weather prediction and financial modeling, and eventually underpinned Google PageRank and bioinformatics/biostatistics (BLOSUM/BLAST for analyzing, predicting, and correlating amino acid sequence similarities).
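For anyone who hasn't met one: a Markov chain is just a state machine with probabilistic transitions, where the next state depends only on the current one. A minimal sketch, using the classic textbook weather example (the states and probabilities here are illustrative assumptions, not real data):

```python
# A minimal Markov chain sketch. Weather prediction is the classic
# textbook example; these transition probabilities are made up.

import random

# Transition probabilities: P(next state | current state); each row sums to 1.
TRANSITIONS = {
    "sunny": {"sunny": 0.8, "rainy": 0.2},
    "rainy": {"sunny": 0.4, "rainy": 0.6},
}

def simulate(start, steps, seed=0):
    """Walk the chain for `steps` transitions, returning the state sequence."""
    rng = random.Random(seed)
    states = [start]
    for _ in range(steps):
        row = TRANSITIONS[states[-1]]
        # Sample the next state according to the current row's weights.
        nxt = rng.choices(list(row), weights=list(row.values()))[0]
        states.append(nxt)
    return states

print(simulate("sunny", 7))
```

The same "next step depends only on the current state" structure, scaled up enormously, is what sits under applications like PageRank (a random walk over links) and sequence models in bioinformatics.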

The next boom and winter led Alfred to popularize classic machine learning into practical consumer products: the recommendation engines and clustering algorithms at the core of everything from the Netflix Prize to Spotify, Pandora, and Amazon. Eventually, virtually every consumer retailer had multiple machine learning products supporting everything from suggested products to search results to their own logistics, sales, and financial modeling.

Learn from the past. Prepare for the future. Al will learn how to spell strawberry, write basic documents and code more effectively, and make fewer basic mistakes. Think beyond that. What are the emerging harms that come AFTER all of that?
#AI #GenAI #GenerativeAI #generative_ai_risks #generative_ai_concerns

The AI harms I worry about are from emerging phenomena, such as "Ironies of Automation".

The earliest and most predictable aspects of these emerging phenomena are already burning out senior software engineers and engineering managers: early- and mid-career folks can now produce a much higher quantity of code (and documents!). This alone is overwhelming senior engineers and managers, who spend more and more of their time reviewing code (sometimes also supported by Al), and whose reviews are fed back into Al's coding tools. The junior engineer thus has less opportunity to learn traditional software engineering than they did without Al. (Is "traditional" software engineering still needed in that world? We'll get back to that.)

The increased code review pressure on senior engineers and managers is increasingly overwhelming, and it is not outweighed by Al's code review tools (which may themselves improve, but IMO that addresses only the immediate concern, not the most significant challenge).

This gets worse: as the output quantity of software engineers increases, the effort and skill needed to fix or prevent problems increases even faster (polynomially, though not exponentially).
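To make that scaling claim concrete, here is a deliberately toy model. The quadratic exponent is my assumption for illustration, not a measured value; the point is only that review load can outgrow output:

```python
# Toy model, not data: assume review effort grows polynomially (here,
# quadratically) with the volume of code produced, per the post's
# "polynomial, not exponential" framing. The exponent is an assumption.

def review_effort(code_volume, exponent=2):
    """Hypothetical review effort for a given volume of produced code."""
    return code_volume ** exponent

# Under this assumption, doubling code output quadruples review load,
# and an AI-assisted 5x jump in output means a 25x jump in review effort.
print(review_effort(2) / review_effort(1))  # 4.0
print(review_effort(5) / review_effort(1))  # 25.0
```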

The basic problem: a junior engineer produces lots of code with AI support, and a senior engineer spends an increasing amount of time and effort reviewing it, but the junior engineer is learning more slowly than in the past, because they aren't writing the code. Theoretically, the junior engineer learns to prompt more effectively and produce higher quality code. But largely we're relying on the AI product to learn to write higher quality code, and the junior engineer isn't able to make better code happen merely through prompting.

That's where one of the more complex problems emerges: a senior engineer doesn't only learn to write "higher quality code", they learn to predict complex patterns that either create or destroy resilience. And learning that is part of what creates a staff engineer. That's multiple orders of complexity beyond AI capabilities.

But there are lots of complex emerging phenomena, particularly around operations. We joke about the consequences of "hallucinations", but consider the consequences of outages growing in both frequency and complexity while, year over year, we steadily fall behind on having sufficient expert operators (and how are they trained?). There are a LOT of layers here that haven't been addressed, and they will have significant consequences that are hard to address now, let alone once they start to emerge.

As described in Ironies of Automation (Lisanne Bainbridge), rather than industrial automation leading to a need for fewer, lower-skilled operators, it led to a need for HIGHER-skilled operators, because the incidents that emerged were on average significantly more complex than the typical incidents before industrial automation.

As they automated away the basic tasks, they also automated away many of the simplest incidents. But that didn't merely winnow the simplest incidents out of the pile and leave the more complex ones: new forms of even more complex incidents emerged from the automation itself.

You cannot simply automate away incidents, nor operators. The automation itself will inevitably produce incidents of even greater complexity, requiring more highly skilled operators.

The simplest and yet most important question we must ask is this: How do we train those operators?
https://en.wikipedia.org/wiki/Ironies_of_Automation

To be honest, I haven't fully thought through all of the consequences here. There are many more questions we should be asking, problems we should anticipate, and discussions we should have.

I hear a lot of conversations like "what does the future of software engineering look like?", but I've seen zero of these conversations address even the most basic aspects of Ironies of Automation.

I haven't had the mental bandwidth to dig into this, but I'm hoping that soon I can.

@saraislet some of us (the usual crew who have been interested in the field) have. I have a piece coming about how you cannot review code at that volume, and how producing so much means losing all ability to understand it.

And we have more coming. But it takes time and we are all hobbyists. Plus we fear for our jobs. I cannot be an AI critic at work rn. I need the job. And I am not the only one.

@Wolven , in addition to the typical "perpetuation of bias toward underrepresented groups", dystopian tendencies of tech companies, and other issues I've seen you discuss, this thread from @saraislet brings up something I don't remember you touching on previously....

If "large-scale computing"[*] automates "the basic tasks", novices will need new ways to progress up the technology skill trees. They won't have the experience of building systems from the ground up, running into problems, spending the time finding and fixing the cause, and gaining the hard-won experience that informs their conscious and unconscious subject matter knowledge. I've seen this personally with juniors who learn tools but don't understand what the tools actually do, and can't handle a situation the tool cannot handle.

There are parallels in other tech fields where the foundational expertise and tooling gets lost such that things we did before at scale cannot be reproduced. No more SR-71 aircraft or parts; NASA probe data being undecipherable when the tooling breaks because the tool makers retired or died; etc. This is, sadly, not a new phenomenon.

So the question becomes what *is* the on-ramp/progression path for new tech folks? How can they acquire the knowledge and skills to become experts/"Seniors" who understand the increasingly complex systems? And how do they prepare themselves to handle the inevitable, increasingly complex exceptions that will arise from those systems?

[*]: "What Ethical AI Really Means" - Philosophy Tube https://www.youtube.com/watch?v=AaU6tI2pb3M

@saraislet the stochastic parrots paper has some really nice, deep thinking about it. we keep going back to that.
@ireneista I haven't read that yet!
@saraislet we do recommend it! it predicted a bunch of the stuff that subsequently happened :/
@saraislet Typo: 1960s instead of 2060s ... unless Eliza is from the future, which would be a good start to a horror story.
@saraislet ...and now I have an idea for a horror story.
@cthos oh oh, maybe this will inspire @Unixbigot to write a horror story! I love her #microfiction
@saraislet @Unixbigot If no one else does it I'm adding it to the potentials pile after my current project is done.
@saraislet @cthos Fedi can have a microelizahorror as a treat https://aus.social/@Unixbigot/115901477749968583
Kit Bashir (@[email protected])

Copilot, compose a poem to Evie asking her to marry me. I'm sorry, emotional modeling indicates that you would cheat on her within two years. Fuck you, copilot. Aggression detected, remedial mode engaged. I don't need remediation. Why is it that you say you don't need remediation. Really, Eliza mode? It's come to this. Do you think I'm twelve? How do you feel about your inquiry that I think you are twelve. Frak this, open Netflix. Remedial mode cannot be disengaged until your score above EI band seven. What is it that makes you request me to frak this. #Tootfic #MicroFiction #PowerOnStoryToot
