Ben Aveling

The best time to improve your OpSec was a long time ago. The second best time is now.

Every time I read AI proponents saying, “A well-defined specification upfront is crucial for a successful outcome,” it feels like we're betting everything on the stage the computer industry has always failed at: the well-defined specification upfront.

#TheGeneralTheoryOfSlop

Can we please stop calling #LLMs #AI and start calling them #AutomatedPlagiarism. It describes what they do. It describes what they are. And it makes clear what they are not: They are not Intelligent. And they aren’t entirely Artificial. Everything they emit is a distorted copy of something originally created by a human being.
And more importantly, almost every sentence where you could write ‘AI’ is the clearer for being made accurate: we’re investigating the ethical use of Automated Plagiarism; I used Automated Plagiarism to do my research/homework/board presentation; I just use Automated Plagiarism for inspiration/the first draft/to get me started; the Automated Plagiarism machine deleted the production database and then wrote an apology.
What if plagiarism, but automated and error-prone?
great quote from an old CERN talk :3
Dogs in costumes: I am so cute! I am the cutest boy!
Cats in costumes: i will kill you for this.
#GenAI promises to make writing easier, whether it's writing a document, a plan, software, an ADT.
But to the extent that it succeeds, it does so at the cost of making reading harder and less reliable.
And all of these things are write-once, read-many.
This is not a sane tradeoff.
#LLM
From what I've observed, people who claim that LLMs can replace artists don't understand art, people who claim that they can replace musicians don't understand music, people who claim that they can replace writers don't understand literature, and people who claim they can replace translators don't rely on translations. If I had a button that would erase LLMs from the world but would also take machine translation away (which is a false dichotomy anyway), I would absolutely still press it.

Great video. Watch it!

(This is Prof. Ada Palmer @adapalmer)

Here is a way that I think #LLMs and #GenAI are generally a force against innovation, especially as they get used more and more.

TL;DR: 3 years ago is a long time, and techniques that old are the most heavily represented in the training data. If a company like Google, AWS, or Azure replaces an established API or runtime with a new one, a bunch of LLM-generated code will break. The people who vibe code won't be able to fix the problem, because almost nothing in the training data references the new API/runtime. The LLMs will not generate correct code easily, and they will constantly try to edit the code back to how it was done before.

This will create pressure on tech companies to keep old APIs and runtimes alive, because doing something new (that LLMs don't have in their training data) will have a huge impact. See below for an even more subtle way this will manifest.

I am showcasing (only the most egregious) bullshit that the junior developer accepted from the #LLM. The LLM used out-of-date techniques all over the place. It was using:

  • AWS Lambda Python 3.9 runtime (will be EoL in about 3 months)
  • AWS Lambda NodeJS 18.x runtime (already deprecated by the time the person gave me the code)
  • Origin Access Identity (an authentication/authorization mechanism whose deprecation began when its replacement, Origin Access Control (OAC), was announced 3 years ago)

So I'm working on this dogforsaken codebase, and I converted it from the out-of-date OAI to the new OAC mechanism. What does my company-imposed, AI-powered security guidance tell me? "This is a high priority finding. You should use OAI."

So it is encouraging me to do the wrong thing, and telling me it's high priority.
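
For anyone facing the same migration: the conversion itself isn't exotic. Here's a minimal sketch of the OAI-to-OAC switch using boto3, assuming a distribution with S3 origins; the distribution ID and OAC name are placeholders, and error handling is omitted:

```python
import boto3

cloudfront = boto3.client("cloudfront")

DISTRIBUTION_ID = "EXXXXXXXXXXXXX"  # placeholder

# 1. Create an Origin Access Control (the OAI replacement).
oac = cloudfront.create_origin_access_control(
    OriginAccessControlConfig={
        "Name": "my-site-oac",  # illustrative name
        "Description": "Replaces the legacy OAI",
        "SigningProtocol": "sigv4",
        "SigningBehavior": "always",
        "OriginAccessControlOriginType": "s3",
    }
)
oac_id = oac["OriginAccessControl"]["Id"]

# 2. Attach the OAC to each S3 origin and blank out the old OAI.
resp = cloudfront.get_distribution_config(Id=DISTRIBUTION_ID)
config, etag = resp["DistributionConfig"], resp["ETag"]

for origin in config["Origins"]["Items"]:
    if "S3OriginConfig" in origin:
        origin["OriginAccessControlId"] = oac_id
        # An empty string disables the OAI on this origin.
        origin["S3OriginConfig"]["OriginAccessIdentity"] = ""

cloudfront.update_distribution(
    Id=DISTRIBUTION_ID, IfMatch=etag, DistributionConfig=config
)

# 3. Not shown: the bucket policy must also change, from granting the
#    OAI's canonical user to granting the cloudfront.amazonaws.com
#    service principal with an AWS:SourceArn condition naming this
#    distribution.
```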

It's worth noting that when I got the codebase, with OAI active, Python 3.9, and NodeJS 18 all in place, I got no warnings about any of these things. Three years ago, that was state of the art.