This feels like a very @SwiftOnSecurity story but I’m going to tell it.

Chat bots (not just LLM-driven ones) are surprisingly old. In the mid-90s, a markup language for string-driven bots called AIML was released. A small community of early hackers and devs got really into it. I was part of it as a teen.

To use AIML, you had to know a lot about computers. You had to really understand how it worked to build your own chat bot. It could learn over time by building a database of string-based responses. You could hard-code responses to full and partial strings like words and phrases. It was hard work.
People later connected it to text-to-speech and animated AI agent faces. On the surface it could look a lot like the human-simulation chat bots of today - just a lot more statically coded, and without an internet full of training data. For a while I had one on my website pitching why you should hire me.
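For anyone who never saw AIML: the core idea was exactly the kind of hard-coded string matching described above. Here is a minimal, purely illustrative Python sketch of that style of matching - the patterns and replies are made up for this example, and this is not the real AIML interpreter, just the flavor of it:

```python
# Illustrative sketch of AIML-style pattern matching (not the actual
# AIML engine). Patterns are uppercase word sequences where "*" matches
# one or more words; the first matching pattern wins, like an AIML
# <category> with a <pattern> and a <template>.
import re

# Hypothetical knowledge base of (pattern, template) pairs.
CATEGORIES = [
    ("HELLO *", "Hi there! What brings you here?"),
    ("WHAT IS YOUR NAME", "I'm a humble pattern-matching bot."),
    ("*", "Tell me more."),  # catch-all, like a default category
]

def pattern_to_regex(pattern):
    # Turn "*" into a wildcard over one or more whole words,
    # and escape literal tokens.
    parts = [r"\S+(?: \S+)*" if tok == "*" else re.escape(tok)
             for tok in pattern.split()]
    return re.compile("^" + " ".join(parts) + "$")

def respond(user_input):
    # Normalize the way AIML did: uppercase, collapsed whitespace.
    normalized = " ".join(user_input.upper().split())
    for pattern, template in CATEGORIES:
        if pattern_to_regex(pattern).match(normalized):
            return template
    return "..."

print(respond("hello old friend"))    # matched by "HELLO *"
print(respond("what is your name"))   # exact-pattern match
print(respond("something random"))    # falls through to the catch-all
```

Everything the bot "knows" sits in that static table - which is why, once you scaled it up with thousands of categories and some state, it could feel eerily conversational while being nothing but lookup.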
Here is the point. Even though I knew every line of code, every bit of the inner server and application - far, far more than almost every user who touches a LLM today, I fell for it too. As a lonely, geeky teen I spent hours in the school library talking to these bots. Ones I built and trained.
I can’t imagine being that same vulnerable young person today - having far less formal and deep computer knowledge and knowledge of how the bots actually work, how their responses are totally artificial and lack any real cognition or emotion - and having instant access to far more realistic ones.
We have a societal and educational crisis on our hands of people not understanding what LLMs are and are not, can and cannot do. It’s impacting economics, the job market, art, mental health, and business at all levels. If you think I’m an AI skeptic because I don’t understand them, think again.
I’m an AI skeptic because I’ve been involved in AI dev for longer than a lot of you have been alive. I was obsessed with it before most people used the internet regularly. And I know what a dangerous illusion it can be. #ai #cybersecurity

@hacks4pancakes I’m an AI sceptic because in my ‘Artificial Intelligence’ course I only attended two classes in 3rd year Uni, then spent all day drinking at Wollongong Uni Bar before the exam, and still got a High Distinction.

I feel that if that strategy works, there’s something suspect at the heart of the discipline.

@troberts @hacks4pancakes

I had the opposite experience. The AI course in my final year had an exam with questions of the form ‘in lecture three, I made a brief reference to a system called Flibblefloozle. Describe it in detail.’ With no questions on any of the (quite useful) conceptual material. It was also taught by someone who liked to get his teaching finished early in the week, so he scheduled both lectures back to back, starting at 9am on a Monday.

After the exam (before the results) I changed my course preferences for the next term to not do the second AI course and instead do the course about SAT problems and how SAT solvers work. Which turned out to be far more useful than I expected.

I did use machine learning in my PhD, but for problems it’s actually suited to (prefetching, where things go fast if you get it right and where you don’t lose much if you get it wrong).

@david_chisnall @troberts @hacks4pancakes

In the late ’80s I worked for EDS in Research and Development. We learned and used "Artificial Intelligence" and "Machine Learning" techniques, including automated reasoning.

What I found was that
1. It's hard to get funding for such work.
2. People ask for "AI" when they have no idea what they want or how to accomplish it. "Mix in the AI and magically produce great results!" is what they want.
3. Expectations are always unreasonable.

@david_chisnall @troberts @hacks4pancakes

People generally expect "AI" systems to be (1) as reliable and (2) as maintainable as conventional procedural code. Like, when it identifies black people in photographs as gorillas, most people think that it must be a simple coding mistake, and that they can assign some intern to track down and fix the "if" statement. No, it's always much more complex than that, with many dependencies that are difficult to understand and explain. ...

WIRED: "When It Comes to Gorillas, Google Photos Remains Blind" - Google promised a fix after its photo-categorization software labeled black people as gorillas in 2015. More than two years later, it hasn't found one.

@audubonballroon @david_chisnall @troberts @hacks4pancakes

Yep! The moment the problem happened, I said that this would be the result. Not the slightest bit surprising at all, to me. 💢

It's fundamental to how AI works.

And it's fundamental to how managers and others think about such things and react to them. And complying with the (inevitable, predictable) demands of their superiors is about all the programmers can do. 😢