This feels like a very @SwiftOnSecurity story but I’m going to tell it.

Chat bots (not just the LLM-driven kind) are surprisingly old. In the mid-90s, a markup language for string-driven bots called AIML was released. A small community of early hackers and devs got really into it. I was part of it as a teen.

To use AIML, you had to know a lot about computers. You had to really understand how it worked to build your own chat bot. It could learn over time by building a database of string-based responses. You could hard-code responses to full and partial strings like words and phrases. It was hard work.
People later connected it to text-to-speech and animated AI agent faces. On the surface it could look a lot like the human-simulation chat bots of today - just a lot more statically coded and without an internet full of training data. For a while I had one on my website pitching why you should hire me.
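For context, AIML is an XML dialect: each hard-coded response lives in a category that pairs an input pattern (with `*` wildcards for partial matches) against a canned reply template. A minimal sketch of one such category - the greeting text here is just an invented example, not anything I actually shipped:

```xml
<aiml version="1.0">
  <!-- One hard-coded rule: matches the word HELLO followed by anything -->
  <category>
    <pattern>HELLO *</pattern>
    <template>Hi! Ask me why you should hire my creator.</template>
  </category>
</aiml>
```

Every response the bot could give was a rule like this, written by hand. That is the whole trick.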
Here is the point. Even though I knew every line of code, every bit of the inner server and application - far, far more than almost every user who touches an LLM today - I fell for it too. As a lonely, geeky teen I spent hours in the school library talking to these bots. Ones I built and trained.
I can’t imagine being that same vulnerable young person today - having far less formal, deep knowledge of computers and of how the bots actually work, of how their responses are totally artificial and lack any real cognition or emotion - and having instant access to far more realistic ones.
We have a societal and educational crisis on our hands of people not understanding what LLMs are and are not, can and cannot do. It’s impacting economics, the job market, art, mental health, and business at all levels. If you think I’m an AI skeptic because I don’t understand them, think again.
I’m an AI skeptic because I’ve been involved in AI dev longer than a lot of you have been alive. I was obsessed with it before most people used the internet regularly. And I know what a dangerous illusion it can be. #ai #cybersecurity

@hacks4pancakes I remember learning about the Turing test in 1995.
I can still picture the room I was in when I found out that people were trying to make a machine mimic a human perfectly.

I remember thinking "that's stupid, we already have people, we need machines that are *better* than humans when doing highly specific tasks."
Gods forbid we create something that lacks the same existential experience as humans, yet can mimic them precisely.

DIY Doppelganger aliens seem like a terrible end goal.

@Taco_lad @hacks4pancakes I feel like the only reason why people want to create machines that perfectly imitate humans is for them to feel like gods.