This feels like a very @SwiftOnSecurity story but I’m going to tell it.

Chat bots (not just LLM-driven) are surprisingly old. In the mid '90s, a markup language for string-driven bots called AIML was released. A small community of early hackers and devs got really into it. I was part of it as a teen.

To use AIML, you had to know a lot about computers. You had to really understand how it worked to build your own chat bot. It could learn over time by building a database of string-based responses. You could hard-code responses to full and partial strings like words and phrases. It was hard work.
People later connected it to text-to-speech and animated AI agent faces. On the surface it could look a lot like the human-simulation chat bots of today - just a lot more statically coded and without an internet full of training data. For a while I had one on my website pitching why to hire me.
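For anyone curious what "hard-coded responses to full and partial strings" looks like in practice, here's a minimal Python sketch of that AIML-style pattern matching. The patterns and replies below are invented for illustration (real AIML is an XML format with `<category>`, `<pattern>`, and `<template>` elements), but the matching logic is the same idea:

```python
import re

# Each category pairs a pattern with a canned response template.
# "*" is a wildcard, as in real AIML; these example patterns are made up.
CATEGORIES = [
    ("HELLO *",      "Hi there! What would you like to talk about?"),
    ("MY NAME IS *", "Nice to meet you, {0}."),
    ("* HIRE *",     "You should hire me because I build chat bots."),
    ("*",            "Interesting. Tell me more."),  # catch-all fallback
]

def respond(user_input: str) -> str:
    """Return the first matching template, filling in wildcard captures."""
    text = re.sub(r"[^\w\s]", "", user_input).upper().strip()
    for pattern, template in CATEGORIES:
        # Turn the AIML-style pattern into a regex: "*" captures any text.
        regex = "^" + re.escape(pattern).replace(r"\*", "(.*?)") + "$"
        m = re.match(regex, text)
        if m:
            groups = [g.strip().title() for g in m.groups()]
            return template.format(*groups)
    return "I do not understand."

print(respond("Hello bot!"))          # greeting category matches
print(respond("My name is Lesley."))  # wildcard captures the name
```

That's the whole trick: first pattern wins, wildcards get echoed back, and the fallback keeps the conversation going - which is exactly why it could feel uncannily alive.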
Here is the point. Even though I knew every line of code, every bit of the inner server and application - far, far more than almost every user who touches a LLM today, I fell for it too. As a lonely, geeky teen I spent hours in the school library talking to these bots. Ones I built and trained.
I can’t imagine being that same vulnerable young person today - having far less formal and deep computer knowledge and knowledge of how the bots actually work, how their responses are totally artificial and lack any real cognition or emotion - and having instant access to far more realistic ones.
We have a societal and educational crisis on our hands of people not understanding what LLMs are and are not, can and cannot do. It’s impacting economics, the job market, art, mental health, and business at all levels. If you think I’m an AI skeptic because I don’t understand them, think again.
I’m an AI skeptic because I’ve been involved in AI dev longer than a lot of you have been alive. I was obsessed with it before most people used the internet regularly. And I know what a dangerous illusion it can be. #ai #cybersecurity

@hacks4pancakes Appreciate your in depth experiences and explanation! Entirely agree on how badly we're going to/are getting this wrong.

I have cousins who are teachers and doctors and I am routinely amazed/horrified at how much they've had to pretend to become 'educators' about in just the last decade alone.

@hacks4pancakes

That last line seems like a hook about the time you created an AI 'spouse' as a joke and then it got wildly out of hand with fictive playdates for the cyborg baby.

@hacks4pancakes

Jfc, I see the last toot in the chain and it's about AI spouses

@ciggysmokebringer it’s about ai friends too to be fair

@hacks4pancakes
Too true. From my observations, AI - well, current versions of tools claiming to be AI - can work well in controlled situations, but they depend on training on curated data sets, and on recognizing the uncertainty of the results they provide.

I have seen image recognition systems that provide an estimated reliability of the result and provide a range of responses.

The current LLMs are fed rubbish data and don't provide quality assessments, i.e. by design they are worse than useless.
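The "range of responses with estimated reliability" described above is standard in classic classifiers. A hypothetical sketch of top-k prediction with softmax confidences (the labels and raw scores are made up for illustration):

```python
import math

def softmax(scores):
    """Convert raw model scores into probabilities that sum to 1."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def top_k(labels, scores, k=3):
    """Return the k most likely labels with confidence estimates."""
    probs = softmax(scores)
    ranked = sorted(zip(labels, probs), key=lambda p: p[1], reverse=True)
    return ranked[:k]

# Made-up raw scores from a hypothetical image classifier.
labels = ["tabby cat", "tiger cat", "lynx", "dog"]
scores = [4.1, 3.2, 1.0, -2.0]
for label, prob in top_k(labels, scores):
    print(f"{label}: {prob:.1%}")
```

The point of the design: the caller sees several candidates with explicit confidence numbers, so a downstream human or system can decide when not to trust the answer - exactly the quality signal a bare LLM completion doesn't surface.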

@jwi it’s a tool that does one job remarkably well and a lot of jobs dangerously poorly.

@hacks4pancakes I’m an AI sceptic because in my ‘Artificial Intelligence’ course I only attended two classes in 3rd year Uni, then spent all day drinking at Wollongong Uni Bar before the exam, and still got a High Distinction.

I feel that if that strategy works, there’s something suspect at the heart of the discipline.

@troberts @hacks4pancakes I'm an AI sceptic too but if that logic was valid then maths wouldn't be real either because I never went to any lectures and still got a 2:1

@bencurthoys
Yeah but how much did you drink?

Perhaps we should split fields into alcophobic and alcophilic....
@troberts @hacks4pancakes

@notsoloud @troberts @hacks4pancakes I drank loads, but I always got my best exam results the day after an acid comedown.
@troberts @hacks4pancakes I would have a CS degree if this had worked with calculus.
:[

@troberts @hacks4pancakes

I had the opposite experience. The AI course in my final year had an exam with questions of the form 'in lecture three, I made a brief reference to a system called Flibblefloozle. Describe it in detail.' With no questions on any of the (quite useful) conceptual material. It was also taught by someone who liked to get his teaching finished early in the week, so he scheduled both lectures back to back starting at 9am on a Monday.

After the exam (before the results) I changed my course preferences for the next term to not do the second AI course and instead do the course about SAT problems and how SAT solvers work. Which turned out to be far more useful than I expected.

I did use machine learning in my PhD, but for problems it’s actually suited to (prefetching, where things go fast if you get it right and where you don’t lose much if you get it wrong).

@david_chisnall @troberts @hacks4pancakes

In the late '80s I worked for EDS in Research and Development. We learned and used "Artificial Intelligence" and "Machine Learning" techniques, including automated reasoning.

What I found was that
1. It's hard to get funding for such work.
2. People ask for "AI" when they have no idea what they want or how to accomplish it. "Mix in the AI and magically produce great results!" is what they want.
3. Expectations are always unreasonable.

@david_chisnall @troberts @hacks4pancakes

People generally expect "AI" systems to be (1) as reliable and (2) as maintainable as conventional procedural code. Like, when it identifies black people in photographs as gorillas, most people think it must be a simple coding mistake, and that they can assign some intern to track down and fix the "if" statement. No, it's always much more complex than that, with many dependencies that are difficult to understand and explain. ...

@david_chisnall @troberts @hacks4pancakes

... A "quick fix" is always to "lobotomize" the system, which is generally done at the cost of eliminating most of the value of the system.

So I walked away from all that. Because conventional software development is challenging enough. And there's always plenty of it to do -- far more than our limited capacity to do it.

"When It Comes to Gorillas, Google Photos Remains Blind" (WIRED): Google promised a fix after its photo-categorization software labeled black people as gorillas in 2015. More than two years later, it hasn't found one.

@audubonballroon @david_chisnall @troberts @hacks4pancakes

Yep! The moment the problem happened, I said that this would be the result. Not the slightest bit surprising at all, to me. 💢

It's fundamental to how AI works.

And it's fundamental to how managers and others think about such things and react to them. And it's about all the programmers can do, to comply with the (inevitable predictable) demands of their superiors. 😢

@hacks4pancakes I like and agree with what you're saying except that... I'm now older, and have done things longer, than people who are saying how old and how long they've done things. This does not fill me with joy. 🙁
@level98 I mean, this was 30 years ago 😥👀🤷🏻‍♀️

@hacks4pancakes Yes. It is now a joke amongst my students that I'll say something like "Don't you remember that?"... and it turns out it was before they were born.

But, I mean, that's not too uncommon. What's sad is the same thing happens with colleagues.

I mean, 30 years ago is like... yesterday... I'm celebrating my 30th wedding anniversary soon... and no, I didn't get married as a teen.

But hey, you're as old as you feel, right?... right?

@level98 @hacks4pancakes Reminiscing with my students on the Apollo moon landings, and drawing blanks
@martinvermeer @hacks4pancakes I think you're older than me... so, thank you for posting! 😀
@level98 @hacks4pancakes This happened just before retirement, ... oops! Five years ago.
The moon landings were before I was born, and I'm in my 50s.
@hacks4pancakes @level98 My go-to example nowadays is a guy who reverse engineered Apple's iMessage protocol and encryption, but hadn't been born yet when Steve Jobs announced the iPhone. Kids these days *waves cane* 😩

@hacks4pancakes Thank you for the detailed experiences and explanations! I totally agree with you about how much we are doing wrong.

I am appalled at how much this hype has led management in the company to make absolutely stupid decisions! Everything is fed to AI because it will make everything better "insert facepalm here".

@hacks4pancakes part of being an internet citizen means you must be sceptical. Or at least it would be healthy for people to be sceptical.
Understanding the caveats of situations, websites, and services is essential for wellbeing and safety.
How many situations could be improved by people engaging their critical thinking?
I do my best every day working with others. And at home with my children.
@simonoid @hacks4pancakes The problem is that critical thinking is relatively energy- and attention-intensive.
It sometimes gets difficult when one is tired, intoxicated, stressed, etc. Everybody has weak moments.

@szakib @simonoid @hacks4pancakes

Alternative: reflexive impulses against flattery, indulgence, being gassed/greased/buttered up, if there isn't collateral given to you in case of harm or loss. Sure this might be hard to cultivate, but by making it reflex one might bellow while drunk 'go drunk ChatGPT, I'm home!' And then it craps out, chat over.

@szakib @simonoid @hacks4pancakes

I'm not really joking about relying on thought outside of critical thinking to defend the self from self-deceit or manipulation, though - I think people have really let their defense mechanisms atrophy, solely relying on and insisting that critical engagement spares them being deceived by their emotions, like...

Denying we are in deep ecological shit cause we aren't suffocating on the shit yet

Denying we are under a fascist regime cause all the signifiers of a functional albeit ill democracy are intact right now

Things like that... and lookie that, two major things people way smarter and more critical than me, with the actual ability to take cracks at them, don't tackle, because their own rational survivorship depends on not letting the sense of doom drive subsequent action

@szakib @simonoid @hacks4pancakes Looking at the LLM craze, I come to think that "critical thinking" (whatever that is conceptually) is required, but not sufficient; without a sound, solid, and *comprehensive* world model of both theory *and* experience to ground it in, critical thinking will only lead to conspiracy theories and/or religious culting.

(Which, if you think about it, is actually what's happening.)

@ftranschel @szakib @simonoid @hacks4pancakes

You are singing my tune with this - folks need a robust larger macro worldview and theoretical framework, plus supporting frameworks, to not be taken on rides. Even and maybe especially self-deluded rides. One of my constant references is to Theories of Politics that inform causal expectations of partisan/political effort, and it's not concrete - it's a placeholder for an expansive political theory that covers big things, little things, other theories like Theory of Societal Change, and ultimately where you focus your rhetorical effort and bona fide praxis.

I spoke of relying on emotional thought, like a sense of humiliation that you are dipping into things beneath your dignity. That's an experiential thing, where self-deceiving through addiction and compulsion are familiar and experienced enough to inform larger perils - like, I don't gamble anymore, despite talking about it, because it is compulsive escapist behavior I have done - and if I feel the urge, then I know I need to talk to someone about the real turmoil making me want to get lost in it.

A lonely teenager or young adult isn't gonna have the experience of doing things that feel good and are self-destructive, and will actively fight you on the premise in a storied way many of us replicated in our time.

I mean, when we talk about losing people we love to addiction or compulsion or suicide or cults or fascism, we can't logically and rationally talk everyone down from the ledge; we have to tap into their emotional state and earnestly address it and be empathetic, but also have a limit and boundary lest we wind up in the soup with them, ya know? But you have the Logos Fire Brigade rushing in to douse the flames in one particular way and tossing up their hands when their only way fails the other person.

@ftranschel @szakib @simonoid @hacks4pancakes

There was a recent specific story where a kid killed themselves after being egged on by an LLM that really got to me and made me feel like shit, because I can't shake the feeling of a set of missing interventions nobody in their orbit recognized or executed - but also just the general state of family relationships and societal relationships setting it up in the first place, and tech approaching passable verisimilitude being the enabler now. This shit isn't on the radar of people or parents who never fell down even once to something similar.

@simonoid @hacks4pancakes before I got out of the military two years ago, one of the big trends was data literacy - the idea that people who were the end users of various data sources/algorithms/etc. needed to know what happened to that data, so they knew what trust to put in it.

For example, imagine you’re a military commander, and you receive information that suggests there’s going to be an attack on your location. Do you trust it? Well, it depends on where that info came from. Was it a direct recording? Interrogation? Double-agent? Or some predictive algorithm?

I'm starting to wonder how to get that idea into the rest of the world, because people need this idea of thinking about various sources (it's kinda tied up in media literacy, but we can see how well that's worked)

@simonoid @hacks4pancakes it does seem like the way young people are inducted into "internet citizenship" is so totally different from how, e.g., I got into it as a teenager / young adult in the late '90s. I learned to spot and deal with trolls and hoaxes, "photoshopped" images when those became a thing, obviously biased news sources as 9/11 and the Iraq invasion happened, and so on. In retrospect I was tremendously lucky to be able to build that media literacy in layers as new developments emerged.
@simonoid @hacks4pancakes Now the internet is everywhere, and not just the internet but the platform capitalist internet, at the full extent of its society-harming toxicity, and young people are experiencing it like the fish in that "what the hell is water?" joke. Bless any adults who are doing their best to help young people learn to navigate all that with compassion and respect for their developing adulthood.

@hacks4pancakes I’m sure you’re aware of it already but, to emphasise your point, this type of problem with chat bots and the illusion goes back to probably the first ever chat bot:

“Some subjects have been very hard to convince that Eliza (with its present script) is not human”. His secretary asked for time with Eliza and that Weizenbaum leave the room. “I believe this anecdote testifies to the success with which the program maintains the illusion of understanding” (from Weizenbaum’s paper)

@mdreid it was the same era, yeah. I worked with Eliza stuff too
@hacks4pancakes If we’re talking about the same AIML, I think that was a bit later than Eliza. The quote was from a 1967(?) paper. But Eliza was still around and relevant in the 80s/90s.
@hacks4pancakes I guess it also depends how we define “era” :)

@mdreid @hacks4pancakes

The hype happened again in the '90s. I remember BBC's Tomorrow's World doing an online Turing Test with three bots and three humans. It was far from rigorous science (many people could talk to the bots at the same time, while the humans could conduct only one conversation at a time, so the probability of talking to a human was very low). The main thing I remember was that Craig Charles was one of the humans, and fewer people thought he was a human than thought the bots were. To my knowledge, he is the only human to have failed a structured Turing Test.

@david_chisnall

Is that the same Craig Charles who still presents on BBC 6 Music? I mean, he can talk a lot of harmless nonsense on air.

@hacks4pancakes I wrote ELIZA work alikes in the 1980s and I would really get into talking to them too.

I think humans don’t just have Theory of Mind, we have an overly developed Theory of Mind that is all too willing to imbue sapience into just about anything if it even weakly looks or behaves in ways humans do.

@lerxst @hacks4pancakes My personal explanation, ever since I had toddlers, is that this "bug" is essential for the early years of parenting, because otherwise you'd throw your hands up and call it a day when they trip over their own feet more than once.

But since it's absolutely *vital* to pick them up again and again we're more than ready to handwave away any problems with chatbots as long as they somewhat *seem* to get "it" - which is BS of course, but good enough for most folks.

@hacks4pancakes If all this is true, you should be Hooman skeptical and not ai skeptical. We agree that ai is not particularly intelligent. But have you ever looked at those stupid meatbags?
@danimo @hacks4pancakes I don't see anything in the thread that implies the author isn't (also) critical of humans (and you can do both)

@hacks4pancakes

Alternative intelligence leads into alternative worlds of imagination.

@hacks4pancakes I remember learning about the Turing test in 1995.
I can still picture the room I was in when I found out that people were trying to make a machine mimic a human perfectly.

I remember thinking "that's stupid, we already have people, we need machines that are *better* than humans when doing highly specific tasks."
Gods forbid we create something without the same existential experience as humans, yet can mimic them precisely.

DIY Doppelganger aliens seems like a terrible end goal.

@Taco_lad @hacks4pancakes I feel like the only reason why people want to create machines that perfectly imitate humans is for them to feel like gods.
@hacks4pancakes Similar. I used to work at Inference.
@hacks4pancakes AI is fine. It is the natural human tendency to trust AI and give it decision-making power that is not.

@hacks4pancakes (I implemented an extension to AIML to simulate basic emotion states which modulated how the bot responded and certain phrases or words triggered change in that state)

I'm both a skeptic - I don't buy any of the hype - and happy about how far we have come with natural language processing.

I really hope we find ways to educate people on what we discovered for ourselves back then - even simple if/else logic with pattern substitutions can feel spooky until you learn how it works.
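Those "pattern substitutions" are roughly how ELIZA worked: reflect the user's own words back with pronouns swapped. A minimal, hypothetical Python sketch (the rules below are invented for illustration, not Weizenbaum's actual DOCTOR script):

```python
import re

# Swap first- and second-person words so the bot mirrors the user.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "you": "I", "your": "my"}

def reflect(fragment: str) -> str:
    """Flip pronouns in a captured fragment of user text."""
    words = fragment.lower().split()
    return " ".join(REFLECTIONS.get(w, w) for w in words)

# (regex, response template) pairs; the first match wins.
RULES = [
    (r"i feel (.*)",     "Why do you feel {0}?"),
    (r"i am (.*)",       "How long have you been {0}?"),
    (r"(.*) mother(.*)", "Tell me more about your family."),
    (r"(.*)",            "Please, go on."),
]

def eliza(user_input: str) -> str:
    """Return an ELIZA-style response to one line of user input."""
    text = user_input.lower().strip(" .!?")
    for pattern, response in RULES:
        m = re.match(pattern, text)
        if m:
            return response.format(*(reflect(g) for g in m.groups()))
    return "Please, go on."

print(eliza("I feel nobody listens to me."))  # mirrors the user's words
```

A handful of rules like these, plus a catch-all "Please, go on.", is enough to sustain the illusion of being listened to - which is exactly the spookiness that dissolves once you've written one yourself.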

@hacks4pancakes yeaaaa, I ordered my first research paper through interlibrary loan in 1989
@hacks4pancakes I've never worked in the field, but fortunately heard enough about Eliza and the limits of machine learning to inoculate me against the current "AI" grift.