"each individual kid is now hooked into a Nonsense Machine"
Edit: I got those screenshots from imgur. It might be from Xitter, with the account deleted, or maybe Threads, with the account not visible without login? 🤷
2nd Edit: @edgeofeurope found this https://threadreaderapp.com/thread/1809325125159825649.html
#school #AI #KI #meme #misinformation #desinformation
@b_rain This is our greatest doom, an angry generation of annoyed know-it-alls who know absolutely nothing.
@Gustodon @b_rain this is... literally just a more interesting version of what we had previously, though...?
@Gustodon @b_rain So everyone gets to be a techbro high off their own supply?

@b_rain This also ties into how the way we design things influences how people perceive them.

Before ChatGPT, there was "OpenAI Playground," a paragraph-sized box where you would type words, and the GPT-3 model would respond or *continue* the prompt, highlighted in green.

Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.

Now the ChatGPT home page is formatted like a search engine. A tagline, search bar, and suggested prompts.

@boltx @b_rain This stuff is why I have a subscription to Encyclopedia Britannica.

There's too much AI slop, and it's begun to spread.

@boltx @b_rain you probably know this already, but it wasn't until a few weeks ago that I learned that, under the hood, the interface has to write "user:" before your prompt and "agent:" after it before handing it to the LLM; otherwise it would just continue writing your prompt.

@jollysea Technically the models can vary a bit in how they handle that (e.g. they could be using an XML-style format with <user> and <llm> tags), but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, the model would naturally just continue the prompt.
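
A minimal sketch of that "word prediction machine" loop, assuming the Hugging Face transformers library and the small public GPT-2 model (an illustration of the general mechanism, not ChatGPT's actual stack). With no chat formatting at all, the model just keeps extending whatever text you give it:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The old lighthouse keeper climbed the stairs and"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                          # extend the text by 20 tokens
        logits = model(input_ids).logits         # a score for every vocabulary token
        next_id = logits[0, -1].argmax()         # greedily pick the most likely next token
        input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))            # the prompt, seamlessly continued
```

(Real chatbots sample from the probability distribution rather than always taking the argmax, but the mechanism is the same.)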

@jollysea That was actually one of the most fun parts about the original interface. If you wanted it to continue some code, just paste in your code and it'll add on to it. Have a random idea for a poem? Write the first line, and it'll write a poem that continues from that starting line in a more cohesive manner.

Now any time you ask an LLM to do something, it won't just do the thing you wanted, it'll throw in a few paragraphs of extra text/pleasantries/reiteration you didn't ask for.

@boltx @jollysea @LordCaramac but also it was hard to project another mind into that interface so they had to change it for marketing reasons 🤷
@lechimp @boltx @jollysea GPT2 was a lot of fun, but for some reason, I find GPT3 and later versions rather boring.

@LordCaramac I'd assume that has something to do with how GPT2 was a lot more loosely fine-tuned than GPT3 and subsequent models.

GPT2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market acceptable language*

Like how GPT2 would usually just do the thing you asked for, whereas GPT3 and others now all start with "Certainly! Here's a..." or something similar.

@boltx GPT2 was often quite unhinged and produced text that was surreal, like a weird hallucination or the ramblings of a madman. I liked it.

@jollysea @boltx @b_rain yeah. OAI and friends are desperately trying to add as much abstraction & as many features as they can on top of LLMs to hide the fact that they're just the predictive text on your phone, but overgrown.

It's just that the training data always had a specific 'token' to delineate user input from expected output, so the LLM behaves like a chat bot.
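
To illustrate, here's a sketch of what that wrapping amounts to. The <|user|> / <|assistant|> markers are made-up stand-ins (each model family bakes its own special delimiter tokens into its training data), but the mechanism is the same everywhere: flatten the conversation into one string and end it with the assistant marker, so the model's "continuation" of the text *is* the reply.

```python
def build_prompt(turns: list[tuple[str, str]]) -> str:
    # "<|system|>", "<|user|>", "<|assistant|>" are illustrative,
    # not any real model's actual delimiter tokens.
    parts = ["<|system|>You are a helpful assistant."]
    for role, text in turns:                # e.g. ("user", "Hi there")
        parts.append(f"<|{role}|>{text}")
    parts.append("<|assistant|>")           # cue the model to answer, not to keep writing as the user
    return "\n".join(parts)

print(build_prompt([("user", "Write a haiku about rain.")]))
```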

Teach kids (and adults) to check sources. Where does ChatGPT get this info? Learning to check sources is a useful skill in many situations. Note that Wikipedia lists its sources. ChatGPT makes them up.

Also teach them that ChatGPT is the ultimate bullshitter. It's designed to always produce an answer, regardless of whether it's true or false. It has no concept of truth. It just makes stuff up based on the content it's trained on, which means it's sometimes correct, but mostly by accident. It can also be very wrong.

No matter what you use it for, always, always double check the output of these LLMs. Because it's just as likely to be bullshit as true.

which is why kids need to learn about the possibilities and limitations of genAI in school. Like we taught them to be critical of what they found on Wikipedia 20 years ago, how to search properly, and to check other sources if something seemed off etc.
@b_rain @Gustodon This hellscape is a fascist wet dream.
@markc568 @b_rain @Gustodon Goebbels would have had a stroke when thinking about all the possibilities.
@b_rain The writer contrasts using ChatGPT with using a search tool. But Google and other search engines are privileging responses from LLMs.

@b_rain I've had to explain that LLMs are nothing at all like search engines to friends who are highly educated and, overall, likely smarter and more capable of complex thought than myself.

They're just not computer science educated, and understand computers as good at numbers and large database retrievals.

Which is the exact opposite of #LLMs.

Society isn't ready for them at all.

@larsmb @b_rain They're not smarter and more capable of complex thought than you if they get this wrong.

They've failed to pass a very very low bar for intelligence.

@larsmb @b_rain Namely, failure to apply a critical lens to narratives provided by a party who stands to profit from having them believe what they're told to believe about something they lack sufficient background to understand.

@larsmb @b_rain And indeed lots of supposedly "highly educated" people in lots of specialized fields have this very basic lack of intelligence.

For example there are tons of doctors who take claims by drug companies at face value rather than applying a critical lens, looking for genuinely independent studies, or applying their own understanding (that they theoretically needed to graduate, but probably never had) of the relevant mechanisms of action.

@dalias @b_rain No. This is insulting. I don't take kindly to insults to my friends, so kindly: don't.

They're not experts in tech. They're being willfully misled about the capabilities and functions.

They all understood it when explained, but the systems are not always explained, because then they couldn't be sold.

The term for "one can't spend time to understand everything and must rely on and trust others at some point" is not "unintelligent" but "human".

@larsmb @b_rain OK I'll try to refrain from doing that.

What I'm trying to say is that the answer to "can't understand everything" is not "trust whoever has a loud marketing department" but "assume hype and extraordinary claims false by default especially when they're coming from corporate sources in the same industry that's making the hyped products" and "seek out actual domain expertise as the source of trustworthy information on topics you don't understand".

I deem these principles very basic to critical thinking/intelligence.

@dalias @larsmb @b_rain A few days ago, I started a thread complaining about leftists advocating the use of LLMs. A few people said something along the lines of them not really being leftists if they did.

After that, I saw posts about "vibe coding" from two socialists I've known for decades, both experienced software developers. One had just returned from a trip to Palestine, where he'd been a volunteer on the ground.

I loathe LLMs, but I can't just dismiss these people as fools. They're not.

@foolishowl @larsmb @b_rain You don't have to "dismiss" someone to acknowledge that they have an exploitable cognitive vulnerability. If you care about them or deem them valuable to a movement or whatever, you can try to help them see that or if they can't, at least try to get them to channel the bs in a way that's somewhat productive...
@dalias @b_rain @larsmb @foolishowl Social engineering (including propaganda and manipulation) wouldn't be an active field of research & effort if it didn't work.

Many people prefer to avert their attention rather than understand things well enough to mitigate what risk they can (unfortunately, some things work /even/ if one knows how they work and that they're being used).
@foolishowl @dalias @larsmb @b_rain vibe coding is the latest buzzword. Honestly, you still spend almost as much time fixing the small typos and bugs in the code as you spent on the prompting. Not as useful as everyone wants to make it sound.
@fabiocosta0305 @foolishowl @dalias @larsmb @b_rain No. What you described is what a sensible coder who knows what LLMs are does with access to one.
Vibe coding does not describe that.
Vibe coding is literally using the output of LLMs as is. Why yes, code produced by LLMs is buggy at best, made up nonsense at worst. And there are people deploying it to production.
@raffitz @foolishowl @dalias @larsmb @b_rain people said supporting COBOL code from 50+ year is "tech debt" (I hate this term). Let us wait 5 years on vibe coding
@fabiocosta0305 @raffitz @dalias @larsmb @b_rain We'll be completely ruined if we sit around waiting for five years.

@foolishowl @fabiocosta0305 @raffitz @larsmb @b_rain The non sequitur analogies from people who have no understanding of any of this but want to be armchair experts are so frustrating.

"The language is crusty and has difficulties you wouldn't encounter using a different language" is completely a different problem from "you are gluing together massive amounts of utterly random stolen code with no clue what it does and taking it as a matter of faith that it will safely do what you wanted it to do".

@dalias @b_rain @larsmb As an easy rule of thumb: If corposcum stand to benefit from it, it's probably lies and probably has already killed people.
@dalias @larsmb @b_rain
To outsiders, what tech has achieved in the last 3 decades is absolutely indistinguishable from magic.
If you don't have domain knowledge and it _seems_ like the same folks who caused computers, then the Internet, then smartphones to upend society are now saying "a new, artificial intelligence is here that can do all the things, embrace it or be left behind", it's very hard to separate hype from opportunity and threat.
@jaystephens @larsmb @b_rain That is absolutely not the case if you've actually tried using it rather than reading/watching propaganda. The progress in the past 2 decades has utterly stagnated. Computers are slower, clunkier, harder to use, mess up more often in more unpredictable ways, etc.
@dalias @larsmb @b_rain
Totally agree bloatware and enshittification have outpaced Moore's law, but the smartphone is less than 2 decades old, and unlike Internet old hands, late adopters don't have much memory of a better time. In the mid-2010s, in my job, I was still regularly upskilling mums returning to work who were switching to a computer from a pen and a paper book. Those people are even now only mid 50s to mid 60s.
@jaystephens @larsmb @b_rain I don't see how someone without context sees this "magical technological progress" and not a mess of attacks on their attention by stuff that never works as expected. 🤷
@dalias @larsmb @b_rain
Because they largely didn't have any expectations. To them it's just things that used to be sci-fi but are now real: instant global email instead of snail mail. Free phone calls instead of quarters in a slot. Video calls. Streaming movies. Then all of the above in your pocket, along with real-time weather forecasts and GPS navigation with real-time mapping, etc. etc.

@dalias @jaystephens @larsmb @b_rain feels a bit "funny" to me that you seem to struggle so much with understanding how people fall for the corporate LLM charade.

reading this thread, it looks like it's the same for you as it is for those who struggle to grasp what LLMs actually do and what their problems are, just on another level.

like you wrote, them not getting that is their "vulnerability", and you not being able to get them is yours.

@glowl @jaystephens @larsmb @b_rain I've written elsewhere in the thread about how it's a vulnerability. What I take exception to is the idea that it's "late adopters" who are especially vulnerable to being bedazzled by the scam. I think it's plausible that they're *more* resilient on average, by virtue of not having been wowed by the previous "big thing" either.

My impression is that it's people with a proclivity for admiring authority and for wanting to be in in-groups who are most vulnerable to techno-futurist scams like "AI". They don't understand or care about the actual technology because their interest isn't in it, but in projecting an image of being someone who's in "club tech".

There is a difference in how each of us sees tech. My dev cousin is jealous of me because my work laptop is a MacBook Pro, which I am not a fan of. I am jealous of her because she works on a ThinkPad; I am sure many of us will understand when I say my fingers miss that keyboard. That same me is not much into mechanical keyboards. I use Arch, a bleeding-edge distro, on my ThinkCentre, but everything on it is retro: my display server is X11, my terminal is XTerm, and my shell is Bash. The ThinkCentre itself is old and low-spec, but it feels faster than the MacBook because the setup is minimal here while macOS is bloated. So for me it is as if the tech is stuck at 2012, when the ThinkCentre was built. This is, however, not true for someone who appreciates the MacBook and macOS. They see magic.

CC: @jaystephens@mastodon.social @larsmb@mastodon.online @b_rain@troet.cafe
@abhijith @jaystephens @larsmb @b_rain Somehow I'm completely missing where this "magic" is. The non-insider folks I encounter in normal life are upset that their computers keep getting slower, that every few months an update moves things around and they can't find anything, that their old emails or photos seem to have disappeared randomly, that they're missing calls because WhatsApp can't update because their phone storage is full and each version ships with a forced-upgrade timebomb, etc. etc. etc. None of this feels like "magic" to them. It feels like shit.
@dalias @abhijith @larsmb @b_rain
Two things can both be true.
Just like I am amazed that to fly from Australia, where I live, to London, where my family is, now costs me 3 days' wages instead of 2 weeks' wages like it did in the 1980s, and is nearly twice as fast, but also horrified by the endless queuing, paperwork, tiny uncomfortable seats, and intrusive security theatre.
_what_ I can do is amazing, _how_ I have to do it now sucks.
@larsmb @dalias @b_rain nah dog your friends are dumb as shit sorry
@killeveryhetero @larsmb @b_rain I don't think that reply was helpful.
@killeveryhetero @larsmb @dalias @b_rain wtf why are you insulting people you don't even know

@larsmb thank you for standing up for your friends!

This "if they don’t know X" is fucking elitism.

To take up the point about doctors: those are the people who keep you alive when you need them.

I remember when my doctor saved my life by saying "go to the surgeon and get that toenail cut off *ASAP*".

It was infected and days later I’d have likely fallen prey to sepsis that could easily have killed me.

The doctor may not understand LLMs, but he knows how to keep me alive.
@dalias @b_rain

@larsmb @dalias @b_rain Recently LLMs, especially Gemini, have been getting new features that integrate web search (my understanding is that the LLM is in charge of a web crawler, collecting results whose URLs it then attempts to cite directly), which may be the cause of some confusion here. It is obviously not a search engine, but IIRC the UI will usually say something like "Searching the web".

And I know I am not speaking to the right audience here, but this is a feature that I think actually strikes a sensible symbolic-subsymbolic balance between a traditional "general, static" search and an LLM that would generate anything that sounds like an answer.

@pkal Yes, I'm aware. The problem with that functionality is that it then throws that as additional context into the same system with the same constraints, so that doesn't really overcome the fundamental limitations.

@dalias @b_rain

https://mastodon.online/@larsmb/114726700089812906

Lars Marowsky-Brée 😷 (@larsmb@mastodon.online)

¹ I know some of the "reasoning" models are enhanced to pull in additional context via searches and MCPs etc. The level of fail they still produce remains mind-blowing, because they then throw the additional context into the same statistical model with the very same limitations.

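For the curious, a hedged sketch of how such search integration typically bolts onto an LLM; web_search and llm_complete below are hypothetical stand-ins, not any vendor's real API. The point is that the retrieved snippets just become more prompt text for the very same statistical model:

```python
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    snippet: str

def web_search(query: str) -> list[Result]:
    # Hypothetical stand-in for a real search backend.
    return [Result("https://example.com", f"A snippet about {query}")]

def llm_complete(prompt: str) -> str:
    # Hypothetical stand-in for the same plain next-token predictor as before.
    return "(model output)"

def answer_with_search(question: str) -> str:
    # The retrieved snippets are simply pasted into the prompt as extra
    # context; the same model then predicts tokens over the enlarged
    # prompt, with all of its usual limitations intact.
    context = "\n".join(f"[{r.url}] {r.snippet}" for r in web_search(question))
    prompt = ("Use the sources below to answer, citing URLs.\n"
              f"Sources:\n{context}\n\n"
              f"Question: {question}\nAnswer:")
    return llm_complete(prompt)

print(answer_with_search("What is the capital of France?"))
```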

@larsmb @dalias @b_rain Agree.

Also, most experts like scientists, engineers and physicians aren't used to being actively misled. Those fields are built on trusting everyone else's expertise and assuming good intentions. It wouldn't work otherwise. LLMs are a very different beast.

That’s also why there seems to be an uptick in completely fraudulent scientific publications. It’s relatively easy to do because the reviewers don’t immediately assume fraud.

@xerge @larsmb @b_rain "Also, most experts like scientists, engineers and physicians aren't used to being actively misled."

LMAO. A good 75% or more (probably 90% now) of scientific publication is fraud (fabricated data, false citations, false authorship, plagiarism, etc.). Someone in the field who isn't paying attention enough to see that is lacking a basic skill they need to do their job.

I'm not saying you're wrong, just that ignorance of it doesn't absolve them.

@dalias @larsmb @b_rain

As someone who has worked as a research scientist in chemistry for the last 30 years, I can guarantee that the share of fraudulent publications is a lot lower than that. Probably significantly below 10% in the hard sciences. Hard numbers are difficult to find.

It happens, but when it happens it is usually falsified data, which can only be caught by replication. Carelessness and mistakes obviously also happen.

@dalias @larsmb @b_rain

Contemporary science (and engineering) is so complex that without trust nothing would work.

Some people do take advantage of that, but it remains pretty rare.

I suspect that those high numbers come out of some right-wing conspiracy mill and are part of the right-wing war on science (and on reality itself).

@xerge @larsmb @b_rain I'm counting "cited something not relevant as a favor", "cited something claiming it supports a claim it doesn't", "included a non-author on authors list for prestige", "included LLM vomit undisclosed", etc. as academic fraud. Because these things are. Falsification of data may only account for 10%, but I suspect it's much higher now. Especially combined with LLM usage.
@dalias @b_rain @larsmb One would think that corposcum being so upfront about it would've made it so much easier to notice.

It used to be that bias was indicated mainly through ranking and non-indexing (it used to be blatant the minute one searched how to download things for free; certain topics /somehow/ had zero relevant results). So the assumption was that search engines showed what they wanted the user to know, not what there was to know.
@dalias @larsmb @b_rain This!
A key competence is to know which tool to use for which task.
You don't know the purpose of a tool and use it anyway? You are surely not smart.
You know that it is the wrong tool but don't care for its limits? You are surely not smart.
This has nothing to do with some digital understanding. Why would one use a Torx screwdriver for Pozidriv screws? Why would one want to use an LLM as a retrieval machine? A smart person knows what their tools are able to do.
@ridscherli @dalias @b_rain Nobody knows that immediately when confronted with a completely new tool, especially one that behaves very differently to all other tools that looked the same before, and when they've been lied to by almost everyone (not just for-profit corporations) about the capabilities.
I think it is very important that we understand why LLMs are so often misunderstood so we can fix it; not being condescending to our friends is probably helpful in that regard.

@larsmb @ridscherli @dalias @b_rain
Replace Torx with Phillips in the example above (talking to anyone who is not an experienced contractor).
It's very easy to get PH (Phillips) and PZ (Pozidriv) cross-head screws confused.
Each type was originally designed for different torque levels.

"To prevent slippage and damaging of screws, you should only use a Phillips head screwdriver on a Phillips head screw, and you should only use a Pozidriv screwdriver on a Pozidriv screw."
https://shop4fasteners.co.uk/blog/pozidriv-vs-phillips/