Jeff Watson

@jeffwatson

Choose One and Boost Please

#polls #nerlingersjokes

Dilly-Dally
52.2%
Lollygag
47.8%
Poll ended.
Every time I look at anything Liquid Glass, this is all I’ll ever see
It’s remarkable, especially this summer, to see just how much Apple got right in the first decade of the Mac. The NeXT acquisition may have saved the company, sure, but it was the durable and thoughtful design of Mac OS that persuaded users to tolerate a decaying foundation until something better came along.
It is deeply and profoundly embarrassing to watch the head of a trillion-dollar company debase himself like this
https://www.threads.com/@aaron.rupar/post/DNB0zymJDjS
Aaron Rupar (@aaron.rupar) on Threads

Tim Cook pathetically kisses Trump's ass by presenting him with a "24 karat gold" gift

Threads

DRAGONFLIGHT (1978)
Acrylic on Masonite - 20" X 30"

For the first book in Anne McCaffrey's now legendary Dragonriders of Pern series, I wanted to create a literal visualization of the title. I selected an aerial point-of-view and tilted the horizon to simulate the dizzying sensation of flight. 1/4

#sciencefiction #scifi #scifiart #sff #illustration #annemccaffrey #pern
#dragon

It’s not too late to start masking.

You don’t have to do it alone. You won’t be the lone masker.

There’s an engaged Covid-cautious and disability community who will help you.

There are local mask blocs that provide respirators to those who can’t afford them.

Covid is still here, and masks work!

Wow. This is as close to perfection as I've seen when it comes to accurately representing expressive characters using nothing but #LEGO. The faces on the Catbus and on Totoro are spot on, rendered entirely in bricks.

Amazing work.

https://www.brothers-brick.com/2025/07/27/lego-catbus-from-my-neighbour-totoro-puts-a-massive-grin-on-our-faces/

LEGO Catbus from My Neighbour Totoro puts a massive grin on our faces! - The Brothers Brick

Joss Ivanwood (@jayfa_mocs) has been hard at work on another brick-built Studio Ghibli masterpiece - and now it's ready!

The Brothers Brick
I saw a lot of press about the “ICEBlock” app recently, but clear links for where to download it were harder to come by. Here’s a link: https://apps.apple.com/us/app/iceblock/id6741939020
ICEBlock

Stay informed about reported ICE sightings, within a 5 mile radius of your current location, in real-time while maintaining your privacy. ICEBlock is a community-driven app that allows you to share and discover location-based reports without revealing any personal data. KEY FEATURES: • Anonymi…

App Store
"each individual kid is now hooked into a Nonsense Machine"
Edit: I got those screenshots from imgur. It might be from Xitter, with the account deleted, or maybe Threads, with the account not visible without login? 🤷
2nd Edit: @edgeofeurope found this https://threadreaderapp.com/thread/1809325125159825649.html
#school #AI #KI #meme #misinformation #desinformation
@b_rain This is our greatest doom, an angry generation of annoyed know-it-alls who know absolutely nothing.
@Gustodon @b_rain this is... literally just a more interesting version of what we had previously, though...?

@b_rain This also ties into how the way we design things influences how people perceive them.

Before ChatGPT, there was "OpenAI Playground," a paragraph-sized box where you would type words, and the GPT-2 model would respond to or *continue* the prompt, highlighted in green.

Then ChatGPT came along, but it was formatted as a chat. Less an authoritative source, more a conversational tool.

Now the ChatGPT home page is formatted like a search engine. A tagline, search bar, and suggested prompts.

@boltx @b_rain you probably know that but it wasn't until a few weeks ago that I learned that under the hood, they have to write "user:" before your prompt and "agent:" after it, before the interface hands it to the LLM, otherwise it would just continue writing your prompt.

@jollysea Technically the models can vary a bit in how they handle that (e.g. they could be using an XML format with <user> and <llm> for example) but yeah, that's the structure essentially all conversational LLMs have to follow.

In the end, LLMs are just word prediction machines. They predict the most likely next word based on the prior context, and that's it. If nothing delineated the original prompt from the LLM's response, the model would naturally just continue the prompt.
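
A minimal sketch in Python of that delineation (the "user:"/"assistant:" role markers below are illustrative assumptions; real systems use model-specific special tokens, but the principle is the same):

# Toy illustration of how a chat UI wraps your text before the model sees it.
# The role markers here are assumptions for illustration, not any vendor's
# actual format.

def build_prompt(history: list[tuple[str, str]], user_message: str) -> str:
    """Flatten a conversation into one long string for next-token prediction."""
    lines = [f"{role}: {text}" for role, text in history]
    lines.append(f"user: {user_message}")
    # The trailing marker is what makes the model "answer" instead of simply
    # continuing your sentence: the most likely next tokens are now whatever
    # usually follows the assistant marker in the training data.
    lines.append("assistant:")
    return "\n".join(lines)

print(build_prompt([], "Write the first line of a poem about rain."))
# user: Write the first line of a poem about rain.
# assistant: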

@jollysea That was actually one of the most fun parts about the original interface. If you wanted it to continue some code, just paste in your code and it'll add on to it. Have a random idea for a poem? Write the first line, and it'll write a poem that continues from that starting line in a more cohesive manner.

Now any time you ask an LLM to do something, it won't just do the thing you wanted, it'll throw in a few paragraphs of extra text/pleasantries/reiteration you didn't ask for.

@boltx @jollysea @LordCaramac but also it was hard to project another mind into that interface so they had to change it for marketing reasons 🤷
@lechimp @boltx @jollysea GPT2 was a lot of fun, but for some reason, I find GPT3 and later versions rather boring.

@LordCaramac I'd assume that has something to do with how GPT2 was a lot more loosely fine-tuned than GPT3 and subsequent models.

GPT2 was more of an attempt at simply mimicking text, rather than mimicking text *in an explicitly conversational, upbeat, helpful tone designed to produce mass-market acceptable language*

Like how GPT2 would usually just do the thing you asked for, whereas GPT3 and others now all start with "Certainly! Here's a..." or something similar.

@boltx GPT2 was often quite unhinged and produced text that was quite surreal and like a weird hallucination or the ramblings of a madman. I liked it.

@jollysea @boltx @b_rain yeah. OAI and friends are desperately trying to add as much abstraction & as many features on top of LLMs to hide the fact they're just the predictive text on your phone but overgrown.

it's just that the training data always had a specific 'token' to delineate user input and expected output, so the LLM behaves like a chat bot

Teach kids (and adults) to check sources. Where does ChatGPT get this info? Learning to check sources is a useful skill in many situations. Note that Wikipedia lists its sources. ChatGPT makes them up.

Also teach them that ChatGPT is the ultimate bullshitter. It's designed to always produce an answer, regardless of whether it's true or false. It has no concept of truth. It just makes stuff up based on the content it's trained on, which means it's sometimes correct, but mostly by accident. It can also be very wrong.

No matter what you use it for, always, always double check the output of these LLMs. Because it's just as likely to be bullshit as true.

@mcv The biggest problem, in my view: most of the time the answers at least work. If you ask for something that is reasonably well covered by the training data, you will get valid answers.
But how do you know when it’s wrong? You don’t.
Checking sources is a good sentiment, but given that these things churn out vast amounts of text, you cannot really check it all; doing so defeats the purpose of those AIs.
So then: don’t use ChatGPT!
But there’s no escape; you will just get another AI, from another vendor.

We’re toast.

which is why kids need to learn about the possibilities and limitations of genAI in school. Like we taught them to be critical of what they found on Wikipedia 20 years ago, how to search properly, and to check other sources if something seemed off etc.
@cmyrland I believe Wikipedia is very different. You know it is mostly correct most of the time, because it is constantly maintained. Stay in the middle of the road, check sources from time to time, especially if you find something surprising.
Problem is, you can’t really check sources for AI, nor whether there are other, disagreeing sources. And you can’t see how many authors worked on it; the whole process by which the information was produced is opaque.
You just hope it’s correct.
Well yes, Wikipedia and LLMs are two very different beasts, and I don't mean we need to copy 1:1 what we did with Wikipedia 20 years ago. We need to teach kids what LLMs are good at, and what they suck at. If people continue thinking LLMs can replace search engines and online encyclopedias, we're gonna be in a lot of trouble very soon.
@cmyrland Teaching people the strengths and weaknesses of a tech is important. It would be nice to live in a world where this really works. But I’m afraid the incentives in this world are counter-productive.
The reason why every company is cramming AI into their products is neither that this would improve the product, nor that they believe it would. It’s the certainty that a company would cut itself off from future funding if it didn’t cram AI in.
I have little hope for education overcoming this.
@b_rain The writer contrasts using ChatGPT with using a search tool. But Google and other search engines are privileging responses from LLMs.

@b_rain I've had to explain that LLMs are nothing at all like search engines to friends who are highly educated and, overall, likely smarter and more capable of complex thought than myself.

They're just not computer science educated, and understand computers as good at numbers and large database retrievals.

Which is the exact opposite of #LLMs.

Society isn't ready for them at all.

@larsmb @b_rain They're not smarter and more capable of complex thought than you if they get this wrong.

They've failed to pass a very very low bar for intelligence.

@larsmb @b_rain Namely, failure to apply a critical lens to narratives provided by a party who stands to profit from having them believe what they're told to believe about something they lack sufficient background to understand.

@larsmb @b_rain And indeed lots of supposedly "highly educated" people in lots of specialized fields have this very basic lack of intelligence.

For example there are tons of doctors who take claims by drug companies at face value rather than applying a critical lens, looking for genuinely independent studies, or applying their own understanding (that they theoretically needed to graduate, but probably never had) of the relevant mechanisms of action.

@dalias @b_rain No. This is insulting. I don't take kindly to insults to my friends, so kindly: don't.

They're not experts in tech. They're being willfully misled about the capabilities and functions.

They all understood it when explained, but the systems are not always explained, because then they couldn't be sold.

The term for "one can't spend time to understand everything and must rely on and trust others at some point" is not "unintelligent" but "human".

@larsmb @b_rain OK I'll try to refrain from doing that.

What I'm trying to say is that the answer to "can't understand everything" is not "trust whoever has a loud marketing department" but "assume hype and extraordinary claims false by default especially when they're coming from corporate sources in the same industry that's making the hyped products" and "seek out actual domain expertise as the source of trustworthy information on topics you don't understand".

I deem these principles very basic to critical thinking/intelligence.

@dalias @larsmb @b_rain A few days ago, I started a thread complaining about leftists advocating the use of LLMs. A few people said something along the lines of them not really being leftists if they did.

After that, I saw posts about "vibe coding" from two socialists I've known for decades, both experienced software developers. One had just returned from a trip to Palestine, where he'd been a volunteer on the ground.

I loathe LLMs, but I can't just dismiss these people as fools. They're not.

@foolishowl @larsmb @b_rain You don't have to "dismiss" someone to acknowledge that they have an exploitable cognitive vulnerability. If you care about them or deem them valuable to a movement or whatever, you can try to help them see that or if they can't, at least try to get them to channel the bs in a way that's somewhat productive...
@dalias @b_rain @larsmb @foolishowl Social engineering (including propaganda and manipulation) wouldn't be an active field of research & effort if it didn't work.

Many people prefer to avert their attention rather than understand to mitigate what risk they can (unfortunately, some things work /even/ if one knows how they work and that they're being used).
@foolishowl @dalias @larsmb @b_rain Vibe coding is the latest buzzword. Honestly, you still spend almost as much time fixing the small typos and bugs in the code as you spent on the prompting. Not as useful as everyone wants to make it sound.
@fabiocosta0305 @foolishowl @dalias @larsmb @b_rain No. What you described is what a sensible coder who knows what LLMs are does with access to one.
Vibe coding does not describe that.
Vibe coding is literally using the output of LLMs as is. Why yes, code produced by LLMs is buggy at best, made up nonsense at worst. And there are people deploying it to production.
@raffitz @foolishowl @dalias @larsmb @b_rain People said supporting COBOL code from 50+ years ago is "tech debt" (I hate this term). Let's wait 5 years and see how vibe coding looks.
@fabiocosta0305 @raffitz @dalias @larsmb @b_rain We'll be completely ruined if we sit around waiting for five years.

@foolishowl @fabiocosta0305 @raffitz @larsmb @b_rain The non sequitur analogies from people who have no understanding of any of this but want to be armchair experts are so frustrating.

"The language is crusty and has difficulties you wouldn't encounter using a different language" is a completely different problem from "you are gluing together massive amounts of utterly random stolen code with no clue what it does and taking it as a matter of faith that it will safely do what you wanted it to do".

@dalias @b_rain @larsmb As an easy rule of thumb: If corposcum stand to benefit from it, it's probably lies and probably has already killed people.
@dalias @larsmb @b_rain
To outsiders, what tech has achieved in the last 3 decades is absolutely indistinguishable from magic.
If you don't have domain knowledge and it _seems_ like the same folks who caused computers, then the Internet, then smartphones to upend society are now saying "a new, artificial intelligence is here that can do all the things, embrace it or be left behind", it's very hard to separate hype from opportunity and threat.
@jaystephens @larsmb @b_rain That is absolutely not the case if you've actually tried using it rather than reading/watching propaganda. The progress in the past 2 decades has utterly stagnated. Computers are slower, clunkier, harder to use, mess up more often in more unpredictable ways, etc.
@dalias @larsmb @b_rain
Totally agree bloatware and enshittification have outpaced Moore's law, but the smartphone is less than 2 decades old, and unlike Internet old hands, late adopters don't have much memory of a better time. In the mid 2010s in my job I was still regularly upskilling mums returning to work who were switching to a computer from a pen and a paper book. Those people are even now only mid 50s to mid 60s.
@jaystephens @larsmb @b_rain I don't see how someone without context sees this "magical technological progress" and not a mess of attacks on their attention by stuff that never works as expected. 🤷
@dalias @larsmb @b_rain
Because they largely didn't have any expectations. To them it's just things that used to be scifi but are now real - instant global email instead of snail mail. Free phone calls instead of quarters in a slot. Video calls. Streaming movies. Then all of the above in your pocket, along with real-time weather forecasts and GPS navigation with real-time mapping, etc etc.

@dalias @jaystephens @larsmb @b_rain Feels a bit "funny" to me that you seem to struggle so much with understanding how people fall for the corporate LLM charade.

Reading this thread, it looks like it's the same for you as it is for those who struggle to grasp what LLMs actually do and what their problems are, just on another level.

Like you wrote, their not getting that is their "vulnerability", and your not being able to get them is yours.

@glowl @jaystephens @larsmb @b_rain I've written elsewhere in the thread about how it's a vulnerability. What I take exception to is the idea that it's "late adopters" who are especially vulnerable to being bedazzled by the scam. I think it's plausible that they're *more* resilient on average, by virtue of not having been wowed by the previous "big thing" either.

My impression is that it's people with a proclivity for admiring authority and for wanting to be in in-groups who are most vulnerable to techno-futurist scams like "AI". They don't understand or care about the actual technology because their interest isn't in it, but in projecting an image of being someone who's in "club tech".

@larsmb @dalias @b_rain nah dog your friends are dumb as shit sorry
@killeveryhetero @larsmb @b_rain I don't think that reply was helpful.
@killeveryhetero @larsmb @dalias @b_rain wtf why are you insulting people you don't even know

@larsmb thank you for standing up for your friends!

This "if they don’t know X" is fucking elitism.

To take up the point about doctors: those are the people who keep you alive when you need them.

I remember when my doctor saved my life by saying "go to the surgeon and get that toenail cut off *ASAP*".

It was infected and days later I’d have likely fallen prey to sepsis that could easily have killed me.

The doctor may not understand LLMs, but he knows how to keep me alive.
@dalias @b_rain

@larsmb @dalias @b_rain Recently LLMs, especially Gemini, have been getting new features that integrate web search (my understanding is that the LLM is in charge of a web crawler, collecting results whose URLs it then attempts to cite directly), which may be the cause of some confusion here. It is obviously not a search engine, but IIRC the UI will usually say something like "Searching the web".

And I know I am not speaking to the right audience here, but this is a feature that I think actually strikes a sensible symbolic-subsymbolic balance between a traditional "general, static" search and an LLM that would just generate anything that sounds like an answer.

@pkal Yes, I'm aware. The problem with that functionality is that it then throws that as additional context into the same system with the same constraints, so that doesn't really overcome the fundamental limitations.
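
A rough Python sketch of that point (the search and generate functions below are hypothetical placeholders, not any vendor's actual API): the retrieved snippets simply become more tokens in the same context window, fed to the same statistical next-token predictor.

# Hypothetical sketch: "search-enhanced" answering is still just prompt
# construction; nothing verifies the answer against the retrieved snippets.

def answer_with_search(question: str, search, generate) -> str:
    snippets = search(question)  # placeholder: top-N result excerpts
    context = "\n\n".join(snippets)
    prompt = (
        "Use the following web results to answer.\n\n"
        f"{context}\n\n"
        f"question: {question}\n"
        "answer:"
    )
    # Same model, same limitations: the extra context is subject to the
    # very same statistical generation as everything else in the prompt.
    return generate(prompt)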

@dalias @b_rain

https://mastodon.online/@larsmb/114726700089812906

Lars Marowsky-Brée 😷 (@larsmb@mastodon.online)

¹ I know some of the "reasoning" models are enhanced to pull in additional context via searches and MCPs etc. The level of fail they still produce remains mind-blowing, because they then throw the additional context into the same statistical model with the very same limitations.

Mastodon

@larsmb @dalias @b_rain Agree.

Also, most experts like scientists, engineers and physicians aren’t used to being actively misled. Those fields are built on trusting everyone else’s expertise and assuming good intentions. It wouldn’t work otherwise. LLMs are a very different beast.

That’s also why there seems to be an uptick in completely fraudulent scientific publications. It’s relatively easy to do because the reviewers don’t immediately assume fraud.

@xerge @larsmb @b_rain "Also, most experts like scientists, engineers and physicians aren’t used to being actively misled."

LMAO. A good 75% or more (probably 90% now) of scientific publication is fraud (fabricated data, false citations, false authorship, plagiarism, etc.). Someone in the field who isn't paying attention enough to see that is lacking a basic skill they need to do their job.

I'm not saying you're wrong, just that ignorance of it doesn't absolve them.

@dalias @larsmb @b_rain

As someone who has worked as a research scientist in chemistry for the last 30 years, I can guarantee that the amount of fraudulent publications is a lot lower than that. Probably significantly below 10% in the hard sciences. Hard numbers are difficult to find.

It happens, but when it happens it is usually falsified data, which can only be caught by replication. Carelessness and mistakes obviously also happen.

@dalias @larsmb @b_rain

Contemporary science (and engineering) is so complex that without trust nothing would work.

Some people do take advantage of that, but it remains pretty rare.

I suspect that those high numbers come out of some right-wing conspiracy mill and are part of the right-wing war on science (and on reality itself).

@xerge @larsmb @b_rain I'm counting "cited something not relevant as a favor", "cited something claiming it supports a claim it doesn't", "included a non-author on authors list for prestige", "included LLM vomit undisclosed", etc. as academic fraud. Because these things are. Falsification of data may only account for 10%, but I suspect it's much higher now. Especially combined with LLM usage.
@dalias @b_rain @larsmb One would think that corposcum being so upfront about it would've made it so much easier to notice.

It used to be that bias was indicated mainly through ranking and non-indexing (it used to be blatant the minute one searched how to download things for free; certain topics /somehow/ had zero relevant results). So the assumption was that search engines showed what they wanted the user to know, not what there was to know.
@dalias @larsmb @b_rain This!
A key competence is to know which tool to use for which task.
You don't know the purpose of a tool and use it anyway? You are surely not smart.
You know that it is the wrong tool but don't care about its limits? You are surely not smart.
This has nothing to do with some digital understanding. Why would one use a Torx screwdriver on Pozidriv screws? Why would one want to use an LLM as a retrieval machine? A smart person knows what their tools are able to do.
@ridscherli @dalias @b_rain Nobody knows that immediately when confronted with a completely new tool, especially one that behaves very differently to all other tools that looked the same before, and when they've been lied to by almost everyone (not just for-profit corporations) about the capabilities.
I think it is very important that we understand why LLMs are so often misunderstood so we can fix it; not being condescending to our friends is probably helpful in that regard.

@larsmb @ridscherli @dalias @b_rain
Replace Torx with Phillips in the example above (talking to anyone who is not an experienced contractor).
It's very easy to get PH (Phillips) and PZ (Pozidriv) cross-head screws confused.
Each type was originally designed for different torque levels.

"To prevent slippage and damaging of screws, you should only use a Phillips head screwdriver on a Phillips head screw, and you should only use a Pozidriv screwdriver on a Pozidriv screw."
https://shop4fasteners.co.uk/blog/pozidriv-vs-phillips/

@dec23k @larsmb @dalias @b_rain
If
1. you have a very big PZ screw,
2. a small manual Torx screwdriver which fits exactly into the center,
3. you do not need much force on the screw,
4. and you are too lazy to go to the basement for some real tools,
this might work. Tested it for you :-)

And in my opinion this is a very good example of "knowing the limits of the tools". I know that it's not perfect and accept the limited results as they are, without expecting more than is possible or blaming others.

@ridscherli @larsmb @dalias @b_rain
All of the people I know who learned the PZ/PH difference the hard way (by stripping out a screw, which then had to be extracted) just grabbed the nearest size/type of cross-head bit and used it with an electric screwdriver. Or an impact driver.