The BBC, on identifying media as "AI-free":

"It is in response to fears that jobs or entire professions are being swept away in a wave of AI-powered automation."

Well, you could also mention the issue of polluting the entire information landscape! That's a pretty important reason to want to know the provenance of things!

It isn't really "sweeping away" jobs, either. Not yet, anyway. More like, it's providing business owners with an excuse to cut staff and supply inferior services.

Shoddy framing overall in my opinion, BBC. You swallowed the hype.

https://www.bbc.co.uk/news/articles/cj0d6el50ppo

#SoCalledAI #BBC #journalism #media

Race on to establish globally recognised 'AI-free' logo

The backlash to the growing use of the tech has led to an explosion in attempts to come up with an 'AI-Free' logo that could be used globally.

BBC News

About that stolen diagram with phrases like "continvouclous morging", a great little comment from WesolyKubeczek:

"I propose to adopt the word „morge”, a verb meaning „use an LLM to generate content that badly but recognizably plagiarizes some other known/famous work”.

"A noun describing such piece of slop could be „morgery”."

Morgery!

From comments at this Hacker News post:
https://news.ycombinator.com/item?id=47057829

Microsoft has now taken down the "morgery", but here is the original designer's explanation:
https://nvie.com/posts/15-years-later/

#Microsoft #morgery #SoCalledAI #language

15 years later, Microsoft morged my diagram | Hacker News

@perigee

It's complicated :-)

I don't want to be reading/hearing LLM-generated stuff myself. I don't think it's a "healthy diet" for my knowledge of the world. When I engage with a communication, I'd like to know that a human was trying to tell me the truth as they understand it.

Or if I were coding something (not that I do tons of that), I'd want to go looking for what an expert geek says is a good way to implement this thing - preferably in a context where other people can comment "I wouldn't do it like that because XYZ" or "yes I agree this is the best way".

I don't approve of how generative LLMs & related tech are being deployed nonconsensually, nor the escalation in environmental degradation which they're part of, nor the exploitation of low-waged & traumatised moderators.

When I see other people using them...

If I respected the person in the first place, I'll probably have some curiosity about why and how.

That goes along with some doubt or apprehension about whether they know what they're getting into. For example, I think it's likely that LLM-generation prices will escalate drastically at some point, and anyone who's come to actually rely on it for their workflow (vs a bit of noodling to experiment) will get a shock.

And are they okay with the wider ethical picture, are they unaware, or are they thinking like "well everyone's doing it now, so it's not gonna make any difference if I do as well"?

I've actually been thinking recently that the social dynamic around LLMs reminds me of the social dynamic around masking. The average person doesn't know the full picture, but they also don't necessarily _want_ to know.

I've noticed that people distancing from so-called AI seem to be disproportionately (not always) the ones who know "how the sausage is made", with the less-techie people being more likely to marvel at it. That's a bit like how following covid science correlates with masking.

So like if you wanna play with LLMs, or if you wanna get on a bus or train unmasked... _I_ don't like the look of it. I'm probably gonna have a bit of a doubt that you've fully understood what the long-term consequences might be of that decision, for you & others. But I know I'm not the boss of you!

And I feel like the 6 years of covid awareness have sort of installed a boundary, made partly out of resignation and exhaustion, that I need to save my energy for helping people who _want_ to know.

#LLMs #SoCalledAI #covid

@packetcat

In that post, the author describes the counted words as "thought or written", and that got me thinking: is what an LLM does "writing"?

I'm leaning towards not - not in the sense that humans "write". I think for an LLM, it's more like "shuffling and extruding".

#LLMs #SoCalledAI

“There may be moderators who escape psychological harm, but I’ve yet to see evidence of that... content moderation belongs in the category of dangerous work, comparable to any lethal industry.”

“People are hired under ambiguous labels, but only after contracts are signed and training begins do they realise what the actual work is.”

https://www.theguardian.com/global-development/2026/feb/05/in-the-end-you-feel-blank-indias-female-workers-watching-hours-of-abusive-content-to-train-ai

#work #trauma #data #moderation #SoCalledAI

‘In the end, you feel blank’: India’s female workers watching hours of abusive content to train AI

Women in rural communities describe trauma of moderating violent and pornographic content for global tech companies

The Guardian

Weak "AI filters" are dark pattern design & "web of trust" is the real solution

The worst examples are when bots can get through the “ban” just by paying a monthly fee.

So-called “AI filters”

An increasing number of websites lately are claiming to ban AI-generated content. This is a lie deeply tied to other lies.

Building on a well-known lie: that they can tell what is and isn’t generated by a chat bot, when every “detector tool” has been proven unreliable, and sometimes even we humans can only guess.

Helping slip a bigger lie past you: that today’s “AI algorithms” are “more AI” than the algorithms a few years ago. The lie that machine learning has just changed at the fundamental level, that suddenly it can truly understand. The lie that this is the cusp of AGI - Artificial General Intelligence.

Supporting future lying opportunities:

  • To pretend a person is a bot, because the authorities don’t like the person
  • To pretend a bot is a person, because the authorities like the bot
  • To pretend bots have become “intelligent” enough to outsmart everyone and break “AI filters” (yet another reframing of gullible people being tricked by liars with a shiny object)
  • Perhaps later - when bots are truly smart enough to reliably outsmart these filters - to pretend it’s nothing new, it was the bots doing it the whole time, don’t look behind the curtain at the humans who helped
  • And perhaps - with luck - to suggest you should give up on the internet, give up on organizing for a better future, give up on artistry, just give up on everything, because we have no options that work anymore

It’s also worth mentioning some of the reasons why the authorities might dislike certain people and like certain bots.

For example, they might dislike a person because the person is honest about using bot tools, when the app tests whether users are willing to lie for convenience.

For another example, they might like a bot because the bot pays the monthly fee, when the app tests whether users are willing to participate in monetizing discussion spaces.

The solution: Web of Trust

You want to show up in “verified human” feeds, but you don’t know anyone in real life that uses a web of trust app, so nobody in the network has verified you’re a human.

You ask any verified human to meet up with you for lunch. After confirming you exist, they give your account the “verified human” tag too.

They will now see your posts in their “tagged human by me” feed.

Their followers will see your posts in the “tagged human by me and others I follow” feed.

And their followers will see your posts in the “tagged human by me, others I follow, and others they follow” feed.


And so on.
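
(For the technically curious, here's a rough sketch of how those nested feeds could be computed. Everything below - the follows/tags structures, the function names - is invented for illustration; it's not the API of any real web-of-trust app.)

```python
# Hypothetical sketch: each "hop" over the follow graph widens the feed,
# mirroring "tagged human by me" -> "...and others I follow" -> and so on.

def trusted_taggers(follows: dict[str, set[str]], viewer: str, hops: int) -> set[str]:
    """Whose human-tags does the viewer trust? Their own at hop 0,
    plus those of everyone reachable through `hops` levels of follows."""
    taggers = {viewer}
    frontier = {viewer}
    for _ in range(hops):
        frontier = {f for t in frontier for f in follows.get(t, set())} - taggers
        taggers |= frontier
    return taggers

def verified_humans(tags: dict[str, set[str]], taggers: set[str]) -> set[str]:
    """Everyone any trusted tagger has met in person and tagged as human."""
    return {h for t in taggers for h in tags.get(t, set())}

# hops=0 is "tagged human by me"; each extra hop is one feed further out.
follows = {"me": {"ana"}, "ana": {"ben"}}
tags = {"me": {"pat"}, "ana": {"quinn"}, "ben": {"rui"}}
print(verified_humans(tags, trusted_taggers(follows, "me", hops=0)))  # {'pat'}
print(verified_humans(tags, trusted_taggers(follows, "me", hops=1)))  # adds 'quinn'
print(verified_humans(tags, trusted_taggers(follows, "me", hops=2)))  # adds 'rui'
```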

I’ve heard that everyone on Earth is generally within six degrees of separation of everyone else, so this could be a more robust solution than you’d think.

The tag should have a timestamp on it. You’d want to renew it, because the older it gets, the less people trust it.
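
(One hypothetical way to implement that fading trust, as a sketch - the half-life figure below is invented, not taken from any real app:)

```python
from datetime import datetime, timedelta, timezone

def tag_weight(issued: datetime, half_life: timedelta = timedelta(days=180)) -> float:
    """Hypothetical decay curve: a freshly renewed tag weighs 1.0 and loses
    half its weight every half_life, so stale tags fade instead of snapping off."""
    age = datetime.now(timezone.utc) - issued
    return 0.5 ** (age / half_life)

# A year-old, never-renewed tag keeps about a quarter of its weight:
print(tag_weight(datetime.now(timezone.utc) - timedelta(days=360)))  # ~0.25
```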

This doesn’t hit the same goalposts, of course.

If your goal is to avoid thinking, and just be told lies that sound good to you, this isn’t as good as a weak “AI filter.”

If your goal is to scroll through a feed where none of the creators used any software “smarter” than you’d want, this isn’t as good as an imaginary strong “AI filter” that doesn’t exist.

But if your goal is to survive, while others are trying to drive the planet to extinction


If your goal is to be able to tell the truth and not be drowned out by liars


If your goal is to be able to hold the liars accountable, when they do drown out honest statements


If your goal is to have at least some vague sense of “public opinion” in online discussion, that actually reflects what humans believe, not bots


Then a “human tag” web of trust is a lot better than nothing.

It won’t stop someone from copying and pasting what ChatGPT says, but it should make it harder for them to copy and paste 10 answers across 10 fake faces.

Speaking of fake faces - even though you could use this system for ID verification, you might never need to. People can choose to be anonymous, using stuff like anime profile pictures, only showing their real face to the person who verifies them, never revealing their name or other details. But anime pictures will naturally be treated differently from recognizable individuals in political discussions, which makes it harder for those accounts to game the system.

To flood a discussion with lies, racist statements, etc., the people flooding the discussion should have to take some accountability for those lies, racist statements, etc. At least if they want to show up on people’s screens and be taken seriously.

A different dark pattern design

You could say the human-tagging web of trust system is “dark pattern design” too.

This design takes advantage of human behavioral patterns, but in a completely different way.

When pathological liars encounter this system, they naturally face certain temptations: creating cascading webs of false “human tags” to confuse people and waste time, while accusing others of doing the same - wasting even more time.

And a more important temptation: echo chambering with others who use these lies the same way. Saying “ah, this person always accuses communists of using false human tags, because we know only bots are communists. I will trust this person.”

They can cluster together in a group, filtering everyone else out, calling them bots.

And, if they can’t resist these temptations, it will make them just as easy to filter out, for everyone else. Because at the end of the day, these chat bots aren’t late-gen Synths from Fallout. Take away the screen, put us face to face, and it’s very easy to discern a human from a machine. These liars get nothing to hide behind.

So you see, like strong is the opposite of weak [citation needed], the strong filter’s “dark pattern design” is quite different from the weak filter’s. Instead of preying on honesty, it preys on the predatory.

Perhaps, someday, systems like this could even change social pressures and incentives to make more people learn to be honest.

Weak "AI filters" are dark pattern design & "web of trust" is the real solution

whoeverlovesDigit (npub1wa…6u3l2) on Nostr

@GossiTheDog

Woah, that is some sickening evasion of responsibility. How dare Musk & co pretend that a bot's apology is an adequate substitute for theirs.

Edited to add: I've since seen a thing saying the quasi-apology was in response to a Twitter user's prompt - as distinct from being directly stimulated by Musk & co. So Musk _could_ still be about to apologise, as the human in charge of the bot.

#Grok #Twitter #ElonMusk #ethics #LLMs #SoCalledAI

Most chatbot servers don't have video outputs

Reminder: these chatbot data-center processors driving up the prices of graphics cards have no video outputs.

So they can’t even be sold off as used GPUs to crash the consumer GPU market when the AI bubble pops.

This is a reminder that businesses aren’t “money focused calculation machines that optimize for the maximum possible profit.” They don’t worry about every little dollar, they just print money and use it to control you.

Raising prices for you is the goal, not a byproduct of some other smarter plan.

Some people don’t need the rest of this post, and it’s very long, so I’ll put it in a comment.

https://piefed.social/c/technology/p/1596567/most-chatbot-servers-don-t-have-video-outputs

Most chatbot servers don't have video outputs

Reminder: there are no video outputs on these chatbot data center processors driving up the prices of graphics cards. So they can't even sell…


@cy

Re "existing solutions taken away", "tiny little opt-out button" etc: this is a valid critique of the stats, and at the same time there _are_ people using LLMs unforced, maybe a lot of them.

A lot of what I've seen about coercion has been in the context of coding roles, and I think that makes sense. Techie people are aware of how it works and how illusory the so-called "intelligence" is. They know "how the sausage is made". Plus, the risks of using LLMs for code are relatively obvious. So it does take coercion to make the sceptical techies go against their better judgement.

But a lot of people don't need coercing. They're like "so this is the new thing, let's try it out... ooh, this bit is going to help me".

- You can ask it for ideas and it'll give you some.

- You can ask it to summarise a meeting, and even when the summary isn't perfect, it's close enough to remind you what the meeting was about, and who's got actions.

Well, it's arguable that this cohort was manipulated or deceived or not given the full picture... but I don't think it makes sense to frame that as being _forced_. This is a different group from the "you'll be judged or punished for not using it" cohort. This group is genuinely curious, and even sometimes a bit enthusiastic about spreading the word among colleagues, "have you tried...?"

I think whether you've witnessed that experimental curiosity very much depends on the circles you move in, but it wouldn't totally shock me to discover it was quite common.

@zkat @mcc @xgranade @jplebreton @aud

#LLMs #SoCalledAI #stats