@Cassandrich @Sobri | Zoe (she/her) @Scott Jenson @Phil Dennis-Jordan Also, an image doesn't always need the exact same alt-text whenever it's posted somewhere.

The alt-text must adapt to the context in which an image is posted, and it must also adapt to the place where it's posted. The same image, even within a very similar context, must have a different alt-text in the Fediverse than on commercial social media or on a static website. Lastly, and this ties in with the Fediverse requiring different alt-texts, the audience must be taken into consideration.

Alt-text in metadata can't do any of this. Neither can an LLM, unless it's explicitly prompted to do so, and even that is questionable.

Many Mastodon users dream of only having to press a button, or not even that, and some AI automagically generating a perfect alt-text for their image: perfectly accurate, with exactly the details required for the context and the intended audience as well as the expected audience, all while following every last image description and alt-text rule out there to a tee.

It's perfectly understandable. Mastodon no longer felt like child's play once they were suddenly pressured into describing each and every image they post. Worse yet, it seems like over 90% of all Mastodon users do everything on a phone with no access to a hardware keyboard whatsoever. So they have to fumble their alt-texts into an on-screen keyboard while not even being able to see the image they're describing.

I'm neither on Mastodon nor on a phone. I've got the luxury of having a desktop computer with a hardware keyboard and being able to touch-type. So I don't have a problem with writing my image descriptions myself with no help from an AI.

In fact, my own original images are all about an extreme niche topic. It's so obscure that no AI will ever be able to describe such images, much less explain them at my level of accuracy and detail. (Explanations go into the post text, by the way, and not into the alt-text, but I always have an additional image description in the post text for my original images anyway.)

I simply know things that no AI will ever know, not ChatGPT and not Claude either, at least not at the point in time when they need that knowledge. And I can see things that will always remain invisible for AIs.

You can develop better models all you want. But they'll never be able to do all that.

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI
Jupiter Rowland - [email protected]

@Woochancho @Diego Martínez (Kaeza) 🇺🇾 @🅰🅻🅸🅲🅴  (🌈🦄) Especially whenever humans have advantages over LLMs.

When I describe my own original images, I have two advantages.

One, I know much more about the contents of the image than any AI. That's because my original images always show something from extremely obscure 3-D virtual worlds. On top of that, I may add some extra insider knowledge or explain pop-cultural references in the long description in the post if it helps understand the image and its descriptions.

Two, the LLM can only look at the image with its limited resolution. That's all it has. In contrast, when I describe my images, I don't just look at the images. I look at the real deal in-world with a nearly infinite resolution.

For example, an LLM can only generate a description from a picture of a virtual building. But when I describe it, my avatar is in-world, standing right in front of the building whose picture I'm describing. I can move the avatar around, I can move the camera around, I can zoom in on anything. I can correctly identify that four-pixel blob as a strawberry cocktail, whereas the LLM doesn't even notice it's there.

I've actually done two tests using LLaVA. I fed it two images I had previously described myself, to see what would happen. The results were abysmal. LLaVA hallucinated, it misinterpreted things and so forth, not to mention that its description, even after it was prompted to write a detailed one, wasn't nearly as detailed as mine.

In one image, there's an OpenSimWorld beacon placed rather prominently in the scenery. LLaVA completely ignored it. I described what it looks like in about 1,000 characters, and then I explained what it is, what OpenSimWorld is and how it works in another 4,000 characters or so.

It's an illusion that AI will soon catch up with any of this.

Oh, by the way: How is an AI supposed to pinpoint exactly where an image was made if the image shows a place of which multiple absolutely identical copies exist? Or if the image has a neutral background that doesn't even hint at where it was made? I can do that with no problem because I remember where I've made the image.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA #AIVsHuman #HumanVsAI
Netzgemeinde/Hubzilla

🚨 ALERT: Groundbreaking Revelation! 🚨 In an article that could've been written by a sentient eggplant, we learn the shocking truth that AI shouldn't write for you. Why? Because apparently, humans are much better at producing endless lists of tech jargon that nobody will ever read.🥱
https://alexhwoods.com/dont-let-ai-write-for-you/ #AIwriting #AIhumor #TechJargon #HumanVsAI #GroundbreakingRevelation #HackerNews #ngated
Don't Let AI Write For You

🤖👩‍⚖️ A riveting tale of mistaken identity: Human vs. AI, where the protagonist fails to convince Aunt Mildred that they're not a chatbot. Spoiler alert: The aunt is still awaiting a #CAPTCHA result. 📜🍿
https://www.bbc.com/future/article/20260324-i-tried-to-prove-im-not-an-ai-deepfake #HumanVsAI #MistakenIdentity #AuntMildred #TechTales #HackerNews #ngated
I tried to prove I'm not AI. My aunt wasn't convinced

I asked experts if I'm real. Bad news. Even my aunt wasn't sure if I was a deepfake. AI is so convincing that a sitting prime minister struggled to prove he's alive. You might be next.

BBC

What emotion anchors your film?

The 2026 Sparknify Human vs. AI Film Festival categorizes films by emotion—not genre—asking creators to focus on what their work makes audiences feel.

Submissions are now open for human-made, AI-generated, and hybrid films.

📖 Why we made this choice: https://www.sparknify.com/post/20260205-emotion-became-the-center-en

#Filmmakers #AIFilmmaking #HumanVsAI #EmotionInCinema #Sparknify

Submissions are now open for the 2026 Human vs. AI Film Festival.

More than a competition, it’s a Turing Test for AI films and a creative compass for filmmakers—exploring how emotion defines storytelling in the age of intelligent machines.

Open to traditional + generative AI creators. Films are judged by emotion, not genre.

🎞️ Premiere: Sept 26, 2026 | SF
🏆 Top prize: $3,000

👉 Submit: https://www.sparknify.com/human-vs-ai-film-festival

#HumanVsAI #AIinCinema #FilmFestival #Sparknify

The paradox of AI creativity: it surpasses the average human, but can't keep up with geniuses

According to a University of Montreal study, GPT-4 and Gemini Pro surpassed the average human creativity score, but the top 10% of humans far outperformed every AI. AI reproduces average patterns well, but its limits are clear when it comes to groundbreaking, novel ideas.

https://news.hada.io/topic?id=26332

#ai #creativity #humanvsai #gpt4 #gemini

The paradox of AI creativity: it surpasses the average human, but can't keep up with geniuses

University of Montreal study (100,000 humans vs. ChatGPT, Claude, Gemini and others) results: GPT-4 and Gemini Pro surpassed the average human creativity score...

GeekNews

Large-scale comparisons between generative AI and tens of thousands of people illuminate creativity-related processes, a topic of significance for mental health professionals, including psychotherapists, clinical social workers, and other practitioners focusing on creativity, cognition, and expression. The study notes that AI systems such as GPT-4 can perform strongly on originality and idea-generation tasks, sometimes surpassing the average human. Yet the most creative individuals—particularly the top 10%—still exceed AI performance on richer creative work like poetry and storytelling, underscoring the enduring value of nuanced human expression in therapeutic and relational contexts.

Article Title: Researchers tested AI against 100,000 humans on creativity
Link to Science Daily Mind-Brain News: https://ift dot tt/gUNljaz

#AIcreativity #GPT4 #CreativityResearch #HumanVsAI #MindBrainNews

Copy and paste broken link above into your browser and replace "dot" with "." for link to work.

@モスケ^^ ❄️🐈🔥🐴 No. Very clearly no.

People keep thinking that AI solves the alt-text problem perfectly. Like, push one button, get a perfect alt-text for your image, send it without having to check it. Or, better yet, don't even push a button; the AI will take care of everything fully automatically.

However, at best, AI-generated alt-text is better than nothing. Oftentimes, AI-generated alt-text is literally worse than nothing.

First of all, AI does not know the context in which an image is posted. But an alt-text should always be written for a specific context, because the context usually determines what needs to be described at all and at which level of detail.

This means that AI tends to leave out details that may be important while describing details that literally nobody is interested in.

AI can't take your target audience/your actual audience into consideration either. It can't write an alt-text specifically for that audience, fine-tuned for what that audience knows, what it doesn't know and what it needs and/or wants to know.

Worse yet, AI tends to hallucinate. It tends to mention stuff in an image that simply isn't there. It tends to describe elements of an image incorrectly. You could post a photo of a Yorkshire terrier, and the AI may think it's a cat because it can't distinguish it from a cat in that photo.

Seriously, AI may even get descriptions of simple images of very common things wrong. If you post images with very obscure, very niche content, AI fares even worse because it knows nothing about that very obscure, very niche content.

If you post a screenshot from social media, AI will not necessarily know that it has to transcribe the text in the screenshot 100% verbatim. And just pushing one button or running the AI on full-auto, the thing that so many smartphone users crave so much, will not prompt it to do so.

If you want good, useful, accurate, sufficiently detailed image descriptions that match both the context of your posts and your audience, you will have to write them yourself.

Trust me. I know from personal experience. I post some of the most obscure niche stuff in the Fediverse. And I've pitted an image-describing AI against my own 100% hand-written image descriptions twice already. The AI failed miserably to even come close to my descriptions in both cases.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI
Netzgemeinde/Hubzilla

@iolaire This is my personal analysis of the AI-generated image description, quote-posted from my original comment in the thread linked in my first comment:

RE: https://hub.netzgemeinde.eu/display/451d2f06-7746-4227-a043-76a959420c29

(6/6)

#Long #LongPost #CWLong #CWLongPost #QuotePost #QuoteTweet #QuoteToot #QuoteBoost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA #AIVsHuman #HumanVsAI
Universal Campus: The mother of all mega-regions -