If you are overwhelmed by the task of describing images you post on the Fediverse or put on your websites, check out some videos about paintings. You don't need to be as deeply detailed as in the videos, but you can take some inspiration from them. Make the internet a better place by describing the things that are important to you and giving context.

Here is an example video: https://www.youtube.com/watch?v=hhJaONyG5SU

#Accessibility #AltText #ImageDescription #Images

This Masterpiece Tricked Everyone With Clever Innuendo (YouTube)

From time to time I see posts highlighting the importance of having good image descriptions. But something I haven't seen yet is the importance of picking good shortcodes for custom emojis. Standard emojis have their own Unicode code points, so a screen reader has a way to know what each emoji is. But for custom emojis, I can only imagine using either the shortcode or some sort of hashing function to determine what the image is.
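To illustrate the point (a rough sketch, not anything that exists in current clients; all names below are made up): the shortcode is the only human-readable metadata a screen reader can fall back on, so a good shortcode is effectively the emoji's alt text.

import re

# Hypothetical helper: turn a custom emoji shortcode like :blobcat_wave:
# into something a screen reader can announce. Assumption: the shortcode
# itself is the only reliable metadata available.
def shortcode_to_label(shortcode: str) -> str:
    name = shortcode.strip(":")
    # Split on common separators so "blobcat_wave" is read as "blobcat wave".
    words = re.split(r"[_\-]+", name)
    return "custom emoji: " + " ".join(words)

def expand_custom_emojis(text: str) -> str:
    # Replace every :shortcode: with its spoken-friendly label.
    return re.sub(r":([A-Za-z0-9_\-]+):",
                  lambda m: shortcode_to_label(m.group(0)),
                  text)

print(expand_custom_emojis("Hello :blobcat_wave:!"))
# -> "Hello custom emoji: blobcat wave!"
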
#AltText #ImageDescription

@mdanzis #ImageDescription is missing.

#plzfix by adding #AltText...

‘Alt’ text for Mastodon images — How to apply descriptions to images on Mastodon effectively.

@Tarnport I do take alt-text and image descriptions more seriously than anyone else in the Fediverse. I'm even working on a wiki about them. But it doesn't make a difference whether I actually describe my images or not, not in terms of repeats (that's what we call boosts here). At least not on my two (streams) channels which I use to post pictures. I no longer post pictures here on Hubzilla because most pictures I'd post are sensitive in some way. (streams) can make Mastodon blank out sensitive images; Hubzilla can't.

One of my (streams) channels is @jup . It specialises in 3-D virtual worlds, so that's very niche. It has 43 connections. Two are PeerTube channels. One is a podcast on Castopod. Two are groups. 13 are only inbound. Another two are my own Hubzilla channels. The rest isn't enough for me to get a significant number of repeats for any of my image posts, especially since my images probably aren't interesting for that many people to begin with.

The other one is @jup . It specialises in Fediverse memes, and as these memes are never Mastodon-centric and often not about Mastodon at all, it's very niche again. It has only 12 connections. Two are FediBuzz relays and only inbound. One is a Lemmy community, and I'm not even sure if that connection works properly; I don't post to it anyway. One is a Bonfire test account that ought to be dead now. So I'm down to 8 working outbound connections at most.

In fact, this Hubzilla channel that I'm commenting from right now must have well over 700 outbound connections, maybe over 800 by now. I can go completely out of my way describing images, like in this post. But as you can see: not a single repeat, not even a single like. I've looked through my image posts. I couldn't find a single one with even one measly repeat.

Either my reach is minimal in spite of these figures. Or the topics of my image posts are so niche and obscure that they simply aren't interesting for anyone. Or there are fewer Fediverse users who appreciate my posts and my image descriptions than there are Mastodon users who mute or block me for posting more than 500 characters at once. The latter means that a proper pair of descriptions for an in-world image costs me more reach than it grants me.

So I can't confirm that alt-text will give me more boosts.

Still, I'll continue to refuse to post any images without what I consider sufficient image descriptions and explanations. So don't worry about that. (But no, I won't go back and add image descriptions by my current standards to my old undescribed images, nor will I go back and upgrade older, outdated image descriptions that are substandard now.)

#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta
Universal Campus: The mother of all mega-regions — OpenSim's famous Universal Campus and a picture of its main building; CW: long (62,514 characters, including 1,747 characters of actual post text and 60,553 characters of image description)

@Gaëlle, patate quantique Describe it as if you were on the phone and you wanted to fully explain the image to your correspondent.
I always fully explain my images. My posts are generally about very obscure niche topics, my image posts even more so. This means I always have a whole lot to explain, sometimes on various levels (explanations of explanations of explanations of explanations).

However, I don't describe and explain my images as if I were on the phone. If I were on the phone, my correspondent could talk back. I could ask them to what extent they're familiar with the topic at hand, what they need described, what they need explained and what they don't. Then I could give them a description and explanation tailor-made just for them.

This doesn't work in the Fediverse. I don't have one correspondent, but potentially millions in the Fediverse and billions on the Web. I can't tailor one description and explanation to all of them at once. I can't even ask them all what they need and what they want, and even if I could, I'd get lots of different replies that'd contradict each other.

So what I do instead is write one gigantic info dump right away. It doesn't go into the alt-text, though. For one, explanations must never go into the alt-text. They must always go into the post text where everyone can access them. See also the article about this in my alt-text and image description wiki. Besides, Mastodon, Misskey and their forks cut off alt-texts longer than 1,500 characters, so I can't really exceed that. But as far as I know, Mastodon accepts external posts of up to 100,000 characters before it rejects them, and my own character limit is over 16.7 million. So I have much more space in the post text than in the alt-text anyway.

When I post a meme, the image gets a regular alt-text that also transcribes the relevant bits of text verbatim, and I add an explanation section to the post text that links to whatever external explanations I can find and directly explains what I don't have external explanations for. I've read that at least some Mastodon users prefer explanations right in the post over links, because linked websites aren't necessarily accessible. But my experience is that if you give them the choice between one link to KnowYourMeme and seven explanations totalling over 10,000 characters, they'll prefer the link over such a monster post.

Still, I can't always link to external explanations, so I might still have to inflate meme posts with long explanations.

Examples:


My original images are renderings from very obscure 3-D virtual worlds. When I post one of these, I describe it twice.

For one, I write a full, detailed description of the entire image. I can never know if someone is curious enough to want the whole image described. Besides, more often than not, what matters in the image in the context of the post is not one specific element in the image, but the whole image, the whole scenery. This description also includes transcripts of all text within the borders of the image as long as I can read it in-world, and it includes all explanations necessary to understand the post, the image, the image description as a whole and all elements of the image description.

The long description goes into the post text.

As it's a hard requirement to have an accurate, sufficiently detailed image description in the alt-text if your posts reach Mastodon, I distill a shorter image description for the alt-text from the long one. It has to fit into 1,500 characters together with a note about where to find the long image description. The alt-text doesn't contain any text transcripts because I can't possibly fit 20 or more individual text transcripts into well under 1,500 characters.
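
In code terms, the budget check amounts to something like this (a rough sketch with made-up names; my actual workflow is entirely manual, and the 1,500-character figure is Mastodon's alt-text limit mentioned above):

# Rough sketch: keep the short description plus the pointer note within
# Mastodon's 1,500-character alt-text limit. Names and note wording are
# made up for this example.
ALT_TEXT_LIMIT = 1500
POINTER_NOTE = " Full description and explanations in the post text."

def build_alt_text(short_description: str) -> str:
    alt_text = short_description.rstrip() + POINTER_NOTE
    if len(alt_text) > ALT_TEXT_LIMIT:
        raise ValueError(
            f"Alt-text is {len(alt_text)} characters; the short description "
            f"must stay under {ALT_TEXT_LIMIT - len(POINTER_NOTE)} characters."
        )
    return alt_text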

Examples:


#Long #LongPost #CWLong #CWLongPost #FediMeta #FediverseMeta #CWFediMeta #CWFediverseMeta #CharacterLimit #CharacterLimits #CharacterLimitMeta #CWCharacterLimitMeta #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta

Resolution: if I want to boost a toot with an image that has no image description, I will first add one in the original language as a reply, using #alt4you.

#ImageDescription

@モスケ^^ ❄️🐈🔥🐴 No. Very clearly no.

People keep thinking that AI solves the alt-text problem perfectly. Like, push one button, get a perfect alt-text for your image, send it without having to check it. Or, better yet, don't even push a button, the AI will take care of everything fully automatically.

However, at best, AI-generated alt-text is better than nothing. Oftentimes, AI-generated alt-text is literally worse than nothing.

First of all, AI does not know the context in which an image is posted. But an alt-text should always be written for a specific context, because what needs to be described at all, and at which level of detail, usually depends on that context.

This means that AI tends to leave out details that may be important while describing details that literally nobody is interested in.

AI can't take your target audience/your actual audience into consideration either. It can't write an alt-text specifically for that audience, fine-tuned for what that audience knows, what it doesn't know and what it needs and/or wants to know.

Worse yet, AI tends to hallucinate. It tends to mention stuff in an image that simply isn't there. It tends to describe elements of an image falsely. You could post a photo of a Yorkshire terrier, and the AI may think it's a cat because it can't distinguish it from a cat in that photo.

Seriously, AI may get even descriptions of simple images of very common things wrong. If you post images with very obscure, very niche content, AI fares even worse because it knows nothing about that very obscure, very niche content.

If you post a screenshot from social media, AI will not necessarily know that it has to transcribe the text in the screenshot 100% verbatim. And just pushing one button or running AI on full-auto, the thing that so many smartphone users crave so much, will not prompt it to do so.

If you want good, useful, accurate, sufficiently detailed image descriptions that match both the context of your posts and your audience, you will have to write them yourself.

Trust me. I know from personal experience. I post some of the most obscure niche stuff in the Fediverse. And I've pitted an image-describing AI against my own 100% hand-written image descriptions twice already. The AI failed miserably to even come close to my descriptions in both cases.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI

@iolaire This is my personal analysis of the AI-generated image description, quote-posted from my original comment in the thread linked in my first comment:

RE: https://hub.netzgemeinde.eu/display/451d2f06-7746-4227-a043-76a959420c29

(6/6)

#Long #LongPost #CWLong #CWLongPost #QuotePost #QuoteTweet #QuoteToot #QuoteBoost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA #AIVsHuman #HumanVsAI

@iolaire After I had written the long description, distilled the short description from it and posted the image with both, I asked an LLM for a description.

The AI of my choice was LLaVA 1.6: https://llava.hliu.cc/

The prompt was, "Describe the image in detail."

LLaVA took about half a minute to generate this image description:

The image depicts a modern architectural structure with a distinctive design. The building features a large, curved roof that appears to be made of a reflective material, possibly glass or polished metal. The roof is supported by several tall, slender columns that are evenly spaced and rise from the ground to the roof's edge. The structure has a circular emblem on the front, which includes a stylized letter 'M' and a series of concentric circles, suggesting it might be a logo or emblem of some sort.

The building is situated on a landscaped area with a well-maintained lawn and a few trees. There is a paved walkway leading up to the entrance of the building, which is not visible in the image. The sky is clear with a few scattered clouds, indicating fair weather conditions. The overall style of the image is a digital rendering or a photograph of a 3D model, as indicated by the smooth surfaces and the absence of any visible texture or imperfections that would be present in a real-world photograph. There are no visible texts or brands that provide additional context about the building's purpose or location.
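
For anyone who wants to reproduce this locally rather than through the web demo, roughly the following should work. It's only a sketch, assuming the publicly available llava-hf/llava-1.5-7b-hf checkpoint on Hugging Face and its prompt format, which is not necessarily the exact model behind llava.hliu.cc.

from transformers import pipeline
from PIL import Image

# Sketch: run a LLaVA-style model locally via the transformers
# image-to-text pipeline. The checkpoint and the file name are
# assumptions made for this example.
captioner = pipeline("image-to-text", model="llava-hf/llava-1.5-7b-hf")

image = Image.open("universal_campus.png")  # hypothetical file name
prompt = "USER: <image>\nDescribe the image in detail. ASSISTANT:"

result = captioner(image, prompt=prompt,
                   generate_kwargs={"max_new_tokens": 300})
print(result[0]["generated_text"])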

(5/6)

#Long #LongPost #CWLong #CWLongPost #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #LLaVA #AIVsHuman #HumanVsAI