Ok so… #AI has allowed me to much more easily add #alttext for images I would have been too lazy to describe before. I feel bad about this and at the same time am really encouraged for the future of #accessibility. This is a net benefit for that community, right? AI would seem to have a lot of potential in this area, no?
@tramtrist There was a recent thread where alt-text users shared what’s helpful to them. A common view seemed to be that AI alt text isn’t useful unless a human checks it—if it’s wrong, they won’t know to be wary. If it’s not checked, they may as well just run the AI themselves so they know its limits. Checked AI can be fine if it’s accurate and specific, e.g., “Superman in a red cape,” not just “a man in a red cape” as AI might produce.
@FlockOfCats This is exactly the feedback I'm interested in. Thank you very much!
@モスケ^^ ❄️🐈🔥🐴 No. Very clearly no.

People keep thinking that AI solves the alt-text problem perfectly. Like, push one button, get a perfect alt-text for your image, send it without having to check it. Or, better yet, don't even push a button, the AI will take care of everything fully automatically.

However, at best, AI-generated alt-text is better than nothing. Oftentimes, AI-generated alt-text is literally worse than nothing.

First of all, AI does not know the context in which an image is posted. But alt-text should always be written for a specific context, because the context usually determines what needs to be described at all and at which level of detail.

This means that AI tends to leave out details that may be important while describing details that literally nobody is interested in.

AI can't take your target audience or your actual audience into consideration either. It can't write alt-text specifically for that audience, fine-tuned for what that audience knows, what it doesn't know, and what it needs and/or wants to know.

Worse yet, AI tends to hallucinate. It tends to mention things in an image that simply aren't there. It tends to misdescribe elements of an image. You could post a photo of a Yorkshire terrier, and the AI may think it's a cat because it can't tell the two apart in that photo.

Seriously, AI may get descriptions of even simple images of very common things wrong. If you post images with very obscure, very niche content, AI fares even worse because it knows nothing about that content.

If you post a screenshot from social media, AI will not necessarily know that it has to transcribe the text in the screenshot 100% verbatim. And just pushing one button or running AI on full-auto, the thing that so many smartphone users crave, will not prompt it to do so.

If you want good, useful, accurate, sufficiently detailed image descriptions that match both the context of your posts and your audience, you will have to write them yourself.

Trust me. I know from personal experience. I post some of the most obscure niche content in the Fediverse, and I've pitted an image-describing AI against my own 100% hand-written image descriptions twice already. Both times, the AI failed miserably to even come close to my descriptions.

#Long #LongPost #CWLong #CWLongPost #AltText #AltTextMeta #CWAltTextMeta #ImageDescription #ImageDescriptions #ImageDescriptionMeta #CWImageDescriptionMeta #AI #AIVsHuman #HumanVsAI
Netzgemeinde/Hubzilla

@jupiter_rowland this is a very important message to me and I appreciate it. I have been using my iOS Mastodon app to do an initial pass on the image, and then I review and tailor the result to fix whatever my human mind feels is wrong, and as you said, there's often a lot wrong. I think this is an acceptable balance.
@tramtrist If you can take the time to proofread, it's better than nothing. Otherwise, I assume most people would prefer to run their own customised model (if they want to), or at least to know that the description is generated and not checked for accuracy.