Even giving “AI” the benefit of the doubt and saying 80% of its alternative texts are okay to use, the 20% that are not good enough take more time to correct (because you first have to determine that the text is not good enough, and then you need to write a new one). And for the rest, the “analyze/determine/OK” loop is probably about the same length as the “analyze/write accurate alternative text” loop.

https://yatil.social/@yatil/116234999890719208 (1/4)

Eric Eggert (@[email protected])

"AI is the only way to make accessibility at scale work (and please don't show me different examples; of course, with a human in the loop, it's all perfect)." 💤💤💤

yatil.social
@yatil, just out of curiosity, are you aware of data on what percentage of human-written alt text is sufficient?

@j9t @yatil

How would we know which is which? And: on larger sites the alt-text may come from another company that provides the licensed images.

@jensgro @j9t If it is generated with the content by trained individuals: 100%. If it is dropped in from somewhere else, but human written, I would think 95+%. Text alternatives are trivial in probably 98+% of the cases, and the rest are complex or special situations where you need specific domain knowledge (or good instructions, which are comparatively easy to provide). So I think if an effort is made at all, it can be basically flawless.

@yatil @j9t

Those are guesses. And how would we know if a text is generated by AI or by (sloppy) humans? Not everything bad is AI, not everything good is human.

@jensgro @j9t I would assume that “trained individuals” are not sloppy. I can only go by my experience training people in this for 20 years, sorry if that is not enough anecdotal evidence.

> Not everything bad is AI, not everything good is human.

I did not say that anywhere. (My initial post says 80% of “AI” text alternatives are probably good enough.) 🤷‍♂️