Even giving “AI” the benefit of the doubt and saying 80% of its alternative texts are okay to use, the 20% that are not good enough take more time to correct (because you first have to determine that the text is not good enough, and then you need to write a new one). And for the rest, the “analyze/determine/ok” loop is probably about the same length as the “analyze/write accurate alternative text” loop.

https://yatil.social/@yatil/116234999890719208 (1/4)

Eric Eggert (@[email protected])

"AI is the only way to make accessibility at scale work (and please don't show me different examples; of course, with a human in the loop, it's all perfect)." 💤💤💤

Because most of the time goes into analyzing what the alternative text should be. You need to do that in any case. (2/4)
And yeah, you need to do it at scale for every image, because with “AI” it is guesswork whether the alternative is good enough. For many photos, having humans create templates that you plug information into is so much quicker and more accurate than running an “AI” over the photos. Especially as so much of the cost of “AI” is externalized and covered by VC money and environmental cost. (3/4)
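The template approach described above can be sketched in a few lines. This is a hypothetical illustration, not anything from the thread: the function name and fields are invented, and a real template would be written for a specific image category by someone who knows the content.

```python
# Hypothetical sketch of a human-written alt-text template for product photos.
# A person fills in the structured fields once per image; the template
# assembles consistent, accurate alt text without any model in the loop.

def product_photo_alt(name: str, color: str, angle: str) -> str:
    """Assemble alt text for a product photo from human-curated fields."""
    return f"{name} in {color}, photographed from the {angle}."

print(product_photo_alt("Leather backpack", "dark brown", "front"))
# Leather backpack in dark brown, photographed from the front.
```

Because the fields come straight from the catalog data a human already wrote, there is nothing to review afterwards.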
Human in the loop, for me, means a human in every loop, not a human in some loops. And if you really mean “human in the loop”, then the benefit of having humans review slop instead of creating genuine content dwindles rapidly. (4/4)
@yatil plus writing things yourself is actually way more fun than being a babysitter for a computer program
@Tijn Hard agree! It’s like being creative in itself has value to oneself and society!
@yatil, just out of curiosity, are you aware of data on what percentage of human-written alt text is sufficient?

@j9t @yatil

How would we know which is which? And: on larger sites the alt-text may come from another company that provides the licensed images.

@jensgro @j9t If it is created with the content by trained individuals: 100%. If it is dropped in from somewhere else, but human-written, I would think 95+%. Text alternatives are trivial in probably 98+% of cases, and the rest are complex or special situations where you need specific domain knowledge (or good instructions, which are comparatively easy to provide). So I think if an effort is made at all, it can be basically flawless.

@yatil @j9t

Those are guesses. And how would we know whether a text was generated by AI or by (sloppy) humans? Not everything bad is AI; not everything good is human.

@jensgro @j9t I would assume that “trained individuals” are not sloppy. I can only go from my experience of training people in this for 20 years; sorry if that is not enough anecdotal evidence.

> Not everything bad is AI; not everything good is human.

I did not say that anywhere. (My initial post says 80% of “AI” text alternatives are probably good enough.) 🤷‍♂️

@yatil, I love the trained-individual sentiment, but we know alt text isn’t and cannot always be provided by trained individuals.

And from what we know from “pre-AI” times, human-written alt text doesn’t seem to be consistently good, if it’s present at all.

Hence the question whether we have more data.

@j9t By “trained individuals” I mean people who received 15–30 minutes of instruction. Everyone can do it; you don’t need to study for it. This is not a problem. Yes, ableism is carried out by humans as well as embedded in the training of models. It’s not a technical problem.

I have not done any studies on this and am unaware of any.