Even giving “AI” the benefit of the doubt and saying 80% of its alternative texts are okay to use, the 20% that are not good enough take more time to correct: you first have to determine that the text is not good enough, and then you need to write a new one. And for the rest, the “analyze/determine/OK” loop is probably about the same length as the “analyze/write accurate alternative text” loop.

https://yatil.social/@yatil/116234999890719208 (1/4)

Eric Eggert (@[email protected])

"AI is the only way to make accessibility at scale work (and please don't show me different examples; of course, with a human in the loop, it's all perfect)." 💤💤💤

Because most of the time goes into analyzing what the alternative text should be. You need to do that in any case. (2/4)
And yes, you need to do it at scale for every image, because with “AI” it is guesswork whether the alternative is good enough. For many photos, having humans create templates into which you plug information is much quicker and more accurate than running an “AI” over the photos. Especially as so much of the cost of “AI” is externalized, covered by VC money and paid for by the environment. (3/4)
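The template approach mentioned above can be sketched in a few lines. This is a minimal, hypothetical illustration, not anything from the thread: the template wording and field names (`product`, `color`, `angle`) are invented examples of structured data a shop or CMS would already hold.

```python
# Hypothetical sketch: a human writes the template once, and alt text for
# thousands of similar photos is filled in from existing structured data.
PRODUCT_PHOTO_TEMPLATE = "Photo of {product} in {color}, shown from the {angle}."

def alt_text_from_template(template: str, **fields: str) -> str:
    """Fill a human-authored alt-text template with known metadata."""
    return template.format(**fields)

print(alt_text_from_template(
    PRODUCT_PHOTO_TEMPLATE,
    product="wool sweater",
    color="forest green",
    angle="front",
))
# → Photo of wool sweater in forest green, shown from the front.
```

Because the metadata is already accurate, every generated text is accurate by construction, with no per-image “is this good enough?” review loop.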
Human in the loop, for me, means a human in every loop, not a human in some loops. And if you really mean “human in the loop”, then the benefit of having humans review slop instead of creating genuine content dwindles rapidly. (4/4)