For the first time in my life on Masto, I read alt text that made me wish the person hadn’t bothered. After a wordy post, they wrote in the alt text field something like, “I don’t really want to write alt text and train the AI for free, but okay, person holding a computer.” I’m going to assume they didn’t think about what a slap in the face this was to me, and possibly to anyone else who relies on alt text. Three times I started an acerbic reply and deleted it, but I’m still seething, so I’m asking anyone who is so disingenuous to just not bother; you’ll create fewer hard feelings that way.
@ChristineMalec That’s weird. Why do they think it has anything to do with training AI? Maybe I’m just ignorant, but I thought alt text was for humans.
@ClimateJenny No idea, goodness knows the body of their post had enough verbiage in it.

@ChristineMalec @ClimateJenny

The reason is that alt text gets scraped as captions for training multimodal text + image models.

But for anyone who actually listens to the blind community, it doesn't take long to learn that these models are pretty important to a lot of people for accessibility. And regardless, alt text is directly helpful. I'm happy to put my displeasure with big tech on hold for things that genuinely improve people's lives.

@hosford42 @ChristineMalec @ClimateJenny In that case, I think writing subjective alt text would be a solution for those who fear it trains AI. Something like, "I took this pretty flower near sunset, there's a bird in the background spoiling my perfect view of the sunset and the flower is..." I like writing my alt text like that, not thinking of AI but because I suck at describing images accurately and so put in my feelings, which I suspect would be worthless to AI?

@dilmandila @hosford42 @ChristineMalec @ClimateJenny

Another approach would be to write clear, descriptive alt text but use something like Nightshade to poison your image so that "AI" can't make sense of it:

https://nightshade.cs.uchicago.edu/whatis.html

@CppGuy

I still think that's working against the public good. If the models being trained on this data are used for accessibility by the blind community, and we are intentionally working to lower their accuracy, we are hurting the blind community. We should fight the AI vendors on a front where there aren't innocent bystanders who will be hit.

@dilmandila @ChristineMalec @ClimateJenny

@hosford42 @CppGuy @dilmandila @ChristineMalec @ClimateJenny I wish there were a way to poison it for some models and not for others. I'm all for alt text generators if they're trained on less data than the typical LLM and are used explicitly for the purposes of accessibility. Unfortunately, the alt text is almost definitely also being scraped by LLMs and image generators. But the way to solve that problem is absolutely not to neglect those who depend on alt text, increasing the need for generators.