If you're like me, then you were really happy to learn about Mastodon's enthusiastic support for image descriptions, and you were eager to join in.

Then you went to actually write something and realized you have no idea how to present visual info in a way that is helpful/enjoyable to those who are #VisuallyImpaired or #Blind.

I found this guide really informative: https://uxdesign.cc/how-to-write-an-image-description-2f30d3bf5546

Post-viral edit: Don't forget to give the author some love on Medium. They did the work!

#Accessibility


@ianburnette
This is a great guide. Some other ones that I have found useful:

Cooper Hewitt, Smithsonian Design Museum guide for image descriptions:
https://www.cooperhewitt.org/cooper-hewitt-guidelines-for-image-description/

Jake Archibald, Writing great alt text: Emotion matters
https://jakearchibald.com/2021/great-alt-text/

Léonie Watson, Thoughts on skin tone and text descriptions: https://tink.uk/thoughts-on-skin-tone-and-text-descriptions.md-notes-on-synthetic-speech/

Nieman Lab, “Space is for everyone”: Meet the scientists trying to put otherworldly images into words: https://www.niemanlab.org/2022/08/space-is-for-everyone-meet-the-scientists-trying-to-put-otherworldly-images-into-words/


@kellylepo @ianburnette Seems to me Mastodon should just implement an im2text plugin and automate the process.

https://github.com/OpenNMT/Im2Text

The New York Times, “The Hidden Image Descriptions Making the Internet Accessible”: the challenge of describing every image on the internet, and the people who are trying.

@kellylepo @ianburnette The article seems to have two concerns:

1) AI back in 2016 did a terrible job. This is unsurprising. I'd be interested specifically in reviews of im2text-based models.

2) Training sets have biases. This is also unsurprising; all content produced by humans has biases. We know ways to reduce bias in training sets.

The solution is definitely not to make people work harder on each post; the solution is to automate that work away.

@dragonsidedd @kellylepo

I'm not opposed to automation, and it should definitely be used to help fill in gaps, but I don't think it's going to fully substitute for human interaction. I live in the Netherlands, and I still only speak very basic Dutch. AI translation makes a lot of things easier for me, but I really can't interact that much with non-English speakers.
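As a minimal Python sketch of that "fill in the gaps" idea (the function name and the labeling convention here are my own assumptions, not anything Mastodon actually implements): prefer the human-written description, and if only a machine caption is available, flag it so readers know its provenance.

```python
def choose_alt_text(human_description, machine_caption):
    """Prefer a human-written description; fall back to a machine-generated
    caption, labeled so readers know it wasn't written by the poster.

    This is an illustrative sketch, not part of any real client."""
    if human_description and human_description.strip():
        return human_description.strip()
    if machine_caption and machine_caption.strip():
        # Label the fallback so screen-reader users can judge its reliability
        return "Automatically generated description: " + machine_caption.strip()
    return ""
```

Labeling the fallback matters because, as the thread notes, model output carries the biases and errors of its training data; readers deserve to know when that's what they're getting.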

Besides, this is a great opportunity to help build a better training dataset for alt-text generation.