The fediverse has a huge community of blind and partially sighted users, as well as people accessing the internet from bad or limited connections that don't load images. Alt-text helps everyone.

*Please* make an effort with it. If you're on .art especially, we expect you to be either adding alt-text or using one of the many options available to ask for help with it, which can be as simple as adding an emoji to your post.

https://mastodon.art/@Curator/109279035107793247

Mastodon•ART 🎨 Curator (@[email protected])

We have a lot of blind and partially sighted users on the fediverse. Accessibility is important here, and as such, it's good form to include image descriptions (alt text) when we add media to our posts. When someone uses a screenreader to read the timeline, and the screenreader comes to an image with no alt text, it just says 'image' (as far as I know). That's boring, unhelpful, and provides no clue as to what the image actually *is*. This differs slightly depending on how you're using (cont)

I'm glad the Android app I'm using, @Fedilab , opens a popup when the description is missing. "This image has no description, are you sure you want to send it ?"
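A pre-send check like Fedilab's could be sketched roughly as follows. This is a hypothetical illustration, not Fedilab's actual code: the function name and the attachment shape (objects with an optional `description` field) are assumptions.

```javascript
// Hypothetical sketch of a client-side guard like Fedilab's popup.
// Assumes each attachment is an object with an optional `description` field.
function attachmentsMissingDescription(attachments) {
  // Return the attachments that would trigger the
  // "This image has no description" warning before sending.
  return attachments.filter(
    (a) => !a.description || a.description.trim() === ""
  );
}
```

A client could then prompt the user whenever this returns a non-empty list, instead of silently posting.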
@Curator Also, a lot of us just don't boost posts with images and no image descriptions, no matter how much we like your art, screenshots, etc. If engagement is something you care about on Mastodon, you need image descriptions.

@Curator Hey! Thanks for the boostable reminder (:

This is possibly the wrong place to ask, but how can I reach out to un/partially-sighted fediverse users? Are there communities where I will be more likely to find them? I am very interested in developing a tool I made for sighted people into something that can help the less sighted too!

I have tried before, and haven't managed to find someone interested. I'd like to at least try reaching out to a wider audience before giving up - and I don't believe it will be helpful to develop my tool without feedback from the people who will use it.

#accessibility #blind #VisuallyImpaired #ScreenReader

@Curator By the way, this is entirely a free (and open source) tool with no product placements or advertisements. Hopefully that sets some worries aside!
@ator Tagging @bright_helpings , @Tehiverse (<- that whole instance, actually), @Mayana , @FrostPoem :)

@Curator @bright_helpings @Tehiverse @Mayana @FrostPoem Thank you, massively appreciated! Here is the link to my original thread, and I will be more than happy to explain or clarify further.

https://tech.lgbt/@ator/110007333449734905

curious robot :v_enby: :v_bi: (@[email protected])

Hey fediverse, I need your help with #accessibility. Would a tool to find relevant parts of a page be helpful to blind and partially-sighted people? If so, how should such a tool be controlled? My add-on is Mark My Search (links and description in reply) and it is currently inaccessible to the visually impaired, but I am hopeful it could be a huge help if adapted.


@Curator @bright_helpings @Tehiverse @Mayana @FrostPoem As a brief introduction, this tool highlights words you search for on a particular page. The highlighting continues when you follow links (even in a separate tab) until you deactivate it. When you search for something online, those keywords are automatically highlighted unless you disable that feature.

Many shortcuts are available. The main ones allow instantly _scrolling to each paragraph with highlighting_. It is a relatively advanced and customisable tool and has received praise for its power.

I wish to make this accessible to screenreaders and more. Here are my current ideas:

1. Allow drastically increasing contrast of highlighting. You can already customise the colours.

2. Add a popup (activated by a shortcut) which lists all of the highlights in context, and allows you to jump to them in the page.
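Idea 2 could be sketched roughly like this: given the page text and a search term, collect each match with some surrounding context, so a popup could present a jump list that a screen reader can step through. This is a hedged sketch only; the function name, data shape, and context radius are illustrative, not Mark My Search's real internals.

```javascript
// Hypothetical sketch for idea 2: build a screen-reader-friendly jump list
// of highlight matches with surrounding context. Names are illustrative.
function buildHighlightList(pageText, term, contextChars = 30) {
  const results = [];
  const lower = pageText.toLowerCase();
  const needle = term.toLowerCase();
  let index = lower.indexOf(needle);
  while (index !== -1) {
    // Clamp the context window to the bounds of the page text.
    const start = Math.max(0, index - contextChars);
    const end = Math.min(pageText.length, index + needle.length + contextChars);
    results.push({ offset: index, context: pageText.slice(start, end).trim() });
    index = lower.indexOf(needle, index + needle.length);
  }
  return results;
}
```

In an extension, each entry's `offset` could then be mapped back to the highlighted element so activating a list item scrolls to and focuses it.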

@Curator BTW, I'll add that alt text also helps sighted people, especially when done right: it helps when media fails to load due to federation issues or low bandwidth on the client side, it helps identify people and environments the reader may be unfamiliar with, and it obviously helps with the legibility of text in screenshots.

@oblomov @Curator
Other reasons for alt-text.

* To explain the significance of the image to someone who might not be familiar with the subject matter. (Or even ... To explain the joke)

* For the image maker to articulate to THEMSELVES why the image is worth sharing ('why do I like this so much'), and a little exercise in self expression and creative writing.

All in all, a thoroughly good thing (IMO).

@ACAElliott @oblomov @Curator What I'd really like to see is alt text that explains a joke in a way that doesn't dissect the joke to death.

Another thing I'd like to see is alt text that can be translated; I think this would be a pivotal feature. I often see cartoons I'd love to read, except they're in a foreign language.

If the cartoon text were at least in the alt text, and translatable, it would give cartoons a worldwide audience. And if there were some clever way for the text of the cartoon to actually *be* the alt text, i.e. real text in the speech bubbles rather than something that pops up, so it could be read out by screen readers and also translated live into whatever language (and still somehow fit in the word bubbles), that would be superb.

@ACAElliott @oblomov @Curator I've had a thought: both of the examples here (how to properly explain a joke without destroying its jokiness, and how to alt-text the speech bubbles of a cartoon) are related to the script of the thing before it becomes a joke meme or a cartoon layout.

If there were a sort of 'way' of relaying a script for a joke meme or joke image, it would also be viable as a methodology for annotating the speech bubbles of a cartoon.

Obviously it would usually be retro-derived, because the joke image already exists and was never formally scripted out in the first place; it 'just happened'. And in the case of a multi-panel cartoon, it might not have been scripted in that format initially: the scripting might have annotated the pictures only, with the text coming later, or something like that. So this script format of alt text would be kind of faked, by going back over the final product and pretending that the script was one of the initial stages of its development.

Anyway, it should be dead simple or people won't bother. If a fun way of back-deriving a 'script' could be arrived at, one that is forgiving and lossy but effective enough, it would come across as a best practice for annotating visual humour everywhere else. And if this were borne in mind from the beginning of new visual works (even in a lightweight way), it might help with comedic structure in the first place, so artists, authors, and meme-makers would want to use this 'script' way of thinking because it's good.

@Curator Also, when sighted readers provide alt text in a reply, I feel it's good practice for the original poster to include it in an edited version of the post.

When alt text is missing, it can always be added and improved later.

@ffeth So the question of forcing alt text (via a filter, a bot, etc.) arises, doesn't it?
@misc As a last resort, if the human fails miserably at being human?
@misc (Personally, I have a nudge in Mastodon's web interface; that was enough to get me into the habit.)
@ffeth More about the fact that we can't depend on the client to implement this.

@misc If 80% of cases are covered, will that be an incentive for the last 20 percent?

Personally, I don't often add alt text to photos in private.

@ffeth Actually, if it's a question of inclusion, the question is at what proportion of described images a visually impaired person stops feeling excluded. Is it better to block undescribed posts and avoid that feeling of exclusion, knowing that you won't be able to read everything anyway?
@misc I think there are several ways to describe an image. It can even be done in the body of the post rather than in the image metadata (an immediate advantage: the post body can be translated in one click).
@ffeth
Is there a best practice to make those alt text answers discoverable? I'd often like to boost and respond with an alt text, but I'm not sure if it will be discovered by the people who actually need it if the OP doesn't care to edit it into the original post. It would be really great if fedi clients had a feature to catch a certain tag or word from the responses and merge it into the post for screen readers.
@Curator
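As far as I know no client has that merge feature yet, but the reply-scanning half could be sketched like this, assuming a community convention such as an "Alt text:" prefix. The marker, function name, and reply shape are all assumptions, not an existing Mastodon API.

```javascript
// Hypothetical sketch: scan reply texts for an "Alt text:" prefix so a client
// could surface crowd-sourced descriptions next to the original image.
// The marker convention and data shape are assumptions.
const ALT_MARKER = /^alt[ -]?text\s*:\s*/i;

function extractAltTextReplies(replies) {
  // replies: array of { author, text } objects (assumed client-side shape).
  return replies
    .map(({ author, text }) => {
      const match = text.match(ALT_MARKER);
      if (!match) return null;
      return { author, description: text.slice(match[0].length).trim() };
    })
    .filter(Boolean);
}
```

A client could then offer the extracted descriptions to screen-reader users, or suggest them to the OP as an edit.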

@ge0rg Great question! Please add me to answers!

@Curator

It's just not that hard to do. At all.

@Curator Yes, yes, but also, yes! Thanks for mentioning those with poor Internet service. I'd add that very busy instances and very small instances will often have limited throughput at their end, so there are all kinds of reasons you might not see an image even if your eyes work fine (for now).

I think it's called the curb cut effect: accessibility provisions for disabled people help everyone else too, just as cutting curbs for wheelchair users also makes it easier to push a stroller.

@Curator I'm re-training myself to use alt text - an embarrassing admission from someone who used to work in the same office as a blind software engineer. I remember his anguish when, after the accessible early days of the WWW, it all went horribly wrong.
Our department used to run websites and pages through accessibility evaluation tools, until new management came along.

@Curator And there are legal as well as ethical requirements! This is the EU's.

https://en.m.wikipedia.org/wiki/EN_301_549
