Poll only for #Blind/LowVision users who rely on #AltText. Is AltText generated with an LLM actually “better than nothing” as some argue? Please comment if you’re Blind or Low Vision, and please boost to get a good sample.
Yes
35.8%
No
38.3%
Something else (explain in reply)
25.9%
Poll ended.
@RachelThornSub
Something else is, I'd like to know the results of this to guide my actions.
@Steveg58 I see. I hadn't thought about that possibility. I should have included that option for sighted users.
@RachelThornSub @Steveg58 On aus.social (perhaps other instances as well), "show results" is already an option.
@skribe @RachelThornSub
I'm on Aus.social and no, it is not an option. Perhaps it is a feature of your Mastodon app?
@Steveg58 @RachelThornSub I can see it on the web version too.
@skribe
Not for me. No idea why you're seeing it.
@Steveg58 @skribe @RachelThornSub it works here. I'm on famichiki.jp and using moshidon on android.
@Haikyoneko @skribe
Look, people. I'm not interested in whether it works on your server. That is totally irrelevant. It doesn't work for me and it does work for skribe, and we are on the same server. So it is something in settings, or perhaps the browser, or something environmental like that.
@Steveg58 @skribe wow. Rude much?
@Haikyoneko @skribe
No, being harassed by people.

@Steveg58 @Haikyoneko yep, there's no reason for any harassment. Sorry you've been exposed to that.

The only setting I can think it might be is the advanced web interface. Otherwise, I have no clue either.

@Steveg58 @RachelThornSub Likewise. If there's a guide on how to write alt text in a way that's useful to those who need it, I'd love to read it! My efforts are inherently flawed as a sighted person.
@Steveg58 @RachelThornSub do not use the confabulating theft machine.
@RachelThornSub Absolutely. It's utter madness that some suggest that it isn't. Of course, being generative AI, you can't rely on it, and human always equals better in my book, but certainly, I'd rather have LLM output over nothing.
@bscross32 @RachelThornSub but nothing is a signal to others to jump in and provide alt text in a reply, which the OP can then take over in an edit of their original post
@RachelThornSub So, I would say it depends. Sometimes, I just want to know what the picture is, not all the little details within it. That's where human intervention comes in. Then, if I have a picture I just took, I'd like that detailed information, so that's where LLM comes in, because they are more patient than humans. :)
@[email protected] I am low vision and I refuse to accept the false zero sum game being set up whereby I have to screw other people out of affordable electricity, fresh water, or a good paying job that doesn't cause PTSD in order to have alt text. The cruelty of this technology makes it unusable, no matter how inconvenienced I end up feeling.
@RachelThornSub alt text that reads letters off a screenshot is better than nothing.
@RachelThornSub not blind but often... not cognitively at peak human performance... and i appreciate when the alt text lets me figure out what the poster wants me to notice in an image

LLM-generated text does the exact opposite of help

@RachelThornSub Yes, as long as people check it before they post it to make sure it's accurate.

And if they don't, and if it's wrong, it's unlikely that I as a blind person would know, scarily.

@Fragglemuppet That's the thing. I've seen clearly wrong AltText, which suggests the person who posted it didn't even look at it. I think anyone who would check LLM-produced AltText would probably find it quicker and easier to just write it themself.
@Fragglemuppet @RachelThornSub That presumes the ability to write a description, and to write it faster from scratch than by editing an LLM one. Nope, not everybody has the same ability or capacities.
@RachelThornSub
There were many times when I bailed on writing an alttext when posting something quick that was "public but intended for one person anyway". Reducing the barrier to do something definitely matters to me - I don't find it quicker to write from scratch.
@Fragglemuppet

@RachelThornSub

As a sighted photographer, I'm not into writing prose descriptions of my photos, because I post them for the photo itself, so I always wrote simple AltTxt. Then I found a utility that writes better AltTxt than I would or probably could, and found out many think that is wrong too. So I might just go back to writing "photo of a flower".

This is not meant to offend anyone just indicating my frustration on not knowing what I should do.

Not even sure I should be responding as the poll itself (which I didn't respond to) was not intended for sighted AltTxt users.

@the5thColumnist @RachelThornSub My suggestion would be to keep it simple. If the reason you posted the photo was because it was a pretty flower, well...that's fine for the alt-text. No matter how many words you use, you might not be able to communicate the exact feeling of beauty you experienced. If you could, you'd be a writer, not a photographer. Ask yourself why you posted, and what you want someone to take away from it. If you want them to notice the colour, or the size, or whatever, those are what goes in the alt text.

@fastfinge @the5thColumnist @RachelThornSub Maybe we should then include in the alt text, that it was ai generated? e.g.

[fromash.ai] Here goes the messy one-quintillion-parameter LLM's attempt to describe an image showcasing the sense of life.

@pvd1313 @the5thColumnist @RachelThornSub This helps a lot, yes. Though if I know you just AI generated it, I'm probably not even going to keep reading. My AI is almost certainly better than yours, because I use it constantly and have customized the settings to get it to be as accurate as these things are capable of being.
@pvd1313 @the5thColumnist @RachelThornSub Depends. I like to start with deepseek-ocr if I have any reason to suspect the image is text. If it is, I can stop there. Otherwise, I move up to something like microsoft/phi-4-multimodal-instruct. If I still care and didn't get enough, llama-3.2-90b-vision-instruct will do the trick for most things. Only if it's charts and graphs that I care about do I need to use either the Google or OpenAI models. If it's pornographic, I have to use Grok, because xAI is completely and utterly unhinged and won't refuse anything no matter what. I use everything either locally where possible, or via the openrouter.ai API. That way it's more private, and I'm only paying for what I use. I usually use the tool: github.com/SigmaNight/basiliskLLM

It supports Ollama, OpenRouter, and any OpenAI-compatible endpoint, and integrates perfectly with the NVDA screen reader.
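[Editor's note: the escalation workflow described above — try a cheap OCR-style model first, then step up to larger vision models — can be sketched against OpenRouter's OpenAI-compatible chat-completions endpoint. This is a minimal illustration, not basiliskLLM's implementation; the model slugs, the prompt, and the ladder ordering are assumptions taken from the post, and only the request format follows the standard OpenAI-compatible API.]

```python
import base64
import json
import urllib.request

# Assumed escalation ladder, mirroring the workflow in the post above.
# These model slugs are illustrative guesses at OpenRouter naming,
# not verified identifiers.
LADDER = [
    "deepseek/deepseek-ocr",                     # cheap first pass for text-heavy images
    "microsoft/phi-4-multimodal-instruct",       # general-purpose description
    "meta-llama/llama-3.2-90b-vision-instruct",  # more detail if still unclear
]


def next_model(attempt: int):
    """Return the model to try on this attempt, or None when the ladder is exhausted."""
    return LADDER[attempt] if attempt < len(LADDER) else None


def describe_image(path: str, model: str, api_key: str) -> str:
    """One chat-completions call asking the given vision model for alt text."""
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    body = json.dumps({
        "model": model,
        "messages": [{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Write concise, accurate alt text for this image."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/jpeg;base64,{b64}"}},
            ],
        }],
    }).encode()
    req = urllib.request.Request(
        "https://openrouter.ai/api/v1/chat/completions",
        data=body,
        headers={"Authorization": f"Bearer {api_key}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

In use, you would loop `attempt` from 0, call `describe_image` with `next_model(attempt)`, and stop as soon as the description looks good enough — exactly the "stop there / move up" pattern the post describes.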
@fastfinge @the5thColumnist @RachelThornSub Thank you for sharing. I have an 80-year-old grandma and am looking for ways to improve her tech skills. So far, she can only use phone calls, phone contacts, and Telegram (read only).
@pvd1313 @the5thColumnist @RachelThornSub What kind of phone does she have?
@fastfinge @the5thColumnist @RachelThornSub A Nokia smartphone (I don't remember the actual model, and she lives separately) with the shipped OS. I tried the TalkBack assistant, but it is a total mess. I even struggled to disable it after enabling it (spent like 10 minutes).
@pvd1313 @the5thColumnist @RachelThornSub TalkBack is what fully blind folks use, and it works well. But it needs training from a specialist; nobody can just learn it completely by themselves. However, dictation on the Nokias should work for making calls and answering messages. I really don't know how accessible Telegram is with dictation or TalkBack these days, though. Unfortunately I use iOS, not Android. @dhamlinmusic do you know anything?
@fastfinge @pvd1313 @the5thColumnist @RachelThornSub Never used Telegram, and don't use dictation.
@dhamlinmusic @fastfinge @the5thColumnist @RachelThornSub Her relatives use Telegram to send her pictures and info. She also reads the news there.

@fastfinge @the5thColumnist @RachelThornSub Thanks for taking the time to write a very kind answer.

The poll is not for me, I'm sighted, but it's about a topic I find challenging. As an AuDHD person, I find it hard to describe pictures and tell stories (a difficulty to the point that I avoid taking language exams).

I don't post many pictures but I usually find it easier to share a picture to show something I want people to focus on rather than explaining it with words.

But since I find it hard to write the alt text, and I wanted to help make this community accessible, I ended up not publishing many things I would have published if I hadn't felt the pressure to write, summarize, and do all the things I find so hard. I felt like I was putting neurotypical expectations on myself, like I just wouldn't be able to express myself. So I asked blind people about this and got some very kind answers that I found really liberating.

So now I still don't post many images, but when I do, sometimes I write a short alt text and more often I use an LLM. I find it way easier to edit a wrong LLM description than to write one myself.

So I'll keep an eye on the comments to see what to consider when using LLM descriptions.

@sinmisterios @the5thColumnist @RachelThornSub Another thing you could do is just copy paste an explanation of your issue into the alt text. Odds are someone else will write it for you. Or a blind person who comes across the image will ask. Accessibility for people with disabilities shouldn't mean silencing the voices of other people with disabilities. You could also create an image-only account, that says write in the profile you can't write alt-text. That way people who don't ever want to have images we can't understand in our timelines could follow your main account, and ignore your image only account.
@RachelThornSub Sorry. I was trying to vote in a different poll and then this popped into its place and I voted before I realized what happened.

@RachelThornSub my sight is degrading, as is my knowledge of popular culture. I often look at alt text to "get" what I should be "seeing".

I am unsure as to whether AI is up to handling that task.

@RachelThornSub I am not a low vision user, so not answering the poll.

But I wanted to comment:

I often offer alt text when I see it is missing or wildly inaccurate, using the #Alt4You hashtag.

There is also an #Alt4Me hashtag to request someone else draft alt text for you, if you find writing alt text difficult (as some commented). No idea how often it gets a response, but if you are otherwise putting in something minimal, why not try it? Better than rolling the dice with AI, I reckon.