@freya @cmccullough IMO as an extremely anti-genAI advocate, accessibility uses are the one exception I make. If you are disabled and using it for access, please do be aware of its many dangerous pitfalls, but by all means go ahead and use it; that's the rare case where the benefits outweigh the harms.
OTOH, if you're using it to "provide accessibility" and you've got other options, usually one of them is better, and using genAI is a cost/quality tradeoff that is screwing over the disabled people you claim to be helping — *especially* if you're deploying it in an interactive context. (Relevant scary example: the Slack bot that, when asked about the fire alarm in their building, said it was just a test — i.e., generated the most likely answer, as is its function — even though there was a real fire. Thankfully nobody was harmed.)
@freya @cmccullough
Tl;dr: the answer to your question is a matter of balancing the benefits and damages generated by AI technology.
Long version:
Good question. The thing is, for almost every technology there is someone benefitting from it, and taking that technology away would hurt those beneficiaries.
So the true question is, what is the balance? What is the damage done by keeping the technology, what is the damage by abolishing it. And, for both options, what could be done to reduce the damage done?
I am not that worried about electric power consumption. That is an immediate issue, yes, but considering the improvements in renewable power generation, I believe this could be solved in a rather short period (depending on political will).
More critical is water consumption. Clean water is already a highly valuable resource, and once any technology's consumption passes a certain point, it has an immediate impact on plants and animals as well as humans in the area. This can be somewhat mitigated by setting up data centers in water-rich locations, but it remains an issue.
Next is the issue of making ruthless corporations and their billionaire owners even richer and more influential. Not too long ago I would have said I don't care, as their money doesn't hurt me, and with a few tax law adaptations we could even somewhat benefit. However, given their factual political influence in recent years, combined with their push for policies that hurt people all over the globe, I am wary of any cent going into their pockets. This can be somewhat mitigated by using FOSS AI models; however, those are mostly derived from big tech models, so without a certain number of people feeding big tech, those will disappear as well.
The last issue is copyright. This is mostly a problem for generative AI, and there it is basically unresolvable, as AI companies have repeatedly argued that without ignoring copyright, they cannot train their models.
For other types of AI (e.g. image analysis), this is less of an issue, so depending on what kind of AI you're using, it might not be relevant for you.
All these points need to be considered by each individual user, as well as by politicians setting up meaningful regulation. For individual users it is a question of conscience; for politicians, a question of weighing overall disadvantages and benefits.