THREAD

1/

I’ve gotten quite a few messages from disabled people who benefit from AI in the same way I do but feel unable to admit to it because they are scared of backlash.

I will start by saying that I understand the concerns about AI; they are real. AI is energy intensive, data centres use water (a resource that is already scarce in many places), and the companies behind these products are unethical in so many ways.

#AI #Ethics #Scotland #Disability #UK #LLM

2/

But something feels off in how this debate is being handled. We live inside unethical systems constantly. That is our baseline as humans in the 21st century.

3/

The aviation industry is a good example. It is hugely environmentally destructive and bound up with inequality (only 10-11% of the world's population takes a flight in any given year, with only about 2-4% travelling internationally; despite high passenger numbers, an estimated 80% of the global population has never flown in an airplane!), and yet we don't generally judge people for flying. In fact, travel has come to be seen as so essential that we don't really put limits on it at all.

4/

I'm sure you would all agree, however, that there are ways to be an ethical user of this incredibly unethical industry. I think AI should be treated the same way.

5/

Collapsing all AI use into one immoral category doesn’t make sense to me. Frivolously chatting to it all day, repeatedly generating images for fun, or asking it to write your book is not the same as asking AI to help navigate the labour and bureaucracy of disability, or the pressures of other forms of inequality.

6/

For me the distinction is between creative and functional work. I don’t want AI to be part of the process of my creative work, but AI being involved in the functional work of managing my disability frees up space for the creative work which feels integral to my happy existence as a human being.

7/

For a bit of context, a return flight from Scotland to Spain uses roughly the same amount of energy as hundreds of thousands of substantial, text-only AI interactions. That's a lifetime's worth of pretty heavy AI use. Something, somewhere in our thinking has become skewed. This is not to advocate for or excuse excessive AI use; it's to ask that judgement be proportional and accurate.

8/

I understand that drawing these stark moral lines feels very clean and very clear, but I think it can often end up protecting harmful existing hierarchies.

9/

I'm not arguing for a 'fuck it' attitude to AI use, not at all. We need to approach this powerful technology in a considered and careful way. It needs to be heavily regulated at the policy end too. What I'm asking people to see is that it is possible to act ethically within an unethical system (there are examples everywhere!) and that if we care about ethics we must make sure that our judgement is ethical too.

END

@kristiedegaris

1. Use of AI as a disability aid is ethical IMO. And might account for 1 in a thousand (?) of current AI usage.

2. I hold discretionary plane travel to be unethical.

3. I engage in many activities that are contaminated by unethical aspects.

@skua I think my point, which I maybe didn't make well, is that we need to stop laser-focusing on one thing as *the* issue. The things we want to change are systemic. I find it so incredibly exhausting to see such polarised, un-nuanced opinions over and over again.

I think it's hard to say what people use AI for; what's often amplified is the worst-case stuff. That's not to say that a lot of usage isn't genuinely trash. It is.

I think excess is the general issue though.

@skua And also I don't want to present disability as the only ethical form of AI usage. I think AI, used properly, has the ability to help mitigate several aspects of inequality in day to day life.

@kristiedegaris
My own tests of genAI have it only helpful in areas where I possess high levels of competence, where I know if the genAI is outputting complete garbage, subtle garbage or useful material.

There are many areas of life where I don't have high competence. Using it there I would be relying on a grossly inadequate tool. Life is bad enough already.

@skua Ah this is such a good point!! I wouldn't be able to use it as effectively if I didn't have years of experience of these systems behind me already. I know roughly what is expected and needed at all points. I do still think it can be helpful for menial tasks, but understanding what is required is a huge advantage.

@kristiedegaris @skua 1/

you made it well.

There are people wandering around with broad brushes, doing the virtue-signalling thing... even Cory Doctorow was recently taken to task.

I ain't a fan, but I think his use isn't bad.

The key issue most detractors have is that training data was stolen by corporations running large models.

There are options. I pointed at 2 yesterday.

https://knowprose.com/2026/02/ethical-local-ai-olmo-apertus/

@knowprose @skua I missed what happened with Cory?

@kristiedegaris @skua oh, he posted in one of his blogs that he used LLMs for spellchecking and grammar checking. Shenanigans ensued.

I stayed out of it for the same reason I stayed out of the BAFTAs. lol

@kristiedegaris If you would like to read up on that matter I can recommend this blog post by @tante https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@zettpunkt @tante Thank you! I'm up to date but not sure I agreed with a lot of this, especially the end. But I also agree in principle? It sucks to be Cory here because he is being held to such a high standard, *the* arbiter of technology standards ;) I suspect he uses an LLM for editing because he is time poor and/or would like that time for something else. As part of his human day. Also are small amounts of convenience a sin? It's so complicated.
@kristiedegaris @zettpunkt the article does not criticize Cory for his use of an LLM for spellcheck but for the way he tries to argue that any stand against "AI" based on political and ethical concerns is pointless and just "purity culture"
@tante @zettpunkt I read it. I was commenting on what got him into the situation in the first place.
@tante @zettpunkt I don't fully agree with Cory and I don't fully agree with the article. But again, we are slaves to words and time and I think it's hard to get across the complexity of issues and opinions in this discussion. I have to say, I have found the criticism of AI use to be really rigid, hence my posting. I'm not saying they're the only opinions that exist but they are dominant certainly.

@kristiedegaris @skua 2/

Science and technology have very disturbing roots here and there. I remember a friend studying medicine in the 90s asking how much of medicine came from nasty experiments in WWII, as an example. Yuck.

But technology itself should not be anthropomorphized. It is wielded by humans. Humans are the issue.

Blaming all AI for bad things lets people off the hook. AI is being used as a scapegoat by two sides.

Humans stole training data. Corporations profited.

@kristiedegaris @skua 3/

We put arsonists in jail, not fire.

So it should be with every misused technology.

But those same people buy free speech in a country that permits it.

And that is the core issue.

Yes, exactly.

@kristiedegaris Yes.

We discussed privately, so I am putting it publicly in your thread as well.

I also address 'Fruit of the Poisoned Tree' arguments.

https://knowprose.com/2026/02/ai-ethics-and-use-for-a-minority/