THREAD

1/

I’ve gotten quite a few messages from disabled people who benefit from AI in the same way I do but feel unable to admit to it because they are scared of backlash.

I will start by saying I understand concerns about AI; they are real. AI is energy intensive, data centres use water (a resource that is already scarce in many places), and the companies behind these products are unethical in so many ways.

#AI #Ethics #Scotland #Disability #UK #LLM

2/

But something feels off in how this debate is being handled. We live inside unethical systems constantly. That is our baseline as humans in the 21st century.

3/

The aviation industry is a good example. It is hugely environmentally destructive and bound up with inequality (only 10–11% of the world's population takes a flight in any given year, with only about 2–4% travelling internationally; despite high passenger numbers, an estimated 80% of the global population has never flown in an airplane!), and yet we don’t generally judge people for flying. In fact, travel has come to be seen as so essential that we don’t really put limits on it at all.

4/

I’m sure you would all agree, however, that there are ways to be an ethical user of this incredibly unethical industry? I think AI should be treated the same way.

5/

Collapsing all AI use into one immoral category doesn’t make sense to me. Frivolously chatting to it all day, repeatedly generating images for fun, or asking it to write your book is not the same as asking AI to help navigate the labour and bureaucracy of disability, or the pressures of other forms of inequality.

6/

For me the distinction is between creative and functional work. I don’t want AI to be part of the process of my creative work, but AI being involved in the functional work of managing my disability frees up space for the creative work which feels integral to my happy existence as a human being.

7/

For a bit of context, a return flight from Scotland to Spain uses roughly the same amount of energy as hundreds of thousands of substantial text-only AI interactions. That’s a lifetime’s worth of pretty heavy AI use. Something, somewhere in our thinking has gotten skewed. This is not to advocate for, or excuse, excessive AI use; it's to ask that judgement be proportional and accurate.
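The rough arithmetic behind that comparison can be sketched like this (every figure below is an illustrative ballpark estimate, not a number from this thread):

```python
# Back-of-envelope comparison: one return short-haul flight vs. text-only
# AI interactions. All figures are rough, illustrative estimates.

flight_km = 2 * 1800          # return Scotland-Spain, approx. great-circle distance
mj_per_passenger_km = 2.0     # rough short-haul fuel energy per passenger-km
flight_kwh = flight_km * mj_per_passenger_km / 3.6   # 1 kWh = 3.6 MJ

wh_per_interaction = 3.0      # commonly cited ballpark for a substantial text query
interactions = flight_kwh * 1000 / wh_per_interaction

print(f"Flight energy: ~{flight_kwh:,.0f} kWh")
print(f"Equivalent text interactions: ~{interactions:,.0f}")
```

With these assumptions the flight comes out to roughly 2,000 kWh, i.e. several hundred thousand substantial text interactions; the exact ratio shifts with the estimates, but the order of magnitude is the point.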

8/

I understand that drawing these stark moral lines feels very clean and very clear, but I think it can often end up protecting harmful existing hierarchies.

9/

I’m not arguing for a ‘fuck it’ attitude to AI use, not at all. We need to approach this powerful technology in a considered and careful way. It needs to be heavily regulated at the policy end too. What I’m asking people to see is that it is possible to act ethically within an unethical system (there are examples everywhere!) and that if we care about ethics we must make sure that our judgement is ethical too.

END

@kristiedegaris

1. Use of AI as a disability aid is ethical IMO. And might account for 1 in a thousand (?) of current AI usage.

2. I hold discretionary plane travel to be unethical.

3. I engage in many activities that are contaminated by unethical aspects.

@skua I think my point, which I maybe didn't make well, is that we need to stop laser-focusing on one thing as *the* issue. The things we want to change are systemic. I find it so incredibly exhausting to see such polarised, un-nuanced opinions over and over again.

I think it's hard to say what people use AI for; what's often amplified is the worst-case stuff. That's not to say that a lot of usage isn't genuinely trash. It is.

I think excess is the general issue though.

@skua And also I don't want to present disability as the only ethical form of AI usage. I think AI, used properly, has the ability to help mitigate several aspects of inequality in day to day life.

@kristiedegaris
My own tests of genAI have it only helpful in areas where I possess high levels of competence, where I know if the genAI is outputting complete garbage, subtle garbage or useful material.

There are many areas of life where I don't have high competence. Using it there I would be relying on a grossly inadequate tool. Life is bad enough already.

@skua Ah this is such a good point!! I wouldn't be able to use it as effectively if I didn't have years of experience of these systems behind me already. I know roughly what is expected and needed at all points. I do still think it can be helpful for menial tasks, but understanding what is required is a huge advantage.

@kristiedegaris @skua 1/

you made it well.

There are people wandering around with broad brushes, doing the value-signaling thing... even Cory Doctorow was recently taken to task.

I ain't a fan, but I think his use isn't bad.

The key issue most detractors have is that training data was stolen by corporations running large models.

There are options. I pointed at 2 yesterday.

https://knowprose.com/2026/02/ethical-local-ai-olmo-apertus/

@knowprose @skua I missed what happened with Cory?

@kristiedegaris @skua oh, he posted in one of his blogs that he used llms for spellchecking and grammar checking. Shenanigans ensued.

I stayed out of it for the same reason I stayed out of the BAFTAs. lol

@kristiedegaris If you would like to read up on that matter I can recommend this blog post by @tante https://tante.cc/2026/02/20/acting-ethical-in-an-imperfect-world/
@zettpunkt @tante Thank you! I'm up to date but not sure I agreed with a lot of this, especially the end. But I also agree in principle? It sucks to be Cory here because he is being held to such a high standard, *the* arbiter of technology standards ;) I suspect he uses an LLM for editing because he is time poor and/or would like that time for something else. As part of his human day. Also are small amounts of convenience a sin? It's so complicated.
@kristiedegaris @zettpunkt the article does not criticize Cory for his use of an LLM for spellcheck but for the way he tries to argue that any stand against "AI" based on political and ethical concerns is pointless and just "purity culture"
@tante @zettpunkt I read it. I was commenting on what got him into the situation in the first place.
@tante @zettpunkt I don't fully agree with Cory and I don't fully agree with the article. But again, we are slaves to words and time and I think it's hard to get across the complexity of issues and opinions in this discussion. I have to say, I have found the criticism of AI use to be really rigid, hence my posting. I'm not saying they're the only opinions that exist but they are dominant certainly.

@kristiedegaris @skua 2/

Science and technology have very disturbing roots here and there. I remember a friend studying medicine in the 90s asking how much of medicine came from nasty experiments in WWII, as an example. Yuck.

But technology itself should not be anthropomorphized. It is wielded by humans. Humans are the issue.

Blaming all AI for bad things lets people off the hook. AI is being used as a scapegoat by both sides.

Humans stole training data. Corporations profited.

@kristiedegaris @skua 3/

We put arsonists in jail, not fire.

So it should be with every misused technology.

But those same people buy free speech in a country that permits it.

And that is the core issue.

Yes, exactly.

@kristiedegaris Yes.

We discussed privately, so I am putting it publicly in your thread as well.

I also address 'Fruit of the Poisoned Tree' arguments.

https://knowprose.com/2026/02/ai-ethics-and-use-for-a-minority/

@kristiedegaris

Flying is straightforward: passengers move from A to B.

What is now labeled "AI" is a conflation of a huge spectrum of tech, from well established machine learning, to slop generation, to unicorn "agentic" "AI".

There are a lot of genuinely useful applications in that spectrum. It's time for a more careful labeling.

@kristiedegaris I really appreciate this thoughtful point of view.

I don’t have answers. But I do see that “AI” can mean many things (technologies, use cases) and some are more grossly exploitative than others, and some feel more ethical than others.

I am generally anti-AI myself 😁, but probably much more anti-flying, so I liked your counterexample.

The case of navigating and managing disabilities… well in my household there have been moments when AI-adjacent tools have been incredibly helpful in distilling the insane amount of bureaucracy one has to cut through. It doesn’t eliminate the bureaucracy but it did give us useful pointers of what to do next. Could we have accomplished the same thing without those tools? Yes but with more spoons and a lot more time; we already spend an overwhelming amount of our time and energy on navigating these helpful but very flawed systems.

AI as assistive tech, then, feels less gross to me, maybe even ethical.

@kristiedegaris …but I would really prefer we correct the flawed systems first (bureaucracy, benefit programs, etc) to make them all more accessible. Like, AI is a super resource intensive and crazy solution to this particular “problem” — these accessibility systemic problems don’t need technology, just commitment to better access.

And another thing (since I’m ranting at this point, please tune me out!), these AI companies have demonstrated to not have any interest in ethics, anyway. So if there is an ethical, green “AI” future… it will not be coming from the big name tools we see today. (Grrr)

Thanks for this lovely thought provoking thread.

@scott Yes, I use AI because of huge failings in the systems that already exist. If we solved those, my need for AI would all but disappear. As I said, I see all the problems with AI and the companies that run it, but I see that everywhere tbh: when I go grocery shopping, when I use a bank.

I think you're right that a sustainable (in every sense) AI future will not come from these current, huge companies, but I do think we will move towards it, probably too slowly.

Thanks for your rant! <3

@scott Thanks for sharing this! Assistive tech is a great phrase and I very much see AI that way based on my own usage.

I'd also agree re the experience of using AI and some things still need to be done without it, the many in person appointments and telephone calls for example.

I wouldn't say I am anti-flying but it's been my choice to not fly for almost a decade now. That's not to say I won't fly again.

@kristiedegaris I try not to use AI at all, but the number of agencies and govt departments using it as the front door to accessing services, where there was once a real person on the other end of a phone, is growing by the day.
@Jim_Graves I've noticed that too.

@kristiedegaris @Jim_Graves this is a place where a nuanced view is essential. ML can scrape a ton of data and docs and find helpful links. But it's also stymied: elsewhere in this conversation it's noted that it has no expertise, and that's where it fails in customer interactions.

Sometimes customers only need help with a search. Other times they need a human with all that that entails.

@kristiedegaris One of the biggest, and most understated, problems with "AI" is the *centralisation of compute*.

AI happens in photography (noise reduction is a deep-learning conv. net in action) and I'm not averse to seeding a photo's description with an auto-generated paragraph either.

But what I despise is the concept of pay-to-play. Don't talk about ChatGPT or Claude, but OpenAI and Anthropic. Why should I have to pay companies $ to do what could be achieved locally because they bought the world's GPUs & RAM?

@xylophilist I completely agree!

@kristiedegaris At work we talk about "data silos" - somewhat pejoratively. Compute itself needs the same discussion.

I've already dabbled in F/OSS photography, years ago. Maybe I should work a bit harder on integrating deep learning for noise-reduction & super-resolution into a smoother workflow again, see if I could get rid of DxO... That would be an intriguing way to go.

@xylophilist I'm quite interested in the software that scales up images. That could be incredibly handy.

@kristiedegaris It can be quite easy - you can have your LLM of choice generate a Python script that will train a model on the difference between Lanczos4-upscaled images vs ground truth - effectively it becomes "artefact removal". Similar for noise: put it in, train on the differences, and now you have a tool for removing it. Etc.

Trouble is the amount of GPU required for the training or running on a serious photo is... significant.
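The residual-learning recipe described above can be illustrated with a toy version: instead of a conv net, a least-squares linear filter is fit on degraded/ground-truth pairs. Everything here is synthetic and purely illustrative (a box blur stands in for upscaling artefacts), a sketch of the idea rather than a workable photo tool:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ground truth" image.
gt = rng.random((64, 64))

# Degraded version: a crude stand-in for upscaling artefacts (3x3 box blur).
k = np.ones((3, 3)) / 9.0
pad = np.pad(gt, 1, mode="edge")
degraded = np.zeros_like(gt)
for dy in range(3):
    for dx in range(3):
        degraded += k[dy, dx] * pad[dy:dy + 64, dx:dx + 64]

# Training pairs: each 3x3 degraded patch -> the ground-truth centre pixel.
dpad = np.pad(degraded, 1, mode="edge")
patches = np.stack([dpad[dy:dy + 64, dx:dx + 64].ravel()
                    for dy in range(3) for dx in range(3)], axis=1)  # (4096, 9)
targets = gt.ravel()

# "Train": a least-squares linear filter, a one-layer stand-in for a conv net.
w, *_ = np.linalg.lstsq(patches, targets, rcond=None)

# Apply the learned filter; it acts as learned artefact removal.
restored = (patches @ w).reshape(64, 64)

mse_before = np.mean((degraded - gt) ** 2)
mse_after = np.mean((restored - gt) ** 2)
print(f"MSE before: {mse_before:.5f}, after: {mse_after:.5f}")
```

A real pipeline would swap the linear filter for a small convolutional network and Lanczos4-upscaled crops for the blur, which is exactly where the GPU cost mentioned above comes in.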

@xylophilist It's all so interesting. And honestly, probably beyond my understanding in many ways.
@kristiedegaris I've been thinking of saving up for a newer macbook pro - when they produce an M5 one - mostly in order to do photo things on it.
But if I can replace DxO, that's a significant chunk of workflow that doesn't need to be a mac. It's just a matter of organizing a fluid data flow and fixing a few things that've suffered bitrot. Hmmmmm... :)

@kristiedegaris

It’s the societies we live in that disable us, so IMO using unethical tools in society to alleviate suffering is justified

We didn’t make these tools unethical, huge corporations did - the responsibility is theirs

BP & Ogilvy made the most successful ad campaign of all time to coin “carbon footprint” to push the blame on us

But one decision from the chair of BP will do more than I ever could. Scale matters.

Ppl do what you can, but don’t martyr yourselves 💜

@gorsefan I love this and agree wholeheartedly. Thank you for saying it. You're right, this punitive individualistic attitude is a form of control too.
@kristiedegaris yeah, right on! Thank you too 🤗
@kristiedegaris there’s the hidden exploitation of cheap labor in Africa to sanitize the content, the concentration of power by fascist techbros, the cheapening of art and facts, the racist, sexist, western biases in the source data, the non consensual use of people’s creative labor which is then rented back to them, the pressure on wages, the documented effects on cognition, addiction, the drowning of the web in slop, etc. It’s spectacular really. I don’t think there’s any ethical use of GenAI.
@kristiedegaris could have added the generation of deep fakes, non consensual porn, CSAM, the hoarding of RAM and hard disks which are becoming expensive and rare, the super short lifespan of the GPUs that are over clocked and end up in landfills, the strain on web servers that are scraped non stop and on open source software projects that are inundated with garbage code. I don’t think the comparison with air travel even remotely flies if you’ll pardon the bad pun.

@sknob I will pardon the pun.

It's a pretty good comparison. Look into the issues that air travel causes globally including what the travel industry does socially, politically, psychologically etc. It's just as vast.

When you take it apart, every huge capitalist industry looks a lot like AI does on the inside.

@sknob

1/

Flying produces vast carbon emissions and non-CO₂ warming effects; it's protected by fuel subsidies and tax exemptions, and drives airport expansion that destroys land and communities. Travel economies hollow out cities, price residents out of housing, displace local businesses, and replace lived cultures with tourist monocultures designed for consumption.

@sknob

2/

Entire regions become dependent on precarious hospitality labour (so bad in Scotland!) while public funds are diverted to support private airlines and infrastructure that mostly benefits the mobile and affluent. The industry is also incredibly entangled with sexual and gendered harm, from the expansion of sex tourism and trafficking to the normalisation of entitlement, racialised desire, and violence in destination economies.

@sknob

3/

It also props up authoritarian regimes under the banner of economic development. Psychologically, travel is sold as self-actualisation and freedom, encouraging constant movement, novelty addiction, and escape rather than stability or repair, while politically it reproduces a colonialist logic in which some bodies move freely across borders and others are immobilised or criminalised. And that's just to start...

@kristiedegaris I agree, but again, who needs another huge destructive industry foisted on us that is at least as bad, for similar and very different reasons, and which we managed to do without just a few short years ago? I don’t see how the ubiquity of an entrenched, unethical and damaging industry justifies normalizing the use of an upstart one...
@sknob I agree that it would be better in many ways if these things did not exist, but they do? And as I've explained there are positive uses for AI that can be done well within a personal ethical framework.
@kristiedegaris I do a lot of things (or refrain from doing things), just because I want to be aligned with my principles and ethics when at all possible. Sometimes it’s easy, sometimes it’s impractical or hard, and sometimes I compromise or fail miserably, even though nothing is stopping me from doing better. Resistance doesn’t seem futile to me :)
@sknob It's not about resistance being futile. As I said I haven't flown in nearly a decade, but I understand why other people may need to fly. Do you understand why some people may rely on AI and how that doesn't mean they are unethical for using it?

@kristiedegaris I understand how GenAI can be marginally useful in some situations and how it can help people who come to rely on them, but I fail to understand how that makes that use case ethical. I understand the appeal, the need even, but that doesn’t make it ethical in my view.

If I think killing animals for food is unethical and my doctor orders me to eat meat for medical reasons and I do so, I will still consider that killing animals for food is unethical while I eat my steak.

@sknob I think we just disagree :)
@kristiedegaris it seems so yes :)
@sknob I just think ethical critiques of AI belong at the system level, not the user level. And on a personal level, we don't choose to live like this, forced to use tools that are built this way. I dunno, it feels so shit to have nowhere else to turn to help myself and my kid access healthcare and be told it's unethical. I'm not sure anyone but the individual should get to decide what counts as survival?
@kristiedegaris air travel didn’t start out that way and it’s been with us for decades. Look at the damage wrought by GenAI in a mere 3 years. And it is an intrinsically totalitarian technology for the simple reason that it discourages using one’s own brain. Air travel is bad enough without adding the impact of GenAI (personally I shun both).
It depends on how you use it. If I could afford a personal assistant I would have one, and no one would tell me that's me not using my own brain. Using AI to help navigate a truly bureaucracy- and admin-heavy system is not the same as not using my own brain. So maybe AI use can be evaluated on an individual level?
I haven't flown in nearly a decade now. I feel like I am able to do that. I don't need it for work or to see family and I travelled a bit in my youth. It doesn't mean I condemn all others who use air travel. Some people have to.
@kristiedegaris As far as I am concerned, the only justifiable use of "AI" is in Big Science, for the purpose of doing data operations at a speed greater than humans can manage, as this is useful for drug discovery, mapping protein folding, searching huge image databases of the stars and of the surface of the Earth, among other things.
Using it for things that merely require "auto complete", or for customer services, or for surveillance of any kind, is NOT OK.