THREAD

1/

I’ve gotten quite a few messages from disabled people who benefit from AI in the same way I do but feel unable to admit to it because they are scared of backlash.

I will start by saying that I understand the concerns about AI; they are real. AI is energy intensive, data centres use water (a resource already scarce in many places), and the companies behind these products are unethical in so many ways.

#AI #Ethics #Scotland #Disability #UK #LLM

2/

But something feels off in how this debate is being handled. We live inside unethical systems constantly. That is our baseline as humans in the 21st century.

3/

The aviation industry is a good example. It is hugely environmentally destructive and bound up with inequality: only 10–11% of the world’s population takes a flight in any given year, and only about 2–4% fly internationally. Despite high passenger numbers, an estimated 80% of the global population has never flown in an airplane! And yet we don’t generally judge people for flying. In fact, travel has come to be seen as so essential that we don’t really put limits on it at all.

4/

I’m sure you would all agree, however, that there are ways to be an ethical user of this incredibly unethical industry. I think AI should be treated the same way.

5/

Collapsing all AI use into one immoral category doesn’t make sense to me. Frivolously chatting to it all day, repeatedly generating images for fun, or asking it to write your book is not the same as asking AI to help navigate the labour and bureaucracy of disability, or the pressures of other forms of inequality.

6/

For me the distinction is between creative and functional work. I don’t want AI to be part of the process of my creative work, but AI being involved in the functional work of managing my disability frees up space for the creative work which feels integral to my happy existence as a human being.

7/

For a bit of context: a return flight from Scotland to Spain uses roughly the same amount of energy as hundreds of thousands of substantial text-only AI interactions. That’s a lifetime’s worth of pretty heavy AI use. Something, somewhere, in our thinking has gotten skewed. This is not to advocate for or excuse excessive AI use; it’s to ask that judgement be proportional and accurate.
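To show the shape of that arithmetic, here is a back-of-envelope sketch. Every figure in it is a rough assumption chosen for illustration, not a measurement:

```python
# Back-of-envelope comparison: one return flight vs text-only AI queries.
# All figures below are rough assumptions for illustration, not measurements.

flight_km = 2 * 2000           # Scotland -> Spain return distance, very roughly (assumed)
kwh_per_passenger_km = 0.3     # assumed short-haul energy use per passenger-km
flight_kwh = flight_km * kwh_per_passenger_km

wh_per_query = 3.0             # assumed generous per-query energy for a substantial text reply

queries = flight_kwh * 1000 / wh_per_query  # kWh -> Wh, divided by energy per query
print(f"{flight_kwh:.0f} kWh is roughly {queries:,.0f} substantial text queries")
```

With these particular assumptions the flight works out to several hundred thousand queries; the point is the order of magnitude, not the exact figures.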

8/

I understand that drawing these stark moral lines feels very clean and very clear, but I think it can often end up protecting harmful existing hierarchies.

9/

I’m not arguing for a ‘fuck it’ attitude to AI use, not at all. We need to approach this powerful technology in a considered and careful way. It needs to be heavily regulated at the policy end too. What I’m asking people to see is that it is possible to act ethically within an unethical system (there are examples everywhere!) and that if we care about ethics, we must make sure that our judgement is ethical too.

END

@kristiedegaris One of the biggest, and most understated, problems with "AI" is the *centralisation of compute*.

AI happens in photography - NR (noise reduction) is a deep-learning convnet in action - and I'm not averse to seeding a photo's description with an auto-generated paragraph either.

But what I despise is the concept of pay-to-play. Don't talk about ChatGPT or Claude, but OpenAI and Anthropic. Why should I have to pay companies $ to do what could be achieved locally because they bought the world's GPUs & RAM?

@xylophilist I completely agree!

@kristiedegaris At work we talk about "data silos" - somewhat pejoratively. Compute itself needs the same discussion.

I've already dabbled in F/OSS photography, years ago. Maybe I should work a bit harder on integrating deep learning for noise-reduction & super-resolution into a smoother workflow again, see if I could get rid of DxO... That would be an intriguing way to go.

@xylophilist I'm quite interested in the software that scales up images. That could be incredibly handy.

@kristiedegaris It can be quite easy - you can have your LLM of choice generate a Python script that will train a model on the difference between Lanczos4-upscaled images vs ground truth - effectively it becomes "artefact removal". Similar for noise: put it in, train on the differences, and now you have a tool for removing it. Etc.

Trouble is the amount of GPU required for the training or running on a serious photo is... significant.
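The structure of that is roughly as follows, in a toy sketch. To keep it runnable anywhere with just NumPy, I've swapped the Lanczos4 upscale for a block-average/nearest-neighbour resize and the CNN for a tiny least-squares model - placeholder choices, but the pipeline (degrade, learn the residual, add it back) is the same idea the post describes:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_image(n=64):
    # Synthetic "ground truth": smooth waves plus one hard edge,
    # a stand-in for real photos.
    fx, fy = rng.uniform(3.0, 8.0, size=2)
    x, y = np.meshgrid(np.linspace(0, 1, n), np.linspace(0, 1, n))
    img = 0.5 + 0.35 * np.sin(fx * x) * np.cos(fy * y)
    img[n // 3:, :] += 0.15  # an edge, where resampling artefacts concentrate
    return np.clip(img, 0.0, 1.0)

def degrade(img):
    # Downscale 2x by block averaging, upscale back by pixel repetition.
    # (A real pipeline would use e.g. cv2.resize with INTER_LANCZOS4 here.)
    h, w = img.shape
    small = img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    return np.repeat(np.repeat(small, 2, axis=0), 2, axis=1)

def patches(img, k=3):
    # Flattened k x k neighbourhood for every interior pixel.
    n = img.shape[0]
    cols = [img[i:n - k + 1 + i, j:n - k + 1 + j]
            for i in range(k) for j in range(k)]
    return np.stack(cols, axis=-1).reshape(-1, k * k)

# "Training set": pairs of (degraded, ground truth) images.
train = [make_image() for _ in range(8)]
X = np.concatenate([patches(degrade(t)) for t in train])
# Target: the residual (ground truth minus degraded) at each patch centre.
Y = np.concatenate([(t - degrade(t))[1:-1, 1:-1].ravel() for t in train])

# "Model": linear least squares from 3x3 patch to centre residual -
# the smallest possible stand-in for the convnet described above.
w, *_ = np.linalg.lstsq(X, Y, rcond=None)

# Apply to a held-out image: predict the residual and add it back on.
test_img = make_image()
deg = degrade(test_img)
pred = deg.copy()
pred[1:-1, 1:-1] += (patches(deg) @ w).reshape(deg.shape[0] - 2, -1)

mse_before = np.mean((deg - test_img) ** 2)
mse_after = np.mean((pred - test_img) ** 2)
print(mse_before, mse_after)  # the learned correction should lower the error
```

Scaled up - real photos instead of sinusoids, Lanczos4 instead of nearest-neighbour, a convnet instead of a 9-weight linear map - that's where the GPU cost comes from.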

@xylophilist It's all so interesting. And honestly, probably beyond my understanding in many ways.

@kristiedegaris I've been thinking of saving up for a newer MacBook Pro - when they produce an M5 one - mostly in order to do photo things on it.
But if I can replace DxO, that's a significant chunk of workflow that doesn't need to be a mac. It's just a matter of organizing a fluid data flow and fixing a few things that've suffered bitrot. Hmmmmm... :)