THREAD

1/

I’ve gotten quite a few messages from disabled people who benefit from AI in the same way I do but feel unable to admit to it because they are scared of backlash.

I will start by saying I understand the concerns about AI; they are real. AI is energy intensive, data centres use water (a resource that is already scarce in many places), and the companies behind these products are unethical in so many ways.

#AI #Ethics #Scotland #Disability #UK #LLM

2/

But something feels off in how this debate is being handled. We live inside unethical systems constantly. That is our baseline as humans in the 21st century.

3/

The aviation industry is a good example. It is hugely environmentally destructive and bound up with inequality (only 10–11% of the world's population takes a flight in any given year, with only about 2–4% travelling internationally annually; despite high passenger numbers, an estimated 80% of the global population has never flown in an airplane!) and yet we don’t generally judge people for flying. In fact, travel has come to be seen as so essential that we don’t really put limits on it at all.

4/

I’m sure you would all agree, however, that there are ways to be an ethical user of this incredibly unethical industry. I think AI should be treated the same way.

5/

Collapsing all AI use into one immoral category doesn’t make sense to me. Frivolously chatting to it all day, repeatedly generating images for fun, or asking it to write your book is not the same as asking AI to help navigate the labour and bureaucracy of disability, or the pressures of other forms of inequality.

6/

For me the distinction is between creative and functional work. I don’t want AI to be part of the process of my creative work, but AI being involved in the functional work of managing my disability frees up space for the creative work which feels integral to my happy existence as a human being.

7/

For a bit of context, a return flight from Scotland to Spain uses roughly the same amount of energy as hundreds of thousands of substantial text-only AI interactions. That’s a lifetime’s worth of pretty heavy AI use. Something, somewhere in our thinking has gotten skewed. This is not to advocate for, or excuse, excessive AI use; it’s to ask that judgement is proportional and accurate.

8/

I understand that drawing these stark moral lines feels very clean and very clear, but I think it can often end up protecting harmful existing hierarchies.

9/

I’m not arguing for a ‘fuck it’ attitude to AI use, not at all. We need to approach this powerful technology in a considered and careful way. It needs to be heavily regulated at the policy end too. What I’m asking people to see is that it is possible to act ethically within an unethical system (there are examples everywhere!) and that if we care about ethics we must make sure that our judgement is ethical too.

END

@kristiedegaris Your flying example is so useful.

We need to clamber out of the developer sandbox we’re trapped in, because energy use, human exploitation and social impact really matter here too. We need to get to the point where people can evaluate and then limit their user-side AI needs in informed ways. We need far higher ethical and transparency standards on the corporate side. We need the courage to limit use and drive up safety planning.

We expect planes and flying to be held to safety standards. We exert some degrees of consumer pressure on pricing and reliability. I suspect many of us who fly for reasons of geography also offset if we can, and limit our use to try to minimise harm. We think about the planet and each other.

Let’s get to this point with AI, and stop accepting that our only option is to provide endless free sandbox testing for hypesellers.

@kristiedegaris Having just read this (thanks to @koutropoulos), I’m imagining it with Qantas instead of Anthropic. Imagine an airline famous for its safety record saying it’s “dropped its safety pledge” for commercial and competitive reasons. How do we make this similarly unthinkable for AI? One way is to exert consumer and media pressure on companies that say things like this:

“We didn't really feel, with the rapid advance of AI, that it made sense for us to make unilateral commitments ... if competitors are blazing ahead”

https://m.slashdot.org/story/452796


@kate @koutropoulos It's all completely fucked and I fear that only racing ahead and falling off the cliff will be enough to teach us the lessons.
@kristiedegaris @koutropoulos Ha! I get this feeling, but I think we just keep aiming for mitigation, moderation, today a bit less fucked than yesterday. We do what we can, including by airing it all out together.
@kristiedegaris @koutropoulos I’m particularly sympathetic to your argument as a reluctant flyer and equally reluctant AI user. I want to be a non-flyer, but for now it’s a compromise I live with because I’m in Australia. As you say, you might fly again. I might stop flying. But for now I don’t fly casually, or without worrying about both the ethics and safety of flying and using that to guide how I do it. Same with AI.

@kristiedegaris

Just adding another thought, because the flying comparison really has got me thinking.

As a reluctant flyer, I don’t expect to be picked up and flown somewhere when I’m least expecting it. I’m very tired of AI showing up uninvited.

BUT (and this is what you have helped me see) as a reluctant flyer I’m still the trigger for a lot of hidden flying: food miles, imported products etc. And that leads to all the hidden exploitative labour that produces the things that are flown here, and is the reason why they have to be flown. So this is where I want higher standards of corporate transparency in AI as in carbon impact. I want to know what I’m costing the planet and other people when something appears as a convenience for me.

We’re (slowly) doing it with plastic, let’s do it with AI.

@kate I'd say that with flying, people are rightly focused on environmental stuff, but the travel industry does way more harm than just environment. In fact, it causes vast harm. So imo it's about way more than being upfront about carbon.

And with regards to AI, I do think the changes will come, more ethical AIs will emerge, more ethical companies and we will have more choice. But that may not be enough to undo the harm they cause. I don't know.