Paddy Duke

@paddyduke
273 Followers
364 Following
6.4K Posts

Accessibility & ethics in design

Neophyte anarchist

Makes punk/rock/electronic music

Irishman in Scotland

Pronouns: He/Him/They/Them
My website: https://weenotions.com
Music: https://soundcloud.com/807
CodePen: https://codepen.io/paddyduke
When we hit 1,000 signatures on Friday it was a proper dream, to petition for a public inquiry into Section 28. We want to get to 10,000 so we get a response from the Government. Can you help with direct sharing in your newsletter? WhatsApp groups? Posts? petition.parliament.uk/petitions/76...

For me, this is the body horror money quote from that Scientific American article:

"participants who saw the AI autocomplete prompts reported attitudes that were more in line with the AI’s position—including people who didn’t use the AI’s suggested text at all"

So maybe you can't use it "responsibly", or "safely". You can't even ignore it and choose not to use it once you've seen it.

If you can see it, the basilisk has already won.

More to the point, though: in this metaphor, where you're getting a potentially infected scrape at work, we are living in the pre-germ-theory age of AI. We are aware that it might be dangerous sometimes, but we don't know to whom or why. We are attempting to combat miasma with bloodletting right now, and putting the miasma-generator in every home before we know what it's actually doing.
If I could use another inaccurate metaphor, AI psychosis is the "instant decapitation" industrial accident with this new technology. And indeed, most people having industrial accidents are not instantly decapitated. But they might get a scrape, or lose a finger, or an eye. And an infected scrape can still kill you, but it won't look like the decapitation. It looks like you didn't take very good care of yourself. Didn't wash the cut. Didn't notice it fast enough. Skill issue.

There is also not much to figure out for the rest of us. The technology is purpose-designed to remove people from the equation, much like a handgun is purpose-designed to remove people from existence. Any "figuring out" about either tech will only result in variations on their purpose.

What we need to do is strip back the tech, go back to the drawing board, and figure out how to reinvent it practically from scratch to be more human. /fin

There is no rebuilding, no constructive potential for "AI" without political reform, both in the US and in Europe. The motivation behind the tech still remains: powerful people want to take things away from society.

The people who funded and drove today's "AI" will have the resources to figure out how to make the tech affordable after the financial bubble pops. They're the ones who will have the resources to figure things out about LLMs, not us.

They were built for this goal—fewer humans, less accountability—and will always be problematic no matter who is in charge, and will always be a risk as long as the "shitty people" have power.

In modern history, the "shitty human beings" have always retained power and influence after bubbles pop, from Reagan onwards. The bubble builds up wealth and power, they keep it after it pops, and they use their influence to get in on the ground floor of the next one.

Generative models and automated decision-making systems are political projects. They are tools for fencing off sectors of our society for rent, for cutting back on education and healthcare for the poor, for removing accountability. They are inherently tools for removing humans from the equation. They are not neutral in their design. Their existence has a political purpose.
I've been seeing people say the harms of "AI" come from shitty people, not the tech, and that the shitty people will go away after the bubble pops. While the first part is true in the strictest sense (contrary to claims, these are not autonomous systems), it's not really true in a practical sense. The harms are designed in, and the "shitty human beings" won't go away.
Hackers breached the European Commission by poisoning the security tool it used to protect itself - https://thenextweb.com/news/european-commission-breach-trivy-supply-chain that's cheating...

CERT-EU has attributed a major data breach at the European Commission to cybercrime group TeamPCP, which exploited a supply chain attack on the open-source security tool Trivy to steal 92 GB of compressed data from the Commission’s AWS infrastructure. The notorious ShinyHunters gang then published the data, which included emails and personal details from up […]

The Next Web