A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.

https://axbom.com/hammer-ai/

#AiEthics #DigitalEthics
If a hammer was like AI…

Computations will “estimate” your aim, tend to miss the nail and push for a different design. Often unnoticeably.

Axbom
Poster: If a hammer was like AI.

Download and read more: https://axbom.com/hammer-ai/

#AiEthics
I'll add clarifications regarding some of the topics to this thread. 👇

Regarding Monoculture.
Today, there are nearly 7,000 languages and dialects in the world. Only 7% are reflected in published online material. 98% of the internet’s web pages are published in just 12 languages, and more than half of them are in English. Even if you source the entire internet, you are still capturing only a small part of humanity.

While 76% of the cyber population lives in Africa, Asia, the Middle East, Latin America and the Caribbean, most of the online content comes from elsewhere. Take Wikipedia, for example, where more than 80% of articles come from Europe and North America.

Now consider what content most AI tools are trained on.

Through the lens of a small subset of human experience and circumstance it is difficult to envision and foresee the multitudes of perspectives and fates that one new creation may influence. The homogeneity of those who have been provided the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit – with little thought for the wellbeing of those not visible inside the reflection.
Regarding Power concentration.

When power is with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain. Three million AI engineers amount to roughly 0.04% of the world's population.
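As a back-of-the-envelope check (a rough sketch only; the three-million engineer count and an eight-billion world population are round assumptions, not precise figures):

```python
# Rough share of humanity who are AI engineers, using round assumed figures.
ai_engineers = 3_000_000           # assumed round estimate
world_population = 8_000_000_000   # assumed round estimate

share = ai_engineers / world_population
print(f"{share:.4%}")  # prints 0.0375%, i.e. roughly 0.04%
```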

The dominant actors in the AI space right now are primarily US-based. And the computing power required to build and maintain many of these tools is huge, ensuring that the power of influence will continue to rest with a few big tech actors.
Regarding Invisible decision-making.

The more complex the algorithms become, the harder they are to understand. The more people get involved, the more time passes, the more integrations with other systems are made and the more faulty the documentation, the further the systems drift from human understanding. Many companies will hide proprietary code and evade scrutiny, sometimes themselves losing sight of the full picture of how the code works. Decoding and understanding how decisions are made becomes possible for ever fewer people.

And it doesn't stop there. This also affects autonomy. By obscuring decision-making processes (how, when, why decisions are made, what options are available and what personal data is shared) it is increasingly difficult for individuals to make properly informed choices in their own best interest.
Regarding Bias and injustice.

One inherent property of AI is its ability to act as an accelerator of other harms. Because these systems are trained on large amounts of data (often unsupervised) – data that inevitably contains biases, abandoned values and prejudiced commentary – those patterns will be reproduced in any output. This will often happen unnoticeably (especially when not actively monitored), since many biases are subtle and embedded in common language. At other times it will happen quite clearly, with bots spewing toxic, misogynistic and racist content.

Because of systemic issues, and because bias becomes embedded in these tools, the consequences fall hardest on people who are already disempowered. Automated decision-making tools often rely on scoring systems, and those scores can affect, for example, job opportunities, welfare and housing eligibility, and judicial outcomes.
Regarding Content moderator trauma.

In order for us to avoid seeing traumatizing content when using many of these tools (such as physical violence, self-harm, child abuse, killings and torture), this content needs to be filtered out. For it to be filtered out, someone has to watch it. As it stands, the workers who perform this filtering are often exploited and suffer PTSD without adequate care for their wellbeing. Many of them have no idea what they are getting themselves into when they take the job.
Regarding Data / Privacy breaches.

There are several ways personal data makes its way into AI tools. First, since the tools are often trained on data available online, and in an unsupervised manner, personal data will actually make its way into the workings of the tools themselves. Data may have been inadvertently published online or may have been published for a specific purpose – rarely one that supports feeding an AI system and its plethora of outputs. Second, personal data is actually entered into the tools by everyday users through negligent or inconsiderate use – data that at times is also stored and used by the tools. Third, when many data points from many different sources are linked together in one tool, they can reveal details of an individual's life that no single piece of information could.
If you speak French, here is a podcast episode taking you through the different topics in the hammer diagram. My French is unfortunately not good enough to follow along 😅

https://airescommunes.ca/@airescommunes/episodes/doit-on-reglementer-lintelligence-artificielle/activity
Doit-on réglementer l'intelligence artificielle ?

What are the ethical issues raised by artificial intelligence, and do they all need to be regulated? Is artificial intelligence in itself different enough to warrant its own laws? Do we already have the tools we need to regulate AI? I try to answer these questions through nine different angles, beautifully identified in an image of a hammer by ethics consultant Per Axbom: camouflaged data theft, the cost in CO2 production, invisible decisions, disinformation, leaks of confidential data, bias and injustice, oligopolies and the concentration of power, accountability, and moderation trauma. The hammer-and-AI image by Per Axbom: If a hammer was like AI… – licensed under Creative Commons BY-SA. My blog post on neural networks: On démonte le robot #1 – les réseaux de neurones

Castopod

@axbom Regarding Invisible decision-making: a tool is something you can control. It is an illusion to think that AI can be controlled in the future.

A hammer that can decide for itself what to hit is a dangerous thing.

@axbom in addition to the internet being 98% published in 12 languages, the code that makes the internet uses syntax that is 100% English
@fasterandworse

Thank you! I've honestly been bad at recognising this and bringing attention to it. So much taken for granted all the time.
@axbom it’s really interesting for the web when you consider how much the difficulty of making accessible web pages is compounded by English element names and attributes that are complex enough as they are for a native speaker

The web is just a small portion of the internet, but your point still stands.

Also worth considering that the web was created by and for Anglophone academia - so a tiny fraction of even the English-speaking world.

HTML’s structure is really predicated on being able to write an academic paper in English. So it uses conventions - headings, tables, ordered lists, figures - from that milieu.

Many of these have no equivalents in other languages or cultures.

@fasterandworse @axbom

(None of this should be read as a pop at Berners-Lee, CERN or the web in general. It is a mad and beautiful thing. But it is what it is, and there are reasons for that. )

@fasterandworse @axbom

@iamdavidobrien @axbom yep, I said the internet in my previous response. But the web is particularly interesting in how easy it is to make *something* but complex to make something semantically correct and accessible, even if English is your language.

@fasterandworse @axbom I often think that if AI's mistakes disadvantaged Andreessen, Musk, Thiel, etc., AI would have been strangled at birth. But by being a tech sycophant, it survived.

The bias is not a bug, it is the point

@axbom thanks for these clarifications. Could you elaborate on the power consumption angle too, and what other options achieve similar results for 1% of the power?
@pauldaoust

To be fair that is more of an intentionally provocative number, as OpenAI won't disclose energy use. There are estimates claiming that one ChatGPT query costs (in money) 100x more than a Google search, or consumes 10x as much energy. And one ChatGPT session, to get the response you want, can be estimated at 10 prompts.
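For a rough sense of scale, the same estimate as arithmetic (a sketch only; the 10x-per-prompt and 10-prompts-per-session figures are the rough estimates above, not measurements):

```python
# Back-of-the-envelope: energy of one ChatGPT session vs. one Google search.
# All numbers are the rough estimates mentioned above, not measured values.
search_energy = 1.0                   # one Google search, in arbitrary units
prompt_energy = 10 * search_energy    # estimate: one prompt ~ 10x a search
prompts_per_session = 10              # estimate: ~10 prompts to get the answer you want

session_energy = prompt_energy * prompts_per_session
print(session_energy / search_energy)  # ~100x the energy of a single search
```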

But in the end there are many examples of things that can be done in more environmentally friendly ways. For example, writing a speech for a wedding. There's a lot of that going around now. Another way to write a speech for a wedding is pen and paper, and from the heart 😊

More on carbon costs here:
https://axbom.com/aielements/#carbon-costs
The Elements of AI Ethics

Let's talk about harm caused by humans implementing AI.

Axbom • Digital Compassion
@axbom I appreciate the response! As well as what you've been writing in your blog. Quite an eye-opener re: the cost of querying the model; I thought that almost all of the energy was spent in training. I do see huge potential in LLMs, along with a lot of very concerning visions of the future -- not all of it dark and grim, just bland and meaningless. If these models are trained on the repository of human culture, and the best of that culture requires effort to hone...
@axbom (I love that the reasons for 'doing it the hard way' are both beautiful and more ecologically friendly)
Extreme Heat, Drought Drive Opposition to AI Data Centers

With drought spreading around the globe, battles over water are erupting between AI companies seeking more computing power and communities where their facilities are located.

Bloomberg
@axbom @pauldaoust the water issue is not gonna stop with ending the AI rush, sadly, as it is a problem with computing centers and other high-tech facilities in general (semiconductor fabrication, power generation, etc.). But I guess it's still an important facet of the discussion.
@axbom 7000 languages, but many many many more dialects. Even fairly small languages can have tens of dialects.
@axbom @tante I have a half written blog post about a hammer designed by the guidelines of Nir Eyal’s book Hooked. It is internet connected and it tells you the hammer stats of your contacts and keeps a leaderboard and encourages you to keep a daily hammering streak.
@fasterandworse

Haha, you might enjoy my "review" of Hooked:
https://axbom.com/nir-eyal-habit-danger/

@tante
How Nir Eyal’s habit books are dangerous

Hired as a speaker throughout Silicon Valley and the international tech world, Nir Eyal’s appeal and influence cannot be ignored. He wrote the book that outlines a technique helping companies create products and services that tap into the psychology of habits. The book, Hooked – How to Create Habit-Forming Products,

Axbom
@axbom @tante oh that’s you! I spent last month researching all the references of the first few chapters because of how little crit that damn book has got and how it remains at the top of ux/product design reading lists. I found your article in the process
@fasterandworse

Oh excellent. Make sure to ping me when you're done.

@tante
@axbom @tante it’s a crime that he gets away with writing that book and the follow up. Drug dealer also deals the antidote. Have you listened to the latest If Books Could Kill podcast about Atomic Habits by James Clear?
@fasterandworse

Ah, no. Thanks, added to my playlist.

@tante
@fasterandworse @axbom Oh looking forward to reading it!
@tante @axbom it’s on the back burner while I work on this week’s one. As we speak I’m on stream working on it.

@axbom

You forgot the part where the navy buys it for 600% markup.

@axbom What could possibly go wrong?

https://www.washingtonpost.com/technology/2023/10/22/scale-ai-us-military/

Seriously, I want suggestions about likely problems.

Ones that I see:

-- Killer drone unable to differentiate between dark-skinned people
-- Intel from languages that are not English being ignored
-- Giving more trust to info from people named Jared who played lacrosse

ScaleAI wants to be America’s AI arms dealer

The tech startup says the United States needs Silicon Valley to compete with China. Others fear a deadly arms race.

The Washington Post

@axbom

Thinking on this and on AI generally, I believe a model of AI already exists and has for decades: it's the modern corporate organization.

The modern corporation, like today's AI, is a voracious information & intelligence gatherer, hoarder and exploiter. Its internal knowledge & logic is typically hidden from us. It 'serves' us—but it also subsumes us. It fabricates, it lies, and it does whatever necessary to ensure its own survival regardless of broader consequences.

#AI

@axbom I would argue that “it’s just a tool” isn’t an inaccurate argument per se; it’s just dangerously oversimplified, per the argument laid out in this poster. Like, say, smartphones, the “tool” that is AI comes with many considerations in its use (as does a product like a smartphone in its manufacture) which require a great deal of regulation and ethical consideration in our approach to them.
@brooklynman

Yeah I agree with your point. I’d say my main concern is that the statement itself removes accountability and consideration for the bigger-picture effects. Saying something is just a tool creates the faulty mental model of all tools having interchangeable qualities from an ethical perspective, which simply isn’t true.

Which is just me repeating what you just said. 😂

The word to watch out for is ”just”. If we instead said ”it’s a tool” that would make more sense. The word ”just” is there to shed accountability.
@axbom I agree with you, too. The hand-waving away of any thought or care to act with responsibility is troubling. As with any tool, there can be consequences with its origin and/or use. Where does the lithium or cadmium in your batteries come from, or go once you're done with it? Or that hamburger you just ate? Such questions deserve consideration, for they all have an aggregate impact on our world.
@axbom @weblearning Thank you, that is going on the wall at work!
@axbom @discoursology I never post to LinkedIn. Hate. It. But this poster had to go there. And Per, your argument about 12 dominant languages (and how we seem to be conveniently forgetting the other 7,000 languages and dialects in the world) needs to be heard.

@axbom @alexglow The biggest concern I have is people trusting it.
@axbom it's a tool like an assault rifle is a tool.
@axbom Wondering how that argument works when bringing a hammer to a concert, sports event, museum, courtroom, airplane, protest, ...: "But Sir, this is just a tool!"
@maz 👍 I do like that analogy.
@axbom … and the PDF is a proper vector document 😍
@jwleblan I wouldn't have it any other way 😊

@axbom "It's just a tool" is a statement as old as history. Langdon Winner wrote a famous paper on the subject: "Do artefacts have politics"

https://www.jstor.org/stable/20024652

Sorry for JSTOR; couldn't find on libgen.

Do Artifacts Have Politics? on JSTOR

Langdon Winner, Do Artifacts Have Politics?, Daedalus, Vol. 109, No. 1, Modern Technology: Problem or Opportunity? (Winter, 1980), pp. 121-136

@eternaltyro That was a really good read! Thank you 🙏

Appreciated the nuclear vs solar narrative, which brought a lot of clarity.

I haven’t seen Oppenheimer yet but it feels like a good opportunity today to draw these parallels, as this is currently top-of-mind in more societal contexts. This has given me lots of ideas for new content.

@axbom

If it's NO different from a hammer then we already have hammers, so we don't need AI.

@axbom Something I find interesting is that when discussions about how to build an ethical AI come up, I usually hear suggestions about going back to how we were doing things before this "generative AI" hype cycle...
@axbom duly stolen. Thank you for making this.