A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really no different from a hammer". I was compelled to make a poster to address these claims. Steal it, share it, print it and use it where you see fit.

https://axbom.com/hammer-ai/

#AiEthics #DigitalEthics
If a hammer was like AI…

Computations will “estimate” your aim, tend to miss the nail and push for a different design. Often unnoticeably.

Axbom
Poster: If a hammer was like AI.

Download and read more: https://axbom.com/hammer-ai/

I'll add clarifications regarding some of the topics to this thread. 👇

Regarding Monoculture.
Today, there are nearly 7,000 languages in the world. Only 7% of them are reflected in published online material. 98% of the internet's web pages are published in just 12 languages, and more than half of those pages are in English. Even when sourcing the entire internet, that is still a small part of humanity.

Although 76% of the world's internet users live in Africa, Asia, the Middle East, Latin America and the Caribbean, most of the online content comes from elsewhere. Take Wikipedia, for example, where more than 80% of articles come from Europe and North America.

Now consider what content most AI tools are trained on.

Through the lens of a small subset of human experience and circumstance, it is difficult to envision and foresee the multitude of perspectives and fates that one new creation may influence. The homogeneity of those who have been given the capacity to make and create in the digital space means that it is primarily their mirror-images who benefit, with little thought for the wellbeing of those not visible inside the reflection.
Regarding Power concentration.

When power rests with a few, their own needs and concerns will naturally be top of mind and prioritized. The more their needs are prioritized, the more power they gain. Three million AI engineers amount to roughly 0.04% of the world's population.

The dominant actors in the AI space right now are primarily US-based. And the computing power required to build and maintain many of these tools is huge, ensuring that the power of influence will continue to rest with a few big tech actors.
Regarding Invisible decision-making.

The more complex the algorithms become, the harder they are to understand. As more people get involved, time passes, integrations with other systems are made and documentation falters, they deviate further from human understanding. Many companies will hide proprietary code and evade scrutiny, sometimes themselves losing sight of the full picture of how the code works. Decoding and understanding how decisions are made will be possible for ever fewer people.

And it doesn't stop there. This also affects autonomy. By obscuring decision-making processes (how, when and why decisions are made, what options are available and what personal data is shared), it becomes increasingly difficult for individuals to make properly informed choices in their own best interest.
Regarding Bias and injustice.

One inherent property of AI is its ability to act as an accelerator of other harms. Trained on large amounts of data (often unsupervised) that inevitably contain biases, outdated values and prejudiced commentary, these systems will reproduce them in any output. This will likely happen unnoticeably (especially when not actively monitored), since many biases are subtle and embedded in common language. At other times it will happen quite clearly, with bots spewing toxic, misogynistic and racist content.

Because of systemic issues and the fact that bias becomes embedded in these tools, the consequences will be dire for people who are already disempowered. Automated decision-making tools often rely on scoring systems, and these scores can affect, for example, job opportunities, welfare and housing eligibility, and judicial outcomes.
Regarding Content moderator trauma.

For us to avoid seeing traumatizing content when using many of these tools (such as physical violence, self-harm, child abuse, killings and torture), this content needs to be filtered out. And for it to be filtered out, someone has to watch it. As it stands, the workers who perform this filtering are often exploited and suffer PTSD without adequate care for their wellbeing. Many of them have no idea what they are getting themselves into when they take the job.
Regarding Data and privacy breaches.

There are several ways personal data makes its way into AI tools. First, since the tools are often trained on data available online, in an unsupervised manner, personal data ends up in the workings of the tools themselves. That data may have been inadvertently published online, or published for a specific purpose, rarely one that supports feeding an AI system and its plethora of outputs. Second, personal data is entered into the tools by everyday users through negligent or inconsiderate use, data that at times is also stored and used by the tools. Third, when many data points from many different sources are linked together in one tool, they can reveal details of an individual's life that no single piece of information could.
If you speak French, here is a podcast episode taking you through the different topics in the hammer diagram. My French is unfortunately not good enough to follow along 😅

https://airescommunes.ca/@airescommunes/episodes/doit-on-reglementer-lintelligence-artificielle/activity
Should we regulate artificial intelligence?

What are the ethical issues of artificial intelligence, and do they all need to be regulated? Is artificial intelligence in itself different enough to warrant its own laws? Do we already have the tools needed to regulate AI? I try to answer these questions through nine different angles, beautifully identified in an image of a hammer by ethics consultant Per Axbom: camouflaged data theft, the CO2 cost of production, invisible decisions, disinformation, leaks of confidential data, bias and injustice, oligopolies and the concentration of power, accountability, and the trauma of content moderation. The hammer-and-AI image by Per Axbom: "If a hammer was like AI…", licensed under Creative Commons BY-SA. My blog post on neural networks: "On démonte le robot #1 – les réseaux de neurones".

Castopod

@axbom Regarding Invisible decision-making: a tool is something you can control. It is an illusion to think that AI can be controlled in the future.

A hammer that can decide for itself what to hit is a dangerous thing.

@axbom in addition to the internet being 98% published in 12 languages, the code that makes the internet uses syntax that is 100% English
@fasterandworse

Thank you! I've honestly been bad at recognising this and bringing attention to it. So much taken for granted all the time.
@axbom it’s really interesting for the web when you consider how much the difficulty of making accessible web pages is compounded by English element names and attributes that are complex enough as they are for a native speaker

The web is just a small portion of the internet, but your point still stands.

Also worth considering that the web was created by and for Anglophone academia - so a tiny fraction of even the English-speaking world.

HTML’s structure is really predicated on being able to write an academic paper in English. So it uses conventions - headings, tables, ordered lists, figures - from that milieu.

Many of these have no equivalents in other languages or cultures.

@fasterandworse @axbom

(None of this should be read as a pop at Berners-Lee, CERN or the web in general. It is a mad and beautiful thing. But it is what it is, and there are reasons for that. )

@fasterandworse @axbom

@iamdavidobrien @axbom yep, I said the internet in my previous response. But the web is particularly interesting in how easy it is to make *something* but complex to make something semantically correct and accessible, even if English is your language.

@fasterandworse @axbom I often think that if AI's mistakes disadvantaged Andreessen, Musk, Thiel, etc., AI would have been strangled at birth. But, by being a tech sycophant, it survived.

The bias is not a bug, it is the point

@axbom thanks for these clarifications. Could you elaborate on the power consumption angle too, and what other options achieve similar results for 1% of the power?
@pauldaoust

To be fair that is more of an intentionally provocative number, as OpenAI won't disclose energy use. There are estimates claiming that one ChatGPT query costs (in money) 100x more than a Google search, or consumes 10x as much energy. And one ChatGPT session, to get the response you want, can be estimated at 10 prompts.

But in the end there are many examples of things that can be done in more environmentally friendly ways. For example, writing a speech for a wedding. There's a lot of that going around now. Another way to write a speech for a wedding is pen and paper, and from the heart 😊

More on carbon costs here:
https://axbom.com/aielements/#carbon-costs
The Elements of AI Ethics

Let's talk about harm caused by humans implementing AI.

Axbom • Digital Compassion
@axbom I appreciate the response! As well as what you've been writing in your blog. Quite an eye-opener re: the cost of querying the model; I thought that almost all of the energy was spent in training. I do see huge potential in LLMs, along with a lot of very concerning visions of the future -- not all of it dark and grim, just bland and meaningless. If these models are trained on the repository of human culture, and the best of that culture requires effort to hone...
@axbom (I love that the reasons for 'doing it the hard way' are both beautiful and more ecologically friendly)
Extreme Heat, Drought Drive Opposition to AI Data Centers

With drought spreading around the globe, battles over water are erupting between AI companies seeking more computing power and communities where their facilities are located.

Bloomberg
@axbom @pauldaoust the water issue is not gonna stop with the end of the AI rush, sadly, as it is a problem with computing centers and other high-tech facilities in general (semiconductor fabrication, power generation, etc). But I guess it's still an important facet of the discussion.
@axbom 7000 languages, but many many many more dialects. Even fairly small languages can have tens of dialects.