https://axbom.com/hammer-ai/
#AiEthics #DigitalEthics
What are the ethical issues raised by artificial intelligence, and do they all need to be regulated? Is artificial intelligence in itself different enough to warrant its own laws? Do we already have the tools needed to regulate AI? I try to answer these questions through nine different angles, beautifully identified in an image of a hammer by ethics consultant Per Axbom:

- Camouflaged data theft
- The CO2 cost of production
- Invisible decision-making
- Disinformation
- Leaks of confidential data
- Bias and injustice
- Oligopolies and the concentration of power
- Accountability
- The trauma of content moderation

Per Axbom's image of the hammer and AI: If a hammer was like AI… (licensed under Creative Commons BY-SA)
My blog post on neural networks: On démonte le robot #1 – les réseaux de neurones
@axbom Regarding Invisible decision-making: a tool is something you can control. It is an illusion to think that AI can be controlled in the future.
A hammer that can decide for itself what to hit is a dangerous thing.
The web is just a small portion of the internet, but your point still stands.
Also worth considering that the web was created by and for Anglophone academia - so a tiny fraction of even the English-speaking world.
HTML’s structure is really predicated on being able to write an academic paper in English. So it uses conventions - headings, tables, ordered lists, figures - from that milieu.
Many of these have no equivalents in other languages or cultures.
(None of this should be read as a pop at Berners-Lee, CERN or the web in general. It is a mad and beautiful thing. But it is what it is, and there are reasons for that.)
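The academic-paper lineage described above is visible in HTML's own element set. A minimal, purely illustrative document (file names and text are invented for the example) leaning on those conventions might look like:

```html
<!-- Headings, ordered lists, tables, and captioned figures: the
     building blocks of an English-language academic paper, baked
     into HTML's vocabulary. -->
<article>
  <h1>A Paper-Shaped Document</h1>
  <h2>1. Introduction</h2>
  <ol>
    <li>First point</li>
    <li>Second point</li>
  </ol>
  <table>
    <tr><th>Term</th><th>Definition</th></tr>
    <tr><td>Figure</td><td>An illustration with a caption</td></tr>
  </table>
  <figure>
    <img src="hammer.png" alt="A hammer">
    <figcaption>Figure 1: If a hammer was like AI…</figcaption>
  </figure>
</article>
```

Each of these elements maps directly onto a part of a conventional academic paper, which is the point being made: the vocabulary itself encodes a particular scholarly milieu.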
@fasterandworse @axbom I often think that if AI's mistakes disadvantaged Andreessen, Musk, Thiel, etc., AI would have been strangled at birth. But by being a tech sycophant, it survived.
The bias is not a bug, it is the point
Hired as a speaker throughout Silicon Valley and the international tech world, Nir Eyal's appeal and influence cannot be ignored. His book, Hooked – How to Create Habit-Forming Products, outlines a technique helping companies create products and services that tap into the psychology of habits.
You forgot the part where the navy buys it for 600% markup.
@axbom What could possibly go wrong?
https://www.washingtonpost.com/technology/2023/10/22/scale-ai-us-military/
Seriously, I want suggestions about likely problems.
Ones that I see:
-- A killer drone unable to differentiate between dark-skinned people
-- Intel in languages other than English being ignored
-- Giving more trust to info from people named Jared who played lacrosse
Thinking on this and on AI generally, I believe a model of AI already exists and has for decades: it's the modern corporate organization.
The modern corporation, like today's AI, is a voracious information and intelligence gatherer, hoarder and exploiter. Its internal knowledge and logic are typically hidden from us. It 'serves' us—but it also subsumes us. It fabricates, it lies, and it does whatever is necessary to ensure its own survival, regardless of broader consequences.
A common argument I come across when talking about ethics in AI is that it's just a tool, and like any tool it can be used for good or for evil. One familiar declaration is this one: "It's really ...
@axbom "It's just a tool" is a statement as old as history. Langdon Winner wrote a famous paper on the subject: "Do artefacts have politics"
https://www.jstor.org/stable/20024652
Sorry for JSTOR; couldn't find on libgen.
If it's NO different from a hammer then we already have hammers, so we don't need AI.