How people think AI is going to kill them: terminator robots.
How AI is actually going to kill them: by destroying their habitat and drinking all their water.
*driven by millions of users who don't care about artists' rights.
@aral the logic of capital 😮💨
Attached: 1 image In a society centered on people, automation eliminates drudgery; it doesn't threaten your livelihood. Labor-saving devices facilitate SOCIAL reproduction. Under capitalism, automation is JUST about CAPITAL reproduction. We're reopening 3-Mile Island to power AI. We're emptying rivers to cool AI. We're flattening mountains & ripping scars across the earth to manufacture AI. We're boiling the seas, making people work faster and faster for less and less. For what? More paperclips?
@aral the greatest trick AI ever pulled was creating the cultural perception that it would be the "enslaving machines" that kill us, instead of the indifferent wills of the money-men who built the machines…
And of course, now they keep recycling the same fears with classism and racism.
And so many sci-fi writers have been blatantly uninformed and ignorant of these issues throughout.
And "assisting" internet searches, so we know we should eat:
- petrol in our spaghetti sauce
- glue in our pizza cheese
- rocks
- those mushrooms that melt down your liver
@billbennett Like people who trust AI for medical or electrical repair advice?
I won't be surprised if it has already caused deaths…
@aral And by inducing people to hate and murder each other by feeding them misinformation as "news" or in search results. Can we rename AI as AS: artificial stupidity?
Edit: someone has very helpfully pointed out to me that AS is an acronym for Asperger's syndrome, and I wish to clarify that this did not occur to me at the time I wrote this, and I humbly apologise to anyone who might be affected by what was meant to be a tongue-in-cheek reference to the "intelligence" of LLMs.
Don't count out the killer robots. They are coming along nicely. Just because they will be deployed by humans and not by SkyNet and won't be self-replicating doesn't make them not a threat.
All those "human in the loop" systems are being developed with the knowledge that it would be more profitable to take the human out of the loop at some point. Palantir drools about it.
And sometimes it's hidden. I was looking up details on making a tincture recently and an otherwise reasonable looking article said to "use a high ABV alcohol such as isopropyl"
Your use of AI is directly harming the environment I live in: https://www.baldurbjarnason.com/2024/your-use-of-ai-harms-the-environment/
@aral and they still won't get *general* AI out of it. Just hallucinating piece-of-shit LLMs that can only churn out spam and incomprehensible text…
At the expense of what little climate stability remains
@aral
Or by advising them to eat poisonous mushrooms 🍄 🤔 😋 ☠️
https://www.vox.com/24141648/ai-ebook-grift-mushroom-foraging-mycological-society
@aral That’s the 20-40 year plan.
For the average person, they’re more likely to have their life threatened by an AI:
- Rejecting their medical needs - organ recipient, insurance (US)
- Deciding that cost cutting measures on the product factory floor are worth the risk to safety relative to likely blowback
- Rejecting their job application
- Devaluing the only work they are able to do, being disabled
- Stealing time in productivity’s name from others who might have seen your pain
@aral There are also the people who'll die to it, not by anything bombastic, but because it decided to cut some bureaucratic thread that unravels their life entirely.
Making a massive misinformation generator means it'll misinform in small and big ways, some very noticeable but others will be subtle enough to cause some real damage.
@aral
Remember when we thought search engines would use up all the electricity and water?
Then it was social media.
It’s not that the technology is unproblematic — glorified predictive text of dubious origin being wildly and widely misused to support a fantasist tech bubble — but there is a pattern here.
@aral Or how moral panics get recycled once they've been proven to be distractions.
OpenAI’s Superalignment team was pushing AI as existential threat when distraction was needed from IP… complications…, labour abuses, and bias.
Now that grounding is creating new issues, like impersonation risk and exploit vulnerabilities, suddenly we're supposed to be looking over at the water instead?
It’s too convenient and too recurring.
@justanotheramy true. But it doesn’t have to be infinite. It just has to accelerate current brittleness to the point of fracture. That we’re actually choosing that is pretty bonkers
@aral A thousand years from now aliens visit Earth and sift through the remnants of humanity...
"Our archaeologists have discovered that apparently the downfall of their civilisation started with something they referred to as 'Clippy'."
@aral Exactly! I had a discussion on AI with a colleague, and when I said I see an overall danger in AI, without being specific, he just threw in the argument "yeah, killer robots are terrible, but we can regulate them; see, AI is just a tool like any other"…
It's like you say. People have been effectively gaslit into believing that THAT is the real danger.
@aral or by blocking all their attempts to get help, as it's already being used by department of social services to process SNAP paperwork, social security paperwork, new patient paperwork for many medical clinics & some banks.
*If you can't use any money & can't get any medical care, then you're not going to survive very long in our society.