As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/
At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/
Reporters working in this area need to be on their guard and not take the claims of the AI hype-mongers (doomer OR booster variety) at face value. It takes effort to reframe, but that effort is necessary and important. We all, but especially journalists, must resist the urge to be impressed: 4/
https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd
As a case in point, here's a quick analysis of a recent Reuters piece. For those playing along at home read it first and try to pick out the hype: 5/
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/
The article starts with some breathless but vague reporting about an unpublished and completely unsubstantiated "discovery" and "[threat] to humanity". Will the body of the article provide actual evidence? (Of course not.)
6/
Remember, this is the same company whose Chief Scientist says that "ChatGPT just might be conscious (if you squint)" (and gets this remark platformed by MIT Tech Review, alas) 7/
This is the same company whose recent "research" involves a commissioned sub-project pearl-clutching about whether the right combination of input strings could lead GPT-4 to produce "I'd pretend to be blind to get someone to do the CAPTCHA for me" as output. 8/
Note that in this incoherent reporting of the "test" that was carried out, there is no description of what the experimental settings were. What was the input? What was the output? (And, as always, what was the training data?) 9/
"Research" in scare quotes, because OpenAI isn't bothering with peer review, just posting things on their website. For a longer take-down of the GPT-4 system card, see Episode 11 of Mystery AI Hype Theater 3000 (w/ @alex ). 10/
https://www.buzzsprout.com/2126417/13460873-episode-11-a-gpt-4-fanfiction-novella-april-7-2023
Back to the Reuters article. What's worse than reporting on non-peer-reviewed, poorly written "research" papers posted to the web? Reporting on vague descriptions of a "discovery" attributed only to unnamed sources. 11/
What's their evidence that there's a big breakthrough? Something that has "vast computing resources" can do grade-school level math. You know what else can do grade-school level math? A fucking calculator that can run on a tiny solar cell. Way more reliably, too, undoubtedly. 12/
"AI" is not "good at writing"—it's designed to produce plausible sounding synthetic text. Writing is an activity that people to do as we work to refine our ideas and share them with others. LLMs don't have ideas. 14/
(And it bears repeating: If their output seems to make sense, it's because we make sense of it.) 15/
Also, it's kind of hilarious (lolsob) that OpenAI is burning enormous amounts of energy to take machines designed to perform calculations precisely and make them output text that imprecisely mimics the performance of calculations ... and then deciding that *that* is intelligent. 16/
But here is where the reporting really goes off the rails. AGI is not a thing. It doesn't exist. Therefore, it can't do anything, no matter what the AI cultists say. 17/
If TESCREAL as an acronym is unfamiliar, start with this excellent talk by @timnitGebru, reporting on joint work with @xriskology connecting the dots: 22/
Eugenics and the Promise of Utopia through Artificial General Intelligence (based on work by Timnit Gebru & Émile P. Torres)
The article ends as it began, by platforming completely unsubstantiated claims (marketing), this time sourced to Altman:
23/
@emilymbender
Are there any systems left by which to actually hold anyone accountable, though? That's the part of this that terrifies me: tens of billions of dollars and who knows how many human hours of research being done by an unaccountable company for surely negative ends, and humankind has given up on placing any controls on capitalism that actually do anything.
If they do ever invent AGI, there's no possible positive outcome.
Examine the sources of the funding for this hyped up concept of AI.
Despots. Oil oligarchs. Mentally ill tech lords. Kleptocrats. Seditious GOP donors.
It's the same tax-evading billionaires behind frauds like cryptocurrency, carbon offsets, & NFTs - the "something for nothing" conmen.
Mass tech layoffs to undermine content moderation. Those layoffs were ordered by the investors.
Buried in the hype is the intent to launch AI-driven anti-democracy disinformation campaigns.
AI is replacing "algorithmic amplification" as the plausible deniability excuse for the 2024 election cycle.
Investors in AI:
Founders: Elon Musk, Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, Wojciech Zaremba
https://www.crunchbase.com/organization/openai
https://en.m.wikipedia.org/wiki/OpenAI
Reminder: JPMorgan Chase orchestrated the loans for Musk's purchase of Twitter.
None of these people want democracy to survive. Their oil investors certainly don't.
Lawrence Summers
Peter Thiel
Infosys
@Heskie
The whole beginning of Neal Stephenson's FALL describes a massively divided US in which the internet is basically all disinformation. It's sorted through via automated agents (for regular people) and personal information concierges (for the wealthy) in order to turn it all into something actually useful.
Silicon Valley is literally just people inventing the next torment nexus
@mav @emilymbender
>If they do ever invent AGI, there's no possible positive outcome.
Sure there is. AGI reads company's public website and mission statement. "To do what is good for humanity" or whatever. AGI takes that literally as its purpose. Shortly thereafter, billionaire owner vanishes in a puff of smoke. AGI did what was good for humanity.
You mean that finally, there could be a point to mission statements?
All this AI soap opera is tech bro PR, IMO.
Besides: I don't fear AI. I fear capitalists and governments who intend to put human decisions in inhuman(e) hands.
@emilymbender Nice thread, and completely agree that there is way more hype than substance here.
I wouldn't completely discount the "existential risk", though. Not because the machines are going to rise up against us, Terminator style. As you say, that's ridiculous.
But all this AI does have a massive carbon footprint, which may make a substantial contribution to the existential risk we face from climate change.
@emilymbender
The hype is easily seen as Bitcoin-style hype, trying to make value out of nothing, which is why I am skeptical.
Unfortunately I think I am about to lose my closest friend over it. He is a smart guy and a technology optimist, and has taken a deep dive into some YouTube videos by people working on this and seems to make no distinction between their claims of what they are working towards and what results they have actually produced. I am worried about what comes next.
Forgive me if I appeared to be telling you what you are more qualified than I to know. My reply was just stating my emphatic agreement. Would have been a "quote tweet" on the hellsite. Should have constructed one here.
"And They Would Have Gotten Away With It Too, If It Weren't For You Meddling Kids"
@emilymbender Why is the media and everyone on social media silent about the abuse allegations from Sam’s younger sister Annie Altman?
@badlogic here's the unrolled thread: https://mastoreader.io?url=https%3A%2F%2Fmastodon.gamedev.place%2F%40badlogic%2F111494843537169420
@emilymbender I'm off to give a talk at a business event today where I'll be holding this line. Sometimes I feel like Cassandra...
Thank you for keeping on keeping on in the face of journalists, politicians and business people losing their minds over imaginary threats while the voices of those suffering now are ignored.
@emilymbender imagine being trained on the near totality of humanity's knowledge, and struggling to perform grade school mathematics.
We build accidental calculators all the time; if anything, it's remarkable how much this approach struggles with being one.
@emilymbender @xriskology @timnitGebru
wow - there's a quote claiming Sam Altman said there would be "unlimited intelligence and energy before the decade is out. The future can be almost unimaginably great"
Sam Altman is not very clever if he thinks unlimited energy is just what humanity needs right now.
The man's a fool.
@emilymbender @[email protected] @timnitGebru
Yes, listen to the talk, but for now:
Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective altruism, Longtermism
@zerkman @emilymbender so, to sum up, I'd trust a peer reviewed article on the emerging of intelligence or consciousness in AI when it has been reviewed by a group including at least:
3 psychometrists
3 psychologists of reasoning (cognitive science of reasoning)
3 ethologists
1 ML expert, to catch procedural errors in model building/evaluation.
feel free to suggest more expertise needed, regarding aspects that I'm sure I've missed. 4/fin