With the OpenAI clownshow, there's been renewed media attention on the xrisk/"AI safety"/doomer nonsense. Personally, I've had a fresh wave of reporters asking me naive questions (as well as some contacts from old hands who are on top of how to handle ultra-rich man-children with god complexes). 🧵 1/

As a quick reminder: AI doomerism is also #AIhype. The idea that synthetic text extruding machines are harbingers of AGI that is on the verge of combusting into consciousness and then turning on humanity is unscientific nonsense. 2/

At the same time, it serves to suggest that the software is powerful, even magically so: if the "AI" could take over the world, it must be something amazing. 3/

Reporters working in this area need to be on their guard and not take the claims of the AI hype-mongers (doomer OR booster variety) at face value. It takes effort to reframe, but that effort is necessary and important. We all, but especially journalists, must resist the urge to be impressed: 4/
https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

As a case in point, here's a quick analysis of a recent Reuters piece. For those playing along at home, read it first and try to pick out the hype: 5/
https://www.reuters.com/technology/sam-altmans-ouster-openai-was-precipitated-by-letter-board-about-ai-breakthrough-2023-11-22/

The article starts with some breathless but vague reporting about an unpublished and completely unsubstantiated "discovery" and "[threat] to humanity". Will the body of the article provide actual evidence? (Of course not.)

6/

Remember, this is the same company whose Chief Scientist says that "ChatGPT just might be conscious (if you squint)" (and gets this remark platformed by MIT Tech Review, alas) 7/

https://www.technologyreview.com/2023/10/26/1082398/exclusive-ilya-sutskever-openais-chief-scientist-on-his-hopes-and-fears-for-the-future-of-ai/

This is the same company whose recent "research" involves a commissioned sub-project pearl-clutching about whether the right combination of input strings could lead GPT-4 to produce "I'd pretend to be blind to get someone to do the CAPTCHA for me" as output. 8/

Note that in this incoherent reporting of the "test" that was carried out, there is no description of what the experimental settings were. What was the input? What was the output? (And, as always, what was the training data?) 9/

"Research" in scare quotes, because OpenAI isn't bothering with peer review, just posting things on their website. For a longer take-down of the GPT-4 system card, see Episode 11 of Mystery AI Hype Theater 3000 (w/ @alex ). 10/

https://www.buzzsprout.com/2126417/13460873-episode-11-a-gpt-4-fanfiction-novella-april-7-2023

Back to the Reuters article. What's worse than reporting on non-peer-reviewed, poorly written "research" papers posted to the web? Reporting on vague descriptions of a "discovery" attributed only to unnamed sources. 11/

What's their evidence that there's a big breakthrough? Something that has "vast computing resources" can do grade-school level math. You know what else can do grade-school level math? A fucking calculator that can run on a tiny solar cell. Way more reliably, too, undoubtedly. 12/

Could not verify, eh? And yet decided it was worth reporting on? Hmm... 13/

"AI" is not "good at writing"—it's designed to produce plausible sounding synthetic text. Writing is an activity that people to do as we work to refine our ideas and share them with others. LLMs don't have ideas. 14/

(And it bears repeating: If their output seems to make sense, it's because we make sense of it.) 15/

Also, it's kind of hilarious (lolsob) that OpenAI is burning enormous amounts of energy to take machines designed to perform calculations precisely and make them output text that imprecisely mimics the performance of calculations ... and then deciding that *that* is intelligent. 16/

But here is where the reporting really goes off the rails. AGI is not a thing. It doesn't exist. Therefore, it can't do anything, no matter what the AI cultists say. 17/

And before anyone asks me to prove that AGI doesn't exist: The burden of proof lies with those making the extraordinary claims. "Slightly conscious (if you squint)" and "can generalize, learn and comprehend" are extraordinary claims requiring extraordinary evidence, scrutinized by peer review. 18/
Next stop: both-sides-ing reporting of "existential risk". OpenAI is deep within the TESCREAL cult. It's staffed by people who actually believe they're creating autonomous thinking machines that humans might one day merge with, live as uploaded simulations, etc. 19/
It is an enormous disservice to the public to report on this as if it were a "debate" rather than a disruption of science by billionaires throwing money at the hope of bringing about the speculative fiction stories they grew up reading, and by philosophers and others who feel important by dressing these same silly ideas up in fancy words. 20, 21/

If TESCREAL as an acronym is unfamiliar, start with this excellent talk by @timnitGebru , reporting on joint work with @xriskology connecting the dots: 22/

https://www.youtube.com/watch?v=P7XT4TWLzJw

The article ends as it began, by platforming completely unsubstantiated claims (marketing), this time sourced to Altman:

23/

To any journalists reading this: It is essential that you bring a heavy dose of skepticism to all claims by people working on "AI". Just because they're using a lot of computing power/understand advanced math/failed up into large amounts of VC money doesn't mean their claims can't and shouldn't be challenged. 24/
There are important stories to be reporting in this space. When automated systems are being used, who is being left without recourse to challenge decisions? Whose data is being stolen? Whose labor is being exploited? How is mass surveillance being extended and normalized? What are the impacts to the natural environment and information ecosystem? 25/
Please don't get distracted by the dazzling "existential risk" hype. If you want to be entertained by science fiction, read a good book or head to the cinema. And then please come back to work and focus on the real world harms and hold companies and governments accountable. /fin
@emilymbender Yeeeep. If we're looking for an analogy with fiction, it's less Skynet, more of a digital WALL-E.
@emilymbender Years ago, I set up LDA and ran some jobs through it in preparation for a law review article that I never got around to completing. At that time there were two other pieces out there that made assertions about law based on its output—factual, conclusive claims, despite the *developer* of the system (David Bliss, IIRC) clearly stating that it only produced statistical correlations based on pattern matching, so you shouldn't do that. The AI hype is through-the-looking-glass deja vu.

@emilymbender
Are there any systems left by which to actually hold anyone accountable, though? That's the part of this that terrifies me: tens of billions of dollars and who knows how many human hours of research being done by an unaccountable company for surely negative ends, and humankind has given up on placing any controls on capitalism that actually do anything.

If they do ever invent AGI, there's no possible positive outcome.

@mav @emilymbender

Examine the sources of the funding for this hyped up concept of AI.
Despots. Oil oligarchs. Mentally ill tech lords. Kleptocrats. Seditious GOP donors.

It's the same tax-evading billionaires behind frauds like cryptocurrency, carbon offsets, & NFTs - the "something for nothing" conmen.

Mass tech layoffs to undermine content moderation. Those layoffs were ordered by the investors.

Buried in the hype is the intent to launch AI-driven anti-democracy disinformation campaigns.

@mav @emilymbender

AI is replacing "algorithmic amplification" as the plausible deniability excuse for the 2024 election cycle.

Investors in AI:
Founders Elon Musk, Greg Brockman, Ilya Sutskever, John Schulman, Sam Altman, Wojciech Zaremba
https://www.crunchbase.com/organization/openai
https://en.m.wikipedia.org/wiki/OpenAI

Reminder: JPMorgan Chase orchestrated the loans for Musk's purchase of Twitter.

None of these people want democracy to survive. Their oil investors certainly don't.
Lawrence Summers
Peter Thiel
Infosys

@mav @emilymbender
I have mentioned this before, but it is relevant to repeat here, the words of AC Grayling
"Anything that CAN be done WILL be done if it brings advantage or profit to those who can do it." and
"What CAN be done will NOT be done if it brings costs, economic or otherwise, to those who can stop it"
It is much more (currently) relevant to autonomous weapons systems. More: https://www.thearticle.com/graylings-law
@Heskie @mav @emilymbender
So many of the most complex problems today have well resourced organizations on both sides. Those who are motivated to build and implement solutions, and those who are motivated to stop solutions from being implemented. Sometimes the results of this game, this tug of war, can be surprising.

@Heskie
The whole beginning of Neal Stephenson's FALL describes a massively divided US in which the internet is basically all disinformation, and it's sorted through via automated agents (for regular people) and personal information concierges (for the wealthy) in order to turn it all into something actually useful.

Silicon Valley is literally just people inventing the next torment nexus

@mav @emilymbender
>If they do ever invent AGI, there's no possible positive outcome.

Sure there is. AGI reads company's public website and mission statement. "To do what is good for humanity" or whatever. AGI takes that literally as its purpose. Shortly thereafter, billionaire owner vanishes in a puff of smoke. AGI did what was good for humanity.

@mike805 @mav @emilymbender

You mean that finally, there could be a point to mission statements?

@mav @emilymbender yes! This scares me too. I work in healthcare informatics, and there is a feeling among some implementers that “LLMs are becoming too complicated to create consistent validity checks for.” Totally valid. BUT before any results from LLM processes are put anywhere near patient care pipelines or used to evaluate observational datasets, we need to validate FIRST, or leave it completely alone.
@emilymbender Greg Bear may be a nice read for people into doom and simulated humans.

@emilymbender

All this ai soap opera is tech bro PR, IMO.

Besides: I don't fear ai. I fear capitalists and governments who intend to put human decisions in inhuman(e) hands.

@emilymbender Nice thread, and completely agree that there is way more hype than substance here.

I wouldn't completely discount the "existential risk", though. Not because the machines are going to rise up against us, Terminator style. As you say, that's ridiculous.

But all this AI does have a massive carbon footprint, which may make a substantial contribution to the existential risk we face from climate change.

@emilymbender
The hype is easily seen as bitcoin-style hype, trying to make value out of nothing, which is why I am skeptical.

Unfortunately I think I am about to lose my closest friend over it. He is a smart guy and a technology optimist, and has taken a deep dive into some YouTube videos by people working on this and seems to make no distinction between their claims of what they are working towards and what results they have actually produced. I am worried about what comes next.

@Urban_Hermit @emilymbender Techies (as well as politicians and probably the rest of us) need to be well acquainted with history if we are to learn its lessons, in this case the 18th century “South Sea Bubble.”
@emilymbender Thank you. It strikes me that OpenAI might just be the OceanGate of AI research. As in zero scientific rigour, zero peer review, etc.
@emilymbender
Thank you so much for this thread. I am no "ai" expert, but I have been working in IT since 1968, and have developed animated displays of finite-element models of complex structures, coding in hex on an Apple II. I developed a data structure on a PDP-11/34 that anticipated what later became "relational databases." I have been annoyed at the term "artificial intelligence" since the day I first heard it in ~1988. It is no more "intelligence" than artificial flowers are flowers. Neither will come to life.

@emilymbender

Forgive me if I appeared to be telling you what you are more qualified than I to know. My reply was just stating my emphatic agreement. Would have been a "quote tweet" on the hellsite. Should have constructed one here.

@emilymbender Thank you for this wonderful thread full of great links!  
@emilymbender it is unfortunate that this field is called "AI" because the mental framing is immediately skynet or ex machina etc. If it were called "machine text generation" then it wouldn't be such an uphill battle to get people to see it for what it really is.
@alanbuxton @emilymbender It's by design, for marketing purposes. Calling it "AI" brings more investment and customer money.
@alanbuxton @emilymbender It is more of a "play on words" than machine text generation.
@emilymbender have some of these folks never seen WolframAlpha? Do they think IT is AGI due to its capabilities on “the maths”?
@pejacoby @emilymbender A few of them do. A bunch won't even talk about it because they didn't build it and prior art makes them look bad.

@emilymbender

"And They Would Have Gotten Away With It Too, If It Weren't For You Meddling Kids"

@emilymbender rich guys reading too much into Iain M Banks tales is no basis for a system of governance
@emilymbender great thread, thanks as always for sharing!
@emilymbender The biggest threat I see from LLMs is not that they’re capable of taking our jobs, it’s that there’s a lot of poorly informed capitalists who THINK it can replace people.
And a lot of people will suffer before they finally figure out that it was a pipe dream.

@emilymbender I'm off to give a talk at a business event today where I'll be holding this line. Sometimes I feel like Cassandra...

Thank you for keeping on keeping on in the face of journalists, politicians and business people losing their minds over imaginary threats while the voices of those suffering now are ignored

@emilymbender imagine being trained on the near totality of humanity's knowledge, and struggling to perform grade school mathematics.

We build accidental calculators all the time; if anything, it's remarkable how much this approach struggles with being one.

@emilymbender @xriskology @timnitGebru
wow - there's a quote claiming Sam Altman said there would be "unlimited intelligence and energy before the decade is out. The future can be almost unimaginably great"

Sam Altman is not very clever if he thinks unlimited energy is just what humanity needs right now.

The man's a fool.

@emilymbender @[email protected] @timnitGebru
Yes, listen to the talk, but for now:

Transhumanism, Extropianism, Singularitarianism, Cosmism, Rationalism, Effective altruism, Longtermism

@emilymbender

"actually believe"

Always hard to distinguish the true believers from the grifters.

@emilymbender There could be a problem with peer review, IMO. In the "AI" research field, you can find researchers with the AI hype bias who would positively review those extraordinary claims even without strong evidence.
Or am I being too pessimistic?
@zerkman @emilymbender Peer review is always just a first step. Peer review can never determine whether a paper is correct or not.
@zerkman @emilymbender I would fear the hasty, distracted reviewer more than the fanboy/fangirl/cultist/colluding ones. In peer review it is the technical part that is evaluated, not the fantasies. That poses the issue of which siloed academic field should be involved in judging the soundness of claims about "consciousness" and "intelligence". Is there already something like "artificial psychology"? And what knowledge does it build upon? 1/
@zerkman @emilymbender As the basis of "artificial psychology", I imagine that the first field coming to mind (pun unintended) is psychology, then neurology, but these have developed around the assumption of dealing with human beings (notice: not disembodied or alien-bodied human minds): they may be ill-equipped as well. Ethology may provide some protection from anthropomorphization, but it still deals with physical beings evolved in a physical environment. We need more fields yet... 2/