Here is your must-read article for the day, a profile of @emilymbender, and her efforts to deflate the ridiculous hype around large language models such as ChatGPT.

It's also about the people who are behind that hype, and about what their way of thinking has the potential to do to us.

It's worth reading all the way to the end.

https://nymag.com/intelligencer/article/ai-artificial-intelligence-chatbots-emily-m-bender.html

@ct_bergstrom @emilymbender

I find it weirdly striking how much in this debate just seems like an "affirming the consequent" fallacy - in what world is "the brain is a computer => a computer is a brain" sound reasoning?!

Similarly, granted that a bunch of human learning is associative - that doesn't mean all of it is, or that anything that learns associatively would be human

Thank you Carl! Great way to start the day. @ct_bergstrom @emilymbender
@ct_bergstrom @emilymbender An excellent piece. Thanks for sharing.
@tomdewar @ct_bergstrom @emilymbender just what the world needs right now: Believable BullshitEngines.
@ct_bergstrom @emilymbender Oh my, I'm reading this and going "exactly" with index finger pointed at the screen multiple times.

@ct_bergstrom @emilymbender

So frustrating how conversation about #LLM and #ChatGPT completely misses the most important point.

We're so easily distracted by #SciFi questions like whether it's a mind, is it sentient, will it destroy humanity.

(No, no, and maybe.)

The most important point is that we (rightly) fear AI run amok because we (rightly) fear #capitalism run amok.

Yes, it hallucinates bogus facts.

Yes, it can be tricked into acting like a creepy stalker.

No, it doesn't simply predict the statistically most likely next word in a sequence.

It actually has an internal model of concepts and relationships, and can draw meaningful and truthful insights that are useful to humans.

In many cases, it's more useful than search engines or Wikipedia.

It's a powerful tool, and like all powerful tools, it will be used for good and for evil.

And you can bet that the evil uses are currently being accelerated and amplified by billionaires and profit-seeking corporations exploiting workers and customers and subverting democracy to enrich themselves.

Same as it ever was.

That's why the real point is that we need non-profit public-benefit organizations to drive safe and positive and trusted uses of this technology.

The Internet Archive, Wikipedia and craigslist are good examples.

The genie is out of the bottle. We can't wish this technology away.

Let's ensure it gets used for good.

Let's ensure humanity has a fighting chance against capitalism amplified by intelligent machines.

@ares @ct_bergstrom @emilymbender But this is a technology that requires hundreds of millions of dollars to implement. The field is dominated by the largest monopolies on the planet. AI is an expression of pure hypercapitalism, it's impossible to divorce it from that matrix.

Our future: artificial humans owned and operated by mega-corporations, slowly eating away at our conception of humanity.

@zenkat @ct_bergstrom @emilymbender

yes, it's bleak

it's hard to imagine how we can prevent dangerous practices by capitalists who can operate from any jurisdiction

but at least we can build a trusted alternative that won't manipulate us with disinformation to extract profits for billionaires

wikipedia also costs more than a hundred million a year to operate

i'm incredibly thankful that wikipedia and mastodon and open-source software exist in a world controlled by capital

safe and trusted non-profit alternatives to capitalist #ai should also exist, or we're truly fucked

@ares @ct_bergstrom @emilymbender
Too many in what I'll here call the computer-related sciences are evidently overgrown brats, stunted in empathic development; having learned in childhood that most people don't share their gift for rapid calculation, they assume for the rest of their lives that most people are inferior creatures whom they have every right to manipulate & control.
@ares @ct_bergstrom @emilymbender
To feel better, I'll fall back on my clown instincts and riff on one trivial detail near the beginning of the article:
Any woman, married to a man for over 20 years, who still has any "fucks left to give" would be a woman of Olympian stature, empathy-wise.

@ares @ct_bergstrom @emilymbender > It actually has an internal model of concepts and relationships, and can draw meaningful and truthful insights that are useful to humans.

The nature of that model also matters. It wouldn't be making trivial arithmetic mistakes if that model included the semantics of its information.

@ares @ct_bergstrom @emilymbender The annoying thing is that OpenAI was originally a public benefit nonprofit but got captured by investors.

@ares
Do you have a reference on:

> No, it doesn't simply predict the statistically most likely next word in a sequence.
>
> It actually has an internal model of concepts and relationships, and can draw meaningful and truthful insights that are useful to humans.

From reading the recent Stephen Wolfram article, my understanding was that it predicts a few statistically likely words and somewhat randomly picks one.
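For anyone curious, that sampling step can be sketched in a few lines of Python. This is a toy illustration with made-up scores and a hypothetical `sample_next_token` helper, not how any real model is implemented:

```python
import math
import random

def sample_next_token(logits, k=3, temperature=0.8, rng=None):
    """Pick one of the top-k most likely tokens, weighted by
    temperature-scaled softmax probabilities (a toy sketch of the
    'predict a few likely words and somewhat randomly pick one' idea)."""
    rng = rng or random.Random(0)
    # keep only the k highest-scoring candidates
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # softmax with temperature: lower temperature -> more deterministic
    exps = [math.exp(score / temperature) for _, score in top]
    total = sum(exps)
    probs = [e / total for e in exps]
    # weighted random choice among the surviving candidates
    r, acc = rng.random(), 0.0
    for (token, _), p in zip(top, probs):
        acc += p
        if r <= acc:
            return token
    return top[-1][0]

# made-up scores for the word after "The cat sat on the"
logits = {"mat": 3.1, "sofa": 2.7, "roof": 2.2, "moon": 0.4}
print(sample_next_token(logits))  # one of "mat", "sofa", "roof"
```

Lowering `temperature` or `k` makes the choice more deterministic, which is roughly the knob real chat systems expose.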

@AFresh1

there's a lot of confusion about this, because predicting the next word and then comparing the prediction with the actual next word in a large corpus of text is how the model is *trained*, not how concepts and relationships are represented internally

internally every concept is a vector in high-dimensional space, which is how it distinguishes the 400+ different meanings and senses of the english word "set" for example
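a toy sketch of the vector idea, with made-up 4-d numbers (real embeddings have hundreds or thousands of dimensions and are learned during training, not hand-written like this):

```python
import math

def cosine(u, v):
    """Cosine similarity: near 1.0 = same direction, near 0.0 = unrelated."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

# made-up "contextual vectors" for the word "set" in three sentences:
set_chess = [0.9, 0.1, 0.0, 0.2]  # "a chess set"        (object sense)
set_math  = [0.8, 0.2, 0.1, 0.3]  # "the set of integers" (object-ish sense)
set_verb  = [0.1, 0.9, 0.8, 0.0]  # "set the table"       (action sense)

print(cosine(set_chess, set_math))  # high: related senses point the same way
print(cosine(set_chess, set_verb))  # low: a different sense points elsewhere
```

the point is just that nearby vectors mean related senses, so the same surface word can occupy very different positions depending on context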

i'll let chatgpt itself explain it

@ct_bergstrom @emilymbender
The same problem that Joseph Weizenbaum stumbled over 56 years ago with his far more primitive "Eliza" (see also his book "Die Macht der Computer und die Ohnmacht der Vernunft" [Computer Power and Human Reason], 1976)
@ct_bergstrom @emilymbender ...arguably they are not saying the discussion is inflated, but that it is often about the wrong things; the potential/impact of LLMs is clear and present.

@ct_bergstrom @emilymbender I want to be her friend.

"Bender is 49, unpretentious, stylistically practical, and extravagantly nerdy — a woman with two cats named after mathematicians who gets into debates with her husband of 22 years about whether the proper phrasing is “she doesn’t give a fuck” or “she has no fucks left to give.”"

@ct_bergstrom @emilymbender It is a good read, although I don't see it all the same way as @emilymbender does.

I also found Manning's dismissal of body language odd. It's pretty crucial in everyday social activities. You can even _hear_ body-language behaviour in speech.

@ct_bergstrom @emilymbender Everyone arguing against ChatGPT misunderstands the purpose of ChatGPT.

It is not to be good.

It's a massive misinformation engine **and that's the feature, not the bug**.

Having linguists and others telling everyone that ChatGPT is a massive misinformation engine is like telling people Trump is racist. It's doing their marketing for them.

@ct_bergstrom @emilymbender Brilliant Bender, businessman Manning. Thank you for posting.
@ct_bergstrom Who are the people behind the downplaying of the technology? That’s what I see much more of.

@ct_bergstrom @emilymbender Would love to see this debate. Is it available somewhere?

On the highlighted part, I think it’s both. We are not a stochastic parrot. Heck, we aren’t even our brains. But our stochastic parrot brain plays a crucial role in making us who we are.

@anthony @ct_bergstrom @emilymbender when I read the article, this fragment made me stop and think how much time this Manning fella has spent raising children. Then I laughed out loud in the middle of the meeting I wasn't paying attention to.

@alter_kaker Weird. I thought Manning had the better take, if those two surely oversimplified descriptions were accurate.

I’d point to people’s (mis)use of grammar as a good example of how we learn language to a large extent through observation and not through formal learning.

@alter_kaker I think that’s also a good example of the problem with relying *solely* on this inductive form of reasoning (the same problem LLMs have).

As I said elsewhere, though, ChatGPT et al. are impressive compared to a child their age.

@anthony children are never self supervised. They are super social creatures and are getting context clues and social feedback from parents from day one. The most important, complex, and biggest learnings that they get are not about information but about being human. When children learn language for example, they are constantly learning not just vocabulary and how to form sentences or w/e but what things mean, what kind of reaction words elicit, what is and isn't appropriate etc etc
@anthony why else do you think so much of childhood learning at every stage is about testing boundaries? They are learning how to be human beings in society, eliciting feedback. Self-led, sure. But definitely not self-supervised.
@anthony and when you do get kids who are not brought up with plenty of attention and social feedback (eg very poor families where adults are too stressed or busy or ill to provide appropriate supervision of learning), you get antisocial behavior
@anthony the inductive learning you mention below (tunnel -> cop lights) is an almost negligible proportion of the learning children do every second

@alter_kaker Wow, I completely disagree. What learning do children do “every second” that isn’t inductive? Learning through testing boundaries *is* inductive. Quintessentially so. Learning through social feedback is generally inductive. And kids learn as much if not more (probably more) by copying what they see others do than through social feedback.

Perhaps the big difference, which Manning also points out, is that kids learn through multi-modal input. That makes a huge difference.

@alter_kaker Self-supervised learning doesn’t mean you let a toddler run into the street. But it does mean that toddler learned how to walk, and to run, through trial and error as opposed to a parent or teacher telling them about Newton’s laws of motion.

@alter_kaker

The first time my son (then just a few years old) went through a tunnel (in our car), two police cars came through the tunnel, passing everyone, flashing their lights.

On the way home we went through the tunnel again. I could see him looking around for the police cars.

I dunno. Seems a lot like self-supervised learning to me. But maybe I’m misunderstanding the debate, which is why I was interested in seeing a transcript or video of it.

Invited Speakers and Panels (NAACL-HLT 2022)
Official website for the 2022 Annual Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies

Underline.io
On-demand video platform giving you access to lectures from conferences worldwide.
@escapadesrpg Yeah, that was it. And yeah, the Intelligencer article didn’t really give a good summary of the points of view.
@anthony AI won't ruin journalism, they are doing it to themselves 🤦‍♂️

@escapadesrpg The article describes the singularity as “the tech fantasy that, at some point soon, the distinction between human and machine will collapse.”

Umm, no, the singularity would very much not involve machines becoming a thing even remotely close to humans.

But I’m not sure journalists are ruining journalism so much as that these sorts of ignorant, biased articles have always been part of journalism. Critical reading has always been an important part of reading all kinds of text.

@ct_bergstrom @emilymbender

"...chatbots that we easily confuse with humans are not just cute or unnerving. They sit on a bright line. Obscuring that line and blurring — bullshitting — what’s human and what’s not has the power to unravel society."

@ct_bergstrom @emilymbender

Do chatbots dream of electric parrots?

@ct_bergstrom @emilymbender

I see two major and different types of personalities responding here: The intellectual, driven by ego, and the common sense “human” wise thinker

@ct_bergstrom @emilymbender I think it is very presumptuous to assume that I am not a parrot

@ct_bergstrom @emilymbender From a serious perspective, I have just finished reading that article and am so glad you brought it up and linked it. I found it very interesting and helpful.

Less seriously and going back to my original reply, I may or may not be an anthropomorphic parrot, but I am not a stochastic parrot or a stochastic human.

@ct_bergstrom @emilymbender
Fascinating, and thought-provoking article. Thank you.

@STCmicrobeblog @ct_bergstrom @emilymbender
Speak for yourself

I'm just a pretrained transformer at the tail end of a reinforcement learning process with human feedback who happens to share many of the same referents as my trainers

@ct_bergstrom @emilymbender
I love it:
"Bender remains particularly fond of an alternative name for AI proposed by a former member of the Italian Parliament: “Systematic Approaches to Learning Algorithms and Machine Inferences.” Then people would be out here asking, “Is this SALAMI intelligent? Can this SALAMI write a novel? Does this SALAMI deserve human rights?”"
@ct_bergstrom @emilymbender "Your reality, sir, is nothing but lies and balderdash, and I am delighted to say that I have no grasp of it whatsoever!" *scatters paperwork with sword* (in response to bald guy with lanyard, our walking, talking Age of Reason on two legs)

@ct_bergstrom @emilymbender @vampiress
Poorly cited article. It introduces Emily in relation to her recent 2020 publication, but fails to cite it. For those who are interested, I think this is it:

https://aclanthology.org/2020.acl-main.463.pdf

@ct_bergstrom @emilymbender This is really excellent. A less technical look, but it has the best summary of LLMs I've seen - "an automated mansplaining machine. Often wrong, and yet always certain — and with a tendency to be condescending in the process. And if it gets confused, it's never the problem. You are."
https://futurism.com/artificial-intelligence-automated-mansplaining-machine
Artificial Intelligence Is Just an Automated Mansplaining Machine (Futurism)

@ct_bergstrom @emilymbender I was remembering "signifier and signified". One thing that's bemusing me is the tech cult just directly rejecting the idea that there is anything signified beyond the signifiers.

I was dubious about the concept of qualia the first time I encountered it, but the more I've thought about it, the more important it seems to be. Qualia are the aspects of an experience that cannot be communicated, only known through direct experience.

@ct_bergstrom @emilymbender Hey that's a disservice to parrots (and cats and dogs and…) who may not agree with us on what words mean, but figure out ways to communicate with us.

In other words the parrot may not know the etymology of "Arrrr matey" but the parrot knows it means they may get a cracker if the human is in a good mood.

@ct_bergstrom @emilymbender

I work as a developer, though not on anything we would call "AI". I think this technology can be useful, and I'm so glad to read something with actual credibility behind it about this industry, its problems, and all the hype. Nothing I have read recently about AI has talked about it realistically; this is a breath of fresh air.

@ct_bergstrom @emilymbender amazing article! Thank you for sharing!