I've figured out what pisses me off so much about Facebook's Galactica demo.

It's not because people can use it to write bad essays for their homework. There are plenty of large language models that can do that. It's because Facebook is presenting it as something that it most definitely is not.

Facebook is selling it as a knowledge engine, a "new interface to access and manipulate what we know about the universe."

Actually it's just a random bullshit generator.

http://galactica.org


Let's take a look. Galactica can supposedly generate Wikipedia articles.

So let's see what they look like. Here's one for Brandolini's law, the principle that bullshit takes an order of magnitude less effort to create than to clean up.

Left: Galactica's attempt at creating a Wikipedia entry
https://galactica.org/?prompt=wiki+article+on+brandolini%27s+law

Right: The actual Wikipedia entry
https://en.wikipedia.org/wiki/Brandolini%27s_law


Here's the kicker. It's not that Galactica picked the wrong law. It is that the Padua economist to whom Galactica attributes the law, Gianni Brandolini, DOES NOT EXIST.

Galactica's phrasing of the law itself? That does not exist either. No one has ever said that phrase online (rather a surprise, tbh).

Galactica doesn't let us "access and manipulate what we know about the universe." It generates *pure bullshit* — which, incidentally, will be orders of magnitude more difficult to clean up.

UW researcher Robert Wolfe pointed out to me that there is a fundamental category mistake in how #galactica is being pitched.

This is not a machine learning system that is designed to represent scientific facts, models, and the structures that associate them. (There are other research efforts that attempt to do that.) This is a large language model that is designed to produce semantically plausible text using scientific terms and conforming to our expectations for various technical formats.

This is why, when I called it a bullshit-generating machine, I was using the term bullshit in its technical sense. The philosopher Harry Frankfurt explained, in On Bullshit, that bullshit is speech intended to be persuasive without concern for the truth. For Frankfurt, the difference between a liar and a bullshitter is this: a liar knows the truth and is trying to lead you elsewhere, whereas a bullshitter either doesn't know or doesn't care, and just wants to sound like they know what they're talking about.

That’s more or less exactly what a large language model like this does.

It is trained to produce text that seems like it was written by a competent person. In this case #galactica also uses technical vocabulary, frequent citations, structured argumentation, numbers, etc. to create a veneer of legitimacy, all tools frequently employed in the sort of new-school bullshit that we treat in our book.

It doesn't care about facts. It has no representation of them beyond their semantic relations.

@ct_bergstrom I think even "semantic relations" is in fact an overstatement. It's all about textual distribution and nothing more.
@emilymbender @ct_bergstrom but textual relations do track semantic relations, no? That's why LSA models in cognitive science that basically track co-occurrence of text are such good predictors of various semantic similarity tasks/relationships. So not semantic relationships per se, but the two things aren't wholly distinct.
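The kind of co-occurrence-based similarity alluded to here can be sketched in a few lines. This is a toy illustration only (the corpus, words, and the `cooc`/`cosine` names are invented for this example; real LSA works on large corpora and applies SVD to the count matrix), but it shows how raw co-occurrence counts alone already track some semantic similarity:

```python
from collections import Counter, defaultdict
from math import sqrt

# Toy corpus, invented for illustration.
corpus = [
    "the cat chased the mouse",
    "the dog chased the cat",
    "the cat ate the fish",
    "the dog ate the bone",
    "stocks fell as markets reacted",
    "markets rose as stocks rallied",
]

# Build word -> context-word co-occurrence counts within each sentence.
cooc = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for i, w in enumerate(words):
        for j, c in enumerate(words):
            if i != j:
                cooc[w][c] += 1

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    dot = sum(u[k] * v[k] for k in u if k in v)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

# "cat" and "dog" occur in similar contexts (chased, ate), so their
# similarity should exceed that of "cat" and "stocks", which share none.
print(cosine(cooc["cat"], cooc["dog"]))
print(cosine(cooc["cat"], cooc["stocks"]))
```

Nothing here knows what a cat or a stock *is*; the distributional "footprints of use" are all the model has, which is exactly the point under debate.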

@UlrikeHahn @ct_bergstrom

I wrote a paper so I could stop having this argument:

https://aclanthology.org/2020.acl-main.463/

See in particular section 7.

Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data

Emily M. Bender, Alexander Koller. Proceedings of the 58th Annual Meeting of the Association for Computational Linguistics. 2020.

@emilymbender @ct_bergstrom aha! will read and ponder - thank you!

@emilymbender @ct_bergstrom

ok, I'm back, having gone off and read your paper (in particular section 7), and I don't think it conflicts with what I said. I think that matters for the whole discussion, so I will try to spell this out.
First, I was trying to say (and will say) that LLMs *contain information* about semantic relationships - which is not the same as saying that they have access to fully fledged semantics (hence I said "not semantic relationships per se"). I've always thought 1/

@emilymbender @ct_bergstrom

2/ of co-occurrence statistics (and related distributional information) as "footprints of use," not use per se (so I totally agree with your section 7!). There is something missing there, but what's missing is also missing in *other* computational approaches for dealing with language, and that doesn't preclude there being empirical questions about whether one system or approach is better than another. I very much like @ct_bergstrom's description of the system as a

@emilymbender @ct_bergstrom

3/ BS machine, and I think the observation that it works differently in important respects from other current systems for dealing with scientific text is also apt. But those systems also don't have effectors that give them symbol grounding.

But we can still have meaningful discussion about whether their functionality and performance is better or worse.

All of which is to say that I think it is entirely right to point out the limitations of distributional knowledge

@emilymbender @ct_bergstrom
4/ but I disagree on how far those kinds of in-principle arguments go.
We can think about the Collins Dictionary of English and agree that it contains specifications of intensional relationships between concepts, and we can agree that actual human speakers have (extensional) knowledge that goes beyond that.
But it is, to my mind, an empirical and open question *how much* such knowledge is required.
And, relatedly, it is not clear a priori how far a given system

@emilymbender @ct_bergstrom
5/ can get in practice without it.

With respect to BS versus true statements about the world, this boils down to the relationship between coherence and correspondence. And that depends on the specific coherence constraints.

All of this, I think, is why cognitive scientists currently have considerable interest in the kinds of reasoning and inference LLMs can support, even though they understand what such systems do and do not capture about meaning.

@emilymbender @ct_bergstrom

6/ All of which is a long-winded way of trying to make the point that in-principle considerations (e.g., predictive or distributional models as 'category errors') go less far than one might think, imo, even if that diagnosis is taken as correct.

apologies for wading in here, and please feel free to ignore.