Moving a thread from Twitter over here, which starts with my reacting to Michael Black's thread on #Galactica, where he tests it and concludes it's dangerous, saying:

"Why dangerous? Galactica generates text that's grammatical and feels real. This text will slip into real scientific submissions. It will be realistic but wrong or biased. It will be hard to detect. It will influence how people think. (5/9)"

https://twitter.com/Michael_J_Black/status/1593133739538022400

>>


"Generates text that's grammatical and feels real": We sounded the alarm about this in the Stochastic Parrots paper. Ordinarily, I don't expect the average AI researcher to have read my papers, but that paper (thanks to Google) can hardly be missed.

So what gives?

>>

#StochasticParrots #Galactica #AIHype

I looked back at Section 6 of #StochasticParrots, which begins with this paragraph.

And then we look deeply into the risks and harms that follow when that seemingly coherent text reproduces the language of systems of oppression.

>>

So, this looks like a case where that wasn't alarming enough. It only becomes *alarming* for some when what's threatened is *science*.

Or maybe folks read our paper and didn't really believe us but had to wait until they could "see it with their own eyes", i.e., see the effect from a) a live demo that b) impinges on something where they do have skin in the game.

>>

Better late than never, I suppose, but I hope that part of the lesson learned here is to take seriously the work that approaches all of this from a critical lens.

And by take seriously I don't mean "believe uncritically". I mean learn how to read and discern and learn from and understand that if you are working on AI it is literally part of your job to do this reading & learning.

#ML #AI #AIHype

@emilymbender Is this tendency to find meaning, and fixate on it early, analogous to this problem from a PL context?

There have been security problems when a system assumes that, because a string is in one structured language (say, GIF), no downstream part of the system will interpret it as a string in another language (say, JavaScript).

https://0x00sec.org/t/gif-javascript-polyglots-abusing-gifs-tags-and-mime-types-for-evil/5088

This happens, not infrequently, when one part of a system receives Content-Type metadata specifying the language, makes some security-relevant decision, and forwards the string to another part of the system without that metadata. The downstream subsystem then uses a heuristic to pick a language while relying on the earlier security decision.

This kind of flaw makes it past design review because, I conjecture, people hold two different senses of "language":

- A language is a set of strings plus some semantics. In this view, it's clear that a string can be in more than one set.
- A string of JavaScript is that which is produced by a JavaScript programmer, even if the product has flaws that put it outside the set a JavaScript interpreter can deal with. In this view, the provenance of the string separates it from other languages.
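The first sense, a language as a set of strings, can be made concrete with a toy sketch. All names here are hypothetical and the "detection" heuristics are deliberately simplistic, but the core trick is real: the GIF magic number "GIF89a" happens to be a legal JavaScript identifier, so one byte string can open a valid GIF header and a valid JavaScript statement at the same time.

```python
# Toy illustration (hypothetical names): one byte string that two
# different language heuristics each happily claim as their own.

# "GIF89a" is both the GIF file signature and a legal JS identifier,
# so a polyglot payload can begin with it. This is not a real image,
# just the opening bytes such a payload might use.
polyglot = b"GIF89a=0;alert(1);//"

def looks_like_gif(data: bytes) -> bool:
    # Upstream-style check: magic-number sniffing says "this is a GIF".
    return data.startswith((b"GIF87a", b"GIF89a"))

def looks_like_javascript(data: bytes) -> bool:
    # Downstream-style heuristic: the same bytes also read as a JS
    # statement (an assignment followed by a call and a comment).
    text = data.decode("ascii", errors="ignore")
    return "=" in text and text.rstrip().endswith((";", "//"))

# Both subsystems accept the same string as "their" language.
assert looks_like_gif(polyglot)
assert looks_like_javascript(polyglot)
```

Once the Content-Type metadata is dropped between the two checks, each subsystem's answer is locally defensible, and the string's membership in both sets is exactly what the attacker exploits.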

@emilymbender
It looks like, once again, what was pushed out wasn't the paper but the online demo, prioritizing PR above everything else; after broad public criticism, it has now gone offline:
https://sigmoid.social/@garymarcus/109359710579963248
Gary Marcus (@[email protected])

Attached: 1 image Liz Truss lasted longer than #galactica

Sigmoid Social

@emilymbender computer science hasn't had its Trinity moment. And if it had, it's had no impact on the science or the practice, unlike Trinity did on physics / chemistry.

It had lots of little moments that could be extrapolated, and yet, for 30 years now, no one has been taking seriously the people spelling out the warnings: http://tech.mit.edu/V105/N16/weisen.16n.html

Weizenbaum examines computers and society - The Tech

An article from the Tuesday, April 9, 1985 issue of The Tech - MIT's oldest and largest newspaper and the first newspaper published on the Internet.

@emilymbender, When I reflect on my interaction with Galactica, for me there’s something about the primacy of direct experience of interacting with it, and of whatever ‘expertise’ (I write that humbly) I have to personally interpret and evaluate its output. So as well as, or aside from, whatever threat Galactica may raise, interacting directly with Galactica was a more, hmm, ‘meaningful’ experience than ‘only’ reading of it.

@austenrainer
Resist the urge to be impressed.
https://medium.com/@emilymenonbender/on-nyt-magazine-on-ai-resist-the-urge-to-be-impressed-3d92fd9a0edd

And if you are impressed, please don't go around rhapsodizing about it. That only makes the #AIhype situation worse.

@emilymbender Indeed. And I wasn’t impressed, at all, hence ‘meaningful’ in quotes. I meant that it’s one kind of knowledge to read about something like Galactica or GPT-3, it’s another kind of knowledge to interact directly with it and to pay attention to one’s own reactions to it.