Human beings have built an abstract model of language that *multiplies* their own linguistic powers. But most of my friends are really depressed about it because they’ve seen so much science fiction that they interpret anything called #AI as a *diminishment* of human agency. It’s a testament to the power of fiction, but also a little loopy if I’m being honest.
I seriously think it might be smart to move toward a discussion of “language models” & “image models” and ditch the AI acronym, which seems to contribute nothing but millenarian fantasy + apocalyptic despair + reactive moralism.
Then we could have a critical discussion about “who owns the language models?” We can’t have that discussion (in any useful way) if half my FB feed is weeping into their coffee because human life is meaningless now that robots are taking over.
The original dream of #AI producing autonomous robotic agents has meanwhile come to seem implausible on social grounds. Looking at how angry people get about models that just respond to human prompts by predicting the next word, it is very difficult to imagine we’ll tolerate systems executing real-world actions to fulfill even intermediate goals chosen by the system itself. C-3PO would get indicted on AI ethics charges irl. Stammering and calling us all “master” will not save him.
@TedUnderwood what we call things really matters.
@TedUnderwood some people have been getting angry, but has that been slowing progress significantly? the technology & its deployment seem to be moving forward rapidly regardless?

@TedUnderwood Didn’t you hear? LLMs are the end of assessment, pedagogy, and education because they aren’t detected by my Canvas plagiarism tool.

Academics—who should know better!—are unfortunately setting the pace for unhinged takes.

I just started experimenting with LLMs and they are interesting but very far from bulldozing the ivory tower.

@TedUnderwood are you suggesting that I might be falling prey to hyperbole?
@sharifyoussef I am going to memorize that last line so I can pull it out when needed!
@TedUnderwood We should also have a discussion about future models for society that are not based on the assumption that humans only have value as employed taxpayers.
@TedUnderwood "Who owns the language models?" will turn out to be an underspecified question. Here are the questions I am asking as a copyright lawyer: (1) Was assembling the training data lawful? (2) Is the LLM a "derivative work based on the training data"? (3) Does the output of the LLM infringe any copyright in the training data? (4) Does the output of the LLM reflect human intentions/choices/conceptions so that it qualifies as authorship? (5) If so, which humans?
@MatthewSag This is smart and persuasive. I admit that I am just ducking IP issues for the most part because I find them depressing. I prefer a very open world, but it's quite possible that's not the legal system I currently inhabit. Google's odds of doing an end-run around the laws seem better than my odds of changing them.
@TedUnderwood and I should be equally clear that the legal issues are not the only issues 🙂
@MatthewSag @TedUnderwood y'all are so measured and thoughtful. This is definitely a different conversation than i would have expected on Twitter
@TedUnderwood this is such a smart take. Everybody is comfortable with weather models: data goes into a black box and usually produces a useful result that can sometimes be wrong, something we know to apply our judgment to. I like this a lot.

@TedUnderwood Many academics seem to be on board with this on principle, but don't have the collective influence to sway the lay public toward calling an #LLM what it is. I have a hard enough time explaining it to scientists outside the field.

I don't see an easy way back from the #AI hype that's been drummed up by industry (also academics courting industry). As much as I dislike the cultural divisiveness of it, reactive moralism might be the best collective thing we have going at the moment.

@colditzjb ugh — I don’t actually see an easy way to ditch the term either, but I have a really hard time enduring the reactive moralism
@TedUnderwood I feel the same. I guess my hope is that if "AI" gets sufficient public shaming, maybe industry will pivot to calling fewer things AI and use actual technical terms. Unfortunately, I think tech journalism is a major offender in perpetuating AI-ism through clickbait hot takes, arguing pro-AI futurism vs anti-AI fearmongering. I've read a few nuanced takes on it, but I don't think that's what most people are reading.
@TedUnderwood that would help, but let’s face it, I don’t see it happening. (Based on other, less widespread contexts where suboptimal terms also abound.)
@TedUnderwood You think that would help?
@TedUnderwood Could you get rid of that word "train" as well, then? Because "train" makes it seem like they're toddlers or pets, when really, they're just machines sifting a lot of data, for better or worse over time with "garbage in/garbage out", you know?
@prokofy Nah, I’m quite committed to the view that learning is happening; I think we have a good mathematical theory of what it means when we say that. The problem in my view is not that it’s wrong to say “learning,” “training,” or “intelligence,” but that SF has taught people to assume those words imply goals and agency.

@TedUnderwood @prokofy only where SF is read quite narrowly.

@annaleen 's AUTONOMOUS actually drives that question in the opposite direction by exploring all the ways in which humans are made into machines and instruments.

@trochee @TedUnderwood @annaleen

“The danger of computers becoming like humans is not as great as the danger of humans becoming like computers."

Konrad Zuse

@trochee @TedUnderwood @annaleen Eh, if only those selves stayed destroyed so that they didn’t harm other people, but in fact they do.

You know, like that rag-tag army, that "Gang That Couldn't Shoot Straight," Russia, which massacres tens of thousands of civilians.