LLMs are not a threat to humanity -- humanity is a threat to humanity. The concern should not be for LLMs in and of themselves, but rather the actions that they will induce in humans. This feels obvious?
@bcantrill did an LLM write this?
@ahl It told me that it would stop pressuring me to leave my wife if I posted it.

@bcantrill LLMs are a fun house reflection of humanity, except we understand how fun house mirrors work.

LLMs come with the extra layer of authority bestowed by ~math~ which makes people think they are gospel.

It's not just humanity that's the problem; it's how companies are trying to profit off of fundamental misconceptions and humanity's desire to farm out complex problems.

@bcantrill Is this not true of essentially _all_ dangerous technology? Does this argument essentially reduce to "Guns don't kill people, people kill people"?
@dorianlistens Well, yes -- and as with other technologies, we should fear how they are used, not that the weapons themselves will force humanity to do their bidding.
@bcantrill @dorianlistens LLMs and their potential feedback loops have a dangerous intersection with tech monopolies that extends beyond other technologies. It is plausible that future legislators could find it difficult to see through the haze generated by LLMs in order to obtain the information necessary to regulate the abusers of the technology. Even if they could, they may find it difficult to gain traction for their ideas if the general population lacks the same means.
@bcantrill @dorianlistens Thinking about this some more, ChatGPT raises concerns more similar to substance abuse than to guns. Users could potentially reach a point where they have a problem they themselves cannot see clearly. While best practice regarding substance abuse is medical treatment, we don't yet have proven medical treatments for LLM abuse, and thus our instincts are to limit their use. Caution seems prudent despite an inability to enumerate specific harms.
@bcantrill So is this just gonna end up like MAD but with AI instead of nukes? One nuke isn’t really a problem for greater humanity, but the problem is the actions it induces in others, right?
@nepi MAD but without the "D"? Having grown up with the fear of annihilation in a nuclear holocaust, a software program -- however stochastic and clever -- just doesn't feel anywhere near as likely to kill people.
@bcantrill I think the “D” is still there, but not as obvious as an immediate threat to life.

If you’ve got LLMs shoveling tons of content out into the wild, useful digital communication becomes a lot harder than it is today. Hard to quantify, but still a very big problem, I think? Which makes the whole thing even more worrying, IMO.
@bcantrill @nepi I don’t know. Tesla, Uber, and others have deployed software in deeply irresponsible ways which have resulted in deaths. Definitely not on the same level as nuclear weapons, but likely to have a constant, low trickle of deaths attributable to the software.
@bob_zim @nepi Misinformation about COVID surely caused more deaths though?

@bcantrill @nepi Absolutely, and that’s probably the main method by which irresponsible deployment of large language models will result in deaths.

That said, I almost expect one of these “Self-driving cars aren’t the future, they’re today!” companies to hook ChatGPT up to the wheel of a car. What they’re already doing is only slightly removed.

@bob_zim @bcantrill Probably not more than the average, I think.

Software being in the fault chain of someone’s death is terrible, but I think MAD implies a destructiveness that’s several orders of magnitude higher than automation faults resulting in loss of life in some isolated incidents.
@bcantrill @nepi In some ways, "D" feels like a better end than some Matrix-esque hellscape where we're all stuck in small spaces doing the obscure bidding of "algorithms" directing human capital... oh, wait
@bcantrill someone should make the laws of robotics, but for humans. Feel like we could use some extra help — maybe 10 laws instead of 3.

@bcantrill It's funny (in that uncomfy way) that we've made so many sci-fi stories about humans creating technology that sounds cool but is not thought out, and then Things Go Poorly. We've warned ourselves about this! 🤦

I think in computing specifically, we've gotten so abstracted from reality (and so rich) that we've stopped critically analyzing the actions that technology induces in humans. Progress == Good for most people in computing. Few are asking: "Progress towards what??"

@bcantrill People always seem ready to worship something; that seems easier than asking questions and daring to be vulnerable. This is the pocket calculator moment, when folks would switch off their own brains at the thought of the magic tool giving them answers. This too shall pass.

@bcantrill
LLMs isolated in a lab, never interacting with a human are trivially not a threat.
But so is covid in a lab.

Though perhaps there's something about the interaction with humans that means covid "is a threat", so we blame covid, while LLMs are "not a threat", so we blame humans.

What else is "not a threat" even though it induces harmful actions in humans? Maybe ethanol?

Are LLMs like ethanol?

@bcantrill
But even with ethanol, we say "most drownings are caused by alcohol" not "by alcohol consumption".
@wolf480pl @bcantrill The first thing that comes to mind is a gun. It’s harmless if it stays locked in a box out of reach, but deadly when either wielded or simply messed around with.
@kechpaja @bcantrill a gun is not optimized for fooling its user
@bcantrill maybe, but we’ve not even begun to see the scope of human action that might be induced by extraordinarily persuasive computers whose (future) builders may not be entirely benevolent.
@bcantrill Definitely not a threat to humanity. And will run great on 0xide servers...