LLMs are not a threat to humanity -- humanity is a threat to humanity. The concern should not be for LLMs in and of themselves, but rather the actions that they will induce in humans. This feels obvious?
@bcantrill So is this just gonna end up like MAD but with AI instead of nukes? One nuke isn’t really a problem for greater humanity, but the problem is the actions it induces in others, right?
@nepi MAD but without the "D"? Having grown up with the fear of annihilation in a nuclear holocaust, a software program -- however stochastic and clever -- just doesn't feel anywhere near as likely to kill people.
@bcantrill I think the “D” is still there, but not as obvious as an immediate threat to life.
If you’ve got LLMs shoveling tons of content out into the wild, useful digital communication gets a lot harder than it is today. Hard to quantify, but still a very big problem, I think? Which makes the whole thing even more worrying, IMO.