@bcantrill LLMs are a fun house reflection of humanity, except we understand how fun house mirrors work.
LLMs come with the extra layer of authority bestowed by ~math~ which makes people think they are gospel.
It's not just humanity that's the problem; it's how companies are trying to profit off of fundamental misconceptions and humanity's desire to farm out complex problems.
@bcantrill @nepi Absolutely, and that's probably the main method by which irresponsible deployment of large language models will result in deaths.
That said, I almost expect one of these "Self-driving cars aren't the future, they're today!" companies to hook ChatGPT up to the wheel of a car. What they're already doing is only slightly removed.
@bcantrill It's funny (in that uncomfy way) that we've made so many scifi stories around humans creating technology that sounds cool but is not thought out and then Things Go Poorly. We've warned ourselves about this! 🤦
I think in computing specifically, we've gotten so abstracted from reality (and so rich) that we've stopped critically analyzing the actions that technology induces in humans. Progress == Good for most people in computing. Few are asking: "Progress towards what??"
@bcantrill
LLMs isolated in a lab, never interacting with a human, are trivially not a threat.
But so is covid in a lab.
Though perhaps there's something about the interaction with humans that means covid "is a threat", so we blame covid, while LLMs are "not a threat", so we blame humans.
What else is "not a threat" even though it induces harmful actions in humans? Maybe ethanol?
Are LLMs like ethanol?