How many studies do researchers need to do before the threat of LLMs is taken seriously? This technology *might* have some useful niche applications, but widespread deployment will be a disaster for humanity.

This shit is an existential hazard, and not in the way the AI companies love to talk about. It's not going to take over the world like Skynet, it's a cognitohazard that turns anyone who interacts with it into an idiot.

https://www.psychologytoday.com/us/blog/the-algorithmic-mind/202603/adults-lose-skills-to-ai-children-never-build-them

Adults Lose Skills to AI. Children Never Build Them.

Discussions of cognitive offloading often miss a critical distinction: What AI does to a 45-year-old's brain is categorically different from what it does to a 14-year-old's.

Psychology Today

@malcircuit People don't care about studies. They care about money and right now LLMs promise them more money.

They're wrong. They'll be proven wrong. But it'll take time and some MASSIVE failures, first.

Fortunately, if the news is to be believed, Zuckerberg is trying his hardest to destroy Meta with AI, so it may come sooner rather than later.

@faithisleaping @malcircuit I wish I could believe that it will eventually be proven to be bad. I'm afraid it's going to become a subliminal part of society and stick with us forever. Technologies have a tendency to do that no matter how harmful they are.

Broadcasting brought us manufactured consent.
Cable brought us cognitive overwhelm.
Social media brought psyops to the individual.

I'm afraid AI will be around forever and destroy conceptual diversity.

@sabrina @malcircuit It will likely be around forever in some form. We're not going to just shut it all down. Too many people have spent too much money.

But it will likely become part of the background noise that we learn to navigate.

But also, you say social media brought psyops to the individual but here we are talking on social media. It also brought human connection across continents.

Which isn't me saying that we should look for the silver lining to AI. It's pretty shit at 95% of what they're trying to use it for. I'm just pushing back a bit on the "technology is destroying everything" narrative. It's changing everything. It's destroying some things. Other things get created in its wake. Humanity will probably survive this one, too.

@faithisleaping @malcircuit Mostly I'm frustrated that we continue to dive into technology after technology without learning to moderate the harms. With AI we know a lot of the harms and we're still failing to moderate them. (I don't know whether we understood the harms of past technologies early in their use as well as we understand the harms of AI this time.)

Intuitively I don't agree that technology is destroying everything. I don't want to make a "technology is destroying everything" argument. I also don't want to moderate my argument just because I don't want to be making that argument. I'd have to give more thought to what argument I want to make and how.

I definitely agree the technologies I chose have done more good than harm. I chose media technologies by accident, but since they go together...they have progressively increased the spread of culture and human connection and that's amazing.

And now I notice that my fear of the harm of AI is that it will undo that...flatten culture and erase human connection.

@sabrina Yeah, and it absolutely is doing harm and it's absolutely a play by the rich and the powerful to control the thoughts of the masses.

There's a great quote from the classic Doctor Who episode The Green Death, where the Doctor says something along the lines of,

Humans! Whenever they discover something new, the first thing they do is try to figure out how to kill each other with it, then how to make money off it, and only later do they study it to know whether or not it's safe.

I think about that quote a lot.

@malcircuit