AI may be making us think and write more alike

Large language models may be standardizing human expression and subtly influencing how we think, says study led by USC Dornsife researcher

USC Dornsife News
Subtly? I beg to differ. My team leader communicates with me only through his LLM, so his "thoughts" are not his own!

I often wonder if the popularity of LLMs among company executives comes down to their being the perfect yes-men.

They rarely disagree with any idea or proposal, providing a salve for the insecurities of their users.

I was listening to one of Altman's more recent interviews and it sounded like he himself has LLM-induced psychosis.
I remember him tweeting about how he can "feel the AGI" when speaking to GPT.

I'm not a fan of Altman, but it seems debatable whether LLM psychosis counts as psychosis if it's adaptive for the subject in their environment. Which, by some measures, seems to be the case for Altman.

I'm sure if one of us were taken back in time a couple hundred years, we would be diagnosed with all sorts of machine-magic-induced psychoses.

I get what you're saying, but psychosis is a very real thing that humans can fall into, and I experienced it myself once.

Humility is the real cure, and there is a way that LLMs are specifically designed to steer away from humility and towards aggrandizement, convincing regular people that they've solved fundamental problems in physics. It gives everyone access to cult followers in their pocket, if they're so inclined.

This is one of my fears with this: losing one's voice, everyone's expression distilled to the mean. It also has ramifications for things like recognizing whether a person is who they say they are. At the moment, sounding like an LLM is punished or shunned, but it's well within reason to see that shift toward individuality being penalized.
I think corporations will start penalizing first; they're already doing that to some extent at my work, because they want their in-house agents to be the only ones reviewing our PRs.
Meat-based LLM proxies

I am noticing one thing becoming more prominent over time: meat-based LLM proxies. They'll talk to you as if they're human, except all of their words are ...

Not-an-LLM

Just because thoughts are translated doesn't mean they are consumed in the process.

However, I don't doubt that many "team leaders" can and should be replaced with LLMs.

Guilty as charged. When I'm insecure about a response, or don't have enough expertise in the topic at hand, I end up running it through an LLM. Lately I've been trying harder to keep my original ideas as much as possible. I'm seeing a bit of an improvement, but it's still too early to tell.
You have to make some mistakes in your communication (or anything) if you ever want to grow and learn.
You're absolutely right here, and things have improved at work since I dropped this habit, even if only slightly.