Turns out llms also have “artificial hive mind”, top AI models all say very similar sounding things, do you think that we can use this to detect bots?

https://piefed.world/c/technology/p/914536/turns-out-llms-also-have-artificial-hive-mind-top-ai-models-all-say-very-similar-soundin

I recently read about a study asking a bold question: Are all AI models basically saying the same thing? Researchers tested this by collecting 26,0…
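The detection idea in the title can be sketched very roughly: if many supposedly independent accounts post near-identical answers, high pairwise text similarity is a weak bot signal. A minimal sketch using word-level cosine similarity (the replies, the 0.7 threshold, and the function name are all made up for illustration; real detection would need embeddings and far more care):

```python
from collections import Counter
import math

def cosine_similarity(a: str, b: str) -> float:
    """Cosine similarity between bag-of-words vectors of two strings."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    na = math.sqrt(sum(c * c for c in va.values()))
    nb = math.sqrt(sum(c * c for c in vb.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical replies to the same prompt; the first two read like
# slight paraphrases of one LLM-style answer.
replies = [
    "Great question! There are several factors to consider here.",
    "Great question! There are a few key factors to consider here.",
    "lol no idea, ask someone who actually uses this stuff",
]

# Flag pairs above an arbitrary, untuned threshold.
for i in range(len(replies)):
    for j in range(i + 1, len(replies)):
        s = cosine_similarity(replies[i], replies[j])
        if s > 0.7:
            print(f"suspiciously similar: reply {i} vs reply {j} ({s:.2f})")
```

Of course, two humans can also give similar answers to a simple question, so a signal like this could at best raise suspicion, not prove anything.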

This makes sense once you consider that the top models all have basically the same training data (i.e. everything ever posted on the internet).
They’re also trained on each other’s outputs. I forget exactly which models were involved, but there was an example where, if you asked one model (say, Claude) about itself, it would confidently declare it was ChatGPT.

They’re also trained on each other’s outputs.

That seems like a recipe for disaster.

It’s like the elite learnt nothing from the effects of inbreeding…
It’s a recipe for model collapse at the least. Everything will trend toward the mean, and the models will get worse instead of better.
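The "trend toward the mean" intuition can be illustrated with a toy simulation: a "model" that is just a Gaussian fit (mean and standard deviation), where each generation is retrained only on samples drawn from the previous generation's fit. This is a hedged sketch of the statistical intuition behind model collapse, not a claim about how any real LLM is trained; all names and parameters here are made up:

```python
import numpy as np

def collapse_demo(n_samples=10, generations=300, seed=0):
    """Repeatedly refit a Gaussian on its own samples; track variance."""
    rng = np.random.default_rng(seed)
    data = rng.normal(0.0, 1.0, n_samples)  # "real" training data
    variances = [data.var()]
    for _ in range(generations):
        mu, sigma = data.mean(), data.std()      # fit the toy model
        data = rng.normal(mu, sigma, n_samples)  # next gen trains on outputs
        variances.append(data.var())
    return variances

vars_ = collapse_demo()
print(f"initial variance: {vars_[0]:.3f}, final variance: {vars_[-1]:.2e}")
```

With each generation, a little of the distribution's spread is lost to sampling noise and never recovered, so the variance collapses over time: the outputs become ever more uniform, which is exactly the "everything trends toward the mean" failure mode.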