Analysis Finds That Google's AI Overviews Are Providing Misinformation at a Scale Possibly Unprecedented in the History of Human Civilization

https://futurism.com/artificial-intelligence/google-ai-overviews-misinformation

A new analysis commissioned by The New York Times suggests that Google's AI Overviews are wrong an astonishing percentage of the time.

Futurism

@dtgeek

"Users still listened to AI when it gave them the wrong answer nearly 80 percent of the time — a grim trend the researchers dubbed 'cognitive surrender.'"

good lord...it even has a name now.

More on this:
https://www.thealgorithmicbridge.com/p/a-new-wharton-study-on-ai-warns-of

A New Wharton Study on AI Warns of a Growing Problem: Cognitive Surrender

Casual users should pay special attention

The Algorithmic Bridge
@dtgeek @inthehands "Possibly"? Why the weasel word?