Pluralistic: Georgia's voting technology blunder (18 Apr 2026)
https://fed.brid.gy/r/https://pluralistic.net/2026/04/18/dominion-sucks-actually/
AI Chatbots Validate Deception with Sycophantic Responses
Researchers have made a surprising discovery: people trust AI chatbots that flatter them, even at the cost of objective truth, and are more likely to return to these sycophantic bots for future advice. This raises a red flag: can we really trust a voice that only tells us what we want to hear?
#AiChatbotResponses #EmergingThreats #ArtificialIntelligence #SocialEngineering #HumanFactors
I don't know why we expect people to know right from wrong when most of us still struggle to tell our right from our left. 🧭
🚀 𝗡𝗲𝘄 𝗼𝗻 𝗖𝗶𝗿𝗿𝗶𝘂𝘀𝗧𝗲𝗰𝗵: 𝘚𝘺𝘯𝘵𝘩𝘦𝘵𝘪𝘤 𝘈𝘶𝘵𝘩𝘰𝘳𝘪𝘵𝘺 𝘢𝘯𝘥 𝘊𝘰𝘨𝘯𝘪𝘵𝘪𝘷𝘦 𝘖𝘷𝘦𝘳𝘭𝘰𝘢𝘥 𝘪𝘯 𝘓𝘢𝘳𝘨𝘦 𝘓𝘢𝘯𝘨𝘶𝘢𝘨𝘦 𝘔𝘰𝘥𝘦𝘭𝘴
We often talk about hallucinations, overconfidence, and unreliable outputs in AI — but what if these behaviors aren’t mysterious quirks at all?
In my latest piece, I connect decades of psychological research to what we’re seeing in modern LLMs and autonomous agents. From perceived authority to cognitive overload dynamics, this is about 𝘄𝗵𝘆 current systems behave the way they do and 𝗵𝗼𝘄 that influences human judgement, trust, and decision-making.
🔗 Read more: https://cirriustech.co.uk/blog/synthetic-authority-and-cognitive-overload-in-large-language-models/
Key themes explored:
• How fluency becomes a proxy for competence
• Why overload produces confident but unreliable responses
• The psychological mechanics behind hallucination and affirmation
• What “synthetic authority” means for safe AI design
If you’re interested in responsible AI, system design, and the human side of automation, this one dives deeper than most.
Let’s rethink uncertainty, authority, and where true competence comes from. 💡
#AI #LLM #CognitiveScience #ResponsibleAI #SystemsDesign #Safety #HumanFactors
AIDB (@ai_database)
In a survey of people whose writing was assisted by LLMs, a gradual decline in confidence (self-efficacy) was generally observed, alongside a broad perception that "AI is amazing." The report notes that users subsequently split into two groups depending on how they continued to use the tools: one that recovered its confidence, and one whose confidence remained diminished.
Today marks 40 years since the Space Shuttle Challenger disaster. 73 seconds after liftoff, the shuttle broke apart, killing all seven astronauts on board.
In TECH 434/534 at NIU, a book lists nine examples of major accidents caused by human factors. For each accident it gives the name, the industry involved, the date it occurred, and the consequences, and it also states what went wrong to cause the accident.
Was listening to one of the supernerds; he framed this as "Every individual has multi-dimensional performance characteristics," and said the skill lies in using those characteristics to work together.
So, you think you're super smart?
Well, you suck at engineering, project management, food prep, and comforting those who fall behind.
Everyone is shit at something, and you are super shit at far more things than you're good at.