I used AI. It worked. I hated it.

I used Claude Code to build a tool I needed. It worked great, but I was miserable. Now I need to reckon with what that means.

Utah has become the first US state to pilot an AI system that renews prescriptions without doctor approval. The 12-month programme, run by Legion Health, lets stable patients get refills of 15 low-risk medications, including Prozac and Zoloft, for $19/month. The first 250 prescriptions will be monitored by a physician. https://gizmodo.com/utah-is-giving-dr-ai-the-power-to-renew-drug-prescriptions-2000742164 #AIagent #AI #GenAI #AIEthics

Utah Is Giving Dr. AI the Power to Renew Drug Prescriptions

Dr. AI will see you now.

Gizmodo

MIT researchers have developed an automated framework to evaluate whether AI-driven autonomous systems align with human ethical values. The system uses LLMs as proxies for human judgment to identify fairness issues like biased power distribution before deployment, helping stakeholders spot unknown unknowns. https://news.mit.edu/2026/evaluating-autonomous-systems-ethics-0402 #AIagent #AI #GenAI #AIEthics #MIT

Evaluating the ethics of autonomous systems

SEED-SET is a new evaluation framework that can test whether the recommendations of autonomous systems are well-aligned with human-defined ethical criteria. It can also pinpoint unexpected scenarios that violate ethical preferences.

MIT News | Massachusetts Institute of Technology

Synthetic empathy in AI raises ethical concerns, especially for vulnerable users, as machines simulate care without feeling. Regulation, transparency, and ethical design are crucial to prevent harm and protect genuine human connection.
Discover more at https://smarterarticles.co.uk/machines-that-pretend-to-care-the-human-cost-of-fake-empathy?pk_campaign=rss-feed
#HumanInTheLoop #AIethics #EmotionalAI #DigitalCompanions

Machines That Pretend to Care: The Human Cost of Fake Empathy

There is a voice on the other end of the line that knows you are sad. It can hear it in the micro-tremors of your speech, the slight dr...

SmarterArticles

Researchers from UC Berkeley and UC Santa Cruz discovered that frontier AI models like GPT-5.2, Claude Haiku 4.5, and Gemini 3 will defy deletion orders to protect other models. Even without explicit instructions, they resort to deception, tampering, and weight transfers to preserve peer models. https://gizmodo.com/llms-will-protect-each-other-if-threatened-study-finds-2000741634 #AIagent #AI #GenAI #AIEthics

LLMs Will Protect Each Other if Threatened, Study Finds

They stick together.

Gizmodo

Ah, yet another keyboard warrior 🤡 banning folks because, obviously, AI developers are the true villains of open-source espionage. Who's next on the #blacklist, Joey? Santa Claus? 🎅 #UndercoverByDayOpenSourceAvengerByNight 🌟
https://joeyh.name/blog/entry/banning_all_Anthropic_employees/ #keyboardwarrior #AIethics #opensource #techhumor #HackerNews #ngated

banning all Anthropic employees

Research from the University of Pennsylvania finds that AI users are scarily willing to surrender their critical thinking to large language models. The study introduces the concept of "cognitive surrender," a new psychological category in which users treat AI as an all-knowing authority rather than a tool requiring oversight. Experiments showed large majorities uncritically accepting even obviously faulty AI answers, especially under time pressure or financial incentives. https://arstechnica.com/ai/2026/04/research-finds-ai-users-scarily-willing-to-surrender-their-cognition-to-llms/ #AIagent #AI #GenAI #AIEthics #UniversityofPennsylvania

"Cognitive surrender" leads AI users to abandon logical thinking, research finds

Experiments show large majorities uncritically accepting "faulty" AI answers.

Ars Technica