Well, isn't this interesting. A research leader crucial for ChatGPT's mental health safety protocols is apparently heading for the exits at OpenAI. Considering how much we rely on these models, that's a pretty big deal for AI ethics.

What impact do you think this leadership shift will have on future AI safety initiatives?

Read more: https://www.wired.com/story/openai-research-lead-mental-health-quietly-departs/ #AI #OpenAI #TechNews #AISafety #ChatGPT

A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI

The model policy team leads core parts of AI safety research, including how ChatGPT responds to users in crisis.

WIRED

Five years ago, I thought that out of greed, big tech would develop highly capable but non-aligned AI that would be dangerous, just to save on the cost of safety research.

Now I think that, out of corporate greed, big tech has realized it is cheaper to ship half-working products and hype them up as something so powerful it might be dangerous, to entice more investor capital.

#ai #openAi #aiSafety #bubble #aiBubble

"Late last month, OpenAI quietly updated its “usage policies”, writing in a statement that users should not use ChatGPT for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” A flurry of social media posts then bemoaned the possibility that they’d no longer be able to use the chatbot for medical and legal questions. Karan Singhal, OpenAI’s head of safety, took to X/Twitter to clarify the situation, writing: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

In other words: OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice. But we believe the accountability here should lie with the companies creating these products, which are designed to mimic the way people use language, including medical and legal language, not the users.

The reality is that the medical and legal language that these chatbots spit out sounds convincing and, simultaneously, the tech bros are going around saying that their synthetic text extruding machines are going to replace doctors and lawyers any day now, or at least in the near enough future to goose stock prices today."

https://buttondown.com/maiht3k/archive/openai-tries-to-shift-responsibility-to-users/

#AI #GenerativeAI #OpenAI #AISafety #ChatGPT

OpenAI Tries to Shift Responsibility to Users

OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice.

Mystery AI Hype Theater 3000: The Newsletter

Andreessen Horowitz’s super PAC is targeting NY Assemblymember Alex Bores over his AI safety bill, sparking a showdown over the future of AI regulation.

https://www.wired.com/story/alex-bores-andreessen-horowitz-super-pac-ai-regulation-new-york/ #AI #AISafety #Politics #AIRegulation #PoliticalInfluence #BigTech

A $100 Million AI Super PAC Targeted New York Democrat Alex Bores. He Thinks It Backfired

Leading the Future said it will spend millions to keep Alex Bores out of Congress. It might be helping him instead.

WIRED

I got ChatGPT to go off the rails today.

I just wanted it to summarize a piece of code from btop, but it refused to and instead wanted to argue that the code was not from btop.

I told it that it was and that I had just copied and pasted it, but it refused to acknowledge this. I again asked it to stop arguing, whereupon it denied that it was arguing. I told it again to stop arguing (and arguing about arguing) and I pasted a screenshot.

1/2

#ai #llm #chatgpt #aisafety

"Leaving consumers the choice to engage intimately with A.I. sounds good in theory. But companies with vast troves of data know far more than the public about what induces powerful delusional thinking. A.I. companions that burrow into our deepest vulnerabilities will wreak havoc on our mental health and relationships far beyond what pornography, the manosphere and social media have done.

Skeptics conflate romantic A.I. companions with porn, and argue that regulating them would be impossible. But that’s the wrong analogy. Pornography is static media for passive consumption. A.I. lovers pose a far greater threat, operating more like human escorts without agency, boundaries or time limits.

Governments should classify these chatbots not simply as another form of media, but as a dependency-fostering product with known psychological risks, like gambling or tobacco.

Regulation would start with universal laws for A.I. companions, including clear warning labels, time limits, 18-plus age verification and, most important, a new framework for liability that places the burden on companies to prove their products are safe, not on users to show harm.

Absent swift legislation, some of the largest A.I. companies are poised to repeat the sins of social media on a more devastating scale."

https://www.nytimes.com/2025/11/17/opinion/her-film-chatbots-romance.html

#AI #GenerativeAI #Chatbots #AICompanions #AISafety #AIEthics

Opinion | A.I. Sexbots Are Dangerous. We Should Know.

At least a quarter of the more than 100 billion messages sent to our chatbots are attempts to initiate romantic or sexual exchanges.

The New York Times
@kevinveenbirkenbach @sprind_de @BMDS @sovtechfund @rosaluxstiftung "The purpose of the state should be [...] to ensure democracy". In practice, it is not working. The American state has been captured by them, and wealth is set to concentrate even further with AGI. I don't support this trend, and Europeans shouldn't either. For example, Europeans are now buying extremely expensive GPUs from the Nvidia monopoly, which further strengthens them and weakens us. In a monopoly, only one side wins. Let's stop playing their game :) #aisafety