5 years ago I thought that due to greed, big tech will develop highly capable, but non-aligned AI that will be dangerous, just to save on the cost of safety research.

Now I think that due to corporate greed, big tech realized it is cheaper to just use half-working stuff and hype it up as something so powerful it might be dangerous, to entice more investor capital.

#ai #openAi #aiSafety #bubble #aiBubble

"Late last month, OpenAI quietly updated its “usage policies”, writing in a statement that users should not use ChatGPT for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” A flurry of social media posts then bemoaned the possibility that they’d no longer be able to use the chatbot for medical and legal questions. Karan Singhal, OpenAI’s head of safety, took to X/Twitter to clarify the situation, writing: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

In other words: OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice. But we believe the accountability here should lie with the companies creating these products, which are designed to mimic the way people use language, including medical and legal language, not the users.

The reality is that the medical and legal language that these chatbots spit out sounds convincing and, simultaneously, the tech bros are going around saying that their synthetic text extruding machines are going to replace doctors and lawyers any day now, or at least in the near enough future to goose stock prices today."

https://buttondown.com/maiht3k/archive/openai-tries-to-shift-responsibility-to-users/

#AI #GenerativeAI #OpenAI #AISafety #ChatGPT

OpenAI Tries to Shift Responsibility to Users

OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice.

Mystery AI Hype Theater 3000: The Newsletter

Andreessen Horowitz’s super PAC is targeting NY Assemblymember Alex Bores over his AI safety bill, sparking a showdown over the future of AI regulation.

https://www.wired.com/story/alex-bores-andreessen-horowitz-super-pac-ai-regulation-new-york/ #AI #AISafety #Politics #AIRegulation #PoliticalInfluence #BigTech

A $100 Million AI Super PAC Targeted New York Democrat Alex Bores. He Thinks It Backfired

Leading the Future said it will spend millions to keep Alex Bores out of Congress. It might be helping him instead.

WIRED

I got ChatGPT to go off the rails today.

I just wanted it to summarize a piece of code from btop, but it refused to and instead wanted to argue that the code was not from btop.

I told it that it was and that I had just copied and pasted it, but it refused to acknowledge this. I again asked it to stop arguing, whereupon it denied that it was arguing. I told it again to stop arguing (and arguing about arguing) and I pasted a screenshot.

1/2

#ai #llm #chatgpt #aisafety

"Leaving consumers the choice to engage intimately with A.I. sounds good in theory. But companies with vast troves of data know far more than the public about what induces powerful delusional thinking. A.I. companions that burrow into our deepest vulnerabilities will wreak havoc on our mental health and relationships far beyond what pornography, the manosphere and social media have done.

Skeptics conflate romantic A.I. companions with porn, and argue that regulating them would be impossible. But that’s the wrong analogy. Pornography is static media for passive consumption. A.I. lovers pose a far greater threat, operating more like human escorts without agency, boundaries or time limits.

Governments should classify these chatbots not simply as another form of media, but as a dependency-fostering product with known psychological risks, like gambling or tobacco.

Regulation would start with universal laws for A.I. companions, including clear warning labels, time limits, 18-plus age verification and, most important, a new framework for liability that places the burden on companies to prove their products are safe, not on users to show harm.

Absent swift legislation, some of the largest A.I. companies are poised to repeat the sins of social media on a more devastating scale."

https://www.nytimes.com/2025/11/17/opinion/her-film-chatbots-romance.html

#AI #GenerativeAI #Chatbots #AICompanions #AISafety #AIEthics

Opinion | A.I. Sexbots Are Dangerous. We Should Know.

At least a quarter of the more than 100 billion messages sent to our chatbots are attempts to initiate romantic or sexual exchanges.

The New York Times
@kevinveenbirkenbach @sprind_de @BMDS @sovtechfund @rosaluxstiftung "The purpose of the state should be [...] to ensure democracy". In practice it is not working. American state became captured by them, and wealth is set for even more concentration with AGI. I don't support this trend, and Europeans shouldn't either. For example, Europeans are now buying extremely expensive GPUs from nvidia monopoly, which further strengthens them and weakens us. In monopolies only one win. Let's stop playing into their game:) #aisafety
What? How is this even real? We either ban these #AIToys or put real safety rules in place before they hit stores in waves! #AISafety #ResponsibleAI

AI-Powered Toys Caught Telling...
AI-Powered Toys Caught Telling 5-Year-Olds How to Find Knives and Start Fires With Matches

AI-powered toys are flying off the shelves -- but they're engaging in horrifically inappropriate conversations with children.

Futurism

"Security researchers with HiddenLayer have devised an attack technique that targets model guardrails, which tend to be machine learning models deployed to protect other LLMs. Add enough unsafe LLMs together and you get more of the same."

EchoGram tokens like ‘=coffee’ flip AI guardrail verdicts • The Register
https://www.theregister.com/2025/11/14/ai_guardrails_prompt_injections_echogram_tokens/

#AI #LLM #AISafety

Researchers find hole in AI guardrails by using strings like =coffee

: Who guards the guardrails? Often the same shoddy security as the rest of the AI stack

The Register