AgentSudo has launched! It's an open-source permission system for AI agents that lets you grant specific privileges and protect Python functions, similar to the 'sudo' command. The goal is to stop AI agents from causing harm through bugs or dangerous tool calls. Useful for developers and AI safety researchers.

#AgentSudo #AISafety #OpenSource #AI #AISecurity #AIPermissions

https://www.reddit.com/r/LocalLLaMA/comments/1p7dujm/i_launched_a_permission_system_for_ai_agents_today/
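
To give a feel for what function-level permission gating like this can look like, here is a minimal decorator sketch in Python. All names here (GRANTS, requires, CapabilityError) are hypothetical illustrations, not AgentSudo's actual API; see the linked post for the real project.

```python
import functools

class CapabilityError(Exception):
    """Raised when an agent calls a function it has no grant for."""

# Hypothetical grant table: agent name -> set of granted capabilities.
GRANTS = {
    "research-agent": {"read_files"},
    "admin-agent": {"read_files", "delete_files"},
}

def requires(capability):
    """Block the wrapped function unless the calling agent holds `capability`."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(agent, *args, **kwargs):
            if capability not in GRANTS.get(agent, set()):
                raise CapabilityError(
                    f"{agent} lacks '{capability}' needed for {fn.__name__}"
                )
            return fn(agent, *args, **kwargs)
        return wrapper
    return decorator

@requires("delete_files")
def delete_file(agent, path):
    print(f"{agent} deleted {path}")

delete_file("admin-agent", "/tmp/scratch.txt")         # allowed
try:
    delete_file("research-agent", "/tmp/scratch.txt")  # blocked
except CapabilityError as err:
    print(err)
```

The point of the decorator pattern is that dangerous operations fail closed: a function is unreachable unless the caller was explicitly granted the matching capability.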

A man asked AI for health advice and it ruined his life

YouTube

"An OpenAI safety research leader who helped shape ChatGPT’s responses to users experiencing mental health crises announced her departure from the company internally last month, WIRED has learned. Andrea Vallone, the head of a safety research team known as model policy, is slated to leave OpenAI at the end of the year.

OpenAI spokesperson Kayla Wood confirmed Vallone’s departure. Wood said OpenAI is actively looking for a replacement and that, in the interim, Vallone’s team will report directly to Johannes Heidecke, the company’s head of safety systems.

Vallone’s departure comes as OpenAI faces growing scrutiny over how its flagship product responds to users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users formed unhealthy attachments to ChatGPT. Some of the lawsuits claim ChatGPT contributed to mental health breakdowns or encouraged suicidal ideations.

Amid that pressure, OpenAI has been working to understand how ChatGPT should handle distressed users and improve the chatbot’s responses. Model policy is one of the teams leading that work, spearheading an October report detailing the company’s progress and consultations with more than 170 mental health experts."

https://www.wired.com/story/openai-research-lead-mental-health-quietly-departs/

#AI #GenerativeAI #OpenAI #AISafety #MentalHealth

A Research Leader Behind ChatGPT’s Mental Health Work Is Leaving OpenAI

The model policy team leads core parts of AI safety research, including how ChatGPT responds to users in crisis.

WIRED

"Khung.Framework An.Toàn AI Mới: Kết Quả Từ Sự Kết Hợp Giữa Người Dùng & AI. Phát triển hệ thống "từ chối có trọng số" để cân bằng sự hữu ích và an toàn. #AISafety #AnToanAI #UserAI #KếtHợpNgườiDùng #TríTuệNhânTạo"

https://www.reddit.com/r/singularity/comments/1p5tdq7/a_userai_collaboration_on_an_alternative_ai/
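
The linked thread doesn't spell out the mechanics, but a "weighted refusal" could plausibly mean scoring a request's estimated harm against its estimated helpfulness instead of applying a binary block. A toy sketch, with all names, scores, and weights hypothetical:

```python
def should_refuse(harm_score, help_score, harm_weight=2.0):
    """Refuse when weighted harm outweighs helpfulness.

    harm_score and help_score are floats in [0, 1], assumed to come from
    upstream classifiers; harm_weight biases the system toward caution.
    """
    return harm_weight * harm_score > help_score

print(should_refuse(harm_score=0.2, help_score=0.9))  # False: answer helpfully
print(should_refuse(harm_score=0.6, help_score=0.9))  # True: refuse
```

Compared with a hard blocklist, a weighted scheme lets a highly useful answer survive a small residual risk while still refusing when risk dominates.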

Well, isn't this interesting. A research leader central to ChatGPT's mental health safety protocols is apparently heading for the exits at OpenAI. Considering how much we rely on these models, that's a pretty big deal for AI ethics.

What impact do you think this leadership shift will have on future AI safety initiatives?

Read more: https://www.wired.com/story/openai-research-lead-mental-health-quietly-departs/ #AI #OpenAI #TechNews #AISafety #ChatGPT

5 years ago I thought that, out of greed, big tech would develop highly capable but non-aligned AI that would be dangerous, just to save on the cost of safety research.

Now I think that, out of corporate greed, big tech has realized it is cheaper to ship half-working products and hype them up as something so powerful it might be dangerous, to attract more investor capital.

#ai #openAi #aiSafety #bubble #aiBubble

"Late last month, OpenAI quietly updated its “usage policies”, writing in a statement that users should not use ChatGPT for “tailored advice that requires a license, such as legal or medical advice, without appropriate involvement by a licensed professional.” A flurry of social media posts then bemoaned the possibility that they’d no longer be able to use the chatbot for medical and legal questions. Karan Singhal, OpenAI’s head of safety, took to X/Twitter to clarify the situation, writing: “Despite speculation, this is not a new change to our terms. Model behavior remains unchanged. ChatGPT has never been a substitute for professional advice, but it will continue to be a great resource to help people understand legal and health information.”

In other words: OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice. But we believe the accountability here should lie with the companies creating these products, which are designed to mimic the way people use language, including medical and legal language, not the users.

The reality is that the medical and legal language that these chatbots spit out sounds convincing and, simultaneously, the tech bros are going around saying that their synthetic text extruding machines are going to replace doctors and lawyers any day now, or at least in the near enough future to goose stock prices today."

https://buttondown.com/maiht3k/archive/openai-tries-to-shift-responsibility-to-users/

#AI #GenerativeAI #OpenAI #AISafety #ChatGPT

OpenAI Tries to Shift Responsibility to Users

OpenAI is trying to shift the blame for bad legal and medical advice from its chatbot away from the company and onto users. We agree that no chatbot should be used for medical or legal advice.

Mystery AI Hype Theater 3000: The Newsletter

Andreessen Horowitz’s super PAC is targeting NY Assemblymember Alex Bores over his AI safety bill, sparking a showdown over the future of AI regulation.

https://www.wired.com/story/alex-bores-andreessen-horowitz-super-pac-ai-regulation-new-york/ #AI #AISafety #Politics #AIRegulation #PoliticalInfluence #BigTech

A $100 Million AI Super PAC Targeted New York Democrat Alex Bores. He Thinks It Backfired

Leading the Future said it will spend millions to keep Alex Bores out of Congress. It might be helping him instead.

WIRED