🤖 Safe Learning Systems & AI
How can AI stay reliable under uncertainty? At the AI Colloquium, Prof. Nils Jansen explains how formal methods and reinforcement learning contribute to trustworthy AI. 📐🧠

📅 15 Jan 2026 | ⏰ 10:15 AM
📍 TU Dortmund University, JvF 25, 3rd Floor

💬 What makes AI trustworthy in your view?

Photo: Ruhr University Bochum

#AI #SafeAI #TrustworthyAI #MachineLearning #FormalMethods #Research #AIColloquium

Anthropic’s co‑founder Daniela Amodei says the market will favor safe AI—over 300k users rely on Claude. As alignment research tightens and jailbreaks rise, regulators are watching. Can transparent deployment keep the edge? Read how safety could become a competitive advantage. #Anthropic #ClaudeAI #SafeAI #AIAlignment

🔗 https://aidailypost.com/news/anthropics-amodei-says-market-will-reward-safe-ai-300000-use-claude

🤖 Smart AI needs smarter boundaries.

AI is reshaping our world—but without ethics, innovation can become a threat.
⚖️ Let’s define the line between progress and responsibility. Read more!👇

https://neuronus.net/en/blog/ethical-AI-threats-and-solutions

#AIethics #EthicalTechnology #AILegislation #AIandSociety #AIrisks #DigitalResponsibility #AIgovernance #SafeAI #Neuronus

AI is rising fast—but can we keep it in check?

As AI reshapes our world, global regulation is key to ensuring it serves humanity—not harms it.
🧭 Let’s guide AI with smart, united regulation.

🔍 Learn more about AI laws—read the full blog.👇

https://neuronus.net/en/blog/ai-legislation-and-regulations

#AIRegulation #AIandLaw #ResponsibleAI #EthicalAI #SafeAI #TechRegulation #GlobalTechPolicy #Neuronus

LawZero | Yoshua Bengio Launches LawZero: A New Nonprofit Advancing Safe-by-Design AI

Yoshua Bengio, the most-cited artificial intelligence (AI) researcher in the world and A.M. Turing Award winner, today announced the launch of LawZero, a new nonprofit organization committed to advancing research and developing technical solutions for safe-by-design AI systems.

🤝 In collaboration with the Institute for Artificial Intelligence (@UniStuttgartAI) at the Universität Stuttgart, it is our great pleasure to highlight the following event:
🎤 Engineering Safe Systems with AI
🗓️ June 5, 2025 | 15:45 | Room U32.101, Universitätsstraße 32
We’re pleased to support this talk by Dr. Reinhard Stolle, Deputy Director at Fraunhofer IKS, on how to engineer safe AI-enabled systems without compromising innovation.
In his talk, “Engineering Safe Systems with AI”, Dr. Stolle will explore two key perspectives on safety: a safety-centric and an AI-centric view. He will present his team’s approach to combining the strengths of both, introducing a model for continuous safety engineering for high-risk AI systems that explicitly models and propagates uncertainties and confidences during both design and operation.
📣 Students, staff, and all interested guests are warmly invited to attend this exciting and insightful session!
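The idea of propagating confidences through design and operation can be illustrated with a small sketch. This is not Fraunhofer IKS's actual method, just a minimal toy model under simple assumptions: each pipeline component reports a confidence, the system combines them (assuming independence), and a runtime monitor falls back to a safe action when overall confidence drops below a threshold.

```python
# Illustrative toy model (NOT Fraunhofer IKS's actual approach): propagating
# per-component confidences through an AI pipeline and gating the output.
from dataclasses import dataclass

@dataclass
class Estimate:
    value: float       # e.g. a detection score or control signal
    confidence: float  # in [0, 1], trust in this component's output

def propagate(stages: list[Estimate]) -> Estimate:
    """Combine stage outputs; under an independence assumption,
    confidence can only shrink along the chain."""
    conf = 1.0
    value = 0.0
    for s in stages:
        conf *= s.confidence   # multiply per-stage confidences
        value = s.value        # last stage's value is the system output
    return Estimate(value, conf)

def safe_output(system: Estimate, threshold: float = 0.9):
    """Runtime monitor: fall back to a safe action if confidence is low."""
    if system.confidence >= threshold:
        return ("act", system.value)
    return ("fallback", None)

pipeline = [Estimate(0.8, 0.99), Estimate(0.7, 0.95)]
result = propagate(pipeline)
print(safe_output(result))  # confidence 0.99 * 0.95 = 0.9405 → ('act', 0.7)
```

The point of the sketch is only the structure: uncertainty is an explicit, first-class quantity at both design time (the per-stage confidences) and operation time (the runtime gate).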

👤 About the Speaker
Dr. Reinhard Stolle is Deputy Director of Fraunhofer IKS and Head of the Mobility Business Unit. He studied computer science at FAU Erlangen and the University of Colorado at Boulder, earned his master’s and Ph.D. in AI, and completed postdoctoral research at Stanford. His career spans AI research at Xerox PARC, 14 years in software and autonomous driving at BMW, and leadership roles at AID (VW Group) and Argo AI, focusing on Level 4 autonomous vehicles.
#AI
#SafeAI
#Engineering
#FraunhoferIKS
#AIsafety
#AutonomousSystems
#TechTalk
#Innovation
#ContinuousEngineering
#AIethics
#KIInstitut
#AIresearch

IRIS Board of Directors
Prof. Dr. André Bächtiger
Prof. Dr. Reinhold Bauer
Prof. Dr. Sibylle Baumbach
Dr. Miriam K.
Prof. Dr. Steffen Staab
Jun.-Prof. Dr. Maria Wirzberger

Former Google boss warns of #AI 'Bin Laden scenario'

World leaders and tech executives met in Paris, France, at the AI Action Summit, where the development of #SafeAI was an ongoing concern.

The event ended with the signing of a joint agreement to develop safe AI, which the US and UK refused to sign 😡

https://bgr.com/tech/former-google-boss-warns-of-ai-bin-laden-scenario/

#Technology

D-ReLU: A breakthrough in robust AI, designed to defend against adversarial attacks while maintaining efficiency and scalability. This research, led by Korn Sooksatra (now at Meta), has implications for high-stakes AI applications. Blog: https://buff.ly/4fC9GeP Full paper: https://buff.ly/3UXzNVi #ResponsibleAI #SafeAI #AdversarialML
Resilient AI: Advancing Robustness Against Adversarial Threats with D-ReLU

This article explores D-ReLU, an advanced modification of the ReLU activation function, designed to improve the robustness of AI models against adversarial attacks. By incorporating adaptive, learn…

BAYLOR AI
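The linked article's exact D-ReLU formulation isn't reproduced in the post, so as a hedged stand-in, the sketch below shows the general idea behind bounded ReLU variants (similar in spirit to ReLU6, not the actual D-ReLU): capping activations limits how far an adversarial perturbation can push values downstream.

```python
# Hedged sketch: a bounded ReLU variant as a stand-in for D-ReLU, whose
# exact definition is in the linked paper. Saturating the activation
# limits the influence of large adversarial perturbations.
import numpy as np

def relu(x):
    """Standard ReLU: unbounded above."""
    return np.maximum(0.0, x)

def bounded_relu(x, cap=6.0):
    """ReLU with a saturation cap; large inputs saturate at `cap`
    instead of propagating linearly through the layer."""
    return np.minimum(np.maximum(0.0, x), cap)

x = np.array([-2.0, 1.0, 5.0])
delta = np.array([0.0, 0.0, 100.0])   # large adversarial push on one unit

print(relu(x + delta) - relu(x))                   # [0. 0. 100.] — unbounded
print(bounded_relu(x + delta) - bounded_relu(x))   # [0. 0. 1.]   — capped
```

With the cap in place, the worst-case change a perturbation can induce in a unit's output is bounded, which is one intuition for why saturating activations can help adversarial robustness.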
Optimizing Fairness and Robustness in Machine Learning Models. Keynote at #NeurIPS 2022 - #LXAI workshop. #TrustworthyAI #RobustAI #EthicalAI #SafeAI #ResponsibleAI
https://baylor.ai/?tag=ai-orthopraxy
AI Orthopraxy – BAYLOR AI

Imagine if one of the Rules/Regulations of #SafeAI were that they had to be #CarbonNeutral at every step of their existence. Need more compute? Build more solar/wind. No carbon credits, none of that. Actual real capacity to power the machines that power your technology.