"Production-ready AI agents need production-grade governance."

Microsoft's Agent Governance Toolkit for:
• Security & access controls
• Policy enforcement
• Audit & compliance guardrails

https://github.com/microsoft/agent-governance-toolkit

#AgenticAI #ResponsibleAI #OpenSource #AISecurity

GitHub - microsoft/agent-governance-toolkit: AI Agent Governance Toolkit — Policy enforcement, zero-trust identity, execution sandboxing, and reliability engineering for autonomous AI agents. Covers 10/10 OWASP Agentic Top 10.

We’ve added Qwen 3.5 397B to the GreenPT API.

Strong reasoning, structured outputs, and solid performance on complex tasks.

Available on European infrastructure with full data control.

#AI #API #ResponsibleAI

🏆 What ethical questions arise in AI and data science?

The AI Ethics Video Series by NFDI4DS brings together experts to discuss responsible AI, ethical challenges, and practical perspectives for the data science community.

🎥 10 episodes available on YouTube.
https://youtube.com/playlist?list=PLiv4TocTZt7MaKqhq3Qe8zhsgkqx2tnTG&si=karEh8JS6j4PBjyO

#AIethics #ResponsibleAI #DataScience

contributed by @NFDI4DS

📄 Paper: https://arxiv.org/abs/2508.07902

💻 Code and data: https://github.com/UKPLab/eacl2026-culturecare

🔗 Project: https://github.com/UKPLab/arxiv2025-culturecare

Follow the authors Chen Cecilia Liu, Hiba Arnaout, Nils Kovacic, and Iryna Gurevych from the UKP Lab, Technische Universität Darmstadt and hessian.ai, as well as Dana Atzil-Slonim from the Psychology Department, Bar-Ilan University.

See you this week in Rabat 🕌! #EACL2026

#UKPLab #CulturalNLP #ResponsibleAI #NLProc #NLP4MentalHealth #NLPsych #NLP #MentalHealth

Tailored Emotional LLM-Supporter: Enhancing Cultural Sensitivity

Large language models (LLMs) show promise in offering emotional support and generating empathetic responses for individuals in distress, but their ability to deliver culturally sensitive support remains underexplored due to a lack of resources. In this work, we introduce CultureCare, the first dataset designed for this task, spanning four cultures and including 1729 distress messages, 1523 cultural signals, and 1041 support strategies with fine-grained emotional and cultural annotations. Leveraging CultureCare, we (i) develop and test four adaptation strategies for guiding three state-of-the-art LLMs toward culturally sensitive responses; (ii) conduct comprehensive evaluations using LLM-as-a-Judge, in-culture human annotators, and clinical psychologists; (iii) show that adapted LLMs outperform anonymous online peer responses, and that simple cultural role-play is insufficient for cultural sensitivity; and (iv) explore the application of LLMs in clinical training, where experts highlight their potential in fostering cultural competence in novice therapists.

Some technologies are created with values, others have values thrust upon them

Back in 2023, I wrote an extensive series of articles called Teaching AI Ethics in which I explored nine areas of ethical concerns with artificial intelligence. In those early articles, I argued that it is absolutely necessary to wrestle with the ethical challenges of artificial intelligence, particularly as generative applications such as ChatGPT, Microsoft Copilot, and Google Gemini become ever more ubiquitous. The series of articles has since become the most visited part of my website, […]

https://leonfurze.com/2024/04/12/some-technologies-are-created-with-values-others-have-values-thrust-upon-them/