๐Ÿฅ Synthetic Data & Trustworthy Health AI

How can AI learn from health data without violating privacy? At the AI Colloquium, Allan Tucker shares lessons from synthetic health data generation – covering bias, concept drift, and regulation in evolving healthcare systems. 🧬📊⏳

📅 4 Feb 2026 | ⏰ 9:30–10:30
💻 Online via Zoom

💬 What role should synthetic data play in medical AI?

#HealthAI #SyntheticData #TrustworthyAI #DataScience #AIColloquium #EthicalAI #OpenScience

๐Ÿค Inclusive Research & AI
Who is included in AI research - and who is overlooked? At the AI Colloquium, Prof. Giulia Barbareschi discusses inclusive research practices and why they are essential for trustworthy, valid AI systems. ๐ŸŒ๐Ÿง 

📅 21 Jan 2026 | ⏰ 9:30–10:30
📍 TU Dortmund & 💻 Zoom

💬 How can inclusion improve AI research?

#InclusiveAI #TrustworthyAI #HCI #DataScience #AIColloquium #Research #Accessibility #Methods

🤖🧬 Can generative AI support medical research responsibly?

A new paper in Statistics in Medicine explores how tools like ChatGPT can assist biostatisticians 📊 – and where serious risks remain ⚠️
Key takeaway: AI works best when combined with human expertise, transparency, and solid statistical foundations 🧠📏

💬 How much trust should we place in AI-driven science?

#TrustworthyAI #ResponsibleAI #Science #DataScience #OpenResearch #AIEthics #AIColloquium

Photo: ChatGPT

🤖 Safe Learning Systems & AI
How can AI stay reliable under uncertainty? At the AI Colloquium, Prof. Nils Jansen explains how formal methods and reinforcement learning contribute to trustworthy AI. 📐🧠

📅 15 Jan 2026 | ⏰ 10:15 AM
📍 TU Dortmund University, JvF 25, 3rd Floor

💬 What makes AI trustworthy in your view?

Photo: Ruhr University Bochum

#AI #SafeAI #TrustworthyAI #MachineLearning #FormalMethods #Research #AIColloquium

🤖 Explainable AI: When can we trust an explanation?
On 11 December, Prof. Barbara Hammer (Bielefeld University) explores how XAI methods can support critical applications and why explanations may diverge. 🔍💡

📅 10:15–11:45 AM
📍 JvF25/3-303 – Lamarr/RC Trust Dortmund

💬 How do you evaluate trustworthy AI?

#XAI #ExplainableAI #MachineLearning #TrustworthyAI #Science #Research #AIColloquium

🤖 AI Colloquium Lecture

Curious about what makes a good conversation between humans and AI?

Prof. Milica Gašić will discuss how emotions, confidence, and explainability influence conversational AI – exploring large language models, task-oriented systems, and the role of reinforcement learning.

๐Ÿ—“๏ธ 18 Sept 2025 | 10:15 โ€“ 11:45
๐Ÿ“ Joseph-von-Fraunhofer-Str. 25, Dortmund, Floor 3, Room 303

Organized by the Lamarr Institute, RC Trust, and DoDas.

#MachineLearning #AIColloquium #ConversationalAI