πŸ€– Is Information Digital?

At the AI Colloquium, Edward A. Lee (UC Berkeley) explores whether objective observation can ever fully capture physical reality. πŸ“‘πŸŒ

Using concepts from computer science & information theory, he argues that some knowledge may require embodied interaction – even for machines. πŸš²πŸ€–

πŸ“… 5 March | 13:00–15:00 | TU Dortmund

What are the limits of digital representation?

#AIColloquium #CyberPhysicalSystems #TrustworthyAI #PhilosophyOfAI

Photo: Hesham Elsherif/TU Dortmund

πŸ₯ Synthetic Data & Trustworthy Health AI

How can AI learn from health data without violating privacy? At the AI Colloquium, Allan Tucker shares lessons from synthetic health data generation – covering bias, concept drift, and regulation in evolving healthcare systems. πŸ§¬πŸ“Šβ³

πŸ“… 4 Feb 2026 | ⏰ 9:30–10:30
πŸ’» Online via Zoom

πŸ’¬ What role should synthetic data play in medical AI?

#HealthAI #SyntheticData #TrustworthyAI #DataScience #AIColloquium #EthicalAI #OpenScience

🀝 Inclusive Research & AI
Who is included in AI research – and who is overlooked? At the AI Colloquium, Prof. Giulia Barbareschi discusses inclusive research practices and why they are essential for trustworthy, valid AI systems. 🌍🧠

πŸ“… 21 Jan 2026 | ⏰ 9:30–10:30
πŸ“ TU Dortmund & πŸ’» Zoom

πŸ’¬ How can inclusion improve AI research?

#InclusiveAI #TrustworthyAI #HCI #DataScience #AIColloquium #Research #AccessibilityMethods

πŸ€–πŸ§¬ Can generative AI support medical research responsibly?

A new paper in Statistics in Medicine explores how tools like ChatGPT can assist biostatisticians πŸ“Š – and where serious risks remain ⚠️
Key takeaway: AI works best when combined with human expertise, transparency, and solid statistical foundations πŸ§ πŸ“

πŸ’¬ How much trust should we place in AI-driven science?

#TrustworthyAI #ResponsibleAI #Science #DataScience #OpenResearch #AIEthics #AIColloquium

Photo: ChatGPT

πŸ€– Safe Learning Systems & AI
How can AI stay reliable under uncertainty? At the AI Colloquium, Prof. Nils Jansen explains how formal methods and reinforcement learning contribute to trustworthy AI. πŸ“πŸ§ 

πŸ“… 15 Jan 2026 | ⏰ 10:15 AM
πŸ“ TU Dortmund University, JvF 25, 3rd Floor

πŸ’¬ What makes AI trustworthy in your view?

Photo: Ruhr University Bochum

#AI #SafeAI #TrustworthyAI #MachineLearning #FormalMethods #Research #AIColloquium

πŸ€– Explainable AI: When can we trust an explanation?
On 11 December, Prof. Barbara Hammer (Bielefeld University) explores how XAI methods can support critical applications and why explanations may diverge. πŸ”πŸ’‘

πŸ“… 10:15–11:45 AM
πŸ“ JvF25/3-303 – Lamarr/RC Trust Dortmund

πŸ’¬ How do you evaluate trustworthy AI?

#XAI #ExplainableAI #MachineLearning #TrustworthyAI #Science #Research #AIColloquium

πŸ€– AI Colloquium Lecture

Curious about what makes a good conversation between humans and AI?

Prof. Milica GaΕ‘iΔ‡ will discuss how emotions, confidence, and explainability influence conversational AI – exploring large language models, task-oriented systems, and the role of reinforcement learning.

πŸ—“οΈ 18 Sept 2025 | 10:15 – 11:45
πŸ“ Joseph-von-Fraunhofer-Str. 25, Dortmund, Floor 3, Room 303

Organized by the Lamarr Institute, RC Trust, and DoDas.

#MachineLearning #AIColloquium #ConversationalAI