Good morning, Belval! Second day of #UndoneCS. https://www.undonecs.org/2026/
This morning, some AI (we were mostly spared yesterday).
"Explainable AI as a consequence of target system ignorance in Machine Learning" by Clément Arlotti
[Personal opinion: this is one of my main problems with generative models: they cannot explain why they said what they said.]
"epistemic agent with graduate physics knowledge" (aka a student) 🙂
Unlike traditional modelling, in deep learning, model, data and algorithms are intertwined. Explainability requires separability and we don't have it.
(Example is of course a cat image recognition system.)
No structuring hypothesis, no prior knowledge, no explainability.
"Radical alternative for AI" by François Levin
Limitations of AGI (Artificial General Intelligence): some technical (it is unrealistic), some practical (specialized AI would be more interesting), some political.
Proposal: attempt instead Alien Intelligence, an intelligence different from ours. Humans are not the reference.
"Memory Undone: Between Knowing and Not Knowing in Data Systems" by Viktoriia Makovska
It is not obvious how to implement real deletion of data: data is always stored in many places (think of logs).
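A toy sketch of the point (my own illustration, not from the talk): even a minimal system with an append-only audit log keeps a copy of the data after "deletion".

```python
# Toy illustration: "deleting" a record from the primary store
# does not remove the copy kept in the append-only log.

class Store:
    def __init__(self):
        self.records = {}   # primary store
        self.log = []       # append-only audit log

    def put(self, key, value):
        self.records[key] = value
        self.log.append(("put", key, value))   # value is copied into the log

    def delete(self, key):
        del self.records[key]
        self.log.append(("delete", key))       # the log only grows

s = Store()
s.put("alice", "secret")
s.delete("alice")
print("alice" in s.records)                    # False: gone from the store
print(any("secret" in str(e) for e in s.log))  # True: still in the log
```

Real systems multiply this problem: backups, replicas, caches, analytics exports.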
Also, if you delete a nasty account on a social network, an AI trained on the data will retain its nastiness ("memory ghosts").
Erasure is possible; unlearning is more difficult.
There is now a scientific field called "machine unlearning" (implementing real deletion).
It is not just about individual privacy; it is also a way to fix the training data.
Unlearning may mean to *add* information to counterbalance the ghost of the deleted information.
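A sketch of the erasure/unlearning contrast (my own toy example, not from the talk): when a data point's contribution to a model is separable, as for a simple mean, exact unlearning is trivial; in a trained network the contribution is entangled in the weights, which is why "machine unlearning" is a field.

```python
# Toy sketch: exact unlearning is easy when a point's contribution
# to the model is separable -- here, a running mean.

class MeanModel:
    def __init__(self):
        self.n = 0
        self.total = 0.0

    def learn(self, x):
        self.n += 1
        self.total += x

    def unlearn(self, x):
        # Subtract the point's exact contribution: no retraining needed.
        self.n -= 1
        self.total -= x

    @property
    def mean(self):
        return self.total / self.n

m = MeanModel()
for x in [1.0, 2.0, 6.0]:
    m.learn(x)
m.unlearn(6.0)    # the model is now as if 6.0 had never been seen
print(m.mean)     # 1.5, identical to training on [1.0, 2.0] only
```

For deep models no such closed-form subtraction exists, hence approximate schemes that *add* counterbalancing updates instead.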
"Ineffective Right & Undone Science: the case of the access to administrative algorithms in France" by Luc Pellissier
Spoiler: source code access does not really work when you want to understand the origin of a decision.
Pseudonymisation of legal decisions in France: the source code of the LLM that does it was refused. What is the source code of an LLM, anyway? The training corpus plus the weights?
The speaker asked his university for the source code of the payroll program. Refused "for security reasons".
"Electronic bureaucracy and lack of reflexivity" by David Monniaux @MonniauxD
Internal university processes are computerized, but CS researchers at the university are never consulted. The UI of internal software is awful.
Security is often awful too ("2FA" implemented by sending two emails to the same address).
Discussion: "Are French academics always complaining?" :-)