Good morning, Belval! Second day of #UndoneCS. https://www.undonecs.org/2026/
This morning, some AI (we were mostly spared yesterday).
"Explainable AI as a consequence of target system ignorance in Machine Learning" by Clément Arlotti
[Personal opinion: this is one of my main problems with generators: they cannot explain why they said what they said.]
"epistemic agent with graduate physics knowledge" (aka a student) 🙂
Unlike traditional modelling, in deep learning, model, data and algorithms are intertwined. Explainability requires separability and we don't have it.
(Example is of course a cat image recognition system.)
No structuring hypothesis, no prior knowledge, no explainability.
"Radical alternative for AI" by François Levin
Limitations of AGI (Artificial General Intelligence): some technical (it is unrealistic), some practical (specialized AI would be more useful), some political.
Proposal: aim instead for Alien Intelligence, an intelligence different from ours. Humans are not the reference.
"Memory Undone: Between Knowing and Not Knowing in Data Systems" by Viktoriia Makovska
Not obvious how to implement real deletion of data: data is always stored in many places (think of logs).
Also, if you delete a nasty account on a social network, an AI trained on the data will retain its nastiness ("memory ghosts").
Erasure is possible; unlearning is more difficult.
There is now a scientific field called "machine unlearning" (implementing real deletion).
It is not just for individual privacy, it is also to fix the training data.
Unlearning may mean to *add* information to counterbalance the ghost of the deleted information.
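That last point can be made concrete with a toy sketch (my own illustration, not from the talk): train a tiny logistic regression, then "unlearn" one training point by gradient *ascent* on that point's loss — i.e. pushing counter-information into the weights instead of retraining from scratch. All names and parameters here are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def grad(w, x, y):
    # gradient of the logistic loss for one example
    return (sigmoid(x @ w) - y) * x

# tiny synthetic dataset: two clusters, labels 0 / 1
X = np.vstack([rng.normal(-1, 0.3, (20, 2)), rng.normal(1, 0.3, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# train with plain SGD
w = np.zeros(2)
for _ in range(200):
    for xi, yi in zip(X, y):
        w -= 0.1 * grad(w, xi, yi)

target, target_label = X[0], y[0]          # the point to "delete"
conf_before = sigmoid(target @ w)          # model's class-1 confidence on it

# "unlearn": ascend the loss on the deleted point (adding counter-information)
for _ in range(100):
    w += 0.5 * grad(w, target, target_label)

conf_after = sigmoid(target @ w)
# the model's fit to the deleted point has been degraded
print(conf_before, conf_after)
```

This is the crudest form of the idea; real machine-unlearning methods have to bound how much the rest of the model is damaged while the ghost is pushed out.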
"Ineffective Right & Undone Science: the case of the access to administrative algorithms in France" by Luc Pellissier
Spoiler: source code access does not really work when you want to understand the origin of a decision.
Pseudonymisation of legal decisions in France: refusal to release the source code of the LLM that does it. What is the source code of an LLM? Training corpus + weights?
The speaker asked his university for the source code of the payroll program. Refused "for security reasons".
"Electronic bureaucracy and lack of reflexivity" by David Monniaux @MonniauxD
Internal university processes are computerized, but the CS researchers at the university are never consulted. The UI of internal software is awful.
Security is often awful ("2FA" by sending two emails to the same address).
Discussion: "are French academics always complaining?" :-)
Keynote by Tomas Petricek https://tomasp.net/ (from his book "Cultures of programming" https://www.cambridge.org/core/books/cultures-of-programming/075A2D1DE611EE47807A683147B21691)
When there is a bug (like the Knight glitch https://en.wikipedia.org/wiki/Knight_Capital_Group#2012_stock_trading_disruption), what lessons should be drawn? Whose fault was it?
Summary of the keynote: there are several cultures of programming; do not stick to only one (not just the mathematical one or the engineering one).
"Fragmented Innovation: Anime and the Limits of Computer Science R&D" by Jun Kato https://junkato.jp/
Academic study of anime production (mostly undone). Production was not digitized quickly because all the studios are on the same subway line in Tokyo, so physical distribution of materials was possible.
"Can We Rigorously and Verifiably Determine How Little the Industry complies with Copyleft Licenses such as GPL?" by Bradley Kühn
Not enough crossover venues between free software people and academics.
Compliance with copyleft licenses: companies send source code… which does not compile.
"Donald Trump is one of the few people who respected the AGPL." (For Truth Social)
"Who is driving storage research? Questioning the priorities behind SSD research" by Ryan Lahfa
Research on storage is mostly driven by big tech / HPC, and the needs of small systems (self-hosted servers) are largely forgotten.