Luciano Floridi

A new paper from Yale's DEC 🤓
The Artificial in "Artificial Intelligence": How Imagination Shapes AI Regulation
https://papers.ssrn.com/sol3/papers.cfm?abstract_id=6289639
On Umberto Eco, for the 10th anniversary of his death https://medium.com/p/5e38b87083e3?postPublishedType=initial

I reproduce here a short piece I wrote for Il Sole 24 Ore in 2016, upon the death of Umberto Eco. This time it's in English.

Medium
"Closing the AI benefits gap: Systems design for population health equity", now published in Public Health, Volume 253, April 2026: 106205
https://www.sciencedirect.com/science/article/pii/S0033350626000740?dgcid=author
On a 25th anniversary and the past of automation (series: notes to myself)

When I was young, I met Herbert Simon at Carnegie Mellon. It was not an accident. The yearly meeting of the International …

Large Language Model Reasoning Failures

Large Language Models (LLMs) have exhibited remarkable reasoning capabilities, achieving impressive results across a wide range of tasks. Despite these advances, significant reasoning failures persist, occurring even in seemingly simple scenarios. To systematically understand and address these shortcomings, we present the first comprehensive survey dedicated to reasoning failures in LLMs. We introduce a novel categorization framework that distinguishes reasoning into embodied and non-embodied types, with the latter further subdivided into informal (intuitive) and formal (logical) reasoning. In parallel, we classify reasoning failures along a complementary axis into three types: fundamental failures intrinsic to LLM architectures that broadly affect downstream tasks; application-specific limitations that manifest in particular domains; and robustness issues characterized by inconsistent performance across minor variations. For each reasoning failure, we provide a clear definition, analyze existing studies, explore root causes, and present mitigation strategies. By unifying fragmented research efforts, our survey provides a structured perspective on systemic weaknesses in LLM reasoning, offering valuable insights and guiding future research towards building stronger, more reliable, and robust reasoning capabilities. We additionally release a comprehensive collection of research works on LLM reasoning failures, as a GitHub repository at https://github.com/Peiyang-Song/Awesome-LLM-Reasoning-Failures, to provide an easy entry point to this area.

arXiv.org
On why I miss Oxford (series: notes to myself)

I dreamed of being at Oxford for as long as I can remember, but the dream became a plan when I studied EFL. It must have been the summer of…


“This Is Not Even Wrong” — What Happens When Q Is Neither True Nor False?

https://medium.com/@lfloridi/this-is-not-even-wrong-what-happens-when-q-is-neither-true-nor-false-98029bc7f104?postPublishedType=initial


Wolfgang Pauli’s famous critique, “This is not even wrong,” addresses situations where a statement fails to be meaningful because it cannot…

Deeply grateful to Bruce Benson at FTI Consulting for this article, which thoughtfully applies some ideas from my recent work to real-world AI implementation challenges.
"Floridi Curves: Taming AI Problems"
https://www.fticonsulting.com/insights/articles/floridi-curves-approach-taming-ai-problems-industry

Artificial intelligence is transforming industries worldwide, from healthcare to automotive and manufacturing. Yet implementation success remains elusive.

"L'AI non ci ruba il lavoro. Lo riscrive" ("AI doesn't steal our jobs. It rewrites them").
Podcast with Claudio Pagliara, Director of the Italian Cultural Institute in New York. https://open.spotify.com/episode/4TwoJGy6j85eQTwGCdO9uF?si=xIPhTPEiSb6okky8KtKZZQ&nd=1&dlsi=f1ffebcd0b94461d
Yale DEC paper:
"Agentic AI Optimisation (AAIO): What it is, How it Works, Why it Matters, and How to Deal With It"
has been accepted for publication in Minds and Machines.
Free updated preprint: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=5220068

The emergence of Agentic Artificial Intelligence (AAI) systems capable of independently initiating digital interactions necessitates a new optimisation paradigm …