A British school library banned around 200 books deemed inappropriate, including books about totalitarianism and toxic masculinity, and books aimed at queer teenagers.

This is a worrying move, and even more worrying is the use of AI to classify the books. It demonstrates the Bias-as-a-Service that is inherent to what AI is.

https://www.indexoncensorship.org/2026/03/school-book-banning-escalates-in-the-uk-as-greater-manchester-secondary-school-censors-scores-of-books/

School book banning escalates in the UK as Greater Manchester secondary school censors scores of books - Index on Censorship

A school librarian faced a disciplinary hearing and scores of books were removed from her school library after she stocked Laura Bates’ Men Who Hate Women


It is also troubling that whatever an AI has generated is granted authority over the well-argued opinion of a trained librarian with more than a decade of experience.

Reminder that LLMs give the statistically most likely next word, not the most correct word. And I would wager that the training data leans heavily towards conservatism, authoritarianism, neurotypicality, heteronormativity and patriarchy. In other words, LLMs need to be considered Bias-as-a-Service.

@Dany
But also remember that the prompt was written by a person with a clear bias. Everyone knows how eager an LLM is to please you; you can steer it in whatever direction you want.

It is fed with 80TB of books…

@axel @Dany Agreed, it would help a lot if the people who prompt 'which of these books are not appropriate for kids, and tell me why' also asked 'for each of these books, defend vigorously why they should remain available'. I don't think this is a good way of using an LLM at all, but people treat it like an infallible oracle.

If you are upset about a book, then don't read it.

@Dany Also, probably willful ableism, given that the librarian is openly autistic.