So #LLMs can leak their (possibly sensitive) training data [1], are guaranteed to hallucinate [2], produce "bullshit" (the technical term for indifference to the veracity of one's statements) [3], and use copious amounts of water [4] and electricity [5].
What's next? Are you going to tell me they're racist, too?
[1]: http://arxiv.org/abs/2202.07646
[2]: http://arxiv.org/abs/2401.11817
[3]: https://doi.org/10.1007/s10676-024-09775-5
[4]: https://arxiv.org/abs/2304.03271
[5]: https://www.nature.com/articles/d41586-024-00478-x

AI generates covertly racist decisions about people based on their dialect - Nature
https://www.nature.com/articles/s41586-024-07856-5
Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.