So #LLMs can leak their (possibly sensitive) training data [1], are guaranteed to hallucinate [2], produce "bullshit" (the technical term for indifference to the veracity of statements) [3], and use copious amounts of water [4] and electricity [5].

What's next? Are you going to tell me they're racist, too?

https://www.nature.com/articles/s41586-024-07856-5

[1]: http://arxiv.org/abs/2202.07646
[2]: http://arxiv.org/abs/2401.11817
[3]: https://doi.org/10.1007/s10676-024-09775-5
[4]: https://arxiv.org/abs/2304.03271
[5]: https://www.nature.com/articles/d41586-024-00478-x

AI generates covertly racist decisions about people based on their dialect - Nature

Despite efforts to remove overt racial prejudice, language models using artificial intelligence still show covert racism against speakers of African American English that is triggered by features of the dialect.


These are important conversations to have when thinking about using any technology, not just large language models. But LLMs are particularly prone to all of these harms because of the resources they require to produce plausibly human text.

We, scientists and non-scientists alike, can't just ignore these concerns. Yes, the outputs of these models are cool and better than those of previous technologies. But those gains don't come for free, and they can enact real harm.