@quixoticgeek 1. LLMs are useful far beyond sparkling autocorrect. In fact their embeddings are arguably their most useful feature.
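To make the embeddings point concrete, here's a toy sketch. The four-dimensional vectors below are made up for illustration; a real encoder model produces vectors with hundreds or thousands of dimensions. The point is that once text is embedded, "semantic similarity" reduces to plain vector math:

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means "same direction",
    # i.e. semantically similar in embedding space.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Hypothetical embeddings, invented for this example.
cat = [0.90, 0.10, 0.80, 0.20]
kitten = [0.85, 0.15, 0.75, 0.25]
invoice = [0.10, 0.90, 0.20, 0.80]

# "cat" should land closer to "kitten" than to "invoice".
print(cosine_similarity(cat, kitten) > cosine_similarity(cat, invoice))
```

This is the machinery behind semantic search, clustering, deduplication, and retrieval, none of which involve generating a single word of text.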
2. LLMs cover a lot of different accessibility needs: everything from much better encoding and decoding of voice data (speech-to-text and text-to-speech) to helping with attention management.
3. Immersion cooling (stick your server in vegetable oil) could already be saving water. Companies don't do it because of graft. Why pay $9 per gallon for vegetable oil that lasts effectively forever, when water costs about 0.35 cents per gallon (roughly 2,500 times less)? States are literally giving companies water and paying them to use more of it.
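The cost ratio above checks out as back-of-envelope arithmetic. Both prices are assumptions for illustration: $9/gallon for food-grade oil and 0.35 cents/gallon for subsidized municipal water.

```python
oil_per_gallon = 9.00       # assumed one-time cost; oil is reused, not consumed
water_per_gallon = 0.0035   # assumed rate: 0.35 cents = $0.0035 per gallon

ratio = oil_per_gallon / water_per_gallon
print(round(ratio))  # roughly 2,500x cheaper per gallon
```

The asymmetry is that evaporative cooling consumes the water continuously, while immersion oil is a one-time purchase, so the per-gallon comparison actually understates the oil's long-run advantage.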
4. We have known for over a decade how to remove bias from these models. Debiasing sometimes degrades benchmark performance, so vendors have decided not to do it in those cases, though in other respects they have debiased their models.
5. It's flatly not true that a model learning from the data it creates will reinforce bias. In fact, given the way these systems are currently designed, it can reduce bias. Also, if we can't tell that something was made by a machine, then it doesn't matter whether or not it was.