It may be a minor thing, but something that saddens me about all this LLM craze is how scientifically and intellectually poor and dry it is.
When I started getting interested in AI for robotics, nearly 20 years ago (yeah, I'm ancient), my colleagues and I were after some big questions. We were studying intelligent life - people and animals, mostly - or working with people who do (neuroscientists and other physiologists, psychologists, ethologists, ...), trying to understand behavior, cognition, decision, perception, action, and then to model it, and to draw inspiration and insight from it to design artificial intelligent systems (with a very wide definition of "intelligent"). It was something huge, and as with most huge scientific problems and big questions, the usual way forward was to break them into smaller problems and approach them one at a time, while keeping some sense of how they connect to the bigger picture. It was like trying to explore a vast ocean, deeper and deeper, one small dive at a time. And as with most fundamental research, it was done mostly for its own sake; useful outcomes were an occasional byproduct.
This research goes on as it can, but most of the people who speak about AI now (and get the limelight, the funding, the political interest, etc.) aren't doing that. They are not doing science. What they have done, mostly, is take one of those occasionally useful byproducts and develop and iterate on it to create something that is simultaneously impressive, unethical, dangerous, and of questionable real utility to anyone but its (rich, powerful) promoters.