A large chunk of the people stating that #AGI is the future and coming soon are spokespeople for companies in the #AI business, most of them in some startup phase. Well, not a *large* chunk, but the chunk that is getting press. Of course they are saying that.
The models need clarity and refinement to make them useful, and so far every model that has "matured" has reached a level of degenerative uselessness. Google's search engine is a prime example: people purposely tuned their web content (some with AI help) to create garbage pages that looked good enough to Google's search AI to index and include in results, and now those results are fairly useless. The LLMs will begin to show the same signs, because they are not being fed curated data; they are simply being fed a huge amount of data as fast as possible.
Until we can come to a consensus as a society on what we deem to be right and wrong, which we can't even do "acoustically", we're not going to be able to do it digitally. We're not going to get Star Trek's Data or even a HAL 9000. We'll get something shitty that eventually breaks down.
For a bit of context: the first AI projects I worked on, 14 years ago, involved preparing good and bad example data sets to train models. Even in that small and controlled environment it was difficult to start approaching correct output, and those models were tiny by today's standards.
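That curation work looked roughly like this. A minimal sketch in Python, with made-up sample text, labels, and thresholds (none of this is from an actual project), just to show the shape of preparing labeled good/bad example sets before any training happens:

```python
# Hypothetical curation pass: deduplicate and validate raw samples
# before they become a labeled training set. All data and thresholds
# here are illustrative.

raw_samples = [
    ("The quarterly report shows revenue grew 4%.", "good"),
    ("The quarterly report shows revenue grew 4%.", "good"),  # duplicate
    ("BUY NOW!!! CLICK HERE!!! FREE $$$", "bad"),
    ("", "good"),                                             # empty, reject
    ("Server latency dropped after the cache fix.", "good"),
    ("asdf jkl; qwer", "bad"),
]

def is_usable(text: str) -> bool:
    """Reject samples too short or too long to train on."""
    return 3 <= len(text.split()) <= 200

def curate(samples):
    """Drop unusable and duplicate samples, keep (text, label) pairs."""
    seen, curated = set(), []
    for text, label in samples:
        if not is_usable(text) or text in seen:
            continue
        seen.add(text)
        curated.append((text, label))
    return curated

examples = curate(raw_samples)
good = [t for t, l in examples if l == "good"]
bad = [t for t, l in examples if l == "bad"]
print(len(examples), len(good), len(bad))  # → 4 2 2
```

Every sample got eyeballed and filtered like this, and even then the models struggled; feeding an LLM the raw, unfiltered web skips this step entirely.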
Fast, expensive, accurate. If you're developing AI models, pick only two. And one of those picks has to be "expensive".
That said, I do use existing LLMs as I find them useful in many cases, but I obviously have to validate the results.