There is no "ethical use case" for this new generation of AI (LLM and generative), and I'll tell you why.
It's because this entire iteration of AI is purposed for one thing: human economic replacement.
It "assists" us, thus learning from us how to BE us and then, to *replace* us as economically valuable entities.
*That* is what is driving the *trillions* of dollars in global corporate investment in this project: Human intellectual labor replacement by digital simulcrum.
@kitkat_blue @Paulatics That may be the intention behind all of this. But it won't work. After all, the introduction of computers didn't eliminate accountants either.
This is one of the few areas where I'm optimistic about the future risks of AI.
Human intellectual labor is a workflow of varying complexity. This is what agentic AI (LLM-driven) is purposed to replicate. Saying "it won't work" implies there is a ceiling or an approaching plateau in its capability growth. Agentic AI is a trainable form of AI, i.e. it improves with (our) use. So I ask: why do you assume it must fall short of fully automating human intellectual workflows (i.e. that "it won't work")? What will create this "limit" on its capabilities?