@RosaCtrl I feel you! When "everyone" is gushing over how LLMs are going to obsolete everything from accountants to software to courts, it can be pretty hard not to get swept up in it. Especially since so much of the marketing is disguised as worry about "the consequences of the next industrial revolution" (remember that open letter from "AI" companies about "AI" being "too dangerous", asking for a six-month moratorium on new models? Excellent marketing right there).
I think the best way to immunize oneself is to just get a good understanding of how LLMs work and why they're not what people claim. I've spent countless hours using state-of-the-art tools to troubleshoot problems I couldn't figure out myself in a few minutes, and the main takeaway from those sessions is the sheer insidiousness of it: working with an LLM feels like you're making progress while you're really just being led around in circles. Not once has one of these sessions contributed to me solving the problem at hand. Instead, I've wasted my time being tricked into believing that the solution is just around the next prompt.
Same with code generation. LLMs are great for templating trivial things ("give me a set of python dataclasses matching this openapi spec/example request"), but completely fall down on anything more complex than that which also needs to be maintained. Seeing the terrible PRs submitted by colleagues who don't even stop to think "is this code even necessary?" before generating 300 lines of technical debt really reinforces that it's not you who missed something - it's them.
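To be concrete about the kind of "trivial templating" I mean (hypothetical example request and class names, just to illustrate): mechanically turning an example API response into dataclasses is exactly the boring, low-stakes boilerplate where generation works fine, because there's nothing to get subtly wrong.

```python
from dataclasses import dataclass, field

# Hypothetical example: given an example response like
#   {"id": 42, "name": "rosa", "tags": ["admin"]}
# the matching dataclass is pure boilerplate:

@dataclass
class User:
    id: int
    name: str
    tags: list[str] = field(default_factory=list)

user = User(id=42, name="rosa", tags=["admin"])
```

The moment the task involves judgment - does this type belong here, should this code exist at all - that's where it falls apart.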
I can really recommend the book "Build a Large Language Model (From Scratch)", which walks you through building GPT-2. Aside from the fact that building your own pet bullshit generator is a fun exercise, it really pulls back the curtain on the whole "LLMs are just one step from being a self-aware super intelligence" spiel. The thing that separates OpenAI and friends from anyone with a bit of Python experience and a basic understanding of matrix multiplication isn't some mythical AI secret sauce; it's just having access to more bandwidth and GPU compute.
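For a sense of how little secret sauce there is: the core operation of a GPT-style transformer is scaled dot-product attention, which really is just a couple of matrix multiplications and a softmax. A minimal sketch in plain NumPy (function and variable names are mine, and a real model adds masking, multiple heads, and learned projections around this):

```python
import numpy as np

def attention(Q, K, V):
    # Scaled dot-product attention: score every query against every key...
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    # ...turn the scores into per-row probability weights (softmax)...
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # ...and return the weighted mix of the values.
    return weights @ V
```

That's the whole trick, stacked a few dozen layers deep and trained on a scraped copy of the internet.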
That said, I still worry sometimes about the effect the AI hype will have on the world, but realizing that the danger is just the same old large-scale irrational hype capitalism that's already fucking us - not some new scary alien technology with the power to reshape reality - makes it infinitely less anxiety-inducing.
EDIT: and before some smartass comes along to crow about how more recent LLMs are nothing like GPT-2: they're just minor iterations on the same concept, with a few improvements so obvious they could have been a BSc thesis - if BSc theses had access to infinite GPUs.