@matt a year ago people said the same thing about LLMs, and the same a year before that. In that time, I've still seen many people repeatedly conclude that they felt more productive than the LLM actually made them. On top of that, the output from LLMs I see is still regularly unreliable.
This is a big reason I keep telling people that personal experience with an LLM doesn't tell you anything. Getting a bunch of code that "looks right" doesn't mean it's saving you time, and generating a bunch of code was never the hard part of this job. On top of that, people keep showing me examples in duck-typed languages, which is like: yeah, of course it looks like it can write code, it doesn't need to understand anything that's happening to do so.
I also just don't get this. Why would we subject ourselves to it? Isn't the idea of your entire job turning into supervising a machine that does pretty horrendous work, one you have to gamble on prompts with, terrifying?