I find AI doomerism annoying and overblown. I don't think many proponents of AI doomerism are really thinking for themselves; they're repeating the opinions of a small intellectual/writer class who personally see little value in current large language models. But many people are bad at writing and at summarizing data quickly, so for a huge number of people, LLMs do provide value. Failing to recognize this is a brag in disguise, as if to say, "I personally don't need LLMs!"
People repeat the opinions of others when doing so elevates their own status. I suspect that AI doomerism is a common belief not because it's accurate but because it's high status.
Likewise, the common refrain that "most people don't understand how exponential growth works, that AI is just going to get better and better as it improves itself" is another subtle form of self-flattery. Yes, change can build on itself, and we never know exactly how, or whether, society or the economy or whatever will adjust to it. But that has always been true, and another word for it is "progress." That's all progress is: a series of incremental changes that get increasingly far from a primordial form.
Most people have trouble with progress. It's always been kind of a difficult concept to wrap one's head around. Our economy has progressed, our society has progressed, and yes, our technology has progressed and is progressing (these things are all related, they all caused friction, and they all faced and continue to face opposition, to greater or lesser degrees, from high-status groups). The actual form progress takes is always new, but as a phenomenon it's as old as humanity.
For instance, computers have been designing new computers for a long time now. Considering how many transistors modern chips have, it's impossible to lay out a new, competitive processor the way our ancestors in the 70s and 80s did, with a mechanical pencil and graph paper. We use generative AI. We've just been doing it for so long that we don't call it that, because that phrase wasn't in common usage until recently. We call it computer-aided design (or, in the chip world, electronic design automation).
This doesn't mean I don't think there are any downsides to the current LLM craze—I'm getting tired of the slop, too, and I don't think every new product needs an assistant button or toolbar—but there are downsides to every new technology. It's always been two steps forward, one step back. I can't think of a single time in the past that didn't have something I liked about it, but I still prefer to live in the present, and I don't think that will change in twenty, thirty, fifty years.