I find AI doomerism annoying and overblown. I don't think many of its proponents are really thinking for themselves; they're repeating the opinions of a small intellectual and writer class who personally see little value in current large language models. Many people are bad at writing and at summarizing data quickly, so for a huge number of people, LLMs do provide real value. Failing to recognize this amounts to a brag, as if to say, "I personally don't need LLMs!"