There are plenty of tasks which they solve perfectly, today.
Name a single task you would trust an LLM to solve for you, confident the output would be correct without checking it. Because that is my definition of perfectly, and AI falls very, very far short of that.

As a standalone thing, LLMs are awesome.
They really aren’t, though, and that is half the problem. Everyone pretends they are awesome when the results are unusable garbage 80% of the time, which makes them unusable for 99% of practical applications.
It is really not a big change to the way we work unless you work in a language with very low expressiveness, like Java or Go, and we have been able to generate the boilerplate in those languages automatically for decades.
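As a minimal sketch of that last point, assuming Project Lombok (one of several long-standing boilerplate generators for Java, and only an illustrative choice here): a single annotation makes the compiler emit the accessor and equality code, so nobody writes or reviews it by hand.

import lombok.Data;

// Lombok generates the getters, setters, equals/hashCode and toString
// for this class at compile time; no hand-written boilerplate needed.
@Data
public class Invoice {
    private String customerId;
    private long amountCents;
}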
The main problem is that it is not actually useful and does not produce genuinely beneficial results, yet everyone keeps telling us it does while being unable to point to a single GitHub PR or similar source as an example of good code created by AI without heavy manual post-processing. That also completely ignores that reading and fixing other people’s (or worse, an AI’s) code is orders of magnitude harder than writing the same code yourself.
Probably not going to go belly-up for a while
Don’t be so sure about that; the numbers look incredibly bad for them in terms of money burned relative to actual revenue, never mind profit. They can’t even pay for the inference alone (never mind training, staff, rent, …) from the subscriptions.