There's a whole cottage industry right now looking at the limits of #ChatGPT (e.g. it can't double-check claims or do external attribution) and turning those limits into sentences that begin "Language models can only ever ..." I get why this is a reassuring, widely circulated message, but I wish a few people circulating it would mention that there is already active research that seeks to address those limits. https://paperswithcode.com/paper/attributed-question-answering-evaluation-and

There are also prototypes, e.g. perplexity.ai. And it's obviously an area where Microsoft and Google are preparing for fierce competition. So if I'm honest, I feel like the cottage industry of reassurance is mostly misleading people in this case. https://www.perplexity.ai/?uuid=684bdf06-5bcd-4f76-9d80-494126816273

@TedUnderwood
I think the main reason they released ChatGPT was to prepare us for the shock of GPT-4.

I agree with what you said: we should be more concerned with how this changes everything than with its short-term limitations.