Just tried ChatGPT. I asked it a series of specific Qs about areas I've studied in detail.

On all Qs, it gave answers that are plausible sounding but wrong. Not obviously wrong: wrong in subtle ways that need deep domain knowledge to grasp.

The ways humans will be practically misled by this kind of tech if it's trusted with, say, doling out medical, legal or business advice are horrific.

Letting this tech loose on the world will further destroy search engines that are already riddled with SEO BS.

These kinds of technologies are a natural response to content-as-commodity: SEO, content marketing, the YouTube algorithm, influencer culture, etc.

It doesn't matter that the content is bullshit: X units of content are needed, humans are expensive, so let's have a machine that churns out content. It doesn't have to be true, doesn't need to have passed through an actual brain or have any connection with reality.

"Thanks, I hate it."

@tommorris I see a lot of people trying this and completely missing the point. It's not an expert; how could it be? To work, it needed data, and they are quite open about where that data came from. I set it tasks like writing a ghost story (style of output) and setting the scene (data input).
@timaikin The point is: when unleashed on the world, it'll produce a torrent of shit, and that shit will have negative consequences for our epistemic environment.
@tommorris I can only speak from my experience of developing NLU models for interacting with complex data sets in the buildings sector. General AI is still sci-fi; what we have is closer to a mechanical Turk. Companies will exaggerate for investment. Language like "neural networks" is more about trying to explain the mechanics than a claim that it's actually a biological organism.