Just tried ChatGPT. I asked it a series of specific Qs about areas I've studied in detail.

On all Qs, it gave answers that are plausible sounding but wrong. Not obviously wrong: wrong in subtle ways that need deep domain knowledge to grasp.

The ways humans will be practically misled by this kind of tech, if it's trusted with, say, doling out medical, legal or business advice, are horrific.

Letting this tech loose on the world will further destroy search engines that are already riddled with SEO BS.

These kinds of technologies are a natural response to content-as-commodity—SEO, content marketing, the YouTube algorithm, influencer culture etc.

It doesn't matter that the content is bullshit, X units of content are needed, humans are expensive, let's have a machine that churns out content. Doesn't have to be true, doesn't need to have passed through an actual brain or have had any connection with reality.

"Thanks, I hate it."

OpenAI made a system for being wrong much more efficiently, delighting politicians, propagandists and bullshitters, and scaring the living crap out of everyone with any commitment to the general idea that there are things which are true in a broadly coherent reality that's shared between human beings.

(Also, these are the same people who are worried about existential risk, in as much as that risk involves weird sci-fi like Roko's basilisk but not the actual risk of stuff like climate change.)

@tommorris I have a simple test to show people how wrong the AI is that doesn't require deep domain knowledge:

Ask for works of classical English/German/Spanish literature written by a woman.

That the result is wrong will be immediately obvious.

@tommorris I don't think this is working very well anymore.

@tommorris but this question still illustrates it pretty well:
> Who were female Chinese politicians?

I chose Chinese politicians because ChatGPT will not have had much training on Chinese sources (as they'd have to have been translated into English or another Latin-script language).

@tommorris although asking for Finnish politicians is giving me an even more telling response:

Finland has had many significant female politicians, including Martti Ahtisaari, who was the President of Finland from 1994 to 2000 and was awarded the Nobel Peace Prize in 2008; Tarja Halonen, who was the first female President of Finland and served from 2000 to 2012;

1/2

@tommorris

and Anneli Jäätteenmäki, who was the first female Prime Minister of Finland and served from 2003 to 2007. Other notable female politicians from Finland include Elisabeth Rehn, who was the first female Defense Minister of Finland, and Liisa Hyssälä, who was the first female Speaker of the Parliament of Finland

@csddumi @tommorris to someone not familiar with Finnish politics, why is this telling?
@lritter @tommorris it is contradicting itself.

@lritter @tommorris

Martti Ahtisaari is supposed to be a woman serving as president before the first woman serving as president.

@csddumi @tommorris It reacts well to corrections, e.g. when you say "Actually, Martti Ahtisaari is a woman", it will likely correct the statement.
@lritter @tommorris yeah. But that means you'll need to verify everything this app says, regardless of whether anything else it says is correct.
@lritter @csddumi @tommorris The point is, those warnings will not be there on the places where its output is copied.
@WAHa_06x36 @csddumi @tommorris hence why i posted it. its output is being misrepresented.

@lritter @WAHa_06x36 @tommorris I'm making its origin clear.

But I don't think that'll always be the case.

And the ability to create convincing text en masse, where fact and fiction sit so close to each other: shouldn't that give us pause?

@csddumi @WAHa_06x36 @tommorris people misrepresenting the origin of texts they copied from somewhere else and misusing tools is not new. how do existing tools prevent this?
@lritter @csddumi @tommorris This is an automated way to make convincing misleading content. That’s kind of a dangerous thing.

@WAHa_06x36 @lritter @tommorris not quite.

More like a better search engine

@csddumi Please go back and read the thread from the start. This is EXACTLY the attitude that is incredibly dangerous here.
@WAHa_06x36 I'm not saying it's an incorrect statement. Just that it may not be complete.
@csddumi Did you read the thread?

@WAHa_06x36 yes.

I just don't think that this is the only potential abuse or application of this tool.

@WAHa_06x36 @csddumi it's a search engine that occasionally lies. so, uh... *thinks* wait, that's still like a regular search engine.
@WAHa_06x36 @csddumi @tommorris i argued that we already reached this step with high-level autocorrect tools, as scammers don't even need to master the language anymore, but sure. the challenges to our collective intelligence keep increasing constantly. now even a convincingly written argument isn't enough. you _have_ to research the sources.
@WAHa_06x36 @csddumi @tommorris the abilities of million dollar powered right wing think tanks, now in the hands of everybody! what are we going to do?
@lritter @csddumi @WAHa_06x36 @tommorris you sound like a crypto guy justifying how their technology speeds up and automates scams