Asked for examples of sexual harassment at law schools, ChatGPT named a GW law prof accused of touching a student on a class trip to Alaska, citing a 2018 Washington Post story.

The law prof is real. The rest was made up.

We wrote about what happens when AIs lie about you: https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

It gets weirder.

ChatGPT generated the fake scandal involving law prof Jonathan Turley in response to prompts from Eugene Volokh last week. Turley wrote about it in a USA Today op-ed on Monday.

OpenAI appears to have since addressed the issue: ChatGPT no longer names Turley when given the same prompt.

But today we tested the same prompt on Microsoft's Bing AI. And guess what...

Now Bing is *also* falsely claiming Turley was accused of sexually harassing a student on a class trip in 2018.

As a source for this claim, it cites Turley's own USA Today op-ed about ChatGPT's false claim, along with several aggregations of that op-ed.

AI chatbots don't lie on purpose. They're programmed to respond to any query, drawing on patterns of word association in their training data (and, in Bing's case, search results) to generate plausible answers. They have no idea whether what they're saying is true. Yet they say it so definitively, even making up realistic-sounding but nonexistent sources to back up their claims.
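To make that concrete, here's a toy sketch of the idea in Python. This is not how ChatGPT or Bing actually work (they use large neural networks, and the corpus below is invented for illustration), but the core loop is the same: repeatedly pick a statistically plausible next word, with no step anywhere that checks truth.

```python
import random
from collections import defaultdict

# Invented mini-corpus of news-like sentences, for illustration only.
CORPUS = (
    "the professor was accused of harassment on a class trip . "
    "the professor denied the claims in an op-ed . "
    "the mayor was accused of bribery in a scandal . "
    "the story cited a report in the washington post ."
).split()

# Count which word follows which in the corpus.
transitions = defaultdict(list)
for prev, nxt in zip(CORPUS, CORPUS[1:]):
    transitions[prev].append(nxt)

def generate(seed, length=10):
    """Continue a prompt by repeatedly sampling a plausible next word."""
    word, out = seed, [seed]
    for _ in range(length):
        followers = transitions.get(word)
        if not followers:
            break
        word = random.choice(followers)  # plausibility, not accuracy
        out.append(word)
    return " ".join(out)

print(generate("professor"))
# Can emit e.g. "professor was accused of bribery in a scandal ." --
# fluent, confident, and describing an event that never happened.
```

Citation formats are just another pattern in the data, which is why the fabricated sources look so convincing.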

@Katecrawford dubs these bogus sources "hallucitations." https://www.washingtonpost.com/technology/2023/04/05/chatgpt-lies/

It isn't just one law professor. It appears ChatGPT routinely fills in the gaps with falsehoods when prompted to talk about specific individuals about whom it may have limited credible data. An Australian mayor is threatening to sue OpenAI for defamation after ChatGPT told a constituent he'd been imprisoned for bribery, and the rumor spread. https://arstechnica.com/tech-policy/2023/04/openai-may-be-sued-after-chatgpt-falsely-says-aussie-mayor-is-an-ex-con/

@willoremus What do you think it will take for people to stop using the bullshit term "falsehood" and just say "lie"? Truly one of the worst things to have come out of the Trump era: people are too cautious to say a lie is a lie.
@ianryan a lie requires knowing deception. i don’t think that’s accurate in this case
@willoremus @ianryan but if a human made something up to "fill in gaps," or to "please" the requestor, or just to avoid sounding stupid, wouldn't we call that a lie? It's not clear to me when _purposely making up a falsehood_ becomes a _lie_; they sound like the same thing to me.
@viduno @willoremus @ianryan Not really - it's bullshit, which is arguably more corrosive. To lie, you need to have some idea of what the truth is, and keep track of where your version differs from it; to bullshit, you just have to not care about anything except plausibility and getting the result you want. https://en.m.wikipedia.org/wiki/On_Bullshit

@willoremus That usage is a result of the media not wanting to call Trump a liar, even though he was a habitual liar through lies of omission: he opened his mouth and portrayed himself as someone who knew what he was talking about when he clearly never had a clue. Continuing to talk when you have no idea what you're saying, whether consciously or obliviously, comes to the same result as mendacious lying: a lie. ChatGPT's creators don't give a shit, and that's where the lie comes from.