@JMMaok @tob @Miniver
The student came up with the "fact" (that Greek is a combination of 4 languages), and when they decided to look it up, he opened ChatGPT. As the post is written, he must have asked ChatGPT again and gotten the same answer once more, while the teacher was looking it up in a classic online search engine. (Hence point 2.) The student must have gotten the same wrong answer twice, otherwise the post wouldn't work.
@JMMaok @tob @Miniver
And judging from my (very brief) experience with ChatGPT, he must have gotten a different answer the second time and been like "WTF, ChatGPT???"

@ditol @JMMaok @Miniver You know, I had a different read. I think the student got the wrong answer from the GPT, and then he searched for that wrong answer and found supporting results.

If he had asked the GPT again, he would have gotten a different answer. Good teaching moment, that. But he didn't. He took the GPT answer at face value.

@tob @JMMaok @Miniver
It says literally: "I said let’s look this stuff up together, and they said OK, I’ll open a search bar, and they opened … Ch*tGPT."
Thus, the student came in with a "fact" he learned from ChatGPT, and then, for the sake of the argument, he opened the ChatGPT prompt again. And they went on discussing. Hence, he must have gotten the same result twice, which is very surprising and makes me sceptical about the story. Not about the general problem, though.

@ditol @tob @JMMaok @Miniver

The thing is that, like traditional search engines, ChatGPT can easily be "prompted" with the answer you are looking for. For example, when I asked "How is Greek a combination of four other languages?", it came back with a bulleted list of four languages (Latin, Turkish, Italian, and French, for those curious) and a paragraph on how each made up Greek during the late Byzantine Empire (leaving aside that Enlightenment-era French did not exist at that time).

@masonbially @tob @JMMaok @Miniver
Yes, right, thanks! This is a good explanation; I should've thought of it. If this is a real story, I really would have loved to read it in full, not just the TL;DR version from Twitter. It must have been hilarious. In such a conversation, they would end up with the GPT-"internet" generating answers that seemingly support the previous nonsense, to which it is hard to find counterarguments because nobody ever cared to disprove obvious nonsense. This is good.