This thread is a microcosm of how language models are going to break a lot of things we like, by making things up and poisoning search results with artificial answers that get accepted as reality. #chatgpt #languagemodels #techthebadparts https://kolektiva.social/@pandion/109652335028245111
pandion (@[email protected])

We don't talk enough about how often ChatGPT just makes shit up. It's trying to convince me there's an Isaac Asimov story I've never read called "Susan Calvin and the Missing Lover," published in Jan 1957 in Amazing Science Fiction. There are zero references to this anywhere on the entire internet. I feel so gaslit right now!

kolektiva.social
#chatgpt and other language models have no accuracy filter; they often just make up “facts” and present them as reality. As more and more people post ChatGPT results online and create content with factual inaccuracies in it, these will be cited as prior authority on the topic. It doesn’t take many references for something on the internet to get repeated and eventually accepted as fact.

@fields @jlori

📚 Discussion Opportunities

❓ What new opportunities do services like ChatGPT offer to students?

❓ What is different about how we seek info from "chatbots" as compared to search engines?

❓ What are some possible risks about such services?

⭐ If you have used ChatGPT or a similar service describe that experience. What did you like about it? What didn't you like?

#education #teacher #newsliteracy #medialiteracy #lessonplan #edtech #ChatGPT #criticalthinking @edutooters

@CultureOverlord @fields @jlori @edutooters I am genuinely worried about the AMOUNT of CS literacy it takes to understand the difference between what Siri or the Google Assistant is doing and what ChatGPT is doing. I don't know how we can possibly teach people how fundamentally different what they're doing is and what that means for the credibility of their answers.

@pandion @fields @jlori @edutooters

Good point!

Our LIMITED understanding:

The primary intention of ChatGPT is to engage in human-like conversations.

The primary intention of voice assistants is to complete tasks. To do so they necessarily engage in human-like conversations but their human-ness is secondary.

If the above is true, then ChatGPT's focus on "seeming human" matters more than its accuracy. ChatGPT might be a storyteller for whom anecdotes (or fantasy?) are as useful as facts.

@CultureOverlord @pandion @jlori @edutooters except that people apparently have *zero* (rounded) framework for evaluating whether its answers can be trusted or not, as evidenced already by the sheer proliferation of “chatgpt write me a blog post” prompts I’ve seen.
@fields @CultureOverlord @pandion @jlori @edutooters If teachers choose to use it, they’ll need to be careful, as I’ve already seen some issues. For example, I asked it to write an example lesson plan using Georgia’s 7th grade standards. I didn’t specify which standard, and what it produced was completely incorrect. Just because it spits something out doesn’t make it factually correct, so it’s up to teachers to verify everything.

@pmcgonagleEDU @fields @pandion @jlori @edutooters

We generally regard it as a charming after-dinner speaker whose history and knowledge are unknown.

We might enjoy the things it has to say but have no idea if any of those things are reliable.

It could be an interesting classroom exercise to have it create a draft paper and then have the class fact-check it for accuracy.

@CultureOverlord @pmcgonagleEDU @pandion @jlori @edutooters honestly I find its choice of tone and inflection to be incredibly annoying and repetitive. It’s like an unwanted guest that someone else invited and won’t leave the kitchen.
@fields @CultureOverlord @pandion @jlori @edutooters I also noticed the repetitiveness, but that’ll get better over time as well as with its accuracy.
@pmcgonagleEDU @fields @CultureOverlord @jlori @edutooters I asked it to stop telling me it's a language model every other sentence, and it said sorry, it has difficulty doing that because, you see, it's a language model.