With #Galactica and #ChatGPT I'm seeing people again getting excited about the prospect of using language models to "access knowledge" (i.e., in place of search engines). They are not fit for that purpose --- both because they are designed to just make shit up and because they don't support information literacy. Chirag Shah and I lay this out in detail in our CHIIR 2022 paper:

https://dl.acm.org/doi/10.1145/3498366.3505816

Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval
@emilymbender Discovering that a chat bot's data is inconsistent with its sources is often trivial. I've had fun asking GPT-3 questions about the rules of Scrabble, especially around how many tiles there are for each letter, and whether you can spell out certain words with and without blank tiles. The answers are internally inconsistent, don't correspond with actual Scrabble rules, and don't even correspond with the source material that GPT-3 pointed me to when I asked it how it "knew".
@markproxy I am well aware. I wasn't asking for experience reports with GPT-3 or any other LLM with that post.
@emilymbender I meant merely to support your point with an illustrative example from my own experience, in the context of conversation around an important topic. I can see how it might have come across differently; sorry for that!
@markproxy Thank you for this. In the future, please consider whether such support was requested before offering it as well as how you present it if you choose to do so.