With #Galactica and #ChatGPT I'm seeing people again getting excited about the prospect of using language models to "access knowledge" (i.e. instead of search engines). They are not fit for that purpose --- both because they are designed to just make shit up and because they don't support information literacy. Chirag Shah and I lay this out in detail in our CHIIR 2022 paper:

https://dl.acm.org/doi/10.1145/3498366.3505816

Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval

@emilymbender Is #ChatGPT able to return, as part of its reply, links to the sources on which it based its reply?
@miklo @emilymbender I don't believe so, as the trained model does not actually retain information about which sources its output is based on.
@rmbles @miklo Indeed not. This is part of why it's such a terrible idea to use a language model as an information access system. It cuts off any ability for the person using the system to contextualize the source of the information (when the information is even legitimate, which happens only by chance with these things).
@miklo @emilymbender In my limited experiments, I've been able to ask for and receive links to sources, and the linked content has always been relevant but often contradictory. I have asked for additional sources, thinking GPT-3 might have "averaged" multiple sources, but have not typically gotten anything to support my hypothesis; instead, I get the same link as the first time, or a link to another contradictory source.
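A minimal sketch of the kind of experiment described in that last post, for anyone who wants to try it themselves: prompt a GPT-3-style model through the OpenAI API and ask it to include links to its sources. The model name, prompt wording, and question below are placeholder assumptions, not anything taken from this thread, and any links in the reply are generated text rather than retrieved citations, so each one has to be checked by hand.

import os

from openai import OpenAI

# Assumes the OPENAI_API_KEY environment variable is set.
client = OpenAI(api_key=os.environ["OPENAI_API_KEY"])

question = "What causes the seasons on Earth?"  # placeholder question

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the thread discusses GPT-3 / ChatGPT
    messages=[{
        "role": "user",
        "content": question + " Please include links to the sources your answer is based on.",
    }],
    temperature=0,  # reduce randomness so repeated runs are easier to compare
)

# The reply may contain URLs, but they come from text generation, not retrieval:
# they can be irrelevant, contradictory, or entirely made up, which is exactly
# the concern raised earlier in the thread.
print(response.choices[0].message.content)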