With #Galactica and #ChatGPT I'm seeing people again getting excited about the prospect of using language models to "access knowledge" (i.e. instead of search engines). They are not fit for that purpose --- both because they are designed to just make shit up and because they don't support information literacy. Chirag Shah and I lay this out in detail in our CHIIR 2022 paper:

https://dl.acm.org/doi/10.1145/3498366.3505816


Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval (ACM Conferences)

Chatbots could one day replace search engines. Here’s why that’s a terrible idea.

Language models are mindless mimics that do not understand what they are saying—so why do we pretend they’re experts?

MIT Technology Review
@emilymbender

This thread from Twitter makes a similar argument with some background information about Google's search strategy:

https://twitter.com/deliprao/status/1599098378172104704?t=QlQFV6P3OServvtYepryHg&s=19
@deliprao on Twitter

“Despite the amazing results I’ve experienced with ChatGPT, this is not a correct way to look at LLM vs. Google search. Since several other tweets have made this equivalence and have been eager to spell doom for Google, let’s examine the details:”

@ltmccarty The initial proposals that Shah & I were reacting to in our paper came from ... Google (including Sundar Pichai himself at Google I/O).
@emilymbender Thanks for sharing these! I’m adding them as good readings to my #digitalhumanities syllabi for spring.
@emilymbender In addition to the TR piece, your interview from May in pnw.ai might also be useful here, particularly for its point about terms and framing (and reference to Stefano Quintarelli's alternative term for AI: “SALAMI” = “systematic approaches to learning algorithms and machine inferences”) https://pnw.ai/article/the-problem-with-overestimating-ai/121722775
The problem with overestimating AI
