Super frustrated with all the cheerleading over chatbots for search, so here's a thread of presentations of my work with Chirag Shah on why this is a bad idea. Follow threaded replies for:

op-ed
media coverage
original paper
conference presentation

Please boost whichever (if any) speak to you.

Chatbots are not a good replacement for search engines

https://iai.tv/articles/all-knowing-machines-are-a-fantasy-auid-2334

All-knowing machines are a fantasy | Emily M. Bender and Chirag Shah

The idea of an all-knowing computer program comes from science fiction and should stay there. Despite the seductive fluency of ChatGPT and other language models, they remain unsuitable as sources of knowledge. We must fight against the instinct to trust a human-sounding machine, argue Emily M. Bender & Chirag Shah.

IAI TV - Changing how the world thinks
Chatbots could one day replace search engines. Here’s why that’s a terrible idea.

Language models are mindless mimics that do not understand what they are saying—so why do we pretend they’re experts?

MIT Technology Review

Chatbots-as-search is an idea based on optimizing for convenience. But convenience is often at odds with what we need to be doing as we access and assess information.

https://www.washington.edu/news/2022/03/14/qa-preserving-context-and-user-intent-in-the-future-of-web-search/

Q&A: Preserving context and user intent in the future of web search

In a new perspective paper, University of Washington professors Emily M. Bender and Chirag Shah respond to proposals that reimagine web search as an application for large language model-driven...

UW News

Chatbots/large language models for search was a bad idea when Google proposed it and is still a bad idea even when coming from Meta, OpenAI or You.com

https://dl.acm.org/doi/10.1145/3498366.3505816

Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval

ACM Conferences

Language models/automated BS generators only have information about word distributions. If they happen to produce sentences that make sense, it's because we make sense of them. But disconnected "information" inhibits the broader project of sense-making.

https://www.youtube.com/watch?v=VY1GHbU_FYs&list=PLn0nrSd4xjjY3E1qxXpWDoF7q-Q3d6g_A&index=17

Situating Search

YouTube

@emilymbender Hah. That's some... loaded wording, while we're speaking of trusting interactions by default.

Raw language models are a terrible way to acquire information, but they have lots of potential as an interface, given that a lot of quick information search already happens via speech recognition in "dumb" assistant software.

The right place for a chatbot in the UX isn't as the search engine itself; it's as a parser of the query or result in applications like text-to-voice.

@emilymbender And even then I'd suggest that for certain applications, like as accessibility tools, the downsides in information accuracy may be tolerable.

I'm also curious about the circularity of the argument that search engines share some of these same problems. Yeah, they do, and the pushback against them in the 90s looked a lot like this, too. But that ship has sailed, hit an iceberg, and sunk in the middle of the ocean. We are starting from a search-engine-saturated world already.

@emilymbender Which is to say, dumb algorithms are already pushing bias, we are already giving them too much credit and they already can be compromised by hostile techniques like SEO.

I don't think AI chat is inherently more believable because it sounds more human, that's just the uncanny valley of seeing new tech. We should hope to design these to do better than the old tech, but surely the bar for usability is to not do worse, which is much easier.

@emilymbender Alright, I'll stop threading and ranting, but just one more warning. A lot of the observations and criticism of generative AI, chatbots and the like, both reasonable and unreasonable, are starting to degenerate into straight bias against ML, which is dangerous. ML is already ubiquitous and crucial to lots of fields, from astrophysics to game development.

Let's be careful to not let reasonable warnings about big data devolve into technophobia against the research field in general.