With #Galactica and #ChatGPT I'm seeing people again getting excited about the prospect of using language models to "access knowledge" (i.e. instead of search engines). They are not fit for that purpose --- both because they are designed to just make shit up and because they don't support information literacy. Chirag Shah and I lay this out in detail in our CHIIR 2022 paper:

https://dl.acm.org/doi/10.1145/3498366.3505816


Situating Search | Proceedings of the 2022 Conference on Human Information Interaction and Retrieval

@emilymbender Can we be excited about computers getting better at understanding natural language and its context as input? Is that the big shift when it comes to #ChatGPT and #Galactica? Maybe they cannot provide accurate answers, but they help computers understand what we mean when we talk to them.
@max @emilymbender In fact, I tried out ChatGPT with very different questions and it gave me correct, precise, and understandable answers most of the time. Saying that these models just make things up and cannot give correct answers shows a lack of knowledge about this tool. Such AI agents will be a fundamental part of every human-machine interaction in the near future.
@max @emilymbender They can't do that, Max, because they are neural networks that just produce an output without being aware of what they are doing.
@lolzac I know what they are and how they work, but that is not a reason why they cannot get better at understanding language. ChatGPT is one example; image recognition is another. Neural networks don't need to be sentient or "know" what they are doing to produce outputs that are useful to us.