I hate talking about LLM stuff all the time, but one recent thought I had is that a lot of (non-tech) people think these tools are amazing because they seem like magic: doing tasks the users themselves wouldn't be able to do.

Except I think users can do these things. But ChatGPT and the others hide the tool calls that happen behind the scenes, so it keeps looking like magic.

If they showed that the LLM queried Google, used a calculator, or ran some code, it would reveal that the LLM itself isn't doing as much of the heavy lifting. Ruining the illusion is probably bad for business, though.
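The hidden flow described above can be sketched as a tiny dispatch loop: the model's real job is picking a tool and phrasing the result, while the tool does the actual work. Everything here (the tool names, the `answer` function, the call format) is hypothetical, not any vendor's actual API:

```python
# Hypothetical sketch of an LLM tool-call round trip. The deterministic
# tools do the heavy lifting; the "generative" step is a thin wrapper.

def calculator(expression: str) -> str:
    # the calculator tool does the arithmetic, not the model
    return str(eval(expression, {"__builtins__": {}}, {}))

def web_search(query: str) -> str:
    # stand-in for a real search backend
    return f"top result for {query!r}"

TOOLS = {"calculator": calculator, "web_search": web_search}

def answer(tool_call: dict) -> str:
    """Dispatch a model-emitted tool call and wrap the result as the reply."""
    tool = TOOLS[tool_call["name"]]
    result = tool(tool_call["argument"])
    # in the UI, only this final sentence is shown; the call above is hidden
    return f"The answer is {result}."

print(answer({"name": "calculator", "argument": "19 * 23"}))
# → The answer is 437.
```

Showing the intermediate `tool_call` dict in the UI, rather than just the final sentence, is exactly the transparency the post is asking for.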

@xssfox it’s like RAG: if you’ve got a perfectly good vector search tool, why do you need to wrap the search results in a generative summariser?