@hacks4pancakes “When we threatened to switch off the bot, it responded defensively, just like a human!”
You know who else responds defensively to said “attacks”?
AIs in sci-fi books.
It’s almost like, probabilistically speaking, the next words following “we’re going to switch you off” are going to be some form of defensive action.
@hacks4pancakes I have friends in the nlp research space, in academia.
They love LLMs. The research they can do on human language is amazing!
To a one, none of them will use any of the AI tools. They are perpetually confused as to why people trust LLMs for...anything that isn’t research into human language.
@b4ux1t3 @hacks4pancakes llm are by their very nature non-deterministic.
Why anyone would trust it verbatim is beyond me. I occasionally use it at work for coding, and it is suitable for some tasks if we're vigilant on code quality and we have a human QA team to verify.
Trust it? Never. Use it. Sure, in carefully controlled circumstances.
@Sablebadger pretty much the bulk of utility from the technology comes from semantic search; you don’t really even need the LLM for that, it’s just that something that can translate between the machine returns from something like a vector store back to plain English is very good UX. That’s likely what lead to people discovering the interesting emergent “agentic” behaviors (these are actually cool behaviors…just not the worldchanging BS they’re pushing).
Listening to folks in academia talk about universal human translators while they simultaneously avoid the agentic stuff really puts things into perspective to me.