Not all AI is bad, just most of it
I already use LLMs to problem-solve issues that I’m having, and they’re typically better than me punching questions into Google. I admit that I’ve had an LLM hallucinate once while it was trying to solve a problem for me, but the vast majority of the time it has been quite helpful. That’s been my experience at least. YMMV.
If you think LLMs suck, I’m guessing you haven’t actually used telephone tech support in the past 10 years. That’s a version of hell I wish on very few people.
I’m specifically claiming that they’re bullshit machines, i.e. they’re generating synthetic text without context or understanding. My experience with search engines and telephone support is way better than what any LLM has fed me.
It’s funny you bring up Luddites, since they actually had the right idea about technology like LLMs. They were highly skilled textile workers who opposed the introduction of dangerous mechanical looms that produced low-quality goods but were so easy to use that a child could work them (which is exactly why the mill owners wanted them: they could employ children). They only got their reputation as backward anti-technology lunatics afterwards. They were actually concerned about low-quality technology being deployed to weaken workers’ rights, cheapen products, and make bosses even richer. That’s actually the main issue I have with what’s happening with AI.
There’s a book by Brian Merchant called “Blood in the Machine” on the topic, if you’re interested. He’s also been on a bunch of podcasts, if you’re not a big reader.
I’m referring to “bullshit” in the way argued in this paper:

Recently, there has been considerable interest in large language models: machine learning systems which produce human-like text and dialogue. Applications of these systems have been plagued by persistent inaccuracies in their output; these are often called “AI hallucinations”. We argue that these falsehoods, and the overall activity of large language models, is better understood as bullshit in the sense explored by Frankfurt (On Bullshit, Princeton, 2005): the models are in an important way indifferent to the truth of their outputs. We distinguish two ways in which the models can be said to be bullshitters, and argue that they clearly meet at least one of these definitions. We further argue that describing AI misrepresentations as bullshit is both a more useful and more accurate way of predicting and discussing the behaviour of these systems.
That’s 99% of what I’m looking for. If I’m figuring something out by myself, I’m not looking it up on the internet.
I’m an engineer and I’ve found LLMs great for helping me understand an issue. When you read something online, you have to translate what the author is saying into your own way of thinking, and I’ve found LLMs are much better at reframing information to match my inner dialog. I often find them much more useful than Google searches when I’m trying to find information.
The script doesn’t go away when you replace a helpdesk operator with ChatGPT. You just get a script-reading interface without empathy and a severely hindered ability to process novel issues outside its protocol.
The humans you speak to could do exactly what you’re asking for, if the business did not handcuff them to a script.
But they do handcuff them to a script… at least 1st and 2nd level tech support. That’s the point. It’s so fucking awful. It’s a barrier to keep you from the more highly paid tech support people who may actually be able to answer your questions. First you have to wait on hold to make sure you think it’s worth wasting their time on your annoying problem, THEN it’s a maze you have to navigate, and then whoops you just got hung up on… so sorry, start all over! LLMs are (can be) so much better at this!