In most cases, LLMs will not replace humans or reduce labor costs as companies hope. They will *increase* labor costs, in the form of tedious clean-up and rebuilding customer trust.
After a brief sugar high in which LLMs rapidly and easily create messes that look like successes, a whole lot of orgs are going to find themselves climbing out of deep holes of their own digging.
Example from @Joshsharp:
https://aus.social/@Joshsharp/112646263257692603
j# (@[email protected])
Yesterday we had another example of LLMs creating support issues for us.

User: "hi, how do I do this thing? Your docs say I can go here and change some options, but there's no settings there"

Me: "that's right, we don't have such a feature, but also we don't say you can do it in the docs, where did you read that?"

User: "oh I didn't actually read the docs, I asked 'AI' and it hallucinated this answer. Sorry!"

At this rate I'm looking forward to 2025, when I'll be spending 100% of my time doing support to correct falsehoods about our app made up by LLMs.
