@lauren The first example about lawyers is exactly trying to use an LLM for factual purposes. Naturally that leads to comical results.
And yes, using LLMs for code without knowing programming is a risky proposition. You might get something usable, but it would be about as reliable as copy-pasting the first answer on Stack Overflow.
I don't understand why you disregard the creative use-case, which is massive atm. Or why you consider my saying that experts can use AI to be "elitist".
@db0 I believe my statements here (and past writings on this subject, which should be easy to find) sufficiently detail my views on this topic so that I need not detail them again here.
I think AI has enormous promise, and LLMs can be very useful. I also feel that the companies pushing these out now in an insane arms race know full well that the vast majority of users don't understand the limitations, are not going to test or verify or check answers (even if they had the skills to do so) and are being horribly misled.
@lauren On that I agree. The companies pushing LLMs, especially for anything factual, are being completely irresponsible. I agree that they are useless for that purpose and that this is an unfixable problem.
I merely wanted to point out that LLMs are not worthless when they're not factual. Their use-case is elsewhere.
@db0 @lauren We’ve had programs for a long time that produce code for you. They’re called compilers.
If you need an LLM to write code for you, you’re not working in a language that is high-level enough for your task. The text you write should be the source code, whatever language you’re using; otherwise you’re just prohibiting people from editing your code.
And finding subtle errors introduced by an LLM is not possible if you weren’t able to write the code yourself in the first place.
@db0 My reply was not mainly directed to you, but to other people reading the thread and mistaking your post for something useful.
And, it’s not disrespectful to point out that someone is wrong. That’s what you tried to do in your post. And I tried to point out that you were wrong.
And please don’t use language like “Learn to interact respectfully next time”. That, if anything, is condescending.
@db0 “You clearly think I don't know what I'm talking about”
Yes. You clearly know very little about computer science, and you misunderstand how Mastodon works.
You were not having a private conversation with the original poster, and my replying to you is not any more “barging in” than what you were doing in the first place. My reply was the third in the tree. There wasn’t any “discussion” happening, just your reply.
@ahltorp Right. So I am accurate in calling you a condescending smuglord. I don't know what you're whining about then.
There's no point in replying seriously to blowhards like you. Learn to behave.
@lauren if a pocket calculator says "10x10={<¥™÷>" it's obvious that it's broken. If my calculator says "10x10=10,000" it's broken in a *much worse* way.
A wrong answer that can pass for right is much more destructive than one that's obviously faulty.
@Smrki No. The big difference is that traditional search requires users to go to those sites to get actual information. This automatically exposes them to far more details, and the SERP puts a range of choices up front, where they're impossible to ignore.
LLM responses present a "prepackaged" single response with a false air of authenticity.
The situations are entirely different.
It is not good when bots talk to each other.
Nah, I’d rather have a 90% functioning script that I have to make a few corrections to than write 100% of it myself.
Most of the time, the whole script it spits out is correct on the first try though.
I mean, the IDE usually tells you where the issue is. Would still save time if you don’t know the language.
In fact, I know it does because I’ve used it for languages I don’t write.
But maybe this is a different use case than you were describing in the OP.
@lauren The way to figure it out is by testing. If you don't have a way to test, yes, you're out of luck. For example, if I ask a colleague for suggestions on how to debug a difficult problem and get back some feasible ideas, they might be wrong, but I can try them out. The same could be the case if I ask something like Copilot how to use some API I'm not familiar with: I expect that there might be problems with the answer, but it's a starting point.
I agree that it's a very bad idea to blur search and LLM, but even search suffers because so many of the results are from content farms filled with low quality crud.
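The "treat the answer as a starting point, then test it" workflow described above can be sketched in a few lines. The snippet below is a hypothetical illustration, not anything suggested in the thread: imagine an assistant claims you can parse RFC 2822 date strings with `email.utils.parsedate_to_datetime`. Rather than trusting the claim, you wrap it in assertions against inputs whose correct result you already know, exactly as you would try out a colleague's debugging idea.

```python
# Hypothetical example: an assistant suggested using
# email.utils.parsedate_to_datetime to parse RFC 2822 dates.
# Before relying on the suggestion, verify it on known inputs.
from email.utils import parsedate_to_datetime

def check_suggestion():
    # A date whose components we know, so a wrong answer can't "pass for right".
    dt = parsedate_to_datetime("Tue, 01 Jul 2025 12:30:00 +0000")
    assert (dt.year, dt.month, dt.day) == (2025, 7, 1)
    assert (dt.hour, dt.minute) == (12, 30)
    # The +0000 offset should come back as UTC, not be silently dropped.
    assert dt.utcoffset().total_seconds() == 0
    return dt

if __name__ == "__main__":
    print(check_suggestion().isoformat())
```

If the assertions pass, the suggestion graduates from "plausible text" to "verified behavior"; if they fail, you have lost only the time it took to write the check.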