@evan Done w this question. Fully set it down when I realized AI is 'just' the latest religious war among coders. (Perhaps second only to tabs-vs-spaces.)
Now I'm consciously trying to participate in AI conversations (if I participate at all) in ways that break the "Is AI good or bad?" framing, rather than reifying the fight.
Firefox did this in code: big button to turn AI off, little buttons to turn on LLM-powered features, starting w on-device translation which ~everyone understands & wants.
@evan The definitions of “OK” and “AI” are pretty darn load bearing in this question.
I’m “No, but”… it hinges on my assumption that AI means large private LLMs, and OK means “an ethical thing to do”.
(Not a complaint, I know you can’t exhaustively define every word in existence)
@evan
By #ai you probably mean #llm?
Personally I'm not very fond of them, some people however don't seem to be able to function without them anymore.
The definition of what counts as an error can be very wide or very narrow, and assessing 'correctness' can entail several things.
Was correct syntax, spelling, or grammar used? Does the logic contain any obvious mistakes? The topic of ethics is very tricky, as an LLM is unable to do any actual reasoning. The output can look convincing, but is it really?
@gatesvp
The definition of #ai is too broad.
Technology sold as artificial intelligence is unable to reason, and it's therefore very much up for discussion whether one can call it intelligent.
A #llm, or Large Language Model in full, is a parroting machine that can generate text that looks convincing. Looking convincing doesn't necessarily mean it is correct. Please also keep in mind that all #llms have implicit biases and explicit filters that heavily skew the output. This makes their use limited.
@evan We know large language models can't exist in their current form without using copyrighted data.
How are you ensuring your model doesn't contain copyrighted data?
And if you're going to use a model from one of the big tech providers, there's going to be the issue of complicity with what they're doing.
An #LLM, as provided today, no. They are all trained on data whose copyright they do not respect; they all consume gargantuan amounts of scarce resources in training and inference; they are proprietary and centralised, and any dependency on them is begging to be exploited. It's never okay to take part in that.
But AI existed long before LLMs, and will continue long after the LLM bubble pops. It's fine to use an AI that doesn't have those problems (e.g. spell check) to analyse documents for errors.