Is it OK to use AI to analyze code and documents for errors?

#EvanPoll #poll #ai

Yes
37.3%
Yes, but...
39.3%
No, but...
6.4%
No
17%

@evan
By #ai you probably mean #llm?

Personally, I'm not very fond of them; some people, however, don't seem to be able to function without them anymore.

The definition of what counts as an error can be very wide or very narrow, and assessing 'correctness' can entail several things.

Was the correct syntax, spelling, or grammar used? Does the logic contain any obvious mistakes? The topic of ethics is especially tricky, as an LLM is unable to do any actual reasoning. The output can look convincing, but is it really?

@alterelefant @evan

By #ai you probably mean #llm?

Look, Evan is a professional communicator. Director, board member, researcher... Using the right words is key to each of his jobs.

Why would you assume that he meant a word he specifically didn't say?

@gatesvp
The definition of #ai is too broad.

Technology sold as artificial intelligence is unable to reason, and it is therefore very much up for discussion whether one can call it intelligent.

An #llm, or Large Language Model in full, is a parroting machine that can generate text that looks convincing. Looking convincing doesn't necessarily mean it is correct. Please also keep in mind that all LLMs have implicit bias and explicit filters that heavily skew the output. This makes their usefulness limited.