Can we please stop using human-based terms to describe the output of #AI? It’s not “hallucinating”, “reasoning”, “deciding”, “changing its mind”, or anything else that we objectively and subjectively perceive as human, learning, or teachable. It is a series of commands that outputs a result. That result may or may not be plausible. That’s all it does.
#Output.
The output was funny.
The output was way off the truth.
The patient believed the output and died.
Construction proceeded using output.
/rant