@rpin42 Humans might sometimes resemble this process, but it is not at all accurate to say we apply “exactly the same approach” because we plainly do not. We can remember facts. We can detect inconsistencies. We can detect and ignore superfluous information. An LLM cannot do any of these things.
Sometimes the output looks as though it does these things. But the fact that the output looks like the output of thinking doesn’t mean it was the result of thinking, or even of a process analogous to thinking. We think. It doesn’t.