A recent toot by @L_howes (I am unable to quote-toot) made me rethink how important it is, when using pure ML/AI models (particularly generative ones), to always remember that they are intrinsically built on correlation alone. Causation comes from the builder or user.
Thus generative AI like #ChatBotGPT does not do science; it mimics it. It does not create art; it copies and mimics it. And when we claim a language model makes things up, it really only does what it was trained for.