A recent toot by @L_howes (I am unable to quote-toot) made me rethink how important it is, when using pure ML/AI models (particularly generative ones), to always remember that they are intrinsically built on correlation only. Causation comes from the builder or user.
Thus generative AI like #ChatBotGPT does not do science, it mimics it. It does not create art, it copies and mimics it. And when we claim a language model makes things up, it really only does what it was trained to do.
@christofjaeger Yes. In this instance ChatGPT did what it was made to do. The trouble lies in people misunderstanding what that is and what it does and doesn't mean for them.
@christofjaeger Also, you would not believe the mansplaining I've got from some quarters about this. I know what it's doing; I'm trying to get other people to recognise that. The fact that they don't is a big problem that's going to propagate.
@L_howes yes, yes, and yes. Partly, as so often, the overhype is to blame. First comes the hype, then comes the time in practice, the critical evaluation, further development ... business as usual.
But as long as #ChatGPT is still hyped with headlines like 'imagine how this will change everything after AlphaFold solved the protein folding problem' (WHICH IT DIDN'T), people will gratefully be led up the garden path.