RE: https://tldr.nettime.org/@tante/116206664781934665
Like: I am not even a scientist or academic and even I see how garbage that idea is.
@tante As a scientist, I can say that yes, these people are unequivocally missing the point.
"Research" (reading the literature, evaluating contrasting papers, etc.) is different from "research" (theorizing, or designing and running experiments), and the latter cannot be done by these AI tools.
You need to interact with the real world in some way and those interactions are where you, the scientist, actually learn valuable things that are worth writing about and sharing.
@tante AI Booster: "but you can scan the literature and formulate hypotheses faster using LLMs so you can focus on the theorizing more"
The reading, criticism, and evaluation are where you figure out which questions and hypotheses are even worth investigating in the first place, and how to go about the experimentation.
It's like practicing your scales to become a good composer: you need to know the fundamental building blocks and what's possible before you can put together something of value.
As much as I wish I'd come up with that myself, I got the idea from Derek Muller (the Veritasium guy), who gave a talk on AI and learning a while ago. If you get a chance to watch it, he makes quite a few good points.

They're also missing that putting in the hours and actually consciously engaging with the material is how you build the understanding and intuition required to pose new questions. You can't short-circuit that process.
(This is the same thing they're oblivious to in software development too: when you truly understand the problem, writing the code is trivial. When you don't, writing the code is how you gain understanding of the problem.)