I see a difference between disciplines in what counts as the core intellectual contribution. E.g. in empirical engineering research, where progress can mean finding a solution that demonstrably does better on some benchmark, how you get to that solution, how you synthesise the state of the art on the matter, etc. are not core contributions themselves, so it’s fine to experiment with LLMs in those aspects.
https://fediscience.org/@UlrikeHahn/116153459113075153
Ulrike Hahn (@[email protected])
I personally use AI only in the context of research *on* AI. I intentionally don’t use it to facilitate my research, and I currently feel that’s the right choice for me. But I know many researchers who choose differently, who are both excellent scientists and people of integrity whom I respect. The discourse which, like clockwork, tells such researchers either that they don’t understand how these systems work and the systems are garbage, and/or that the researchers themselves are morally deficient, isn’t changing minds. It’s hard for me to see it as helpful.