#AI drama: besides all the obvious problems, I am now imagining an additional long-term problem caused by the use (which is always an abuse) of #LLM tools by students.
You risk having #imposter_syndrome all your life if all the important tasks of research (reading articles, summarising them, writing down your research, coding up your analysis, etc.) are outsourced from the very beginning to some #LLM model.
Thesis time is exactly meant to help you learn these basic steps!
#academia #science #physics
The illusion of quick progress in "results" given by using any chat-gpt sort of thing is addictive, and if you start with that I don't think you will ever slow down and stop using it.
Simply because, if you continue in academia, there will always be times when more results and more productivity are expected (applying for jobs, competing for grants, just publishing your results, etc.).
I don't think there is any easy way of phasing out #AI if it has been there for you since the start.
But if you start your entire career with that, you might also slowly build up the conscious or unconscious feeling that, deep down, many of your results are AI-generated. It might be untrue, but it is difficult to tell. It's the same sort of problem as growing a career under a too-invasive PhD supervisor; except a supervisor cannot follow you everywhere, and does not usually do all these tasks for you.
And not having a clear idea of why "you" matter in research is terrible, I guess.
This random flow of thoughts was prompted by a recent exchange with a friend of mine, who used to work in research and now works at an #AI startup.
He boasted that it should be mandatory for all students to use all AI tools from the beginning, or they will be forever behind.
I screamed internally and externally, and saw in my mind a terrible, dystopian future already approaching. It seemed to me about as wise as suggesting that all students sniff cocaine every day to boost their productivity.
I know, I am old.
@franco_vazza You're not alone.