The threat is comfortable drift toward not understanding what you're doing
If this article had been written a year ago, I would have agreed.
But knowing what I know today, I highly doubt that the outcomes of LLM/non-LLM users will be anywhere close to similar.
LLMs are exceptionally good at building prototypes.
If the professor needs a month, Bob will be done with the basic prototype of that paper by lunch on the same day, and try out dozens of hypotheses by the end of the day.
He will not be chasing some error for two weeks; the LLM will very likely figure it out in a matter of minutes, or not make the error in the first place.
Instructing it to validate intermediate results and to profile along the way can do magic.
The article is correct that Bob will not have understood anything, but if he wants to, he can spend the rest of the year understanding what the LLM has built for him, after verifying within the first couple of weeks that the approach actually works.
Even better, he can ask the LLM to train him to do the same if he wishes.
Learn why things work the way they do, why something doesn't converge, etc.
Assuming that Bob is willing to do all that, he will progress way faster than Alice.
LLMs won't take anything away if you are still willing to take the time to understand what they're actually building and why things are done that way.
Five years from now, Alice will be using LLMs just like Bob, or she will be out of a job if she refuses, because the place will be full of Bobs, with or without understanding.
The problem is that in most environments Bob won't spend the rest of the year figuring out what the LLM did, because Bob will be busy prompting the LLM for the next deliverable. And if all Bob has time for is prompting LLMs, without understanding, there will be a ceiling on Bob's potential.
This won't affect everyone equally. Some Bobs will nerd out and spend their free time learning, but other Bobs won't.