“Coder” always reminded me of the term “CAD monkey”, which is widespread in disciplines like Architecture. In school we were told we should aim to become an Architect, not a CAD monkey. The architect was the author, the one with the ideas; the monkey received those ideas and modelled them in AutoCAD.
Now here is the catch: in my (short) experience, you only realise issues with the design when the monkey starts to work. It turns out that this structural element and that one don’t align; it turns out that space is smaller than it should be, especially if you want it to look the way it does in the sketch. You could think about that process in terms of “waterfall”, which programmers know to be faulty because design and implementation are not a linear process: they are related in a feedback loop, and that loop gets shorter if the architect becomes part CAD monkey and vice versa. You want a short loop because you want to keep the creative flow going. You design something, you try to implement it, you realise it won’t work, you go back and improve the design or throw it away. All this happens in a very messy and loosely defined way, because, well, brains are fascinating.
This may be a deformation I acquired in college, but I see programming the same way. Sure, for simple stuff the LLM may throw the right code at you, and sure, the CAD monkey won’t find any issue in your standard bathroom design.
The irony is that LLM enthusiasts seem to care a lot about “the interesting code”: the code the LLM won’t help with, and the code you get better at by writing a bunch of “boring” code in interesting ways.
I now realise that when I said feedback loop above, I wasn’t thinking of what most people consider as such. Sure, pressing tab in Zed or Cursor may be faster than typing the actual code yourself, so in that sense an LLM gives you a shorter feedback loop, and you may be trying out the code faster than me. However, what I meant was the feedback loop that goes on in our brains; otherwise the analogy with the CAD monkey wouldn’t work. When we type code, or when we model a wall at its actual dimensions, we don’t think just about performing those actions; they even feel automatic most of the time.
We think about other aspects of what we are working on, directly related or not. A bunch of times, while writing a function or data type, I’ve stopped myself because I suddenly became aware of some fundamentally wrong assumption, or remembered some extra requirement, or thought of a better way!
I’m pretty sure this is a familiar experience; I can’t be the only one thinking this way. Well, that’s the feedback loop I was referring to! I suspect it may be related to motor function, as we know, for example, that writing by hand helps with memorising and/or understanding things (https://www.npr.org/sections/health-shots/2024/05/11/1250529661/handwriting-cursive-typing-schools-learning-brain).
I would bet that typing code will gain similar recognition in the future.
Actually, have you ever tried to learn a programming language based on a paradigm you are not used to just by reading a book? I may be especially stupid here, but I’m sceptical that’s possible, even for experienced folks.
So yeah, I’m daring to say that pressing tab and accepting code you understand and can improve is not the same as writing it yourself. This is of course based on my own experience, and I may be projecting too much, but I can’t help thinking there are good reasons to be a late adopter of LLMs.
@RosaCtrl
I also came across someone who was trying to learn a programming language, and the LLM had produced a rather hairy generic type signature. They asked for help understanding it, but the thing to understand was that the LLM was off its rocker.
LLMs essentially have “yeah, that looks right” as their goal, and if the correct answer is surprising, as it often is to students, they’re fundamentally the wrong tool for them.
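To give a flavour of the kind of thing I mean, here is a purely hypothetical sketch (TypeScript chosen arbitrarily; this is not the code from that conversation) of a signature with gratuitous generics next to the plain version a learner could actually read:

```typescript
// Hypothetical "hairy" version: the extra type parameters express nothing new,
// and the indirection through R even forces a type assertion.
function firstOrDefault<
  T,
  D extends T | null | undefined,
  R extends T | D = T | D
>(items: readonly T[], fallback: D): R {
  return (items.length > 0 ? items[0] : fallback) as R;
}

// The version a learner can actually read: one type parameter, no assertions.
function firstOrDefaultSimple<T>(items: readonly T[], fallback: T): T {
  return items.length > 0 ? items[0] : fallback;
}
```

Both behave the same; the first just gives the student one more thing to puzzle over, and the puzzle has no payoff.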
@RosaCtrl
Yeah, I think bullshit, in the sense of statements made with no concern for whether they’re correct or not, is a pretty fitting description of what an LLM produces.
Lots of people are trying to make it produce useful bullshit, as in statements that look right because they _are_ right. It doesn't change what the machine is fundamentally doing.