@hongminhee @jnkrtech So while I very much relate to your dichotomy between craft and efficiency, I want to push back on your assumption that generating code through AI is necessarily more efficient. The Opus feat of rewriting the C compiler in Rust is impressive, and shows what you can do with an unlimited compute budget, but it misses the fact that the main way we engage with code day-to-day is not by starting new greenfield projects, but by maintaining existing code. And despite how good LLMs have gotten at one-shot generating applications from scratch, the more you start changing the requirements or adding new features, the more brittle the resulting code will be, and the bigger the attack surface for bugs will become.
So the question to ask is, once an LLM writes a 100,000 LOC compiler, who will maintain it? If it's the LLM, then perhaps we will truly lose all connection to the code we produce — but that seems too unreliable. And if it's a human, well, from talking to people who use these models at work, it seems that reviewing and maintaining LLM code is one of the most mentally taxing aspects of using them. So whoever has to maintain this 100k LOC project is going to have a very bad time.
My partner likes to joke that a good developer should be judged not by the number of lines of code written, but by the number of lines deleted. So as you point out, it's the *metric* of LOC written that's flawed, regardless of whether it's achieved by writing code manually or by using an LLM. But if the metric is no longer the amount of code written, then what should it be?