I'm coming to believe that the main reason "AI" hype has become this big is tech people being impressed by it "writing code".

Then they wrongly extrapolated its capabilities to other fields (because "programming is super hard, harder than any other vocation, therefore 'AI' can do anything!!!1!"). To them, it clearly appears to be god because they see themselves as gods—because they can write quicksort and linked lists or something.

Meanwhile, LLMs are only passable at generating code because code is laughably easy to generate, mainly because programming languages and "best practices" are extremely verbose, repetitive, and clunky, requiring endless boilerplate and infinite layer cakes to achieve even the most trivial things.

And because people who don't care about any of that are so dependent on technology, it gets pushed onto everyone without consent.

Kind of like a hubris ouroboros.

@thomasfuchs I basically concur. I should have saved the link to it, but someone did a blog post a while back that was basically "LLMs work well on your code because your code is shit." I have observed that, notably, they struggle with Common LISP (although that may also be a consequence of the training dataset).

But I would go further and observe that most code is shit because it doesn't actually pay to write deeply concise code. There has always been a tradeoff between "getting it done today" and "getting it done perfectly," and the people who want the machine to do the thing want today. In fact, if you don't know your problem domain perfectly, I'd argue that trying to make your code optimally concise is counterproductive.

For those reasons, we can expect LLMs to be a time-saver to the extent that they can execute on "take this fuzzy pattern and apply it to the codebase." I expect they will end up a permanent tool in the toolbox, though not in their current form; a whole datacenter to do a 'soft-grep' is overkill. My prediction is that open-source projects will succeed in condensing the tool down to something that "works 90% of the time on the most popular languages and fits on one or two graphics cards."

@mark @thomasfuchs

And there's so much repetition in code. For an entity that has access to billions of lines of code, most of what needs to be developed can be done with copy and paste.

@EuphoriaLavender @thomasfuchs On a lark, I tried throwing a locally-running Qwen at Common LISP using the CLSQL library.

It had no idea of the library's API and did not give me runnable code. But what was fascinating was that it did give me syntactically valid LISP (just trying to call nonexistent functions), and the shape of it matched the shape of the CLSQL API: function names were wrong, but arguments and even interrelationships like "make a connection and then use it to execute SQL" were mostly right.

... which suggests to me that at a fundamental level, the structure of SQL code is just a common pattern, so common that it could be extrapolated across language and library boundaries. And that means making code that talks to a backend via SQL should be automatable.
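To make that "common pattern" concrete: here's a minimal sketch of the connect-then-execute shape in Python's stdlib `sqlite3`. This isn't CLSQL (whose function names and arguments differ), just an illustration that the same structure — open a connection, execute SQL through it, fetch results, close — recurs across languages and libraries.

```python
import sqlite3

# The cross-language "shape": make a connection, then use it to execute SQL.
conn = sqlite3.connect(":memory:")  # open a connection (in-memory DB for the demo)
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.execute("INSERT INTO users VALUES (?, ?)", (1, "alice"))
rows = conn.execute("SELECT name FROM users").fetchall()  # query via the connection
conn.close()
# rows == [("alice",)]
```

Swap in any SQL library in any language and the call names change, but this skeleton barely does — which is presumably what the model had learned.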

@mark @thomasfuchs

It should be and, at the very least, the right LLM could be a very useful tool for helping to code. I suspect that, like all the technical innovations that came before, AI is not going to make programmers obsolete, though there may be a prolonged period of cheap and/or ignorant managers who don't know better buying into the hype and getting lots and lots of bad code first.