I actually believe this: the main reason "AI" hype has become this big is tech people being impressed by it "writing code".

Then they wrongly extrapolated its capabilities to other fields (because "programming is super hard, harder than any other vocation, therefore 'AI' can do anything!!!1!"). To them, it clearly appears to be god because they see themselves as gods—because they can write quicksort and linked lists or something.

Meanwhile, LLMs are only passable at generating code because generating code is laughably easy, mainly because programming languages and "best practices" are extremely verbose, repetitive, and clunky, requiring endless boilerplate and infinite layer cakes to achieve even the most trivial things.

And because people who don't care about that shit are so dependent on technology, it gets pushed onto everyone without consent.

Kind of like a hubris ouroboros.

@thomasfuchs I basically concur. I should have saved the link, but someone did a blog post a while back that was basically "LLMs work well on your code because your code is shit." I have observed that, notably, they struggle with Common LISP (although that may also be a consequence of the training dataset).

But, I would extrapolate to observing that most code is shit because it doesn't actually pay to write deeply concise code. There has always been a tradeoff between "getting it done today" and "getting it done perfectly," and the people who want the machine to do the thing want today. In fact, if you don't know your problem domain perfectly, I'd argue that trying to make your code optimally concise is counterproductive.

For those reasons, we can expect LLMs to be a time-saver to the extent that they can execute on "take this fuzzy pattern and apply it to the codebase." I expect they will end up a permanent tool in the toolbox, though not in their current form: a whole datacenter to do a 'soft-grep' is overkill. My prediction is that the open source projects will succeed in condensing the tool down into something that "works 90% of the time on the most popular languages and fits on one or two graphics cards".

@mark @thomasfuchs

And there's so much repetition in code. For an entity that has access to billions of lines of code, most of what needs to be developed can be done with copy and paste.

@EuphoriaLavender @thomasfuchs On a lark, I tried throwing a locally-running Qwen at Common LISP using the CLSQL library.

It had no idea of the library's API and did not give me runnable code. But what was fascinating was that it did give me syntactically valid LISP (it was just trying to call nonexistent functions), and the shape of it matched the shape of the CLSQL API: the function names were wrong, but the arguments and even interrelationships like "make a connection and then use it to execute SQL" were mostly right.

... which suggests to me that at a fundamental level, the structure of SQL code is just a common pattern, so common that it could be extrapolated across language and library boundaries. And that means making code that talks to a backend via SQL should be automatable.
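That connect-then-execute shape really is nearly universal across languages and libraries. As a sketch (the table and data here are made up purely for illustration), the same two-step pattern in Python's built-in sqlite3 module looks like this:

```python
import sqlite3

# Step 1: make a connection (in-memory DB, so no file is created).
conn = sqlite3.connect(":memory:")

# Step 2: use the connection to execute SQL against it.
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES (?)", ("ada",))
rows = conn.execute("SELECT name FROM users").fetchall()

conn.close()
print(rows)  # [('ada',)]
```

Swap the identifiers and you have roughly the same program in CLSQL, JDBC, or database/sql — which is exactly why a model can reproduce the shape even when it hallucinates the function names.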

@mark @EuphoriaLavender btw, I completely agree that the future for "AI" in programming is using local specialized LLMs for heavily enhanced autocomplete and interactive "pair programming", and as one tool of many.

@mark @EuphoriaLavender fwiw I ran an LLM on my 5-year-old MacBook Pro (M1) and asked it to suggest improvements to some old JavaScript code, and it made some good suggestions (and some meh ones). For Ruby it didn’t do great.

But what I’m saying here is that it’s totally moving towards local stuff; if you want to use these tools and find them helpful there will be zero reason to pay money to any large AI companies to host it for you.

@mark @EuphoriaLavender Anyway, I can't wait for the hype to be over so these tools are evaluated on their merits (and lack of merits) and we can maybe do more about making programming languages and frameworks better.

@thomasfuchs @mark

Seems like the more real-world testing and results-sharing that's done, the better the results will be. Local really seems to be the way to go, but who knows how long it'll be before we get reliable local tools. The overhead is exorbitant at the moment, seemingly in every possible way.

@EuphoriaLavender @mark yeah. What I’m thinking is that better language and framework design would reduce the need to write lots of boilerplate and layers, and thus reduce the need (or perceived need) for code generation.

@mark @thomasfuchs

It should be, and at the very least the right LLM could be a very useful tool to help with coding. I suspect that, like all the technical innovations that came before, AI is not going to make programmers obsolete, though there may be a prolonged period of cheap and/or ignorant managers who don't know better buying into the hype and getting lots and lots of bad code first.