i do not want to get into the business of posting LLM takes but very briefly:

It feels clear to me that some people* are getting value out of using LLMs for programming. Basically see https://simonwillison.net/'s whole blog. If I think about it purely on the basis of "in a vacuum, can this help me write programs", it seems like an exciting technology.

BUT...

(1/?)

(* it also feels clear that some people are NOT getting value out of LLMs, hoping to avoid flamewars about that please)

(continued from ^)

Google search doesn't work as well anymore because the results are full of LLM-generated articles? I hear about CEOs putting pressure on their teams to produce more faster because they've been told that AI will increase productivity?

it feels sad. even though I find LLMs useful sometimes, with all of the societal impacts it often feels like it isn't actually improving my life.

(2/?)

@b0rk It's a huge bummer. The technology IS cool and does present a huge advancement in our ability to interface with natural language.

I think it's important to frame it, instead of "can it do X?", as "sure, it can or might eventually be able to do X, but at what cost?" LLMs can and should exist and be accessible, and small open source models you can run on commodity hardware DO exist.

The problem is the eschatological venture capital death cult that's made LLMs their hobby horse, as always.