i do not want to get into the business of posting LLM takes but very briefly:

It feels clear to me that some people* are getting value out of using LLMs for programming. Basically see https://simonwillison.net/'s whole blog. If I think about it purely on the basis of "in a vacuum, can this help me write programs", it seems like an exciting technology.

BUT...

(1/?)

(* it also feels clear that some people are NOT getting value out of LLMs, hoping to avoid flamewars about that please)

(continued from ^)

Google search doesn't work as well anymore because the results are full of LLM-generated articles? I hear about CEOs putting pressure on their teams to produce more faster because they've been told that AI will increase productivity?

it feels sad. even though I find LLMs useful sometimes, with all of the societal impacts it often feels like it isn't actually improving my life.

(2/?)

@b0rk About search getting increasingly worse, there's another side of this I've thought was interesting.

A non-negligible number of people no longer ask their technical questions on public forums; they ask their favorite chatbot instead. These questions, and their answers, are not publicly visible for other people with similar struggles to find.

@karl @b0rk That's a rather brilliant observation. I often use LLMs to find things for me, in subject areas where I can tell if they're lying. I used to use Google, before it degraded. Occasionally (like weekly) I would have a public discussion about a question.
I'm not doing those now, so I'm not contributing.
@karl @b0rk I'm seeing this more and more at work and it makes me sad.
@karl @b0rk Stack Overflow seems to suffer the most from this.
@enno @karl @b0rk And anyone trying to teach courses.
@karl @b0rk yes excellent point. Although knowledge was already becoming volatile, with everyone switching to Discord instead of forums over the last few years.

@boxofsnoo Well, for what it's worth, I see Discord as an IRC replacement, which already enabled similarly volatile discussions.

That being said, Discord is hugely more popular, so perhaps it amplified the issue.

@karl @b0rk one of the particular benefits that Stack Overflow had was not the initial answers, but the commentary, refinement, and background references added to the question over time, including "back in 2014, this was the best answer. In 2019, <new tool> generally replaces this solution with.... Support for this started with $version".

(e.g. ECMA modules and general browser advances, packaging tools in other libraries, standards advancement, library evolution, etc)

@karl @b0rk maybe worse, maybe just also bad: the chatbots (even the same one) will tell them different things for the same question. different wrong things. uncommon misconceptions.
@karl @b0rk Long, long ago, companies supported their products and were the first port of call for support. Then they realised that they could just dump customers/users and get them to support each other for free. Then customers got chastised more and more by arrogant forum ‘sentinels’ for not giving enough detail and not knowing enough background stuff generally. Contempt as a service. I’ve not used a chatbot but suspect they don’t do that.
@karl @b0rk see also discord before AI... No preservation of knowledge
@mikebabcock @b0rk I see Discord as having a similar effect to IRC, perhaps worse as it's become popular but it was there before.
@b0rk I still think the asbestos metaphor is useful here - it's possible (in fact, not uncommon) for technologies to be both useful in some ways and damaging in others... and it can be the case that the downsides are impossible to suppress or are very long-lasting.
@b0rk I think Google’s problems have more to do with its ad business than the slop at the top, which is a term I think I just coined for when the top of the search results is LLM output.

@nick @b0rk Agreed. I have NUMEROUS issues with LLMs*, but Google had been getting worse for years before ChatGPT. It has focused on general information over specific, to the point where it ignores the actual terms you search for in favor of more popular ones, presumably because more popular results equate to more ad revenue.

"Slop at the top" is a great phrase.

* Anti LLM rant implied rather than given out of respect for b0rk's request.

@nick @b0rk Both issues (Google being driven by the ad side, and results being full of slop) are really compounding each other.

@b0rk I think the way we perceive the Internet is still in a state of flux. 20 years from now we'll look at the present the way we now look at the early-2000s Web 2.0 era, the way we then looked at the early-'90s GeoCities era, the way people looked askance at the earlier Usenet era, and so on: a rather quaint way to use this technology, ultimately doomed to be obsoleted. In any case, there's a strong generational component; see the origins of the "Eternal September" meme.

At this point I feel it's all inescapably converging towards the amorphous yet omnipresent hose of semi-worthless data that's the backdrop of most cyberpunk media. But who knows?

@b0rk use "Web" on the far right. It's hidden. Use Kagi. Include swear words.
@b0rk To me, 'sad' is the right word. We (as the software industry) have been promised so many wildly different productivity boosts, one (read: I) would assume we'd have become somewhat immune. But no, after 4th generation languages, RUP, CASE tools, Lo/No Code, Blockchain, Microservice for everything, we still fall for such promises.
@b0rk Oh nooooo… Now I sound like a grumpy old person. Maybe—just maybe—because I turned into one. 🤪😱😳

@b0rk Anecdata from a freelance dev here: currently on a project for a ~$10 billion revenue company. Not a week goes by without a mail from some layer of management encouraging people to "use more AI". Entire teams are forced to come up with "user facing AI features", with no regard to what those teams are doing. There are mandatory AI sessions on the regular. New hires spend more time talking to copilot than to the rest of my team.

Things are changing very rapidly...

@b0rk see also the struggles LWN is having. Click-through rates are down, their original material ranks lower in Google search than LLM-plagiarised versions of their own articles, and they're effectively being DDoSed by scrapers.
It's outright warfare on actual human-authored websites.
@b0rk I don't think that's entirely true. Although I'm very LLM-skeptical, Google search enshittified before LLMs came about. They only sped up a process that was already going on.

@b0rk
As someone who recently graduated and has been looking for a job, I've recently tried out Claude Code and such, because it seems there's an expectation now that I should get good at that.

When I do, I find the experience to be impressive and almost overwhelming with what it can do & how quickly. But then I get a sinking feeling that... I just don't want to be doing this. If I knew in university this is what I'd be doing I'd have kept programming as a hobby and gone into something else.

@b0rk It's a huge bummer. The technology IS cool and does present a huge advancement in our ability to interface with natural language.

I think it's important to frame it, instead of "can it do x?", as "sure, it can or might eventually be able to do X, but at what cost?" LLMs can and should exist and be accessible, and small open source models you can run on commodity hardware DO exist.

The problem is the eschatological venture capital death cult that's made LLMs its hobby horse, as always

@b0rk one thought that came to me when Cursor started seeming good enough that a lot of engineers were adopting it: LLM-assisted coding makes some sense, since it puts the burden of using that tech on coders. AI-based summaries shove it in the face of the general public, who don't have as much of a handle on what its limitations are. And it just seems to be the latest step in the general enshittification of search engines. Social media is especially vulnerable to AI bots.

@b0rk TIL my hosting provider is raising prices because the companies pushing AI have so much money they can buy up enough disk, RAM, and chips to drive up prices for everyone.

I am of those pressured to use it to stay employed & if I'm being honest it's destroying my health because even when it works for me, I know it is built on a pile of harms including increased carbon emissions which makes my child's future worse. :(

@b0rk the bit about people being pressured to be more productive is true. I've been told that explicitly by my manager, and all the way up the chain to the CEO
@b0rk I've had to switch to Kagi for search. It's a noticeable improvement over Google.

@b0rk It’s almost heartening to see history repeat (or rhyme), and I can report so:

Automation — and this is one, boy is it one — has a classic Gartner Hype Curve and like most industrialisation-Luddite* tensions, it ends up with more work / more jobs, not less / fewer.

*per the cliché, not the historical reality?

But first you have to discover if the automation is:

A. replacing a task that it can’t replace
B. doing an existing task, but faster
C. making the impossible possible, a whole new task

Bosses (been there, 2000s) will spend a couple of years getting you to bang away at type A, until it proves impossible

They’ll want some good news out of that, and hopefully you’ll have a type C to show them. But first they’ll have to figure out how to fit it in, which they will, if they’re good.

And type B will be a bit of a minority — the weird truth is though, that the type A stuff will stop getting done anyway because the world turns, and priorities shift … enpoopifies maybe

Real professionalism comes when irreplaceable practices are institutionalised and valued. Think plane safety, but only for jets. Train safety is higher at the top end too.

But you CAN automate: When trains go driverless, they convert DECADES after the hype and predictions were made. And then they embed all the process and procedure into the automation.

We gotta be more like that in IT.

@b0rk my workplace provides my department with licenses for Copilot and ChatGPT. We're encouraged to use them as much as possible. (If the business data in my presentation is falsified by the LLM, though, who gets the blame, me or OpenAI?)

Amusingly, we had to take training classes on using it. It's a conversational chatbot. Why provide training on talking to a robot when they never provided training on talking to my human coworkers? Lol