Re: last boost https://chaos.social/@dpk/115589097803252590

That OCaml PR is textbook open source in the era of vibe coding...

It's got everything:

- PR submitted without the author acknowledging they didn't write it and don't understand it.
- Copyright laundering.
- "I just wanted to get it done!" versus maintainers who know they have to live with code contributions for years.
- Zero-effort pasting LLM output as reply to real people's thoughtful questions. (At least the author acknowledged what they were doing that time.)
- It doesn't matter that it's hard to review because "AI has a very deep understanding of how this code works."
- "Beats me. AI decided to do so and I didn't question it."

If this is our new world then it's going to turbocharge maintainer burnout.  

(If you don't want to read a quite long, often depressing thread, I'd still recommend reading this well-reasoned comment by one of the maintainers:
https://github.com/ocaml/ocaml/pull/14369#issuecomment-3556593972 )

Daphne Preston-Kendal (@[email protected])

Fucking. Hell. https://github.com/ocaml/ocaml/pull/14369


@projectgus holy crap what a shitshow! thanks for boosting this~

This whole "revolution" is highlighting to me how much work our culture labels as "overhead", implying a waste of time, and how much value is actually in that work.

The canonical example in software is developing and using a mental model of how software works, while lines of code are really a side-effect rather than the main show.

@projectgus But it's equally true in other fields! One I'm heavily exposed to is doctors' notes - there's this big push to do auto-transcription and then LLM summarisation for the doctor's records. Talking to the doctors I know well, they spend enormous amounts of time and energy writing notes after interacting with patients - viewed by many as overhead again, but in their view it's a critical part of, essentially, validating (or invalidating) their understanding of what's going on!

@abrasive 100%, it's honestly scary. And a real indictment of many people's relationship with thinking.

I'm surprised pro-AI people don't address this more, because I don't think LLMs are totally useless[*], but the consequences of using them excessively and uncritically don't help convince people of that, either. I guess excessive use does help line go up.

[*] I do think their benefits aren't worth the significant societal and energy demand costs, but yeah...

@projectgus @abrasive I've been railroaded into GitHub CoPilot at work. It's honestly a really good tool at what it does well.

Without some photonics or quantum breakthrough to massively reduce the resource consumption though, it's a dead end. Even with that it should be limited to local models where costs are realized closer to the consumption instead of hidden behind a curtain.