Re: last boost https://chaos.social/@dpk/115589097803252590

That OCaml PR is textbook open source in the era of vibe coding...

It's got everything:

- A PR submitted without the author acknowledging that they didn't write it and don't understand it.
- Copyright laundering.
- "I just wanted to get it done!" versus maintainers who know they have to live with code contributions for years.
- Zero-effort pasting LLM output as reply to real people's thoughtful questions. (At least the author acknowledged what they were doing that time.)
- It doesn't matter that it's hard to review because "AI has a very deep understanding of how this code works."
- "Beats me. AI decided to do so and I didn't question it."

If this is our new world, then it's going to turbocharge maintainer burnout.

(If you don't want to read a quite long, often depressing thread, I'd still recommend reading this well-reasoned comment by one of the maintainers:
https://github.com/ocaml/ocaml/pull/14369#issuecomment-3556593972 )

Daphne Preston-Kendal (@[email protected])

Fucking. Hell. https://github.com/ocaml/ocaml/pull/14369


@projectgus holy crap what a shitshow! thanks for boosting this~

This whole "revolution" is highlighting to me how much work our culture labels as "overhead", implying it's a waste of time, and how much value is actually in that work.

The canonical example in software is developing and using a mental model of how software works, while lines of code are really a side-effect rather than the main show.

@projectgus But it's equally true in other fields! One I'm heavily exposed to is doctors' notes - there's this big push to do auto transcription, whatever, but then LLM summarisation for the doctor's records. Talking to the doctors I know well, they spend enormous amounts of time and energy writing notes after interacting with patients - viewed by many as overhead again, but in their view it's a critical part of, essentially, validating (or invalidating) their understanding of what's going on!

@abrasive 100%, it's honestly scary. And a real indictment of many people's relationship with thinking.

I'm surprised pro-AI people don't address this more, because I don't think LLMs are totally useless[*] but the consequences of using them excessively and uncritically don't help convince people of that, either. I guess excessive use does help line go up.

[*] I do think their benefits aren't worth the significant societal and energy demand costs, but yeah...

@projectgus @abrasive I've been railroaded into GitHub CoPilot at work. It's honestly a really good tool at what it does well.

Without some photonics or quantum breakthrough to massively reduce the resource consumption though, it's a dead end. Even with that it should be limited to local models where costs are realized closer to the consumption instead of hidden behind a curtain.

@projectgus @abrasive "I guess excessive use does help line go up." And that's it, that's the whole story. "AI" is being pushed into everything, like crude oil, for basically no reason other than line go up. Once again, we're* digging our* own graves for line go up.
*For those of us who refuse to use LLMs, or anyone who has somehow managed to not use oil, other people are digging the graves for us but we sure will be included in the burial

@projectgus The doctor one is particularly troubling because LLMs generate *likely* text, and so of course will miss the things that make each patient different & thus in need of help... And sure, the docs review them, but the hardest thing to review is something that's 95% correct 95% of the time.

I find so many mistakes in my work in the write-up, because it forces me to linearise my fuzzy fishbowl of thought... and what I swore I *knew* turns out to have been no such thing

@abrasive Totally. I write a lot and I realised a long time ago that writing is often thinking! Outsourcing the writing often means outsourcing the thinking, even if I "review" it later.

Regarding doctor notes, I'm reminded of this paragraph from https://taranis.ie/llms-are-a-failure-a-new-ai-winter-is-coming/

---

Transformers must never be used for certain applications – their failure rate is unacceptable for anything that might directly or indirectly harm (or even significantly inconvenience) a human. This means that they should never be used in medicine, for evaluation in school or college, for law enforcement, for tax assessment, or a myriad of other similar cases. It is difficult to spot errors even when you are an expert, so nonexpert users have no chance whatsoever.

---