this article is haunting me because it is much more correct than it knows https://www.wired.com/story/vibe-coding-is-the-new-open-source/
Vibe Coding Is the New Open Source—in the Worst Way Possible

As developers increasingly lean on AI-generated code to build out their software—as they have with open source in the past—they risk introducing critical security failures along the way.

WIRED
like there's a bunch of faff in here about security issues (and sure, fair enough, the context here *is* wired's "security" column) and quality problems, but there is a much darker interpretation of what's happening
bluntly, POSIWID: "open source" is a social system for corporations to externalize infrastructure costs, and, materially, not much else. there are of course principles involved and possible future benefits, but *today*, that's mostly what is going on in the "community"
corporations have been reluctant to "give back" because while it can produce good press, if you start "giving back" to the "community" too much, then the cost savings you got from externalizing your complement starts to erode; if you're going to give money to some tech, might as well own it
[remember to like and subscribe and support me on github sponsors and patreon and tidelift, thanks]
but the scary part of this article is the bit where vibe coding *reads* to corporate interests as an alternative to open source, in that you can externalize your infrastructure development and maintenance costs onto OpenAI's VC investors instead. slightly higher overhead per developer, but no need to deal with pesky human beings who might start to agitate for more resources, so the reduction in hassle is worth the cost
it doesn't actually work; a vibe-coded framework is never going to help structure your systems anywhere near as well as one that is the result of human judgement and discernment, even one with a hefty pile of legacy junk associated with it. but management is not going to be able to see this; structurally, managers see the benefits and have a much harder time measuring or even perceiving the costs

@glyph I am constantly torn between the obvious (to me) knowledge that LLMs are just reconstructing parts of their training corpus and the beliefs of management and vibe coders that the results they see are truly novel.

Where I am, having a serious discussion about LLM merits is taboo currently, so the steam train continues to barrel towards the proverbial broken bridge.

@merospit I am very curious. Is this discussion possible? https://blog.glyph.im/2025/08/futzing-fraction.html
The Futzing Fraction

At least some of your time with genAI will be spent just kind of… futzing with it.

@glyph Coming from a data analysis background, I am very interested in verifying my assumptions using data. The method in your article seems useful for trying to gauge whether we should be using LLMs, but at the corporate level rationality seems to have been thrown under the bus.

Our current measurements don't even attempt to target productivity. The measurement currently is literally how many hours per day I (and the rest of the team) are using an LLM. There is an incentive currently to use unlimited tries in an attempt to get it right without having to think about it directly.

Part of me thinks that management thinks they can watch the LLM prompts (which are all recorded and visible to management), and when someone gets something right, the prompt itself will be an asset to the company. But anyone who has tried to replicate a prompt sequence knows that it breaks every time the model is retrained or you try it on another company's LLM. That may make it an asset in the very short term, but the costs are being completely discounted.