this article is haunting me because it is much more correct than it knows https://www.wired.com/story/vibe-coding-is-the-new-open-source/
Vibe Coding Is the New Open Source—in the Worst Way Possible

As developers increasingly lean on AI-generated code to build out their software—as they have with open source in the past—they risk introducing critical security failures along the way.

WIRED
like there's a bunch of faff in here about security issues (and sure, fair enough, the context here *is* wired's "security" column) and quality problems, but there is a much darker interpretation of what's happening
bluntly, POSIWID: "open source" is a social system for corporations to externalize infrastructure costs, and, materially, not much else. there are of course principles involved and possible future benefits, but *today*, that's mostly what is going on in the "community"
corporations have been reluctant to "give back" because while it can produce good press, if you start "giving back" to the "community" too much, then the cost savings you got from externalizing your complement start to erode; if you're going to give money to some tech, might as well own it
[remember to like and subscribe and support me on github sponsors and patreon and tidelift, thanks]
but the scary part of this article is the bit where vibe coding *reads* to corporate interests as an alternative to open source, in that you can externalize your infrastructure development and maintenance costs onto OpenAI's VC investors instead. slightly higher overhead per developer, but no need to deal with pesky human beings who might start to agitate for more resources, so the reduction in hassle is worth the cost
it doesn't actually work; a vibe-coded framework is never going to help structure your systems anywhere near as well as one that is the result of human judgement and discernment, even one with a hefty pile of legacy junk associated with it. but management is not going to be able to see this; structurally, managers see the benefits and have a much harder time measuring or even perceiving the costs
this is why there is no such thing as "vibe engineering" and it is farcical to imagine a world where it even could exist. even with all the responsible code review and QA and cross-checking (which is like, literally impossible, given the amount of vigilance fatigue that AI systems provoke, and the metrics we have so far all bear that out), the long-term maintenance cost of a workslop infrastructure is going to be *devastating*
the biggest problem we *already have* in open source right now, which we have oversimplified into the term "supply chain security", is the lack of understanding that putting a dependency in your project's dependency set (package.json, pyproject.toml, requirements.txt, Cargo.toml, etc.) is not just "downloading some code", it is *establishing an ongoing trust relationship with a set of human beings*. this fact is *way* too obscured in all the tools we use.
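to make that concrete: every line in a dependency file names a project whose maintainers you are now trusting, release after release, and the tooling renders that as nothing more than a string. a minimal python sketch of the point (the package names here are made up for illustration):

```python
import re

# a hypothetical requirements.txt — each line *looks* like "some code to download"
requirements = """\
requests>=2.31
cryptography>=42.0
some-leftpad-alike==1.0.3
"""

def trusted_parties(reqs: str) -> list[str]:
    """List the projects whose maintainers you are implicitly trusting.

    A requirement line starts with the distribution name; everything after
    it (version pins, markers) is detail. The name is the trust grant.
    """
    names = []
    for line in reqs.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        match = re.match(r"[A-Za-z0-9][A-Za-z0-9._-]*", line)
        if match:
            names.append(match.group(0))
    return names

for name in trusted_parties(requirements):
    print(f"ongoing trust relationship with the maintainers of: {name}")
```

and note what the sketch *can't* show you, because the metadata doesn't carry it: who those maintainers are, whether they're paid, or whether the project changed hands last month.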
the "vibe engineering" version of this — even in the fantasyland scenario where the tools work and produce correct results, somehow, even in the face of all the evidence that they don't — is that you establish a permanent unfixable dependency on *OpenAI's subscription services*, which are being subsidized in the model of the millennial lifestyle right now, but as soon as you have slipped your organization's neck fully into that pricing noose, it will tighten up real fast
@glyph This is, to me, the biggest indictment of the 'managerial class'; it's such a complete self-own, and it's absurdly obvious to anyone why ridding your company of probably-already-underpaid expertise in order to create a hard dependency on a venture-capitalist service that is explicitly underpricing itself until it doesn't have to anymore is a bad idea. Anyone who suggests that's a good idea is not someone you should take seriously. And that's even if it works (which it does not)! You never hear this about anything else: "IBM created the PC, so there's no need for us to do electronics anymore, we can just buy it all from them!" is not how things went.

I'm sure there are those who have the stock options and timing and whatnot to take advantage of any potential bump from the people they're firing. But I would be quite surprised if that was true for a lot of the boosters.
@aud "management psychology eclipses rational profit maximizing" is my corollary to "culture eats strategy for breakfast"

@glyph I asked in a meeting a while back what the company model was if OpenAI started charging $200/month, and everyone looked at me as if that was impossible and crazy talk.

Now I'm having conversations about when the subscription fee is higher than paying for a junior developer. I wonder how many companies will be able to afford the skills they need to fix vibe code?

@craignicol @glyph the good news is that most vibe software and the companies which use it don't produce anything useful or necessary so it's not a loss to the world when they inevitably fail, the only loss is the workers losing their income. Paying people UBI to sit at home and do nothing would be more valuable than spending billions expanding datacenters and setting up kube clusters to accomplish nothing.

@craignicol @glyph and where will they get the junior devs if they've eliminated the career path for them?

My country already has a massive shortage of skilled workers in construction trades and automotive because companies made themselves ultra-lean and stopped training new workers, preferring to poach them from their competitors (many *still* won't hire apprentices, even on relatively low salaries, saying they "aren't good enough"), and I'm seeing that happening in tech as well.

@craignicol @glyph as a HW engineer I use a few tools that run close to that price (most FEA and CAD), but those are at least useful.
@glyph I brought up basically this exact point at a meeting that included my CTO and a founder of the company and the response I got was basically "if the cost of the tool makes sense for us we'll continue to pay it" but with no further thought about what may happen when the tool is 10x the price and everyone at the org can't work without it. It wasn't great.
@fancysandwiches @glyph It feels like the unsaid part was "and then we'll restructure".
@glyph great thread. Thank you
@urlyman nice of you to say so

@glyph

As we just witnessed with all video streaming platforms.

You're totally right.

That kind of dependency is the kind of thing that kills companies. OpenAI has all the incentives to create an ecosystem of totally dependent partners and then suck them dry.

@jgg @glyph Based on their financials, that's ultimately their only option.

@nonlinear @glyph

Are you talking of the AI providers or their clients?

@glyph companies leeching off LLMs instead of "open source" will be great news for free software.

@glyph Do you think that languages with a large core (#rakulang?) may have a security advantage here, because they require less reliance on external libraries, whereas their core code could more plausibly be assumed to be sufficiently scrutinized?

Even in the case of perfectly trustworthy standard libraries, it seems that regular imports can lead to habits and a false feeling of security that extend to imports at large.

@davidschultz I doubt it. My experience of standard libraries is that they tend to have fewer devs looking at them, with less expertise in the relevant domain than specialized libraries, doing less frequent updates and accumulating more known bugs and limitations that people just work around. Plus, this “large core” really only helps if your problem domain is subsumed by the core, and that would be the same if you happened to have a large, unified, curated library that matches your domain too.

@glyph @davidschultz I agree.

Also, contributing (reporting issues, sending patches) to stdlib is always more intimidating than contributing to a random project of the ecosystem, because everyone assumes it has been written and reviewed by the top-level experts of the language.

Once you consider that stdlib contains the oldest code of the ecosystem, and that it may just be legacy and technical debt, that should change your mind.

@glyph your observations are spot on. Around this whole notion that "coding is social", I foster what is now the Social Coding Commons, with the objective of bringing sustainability to participants in chaotic grassroots environments: the FOSS and social-impact movements. The idea is to co-create a commons-based value economy together, and with that also form a strong foundation for exchanging services with the wider corporate world. To be able to compete, as it were.

If interested see https://coding.social/introduction

@glyph Even orgs that should definitely know better sometimes seem to struggle with that concept (otherwise my previous work on Fedora's package review pipeline would have ended very differently from the way it did).

@glyph Ironically, you know who DOES understand this? Proprietary software developers (and Google devs). Rather than creating a trust relationship with a group of human beings, they simply vendor the code.

"Now it's ours, so we don't have to care about what's happening with its development! Sure, why not include a few more copies of the library? Just make sure they're different versions, for funsies."

@Andres4NY @glyph And people called me names when I suggested vendoring your dependencies...
@sleepyfox @Andres4NY Vendoring dependencies is The Way.
@80columns @sleepyfox It's still a security nightmare. I didn't mean to imply that I like vendoring; it was to be more thoughtful about dependencies.
@Andres4NY That's even better advice!
@80columns @Andres4NY @sleepyfox it’s the way to hell, sure…
@Andres4NY @glyph I seem to remember Blackberry 10 devices shipping with something like three different versions of OpenSSL inside 😅
@glyph @Andres4NY @Taffer I once fixed up a large Java project that had no less than seven different logging implementations/versions…

Amen @glyph
See also: "Trust as Infrastructure" by @bcantrill

Your supply chain is people - many of them maintaining #OpenSource modules without (sufficient) pay or recognition.

https://bsky.app/profile/bcantrill.bsky.social/post/3m2cufaznlk2l

Bryan Cantrill (@bcantrill.bsky.social)

Slides for my #monktoberfest talk today, "Trust as Infrastructure" (video to come): https://speakerdeck.com/bcantrill/trust-as-infrastructure


@glyph This is why I always, [cough] maybe look at the repo to see if it's the original, and for other signs such as age, version history, commit history, responses to issues, etc.

But with LLMs this will soon be pathetically inadequate rather than just pretty bloody inadequate.

We need package repositories to provide a better way to establish trust than, for example, numbers of downloads.

Pretty much every software project and all their users are wide open atm. Not to mention orgs & governments.

@glyph

"establishing an ongoing trust relationship" is brilliant spelling.

You knew that, but well done.

@glyph What do you think about tools like dependabot? I fear that, if you enable version updates, and not just security updates, it increases the likelihood that you'll end up with a vulnerability, because when you update manually, statistically the version you get has been out for a while, whereas if it's automatic, you literally only get versions that nobody had time to inspect yet.
@oscherler bugs are vulnerabilities more often than they are identified as such, so I am generally strongly in favor of dependabot. if you don’t trust the maintainers themselves to do the relevant inspection then you should stop depending on the package. (If you are an actual company responsible for a product, you need to do the inspection in-house, and your own developers should be assigned to read upstream release diffs.)
@glyph I was thinking more along the line of a package getting injected with malware, and it being noticed in 24 hours, but I get your point.
@oscherler @glyph Tools like dependabot are needed, but the implementation is very lacking. Being aware of a new version is helpful, but the hard part is reviewing the changes, and dependabot is poor at providing info. Worse, it makes it easy to blindly trust the upgrade.

@glyph Many people believe you can separate the code from the humans who make it, and I wonder if this ideological commitment prevents them from understanding this trust relationship.

Indeed, the industry is investing in plagiarism engines so that they can avoid knowing who wrote the code originally.

@glyph @VileLasagna The company I work for is trying to address this but wow is this ever a horrifying problem. Npm, PyPI, rust crates, even Go, code pulls things in from whatever, which pulls things in from whatever, which… ♾️

@Taffer @glyph as a C++ cranky old man etc I'm used to sourcing dependencies and managing them for the build being a pretty fiddly problem to have to deal with. No one likes it

But more and more I guess we're realising we were the fucking blessed ones all along

@VileLasagna @glyph I’m firmly in the “CMake is horrible but also great” camp. 👍
@glyph @VileLasagna @Taffer cmake cannot even install manpages in the correct format for the OS (preformatted or not, compressed or not, correct locations and filenames)… 😹