@glyph I asked in a meeting a while back what the company model was if OpenAI started charging $200/month, and everyone looked at me as if that was impossible and crazy talk.
Now I'm having conversations about when the subscription fee is higher than paying for a junior developer. I wonder how many companies will be able to afford the skills they need to fix vibe code?
@craignicol @glyph and where will they get the junior devs if they've eliminated the career path for them?
My country already has a massive shortage of skilled workers in construction trades and automotive because companies made themselves ultra-lean and stopped training new workers, preferring to poach them from their competitors (many *still* won't hire apprentices, even on relatively low salaries, saying they "aren't good enough"), and I'm seeing the same thing happening in tech as well.
As we just witnessed with all video streaming platforms.
You're totally right.
That kind of dependency is the kind of thing that kills companies. OpenAI has every incentive to create an ecosystem of totally dependent partners and then suck them dry.
Are you talking about the AI providers or their clients?
@glyph Do you think that languages with a large core (#rakulang?) may have a security advantage here, because they require less reliance on external libraries, whereas their core code could more plausibly be assumed to be sufficiently scrutinized?
Even in the case of perfectly trustworthy standard libraries, it seems that regular imports can lead to habits and a false sense of security that extend to imports at large.
@glyph @davidschultz I agree.
Also, contributing (reporting issues, sending patches) to stdlib is always more intimidating than contributing to a random project of the ecosystem, because everyone assumes it has been written and reviewed by the top-level experts of the language.
Once you consider that the stdlib contains the oldest code in the ecosystem, and that it may be just legacy code and technical debt, that should change your mind.
@glyph your observations are spot on. Around this whole notion that "coding is social", I foster what is now the Social Coding commons, with the objective of bringing sustainability to participants in chaotic grassroots environments: the FOSS and social impact movements. We co-create a commons-based value economy together, and with that also form a strong foundation to exchange services with the wider corporate world. To be able to compete, as it were.
If interested see https://coding.social/introduction
@glyph Ironically, you know who DOES understand this? Proprietary software developers (and Google devs). Rather than creating a trust relationship with a group of human beings, they simply vendor the code.
"Now it's ours, so we don't have to care about what's happening with its development! Sure, why not include a few more copies of the library? Just make sure they're different versions, for funsies."
Amen @glyph
See also: "Trust as Infrastructure" by @bcantrill
Your supply chain is people - many of them maintaining #OpenSource modules without (sufficient) pay or recognition.
https://bsky.app/profile/bcantrill.bsky.social/post/3m2cufaznlk2l
@glyph This is why I always, [cough] maybe, look at the repo to see if it's the original, and for other signs such as age, version history, commit history, responses to issues, etc.
But with LLMs this will soon be pathetically inadequate rather than just pretty bloody inadequate.
We need package repositories to provide a better way to establish trust than, for example, numbers of downloads.
Pretty much every software project and all their users are wide open at the moment. Not to mention orgs & governments.
"establishing an ongoing trust relationship" is brilliant spelling.
You knew that, but well done.
@glyph Many people believe you can separate the code from the humans who make it, and I wonder if this ideological commitment prevents them from understanding this trust relationship.
Indeed, the industry is investing in plagiarism engines so that they can avoid knowing who wrote the code originally.