Every week there’s an article about how vulnerable package managers are to supply chain attacks, and I’m just amazed it’s taken this long for people to figure out that routinely auto-pulling 500 disparate third-party libraries, unseen, into your project is a terrible idea
I remember back in my macOS dev days being told that I should be using CocoaPods, and when I told them that was a stupid idea (I had like 3 dependencies and regularly poked around in the source for all of them) I was the old-fashioned old man. “But it automates all the updates!” So what? There’s 3. a) I don’t need it; it’s super easy to pull changes from source, and b) when I do it manually I actually *look* at the updates like a sane person would https://arstechnica.com/?p=2034866
3 million iOS and macOS apps were exposed to potent supply-chain attacks

Apps that used code libraries hosted on CocoaPods were vulnerable for about 10 years.

Ars Technica
Of course there’s no reason you can’t use automated package managers *and* do the kind of due diligence a responsible developer would do when pulling code from third parties into their project, but I don’t think I’ve ever seen anyone do this. Instead it seems normal to implicitly trust anything that comes out of a package management system no matter who controls it, and that’s always been wild to me.
And the thing is, the number of external dependencies (and their update volume) that you can realistically, properly vet for inclusion in your project is inherently small enough that you don’t need a package manager. And if you need a package manager to handle it all, you can’t be checking what you’re pulling in, and so you’re definitely vulnerable.
“Vetting” can mean delegating due diligence to the publisher (or repackager) rather than personally reading the source, but that means vetting the publisher instead. And there is a finite number of those that you can maintain vetted trust in at any one time. You can’t just assume that the “community” somehow automatically protects you against bad actors. It might, but it’s been shown many times that it might not; sometimes everyone thinks someone else would have spotted a problem, and no one does.
It makes me laugh when I see programmers harping on about their memory safe languages and how they’re not subject to buffer overruns like the old man languages, while auto-pulling 500 dependencies from randos on the Internet into their projects without even looking at them
@sinbad I think about how the design decisions of a language directly determine the kinds of problems one frequently encounters working with it, and how the strength of a language is in part the degree to which people can accept the problems it creates. From everything I've heard, Rust's main one is a significantly elevated cognitive-load hurdle for ordinary tasks. This makes me wonder if over-dependence on tiny 3rd-party libraries is pretty much required for most nontrivial Rust projects.
@sinbad I used to assume that the main problem created by high-cognitive-load languages like Rust and Haskell is that the occurrence of algorithmic bugs would just be higher due to programmer exhaustion, but from everything I've heard it sounds like it's not so much that as that front-loading more of your debugging means you need a very comprehensive understanding of what you want up front, or you're in for a slog. Easy shortcuts must start to look very appealing very quickly.
@sinbad Personally, if I had to solve every sketch of an idea for Safety, Purity, and Memory Correctness before I could see whether or not they were bad ideas, I'd probably have given up on my dreams years ago.
@aeva @sinbad this is why i've come to view C and C++ as languages for rapid prototyping/experimentation (and if the software to be shipped is a game, then they're also languages for production, because in this use case, memory unsafety makes for good times at GDQ)
@JamesWidman @aeva @sinbad I tend to find what people rapidly prototype is unique ways to crash. I think it's slightly too simplistic to think of it as a strict loss when you're often trading time spent trying to figure out where your memory-safety-related crash is coming from vs. reading an error message from the compiler. (this is not to say rust is fantastic for prototyping, but neither is C++, so ymmv)
@JamesWidman btw you make me wonder now how many speedrun strats are actually memory safety bugs. obviously there's the classic mario ones, but I wonder if there's any notable modern ones. the plot twist of course being that many modern (and not so modern) games are written in memory safe languages already. So I wonder how that shakes out.

@dotstdy some of my favorite speedruns are ones that depend on memory layout; e.g. for _Link to the Past_ and _Ocarina of Time_.

i feel like it might be interesting if game designers reintroduced this kind of thing deliberately!

@JamesWidman ironically, it would be a lot easier to do that sort of thing with a design oriented around simply avoiding memory safety problems, but then just turning all the bounds checks off. e.g. if you build all your game state out of a big mega-struct containing a bunch of fixed-size arrays. the problem with doing it with "real" memory safety violations in the modern day is that you tend to need sophisticated techniques to prime the heap state / deal with ASLR and friends.
@dotstdy yeah, i mean, you would have to design your allocator(s) in such a way that objects are located deterministically, at least relative to each other (so, you can't predict absolute addresses, but you can predict relative addresses)
@JamesWidman Yea, or don't do any dynamic allocation. Code like it's 1999 :)