people: ask their dependencies to follow semver, for fuck's sake already
also people: make a surprised pikachu face when the major version is incremented with every release

(i have been both, at times. this is about me. this is also about others who i've seen be a lot more militant about this issue)

the thing is, if you have a sufficiently complicated application it is not feasible to determine whether a change is "breaking" or not. this complexity limit kicks in long before you get to a "browser" or a "JIT compiler", but it definitely applies by that point

i think what people mean when they do both of those things is a mix of "please stop adding features entirely. only fix bugs" and "please only make changes i like, but not the changes i dislike", depending on maturity level. that's not really how open source software works, though

@whitequark this is exactly why I am a bit of a CalVer zealot. abandon the illusion. only by freeing yourself from desire can you achieve enlightenment
@whitequark in python (and many other languages) you can’t even get your tools to produce something as crude as an soversion. If we had a robust tool that could do something like “make a DAG of all publicly defined names and tell me if anything has been changed or removed between these two git tags” then we could maybe TRY to do semver but as it is I cannot imagine anyone doing it correctly
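a minimal sketch of the name-level check glyph is describing, assuming `ast`-based extraction of top-level definitions (`public_names` and the toy module sources are made up for illustration; a real tool would walk every module at two git tags). note that, as the next post points out, this only catches the crudest kind of breakage: names disappearing.

```python
import ast

def public_names(source: str) -> set[str]:
    """Collect top-level public (non-underscore-prefixed) names defined in a module."""
    names = set()
    for node in ast.parse(source).body:
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
            if not node.name.startswith("_"):
                names.add(node.name)
        elif isinstance(node, ast.Assign):
            for target in node.targets:
                if isinstance(target, ast.Name) and not target.id.startswith("_"):
                    names.add(target.id)
    return names

# two hypothetical snapshots of the same module, e.g. at two git tags
old = "def connect(): ...\ndef close(): ...\nTIMEOUT = 5\n"
new = "def connect(): ...\nTIMEOUT = 5\n"

# any removal here would force a major bump under strict semver
print(sorted(public_names(old) - public_names(new)))  # → ['close']
```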

@glyph I think that's a really silly absolutism. who cares if all the names are there if the types have changed? who cares if the types are the same if the behavior changed? this exact line of thought implies you can never have any notion of compatibility at all

i think of version numbers as a communication tool, or sort of a filter: they tell you if shit's definitely broken or maybe fine. this is a useful signal

@whitequark I may be making some "live internet-facing service using libraries" assumptions about how updates need to be managed, which is to say, you have some compliance and possibly legal obligations to upgrade to new security supported versions in a timely way, every application is in its own venv (and probably its own container or even its own VM), there's a staff of people doing regular maintenance. for dependencies of such environments semver doesn't make sense
@glyph yes, you are, which is my point. Amaranth is (in the vast majority of contexts) not a security-relevant application and if Python was more like Rust in terms of how it manages dependencies, it would have been actively desirable to have some stub library that lets you link together modules built with different versions of the language; typically if you make a HDL module, you verify it and then you just don't touch it ever. (this is not the only possible workflow, but this is the de-facto standard workflow in the industry, and a lot of the time it's close to the best you can do because the interface of the module, including the bugs, gets implicitly embedded in projects like "Linux" where if you change anything your actual end users will see an update maybe in five years)
@glyph the other thing is that I guess this makes me not particularly interested in using Twisted (I have no other context here) because I don't want to be on call for this any more than absolutely necessary, and I am not a startup that can afford to burn infinite VC cash chasing upgrades
@whitequark to be clear, we don't just do "calver, but every micro-release breaks everything", we have a rolling window where every release (potentially) removes deprecated stuff, and deprecates new stuff. if you have tests and they run without warnings, you should be able to upgrade without any work. the workflow for downstreams (who, again, must be upgrading periodically anyway) is "run & fix tests until no warnings, upgrade one version, repeat"
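the "run without warnings, then upgrade" workflow described above can be sketched with the stdlib `warnings` module (`frob` and its `legacy` flag are hypothetical; real suites would typically do this via a test runner's warning filters instead):

```python
import warnings

def frob(value, *, legacy=False):
    # hypothetical library API: the `legacy` flag is deprecated in this
    # release and scheduled for removal a few releases later
    if legacy:
        warnings.warn("legacy=True is deprecated; call frob(value) instead",
                      DeprecationWarning, stacklevel=2)
    return value * 2

# a downstream test suite escalates DeprecationWarning to an error, so any
# use of a soon-to-be-removed API fails the build *before* the release
# that actually removes it
with warnings.catch_warnings():
    warnings.simplefilter("error", DeprecationWarning)
    assert frob(3) == 6            # current API: passes cleanly
    try:
        frob(3, legacy=True)       # deprecated API: raises instead of warning
    except DeprecationWarning as exc:
        print("caught:", exc)
```

with pytest the same effect is usually achieved by configuring `filterwarnings = error` rather than wrapping code manually.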
@whitequark in practice, among our cohort of comparable dependencies we have a reputation for extreme stability because we don't have these big major-version jumps where huge arbitrary clumps of stuff all break at once; every individual upgrade is usually between "zero" and "trivial" in terms of the amount of work it takes
@glyph yes, but your test workflow is of a kind that doesn't involve "you probably have to run it on real hardware, and maybe plug in an oscilloscope and spend a day chasing a single-cycle problem that ends with updating an expression somewhere in the guts of an application"
@whitequark indeed not. I am pretty impressed that you get people to upgrade at all in the face of that. do your downstreams just pick a major version and ride it out until the end of support or can you convince them to upgrade major versions? and what kind of stuff do you change in minor/micro versions that don't risk breakage?
@glyph @whitequark Many hardware downstreams *don't* upgrade in place, they upgrade on new device builds (which is then covered by factory acceptance testing). Physical installations can then be ring fenced with less critical network devices that receive routine updates to mitigate the security risks (complete air gaps are also nice, but the lack of routine telemetry then becomes a risk in its own right). Major software upgrades get lumped in with hardware revisions (and the associated testing).
@ancoghlan @glyph yep, that's definitely one way in which this is commonly done! I would describe this on a higher level as "there is an upgrade workflow that is already in place because of some other, more powerful forces, and so fitting your software into this existing workflow can be a good match"
@whitequark @ancoghlan I am definitely going to encode the calver zealotry in a longer blog post about upgrade workflows, and this is a good contextual distinction to keep in mind. I still think that most of my religious beliefs about the way upgrades need to work apply here (the big one being: tools mostly don't exist to enforce the ability to upgrade without testing so you cannot upgrade to ANY version without testing) but there are definitely cultural distinctions about expectations
@whitequark @ancoghlan it's interesting that this superficially resembles what, in my mind, was the Bad Old Days of the internet-service context, where downstreams annoyed by some small breakage would just refuse to upgrade forever or would create internal, broken, unsupported and unsupportable forks, which they had to stop doing in ~2014 because PCI and SOC2 auditors started getting _real_ mad, which was extremely healthy leverage at the time. it's very different, though.
@glyph @whitequark Yeah, it's untenable as a universal practice for networked devices, as they need to evolve in parallel with the networks they're connected to. Ring fencing also leaves you horribly exposed if the perimeter ever gets compromised. Testing software updates against every hardware revision ever published is its own special flavour of awful though, so there are no good answers, just differently bad choices to weigh against the specifics of a deployment model.