this whole conflict wherein multinational corporations demand unpaid labor from hobbyists is only happening because the free software movement has built stuff that capital is not capable of building for itself
just so we're clear
if this whole financial risk-assessment thing were truly the highest goal for the companies demanding we all do their security work for them, the clear choice would be not to leverage free software or open source at all, right?
but that's not an option companies consider, because it would cost more
@ireneista using digital public goods that were voluntarily produced and made available for all to use to pursue prosperity in a capitalist society can be a legitimate business. Such businesses can benefit society.
If you cannot sustain that business on an ongoing basis without donated labor, then it isn’t a sustainable (or socially responsible) business.
@ireneista It's hard to understand the meaning of this line of thinking.
When third parties make stuff for a platform on their own initiative, are they not in a sense doing work on the platform that the platform owner is not paying for?
@rakslice it's late and we're tired and over-thinking, so just to say that again in case it was confusing (we're past the point of being able to tell...):
we agree with that conclusion but we don't think it would be a good thinking habit to treat it as a question of definition like that
@ireneista this reminds me, $work uses a SQL parsing rust library and we have a bunch of internal changes; I should go and upstream them
(the team is fully in favor of it, it's just effort and such)
I make upstreaming a default in my contracts. Up front, I make it clear all my work will be submitted.
It's easier to have that conversation before signing the agreements, imho.
This is precisely what regulations like the CRA are designed to address. You can incorporate any F/OSS code you want into your project, but you are liable for security flaws (with some woolly definitions, and a recognition that the industry is in such a poor state that everything is insecure; it just has to not be stupidly insecure). That is intended to give people an incentive to invest in the security aspects of F/OSS projects.
As I recall, the same incentives apply to all code. The special case for F/OSS is that you are not liable if you are not building a product. If you just release the code for other folks to use, you can disclaim liability.
There's some similarity with US liability law for open-source hardware. You can't disclaim liability on a physical device that you sell, but you probably can on a pile of (for example) Verilog (this is based on lawyers telling me what they'd be comfortable arguing in court, rather than on statute law, I believe). If you take some open-source Verilog and fab a chip, or incorporate such a chip into your device, you then have some statutory liability.
"We estimate the supply-side value of widely-used OSS is $4.15 billion, but that the demand-side value is much larger at $8.8 trillion. We find that firms would need to spend 3.5 times more on software than they currently do if OSS did not exist. The top six programming languages in our sample comprise 84% of the demand-side value of OSS.
Further, 96% of the demand-side value is created by only 5% of OSS developers."
https://www.hbs.edu/ris/Publication%20Files/24-038_51f8444f-502c-4139-8bf2-56eb4b65c58a.pdf
@ireneista For the readers who might be missing historical context (I know Irenes knows this): this is what BSD and GNU started out as, and what they still are at their core.
It's why the GNU-plus-Linux joke _exists_. In the 80s, GNU's tools were developed for commercial UNIXen and 4.xBSD. Linux really is just the kernel and wasn't a complete OS without GNU (in the 90s... there are plenty of non-GNU Linuxen these days).
@ireneista @overeducatedredneck Late 1980s and 1990s Unix vendors really liked unbundling what we think of as core stuff, too, making GNU and free software really quite important. AT&T unbundled *roff tools (then shipped only preformatted manpages), and for at least a while Sun unbundled the C compiler (I think on SPARC only), forcing people to GCC.
(Sun at least provided header files with the base system and I think cooperated with the GCC people.)
@cks @ireneista @overeducatedredneck ahmm, people tend to forget just how hopeless the quality of many of the BSD utilities was (just try porting them to a non-DEC machine); using GNU tools was nearly a must to keep your sanity.
Not to mention the small issue of actually having a dev environment that was open and available for less than big bucks.
There are also proprietary non-UNIX operating systems. Microsoft is the obvious example, with their own OS, toolchain, and application suite.
The interesting thing is the degree to which zero-marginal-cost goods resemble natural monopolies. It wasn't that writing a proprietary OS was hard, it's that writing the second-most-popular proprietary OS was hard. If two companies sell operating systems for the same price, the one that sells more will have more revenue and so can invest more. If you're a new company trying to sell a new OS you can't compete with MS because they have the revenue from hundreds of millions of sales per year, so you can't invest as much in R&D as them. The only models that have really worked since MS gained a dominant position are to give away the OS and make money elsewhere.
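A toy model makes that feedback loop concrete. This is just a sketch with invented numbers (price, reinvestment rate, and starting sales are all hypothetical): both vendors charge the same price, each reinvests a fixed fraction of revenue in R&D, and next year's sales grow with each vendor's share of accumulated R&D. The incumbent compounds its lead every year.

```c
#include <stdio.h>

/* Toy model of the zero-marginal-cost feedback loop described above.
 * All numbers are invented for illustration. */
int main(void) {
    const double price = 100.0, reinvest = 0.3;
    double sales_a = 1000000.0; /* incumbent */
    double sales_b = 100000.0;  /* entrant */
    double rnd_a = 0.0, rnd_b = 0.0;

    for (int year = 1; year <= 5; year++) {
        rnd_a += reinvest * price * sales_a;
        rnd_b += reinvest * price * sales_b;
        /* growth proportional to share of accumulated R&D */
        sales_a *= 1.0 + rnd_a / (rnd_a + rnd_b);
        sales_b *= 1.0 + rnd_b / (rnd_a + rnd_b);
        printf("year %d: A %.0f units, B %.0f units\n",
               year, sales_a, sales_b);
    }
    return 0;
}
```

In the very first year the incumbent's growth multiplier is ~1.9x against the entrant's ~1.1x, and the gap only widens: the entrant can never out-invest a rival whose budget is set by ten times the unit sales.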
Most other operating systems are largely a rounding error in terms of adoption. Fedora and Ubuntu do well in the server space, largely subsidised by a few companies willing to pay for the extra support, but even then they aren't funding most of the development of the things that they bundle. Everything else (including my favourite operating systems) is a rounding error.
@hosford42 @david_chisnall right, so, like
the area in which corporations are most reliant on free software and open source is servers, for sure. nobody is building their startup's web app on .NET, nobody is hosting it on a server version of Windows. it wouldn't even be feasible; on those platforms there's no connective tissue to link up all the small pieces that are involved in building anything real.
I’ve written about this elsewhere, but proprietary software’s natural end state is to become a platform. This is the ideal state for rent seeking: other people add value that keeps people buying your product. Bill Gates understood this and talked about it explicitly.
One of the reasons that F/OSS often fails is that it adopts design structures from proprietary software that exist to drive software to this endpoint. And that’s a disaster because those design models favour proprietary software.
@david_chisnall @hosford42 ah! yes. that all sounds true.
hopefully if enough people understand it, we can stop repeating it...
@ireneista @hosford42 @david_chisnall I’m not sure I’d say “infeasible” - Stack Overflow/Exchange is famously .NET on Windows and seems to work well enough. Not aware of any more recent examples off the top of my head, not sure if that’s just my knowledge or if there genuinely aren’t any
(dear god, definitely not a platform I’d like to develop on, though)
Hotmail was the other famous case. It was originally hosted on FreeBSD and had a spectacular failed migration to Windows when Microsoft bought them, which led to rewriting the TCP/IP stack and a bunch of other scalability improvements. They eventually moved it over to Windows 2000, I believe. All of M365 is also hosted on Windows and Azure runs Windows on the control plane (Hyper-V runs the VMs, there's a Windows partition on every node for PV device emulation, management, and logging).
One of the big stock exchanges (NYSE?) is also hosted on Windows.
It's also worth noting that the financial aspects are not really a problem. A big server costs thousands of dollars. The cost of a Windows license on top of that is negligible. The place where it hurts is if you want to run VMs and need a Windows license per VM.
The main reason that people don't run Windows on the server is that it simply isn't very good there. NT made a lot of design choices that were really good when 8 MiB of RAM was uncommon and 16 MiB was ludicrously expensive. Every page that's allocated for a process (by the kernel or userspace) is accounted to that process. The kernel never allocates more memory than the total of RAM and swap space, and so allocations always fail at recoverable points. This is necessary to make every page of memory or swap fungible. They store all of the dynamic state for swapped-out pages in invalid PTEs, so you can even swap out page-table pages (except the root), so a process can be almost entirely swapped out except for a handful of pages.
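A small Win32 sketch of that commit accounting (the 64 GiB size is a hypothetical chosen to exceed most machines' commit limits; the exact error code varies, typically ERROR_NOT_ENOUGH_MEMORY or ERROR_COMMITMENT_LIMIT):

```c
#include <windows.h>
#include <stdio.h>

int main(void) {
    /* Ask for more than RAM + pagefile on most machines. */
    SIZE_T size = (SIZE_T)64 * 1024 * 1024 * 1024; /* 64 GiB */

    /* MEM_RESERVE only carves out address space: no commit charge,
     * so this usually succeeds. */
    void *reserved = VirtualAlloc(NULL, size, MEM_RESERVE, PAGE_NOACCESS);
    printf("reserve: %p\n", reserved);

    /* MEM_COMMIT charges the full size against the commit limit
     * (RAM + pagefile) up front. If it can't be backed, the call
     * fails here, at a point where the program can still recover. */
    void *committed = VirtualAlloc(NULL, size, MEM_RESERVE | MEM_COMMIT,
                                   PAGE_READWRITE);
    printf("commit: %p (last error %lu)\n", committed,
           committed ? 0UL : GetLastError());
    return 0;
}
```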
All of this introduces problems with the constraints of modern systems. It means they can't overcommit memory and so they often end up failing to allocate memory even though you have tens of GiBs of free RAM unless you have a lot of swap. That's expensive when you are scaling things up to ten thousand nodes. For complex software, no one actually handles memory allocation failure gracefully and so the benefits are largely hypothetical, but the costs are real. If you want to build reliable software today, you do it at a much higher level because 'this computer broke' is part of your model of things that you need to be resilient against.
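For contrast, a minimal sketch of what Linux's default overcommit does with the same hypothetical 64 GiB request:

```c
#include <stdio.h>
#include <sys/mman.h>

int main(void) {
    size_t size = (size_t)64 * 1024 * 1024 * 1024; /* 64 GiB */

    /* Under Linux's default overcommit, anonymous mappings are not
     * charged against RAM + swap up front, so this usually succeeds
     * even on a machine with far less memory. */
    void *p = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    printf("mmap: %p\n", p == MAP_FAILED ? NULL : p);

    /* Pages are only backed when first touched. Actually writing all
     * of this would eventually invoke the OOM killer -- the failure
     * arrives later, and not at a recoverable point, which is the
     * trade-off against NT's up-front accounting. */
    return 0;
}
```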
They allow third-party device drivers (including AV tools and other not-really-driver crap) to run code in interrupt-service routines. They also do this in first-party drivers, whereas other operating systems tend to restrict the things you can do in ISRs to 'wake this run queue'. This is why UI latency is so terrible on Windows, and that's an even bigger problem on the server where high tail latency on individual nodes translates to system-wide low throughput.
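For readers who don't write drivers, here's a sketch of the 'do almost nothing in the ISR' discipline being contrasted with Windows practice (WDM-style C; the device structure and function names are hypothetical, not from any real driver):

```c
#include <ntddk.h>

typedef struct _MY_DEVICE {
    KDPC Dpc;          /* initialised at device setup with
                          KeInitializeDpc(&dev->Dpc, MyDpcRoutine, dev) */
    ULONG PendingWork; /* state handed from ISR to DPC */
} MY_DEVICE, *PMY_DEVICE;

/* Runs at DISPATCH_LEVEL, off the interrupt path: draining rings,
 * completing requests, and other heavy lifting belong here. */
VOID MyDpcRoutine(PKDPC Dpc, PVOID Context, PVOID Arg1, PVOID Arg2)
{
    PMY_DEVICE dev = (PMY_DEVICE)Context;
    UNREFERENCED_PARAMETER(Dpc);
    UNREFERENCED_PARAMETER(Arg1);
    UNREFERENCED_PARAMETER(Arg2);
    dev->PendingWork = 0;
}

/* Minimal-ISR discipline: acknowledge the device, note the pending
 * work, queue the DPC, and get out. Windows permits far more than
 * this at interrupt level, in first- and third-party drivers alike,
 * and that freedom is where the tail latency comes from. */
BOOLEAN MyIsr(PKINTERRUPT Interrupt, PVOID ServiceContext)
{
    PMY_DEVICE dev = (PMY_DEVICE)ServiceContext;
    UNREFERENCED_PARAMETER(Interrupt);
    dev->PendingWork = 1;
    KeInsertQueueDpc(&dev->Dpc, NULL, NULL);
    return TRUE; /* the interrupt was ours */
}
```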