@rl_dane

I'd rather have w95 with its software suite and interface than w11 with its.

W11 is a worse OS than w95 was.

@OpenComputeDesign @kabel42

@pixx @OpenComputeDesign @kabel42

It does have memory protection, though. That was Windows 95's most glaring weakness.

Edit: I meant to say that it doesn't. derp.
Edit2: No, I was saying that W11 has memory protection. lol

@rl_dane @pixx @kabel42

Modern software still absolutely _sucks_ at anything to do with memory. Any claims modern OSes make are, at best, just giving people a false sense of security.

@OpenComputeDesign @pixx @kabel42

Brofam, Windows 95 used to crash on me daily.

Linux? Basically never.

FreeBSD? Maaaaybe once a week.

@rl_dane @pixx @kabel42

Linux and NetBSD both crash on me daily :P

@OpenComputeDesign @rl_dane @pixx @kabel42 Is it the OS that crashes, or applications running on the OS?

Are the crashes related to video output?

The OS should never crash. If it does, you most likely have defective hardware, or you’re finding issues with your video hardware support.

@AnachronistJohn @pixx @kabel42 @rl_dane

If a program crashes, 95% chance the OS crashes with it. Preemptive multitasking/memory protection is a flat-out lie.

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane uhhhhh no. just, no.

I have programs crash semi-frequently and have had maaaaaybe two OS crashes on Linux in the last five years

one of which was due to the hard drive failing

@pixx @AnachronistJohn @kabel42 @rl_dane

Admittedly, most crashes come from running out of RAM, or from modern computers sucking at handling swap latency. But even when programs crash properly without running out of RAM, even if the system doesn't _technically_ go down with it (which it often still does), there's rarely any chance of recovering the system without (if you're lucky) a reboot or (more likely) a hard reset. Even xkill doesn't help all that much a lot of the time

@OpenComputeDesign @pixx @kabel42 @rl_dane You might have hardware problems, then.

I’m compiling perl on a system with 24 megs of memory, so the system is basically entirely in swap. If that can run like that for a week or two and be fully fine afterwards, then the VM system is doing what it should.

I can’t speak for Linux - it’s becoming the Windows of the open source world - but I also thrash the heck out of memory and swap on modern high memory systems, too, without issues.

@AnachronistJohn @pixx @kabel42 @rl_dane

I used to be able to live out of swap both on Linux and the BSDs. But these days, neither Linux nor the BSDs like touching swap _at all_. Linux is still much worse about it. But on every computer I have, touching swap is like running through a minefield blindfolded.

It's way too widespread a problem to be a hardware issue

@OpenComputeDesign @pixx @kabel42 @rl_dane Let’s reproduce it so it can be reported.

I have an amd64 system here running NetBSD. I can force the memory down from 32 gigs to whatever I want with a kernel config change.

Can you come up with a recipe for software to install and run, and perhaps sites to visit and do things, that you’re pretty sure will result in a non-responsive system?

@AnachronistJohn @pixx @kabel42 @rl_dane

Yeah, if I load up firefox, log into all my chats and emails, and play a couple youtube videos, that's easily enough to use up all my RAM, dip into Swap space, and cause the system to start freezing and hitching, and eventually become completely unresponsive.

But erm, I'm guessing you meant a recipe that _other people_ could use to reproduce my issue. So uhm, let me find some sites that don't require other people to log into all my stuff...
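A minimal sketch of the kind of memory-pressure recipe being asked for here. Every size and parameter below is an assumption, not something from the thread, and with a large enough target it will genuinely push a machine into swap:

```python
# Hypothetical memory-pressure reproducer (not from the thread).
# Allocates RAM in fixed steps so you can watch the system dip into
# swap. WARNING: with a large target this WILL cause swapping and
# possibly an OOM kill; tune the numbers for your machine.
import sys
import time

def allocate_mb(total_mb, step_mb=64, pause=0.5):
    """Hold roughly total_mb of memory, allocated step_mb at a time."""
    chunks = []
    allocated = 0
    while allocated < total_mb:
        # bytearray(n) is zero-filled, so its pages actually get touched
        chunks.append(bytearray(step_mb * 1024 * 1024))
        allocated += step_mb
        print(f"allocated {allocated} MB", file=sys.stderr)
        time.sleep(pause)
    return chunks

# Usage (deliberately not run here): allocate_mb(6144) to try to hold
# ~6 GB resident while watching `vmstat 1` or `top` in another terminal.
```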

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane

...how much RAM did you say you have? That's kinda ridiculous unless you have a _lot_ more chats than I think you do O_o

@pixx @AnachronistJohn @kabel42 @rl_dane

4GB RAM, 4GB swap space. Mastodon, matrix, gmail/gchat, protonmail (actually, I usually close protonmail so I can have more youtube), and youtube are pretty much all I ever have open on this computer. I only ever have a web browser open, no other programs except terminals, and I reboot twice a day

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane

...I know firefox sucks but 4G for that is a bit insane imo

@pixx @AnachronistJohn @kabel42 @rl_dane

Well, it runs fine on first boot. But after a while (a few hours, less if I'm doing research and opening extra tabs), the webapps have leaked enough memory for it to really slow down.

Just as an experiment, I've opened some extra tabs to accelerate the usage (I'm on a fairly fresh reboot), and just 600MB of swap used is enough for the system to lag really hard when switching windows

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane

huh

Well, I just ran out of RAM on Linux twice, _without swap_, and the system recovered fine. It was responsive enough for me to go kill the system update (which was using all the RAM to compile, uh, vscodium, I think. Used that one fucking time lol.)

This tab in firefox got killed, as did steamwebhelper and several other FF tabs, but the system is just, fine
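What's being described here is the Linux OOM killer picking individual victims (browser tabs, helpers) instead of taking the whole system down. A Linux-only illustrative sketch, using the standard procfs files, of how to see which processes the kernel would sacrifice first:

```python
# Rank processes by the kernel's oom_score: higher means the OOM killer
# considers them first. Linux-only; reads standard procfs files.
import os

def oom_ranking(top=10):
    scores = []
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/oom_score") as f:
                score = int(f.read())
            with open(f"/proc/{pid}/comm") as f:
                name = f.read().strip()
        except OSError:
            continue  # process exited between listdir() and open()
        scores.append((score, int(pid), name))
    return sorted(scores, reverse=True)[:top]

for score, pid, name in oom_ranking():
    print(f"{score:6d}  {pid:7d}  {name}")
```

Browser content processes tend to float to the top of this list, which matches tabs dying first while the terminal survives.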

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane

I wonder if swap makes it _worse_ lol

@OpenComputeDesign @AnachronistJohn @kabel42 @rl_dane

TBH I want to see a memory usage breakdown of your system when this happens, I think that's what you'd really need to know what's going on :/

Firefox shouldn't generally be leaking memory. I've left 100 tabs open for weeks and memory usage never just randomly goes up
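For the "memory usage breakdown" idea, a small Linux-only sketch of the system-level numbers worth capturing when the lag starts (the field names are standard /proc/meminfo keys):

```python
# Snapshot the /proc/meminfo fields that matter for diagnosing swap
# thrash. Linux-only; the values in the file are reported in kB.
def meminfo(path="/proc/meminfo"):
    info = {}
    with open(path) as f:
        for line in f:
            key, _, rest = line.partition(":")
            info[key] = int(rest.split()[0])  # first token is the value
    return info

m = meminfo()
for key in ("MemTotal", "MemAvailable", "SwapTotal", "SwapFree"):
    print(f"{key:13s} {m.get(key, 0) / 1024:10.1f} MB")
```

Pairing a snapshot like this with a per-process RSS listing (e.g. `ps axo rss,comm --sort=-rss | head`) taken when the hitching starts would show whether it's the browser's content processes or something else eating the headroom.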

@pixx @OpenComputeDesign @AnachronistJohn @kabel42

Firefox is good about suspending inactive tabs to save RAM.

@rl_dane @OpenComputeDesign @AnachronistJohn @kabel42

agreed, other than the part where you started by saying "firefox is good" /snark

@pixx @OpenComputeDesign @AnachronistJohn @kabel42

I think that Firefox is, objectively and ethically, the least bad of any modern web browser that can load a page like youtube or amazon.

Horrible bar to pass under, of course, but it is what it is.

@rl_dane @OpenComputeDesign @AnachronistJohn @kabel42

sure, it's still shit software though

I'm not even talking ethics, just as a pure matter of code

it's bad code

@pixx @rl_dane @AnachronistJohn @kabel42

Bad ethics, bad code, still better in both respects to Chrome imo :P

Still don't like it, though

@OpenComputeDesign @pixx @AnachronistJohn @kabel42

I'm not so sure that Firefox is better code than Chromium. But definitely more ethical. Or at least, up until a couple years ago.

@rl_dane @pixx @AnachronistJohn @kabel42

I dunno, chromium based browsers have always been buggy as _fuck_ in my experience. Yes, even worse than firefox.

@OpenComputeDesign @rl_dane @pixx @AnachronistJohn
when I use chromium it usually is pretty ok, except for roll20, which somehow manages to crash the whole Wayland session

@kabel42 @rl_dane @pixx @AnachronistJohn

To be fair, wayland is also absolute _crap_ :P

@OpenComputeDesign @rl_dane @pixx @AnachronistJohn
Yes, and an app crashing the whole session is definitely a wayland problem

@kabel42 @rl_dane @pixx @AnachronistJohn

Hey, as we've all been told, on a preemptive multitasking system, it is _impossible_ for one crashed app to affect the rest of the system. So you must just be imagining things, anyway :P

@OpenComputeDesign @kabel42 @rl_dane @pixx That’s a bit disingenuous.

Obviously, on a Sinclair QL or an Amiga, a rogue program can take down the whole system.

On Windows, with tons of design issues and decades of bad decision history, a rogue program can take down the whole system.

With Linux and the BSDs, this shouldn’t happen, but it’s possible, and it most often happens when trusting stuff like video hardware to do its thing, where the OS has less control than it does over the rest of the computer.

With Linux and the BSDs, if you can reliably crash the whole computer using a userland program, that’s a big bug and should be reported.

On the other hand, sometimes it feels good to vent, and if that’s the purpose of what you’re saying, that’s fine, but understand that your generalizations aren’t correct.

@AnachronistJohn @OpenComputeDesign @pixx @rl_dane
I think since the NT days a lot of Windows bluescreens were bad drivers. Not sure if that's 10% or 70%

@kabel42 @AnachronistJohn @pixx @rl_dane

Ok, to be fair, drivers are literally the worst thing ever, and it would genuinely be better for everyone if we just standardized hardware so drivers could be abolished completely

@OpenComputeDesign @AnachronistJohn @pixx @rl_dane no drivers is gonna suck, but something more modern than VESA would be great

@kabel42 @AnachronistJohn @pixx @rl_dane

What possible downsides could there be to no drivers?

@OpenComputeDesign
Well for starters video decode gets slow 😅
@kabel42 @AnachronistJohn @rl_dane
@pixx @OpenComputeDesign @AnachronistJohn @rl_dane
You could have a standard for that, you just probably need a new one for every new codec :)

@kabel42
Theoretically you wouldn't even need that but really, part of the problem with standardization is that if the standard way is bad, you'll just have people ignore it

OpenGL is a standard and we still need drivers.

The only way around that is what the RPi and, apparently, nvidia?? are doing, where the hardware has a coprocessor in it that implements the driver, and the API on the CPU side is just a proxy

Which means you still have a driver but now you're dependent on the vendor for it and it's opaque and hidden

This is not better

@OpenComputeDesign @AnachronistJohn @rl_dane

@pixx @kabel42 @OpenComputeDesign @AnachronistJohn

Disagree. If there's a single hardware standard, and the vendor's implementation is crap, then the blame is on the hardware/firmware, rather than the OS/drivers.

@rl_dane @pixx @kabel42 @AnachronistJohn

I prefer to blame both.

Looking at you, ACPI

@rl_dane

Yeah but it doesn't *matter* who the blame is on

When the standard is implemented as a driver in software it can be _fixed_

When it's a driver in firmware then bugs are permanent the instant the vendor stops caring

A Valve dev has been improving amdgpu support for older cards recently

If the driver was in firmware then that would be completely impossible

@kabel42 @OpenComputeDesign @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Counter point:
If there were no drivers, and everything was standardized, then there would be no moving targets, and application optimization could be perfected in perpetuity without the threat of obsolescence that currently plagues the computing world

@OpenComputeDesign

No, because now you've just shifted bugs into hardware where they're harder to fix

Drivers aren't buggy because drivers are a bad idea, it's because it's a hard problem. Implement an algorithm entirely in hardware and it's impossible to fix if the silicon is wrong.

The only possible way to have bugs be fixable is to have software controlling the hardware, so that you can route around bad or incorrect hardware. You can't just say 'well stop fucking up the hardware then,' that isn't how anything works

@rl_dane @kabel42 @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Well, given that drivers keep bricking hardware, I don't really trust drivers to fix anything.

I think the fundamental problem is, humans are unfixably incompetent

@OpenComputeDesign
I've never seen drivers brick hardware lol

I've had many gpu crashes over the years. The long term trend for both intel and amd gpus - the two most complicated drivers i interact with - is higher performance _and_ fewer bugs (crashes, glitches, misrendering) over time, and it's pretty obvious.

For a normal computing experience, mesa and the linux drivers are, uh, pretty good actually.

A lot of the code is a stinking pile. Not all, but a lot.

But to sit here and act like it doesn't work and isn't getting better seems patently absurd to me

@rl_dane @kabel42 @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Well, the main way drivers make hardware not work is by flat out not existing, and being virtually impossible to make exist. But there have been genuine instances of drivers not only screwing up flash ROM and excessively wearing out batteries and stuff like that, but even going so far as to melt hardware. (printers, GPUs, and depending on how you define a driver, CPUs)

@OpenComputeDesign

Sure, but "doesn't work on my os" is not the same as bricked, and that's a really ridiculous equivalence

The RPi's GPU does the coprocessor-proxying driverless thing; it's just as useless to me as the amd ones, on 9.

Not having the ability to talk to hardware is not the same as the hardware being trash

@rl_dane @kabel42 @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Well, I've had a fair amount of hardware where the drivers were just lost to time. As is an unfortunately likely fate for a lot of hardware that requires drivers

@OpenComputeDesign
I live on a niche OS. All drivers are lost to time for me 😅
@rl_dane @kabel42 @AnachronistJohn

@AnachronistJohn @OpenComputeDesign @pixx @kabel42

I'd really like to. Just a bit of a bummer that there's no full-disk-encryption yet.

NetBSD Installation with Disk Encryption ☯ Daniel Wayne Armstrong

Libre all the things

@AnachronistJohn @OpenComputeDesign @pixx @kabel42

I think I've briefly skimmed that before.

I'll have to give it a try. ;)

@AnachronistJohn @pixx @kabel42 @rl_dane

Using it rn. Much prefer it to linux right now :)

Just, still far from perfect

@OpenComputeDesign @AnachronistJohn @pixx @kabel42

Look at the Linux Foundation's funding, then the FreeBSD foundation's funding, then the OpenBSD foundation's funding, then the NetBSD foundation's funding.

Actually, I've already [done it for you]

It's breathtaking how much the NetBSD guys accomplish with their resources.

I'll withhold any comment about some other foundation's accomplishments or lack thereof given their resources.

*cough*

R.L. Dane 🍵 (@[email protected]) on polymaths.social:

"@kaixin I did some poking around online:

Organization: (estimated) annual budget
Linux Foundation: $200,000,000
FreeBSD Foundation: $2,100,000
Free Software Foundation: $1,100,000
OpenBSD Foundation: $400,000
NetBSD Foundation: $50,000 ($35,096 raised thus far as of writing)

I think I know where my spare change is going to."
@rl_dane @OpenComputeDesign @AnachronistJohn @pixx 2 mil is a decent dev team; what do you do with 200 mil? You can't hire that many good coders to work on one codebase, right?

@kabel42 @rl_dane @OpenComputeDesign @pixx With $200 million, they’ve now hired the kind of coders that are so good, they can’t be bothered to write portable code.

(that’s a jab at them for discussions they’ve had about how writing code that doesn’t break on big endian and 32 bit is too hard for them).

@AnachronistJohn

They've hired devs who care more about money than craftsmanship :p

@OpenComputeDesign @kabel42 @rl_dane

@pixx @AnachronistJohn @kabel42 @rl_dane

Yup, pretty much this. 200 mill is enough to buy greed :P

@kabel42
No, but you can set the money on fire pretty easily

Or give it to corporations lmao
@rl_dane @OpenComputeDesign @AnachronistJohn

@rl_dane
I won't, the linux foundation is an embarrassment lmao
@OpenComputeDesign @AnachronistJohn @kabel42
@pixx @rl_dane @OpenComputeDesign @AnachronistJohn
The Linux Foundation is more than Linux, though. Zephyr is funded by the Linux Foundation
Zephyr Project

The Zephyr Project is a Linux Foundation hosted Collaboration Project.


@kabel42

even more of an embarrassment then :P

@rl_dane @OpenComputeDesign @AnachronistJohn

@pixx @rl_dane @OpenComputeDesign @AnachronistJohn
Why? Doing more things with the money sounds like a good thing

@pixx @OpenComputeDesign @AnachronistJohn @kabel42

My initial draft comment mentioned lower primates doing very crass/gross things, but I demurred.

@rl_dane @AnachronistJohn @pixx @kabel42

I have personally donated to OpenBSD, NetBSD and Haiku. Not much, I have no income, but point is, they really are among my favorite projects.

@OpenComputeDesign @pixx @kabel42 @AnachronistJohn

No, the problem is that capitalism drives quantity over quality all day long.

Get everyone to adopt NASA-level coding standards and then get back to me about bugginess and incompetence.

@rl_dane

> capitalism rewards quantity over quality

Disagree, here. Plenty of good software wins.

Go back 20 years and the leading proprietary solutions were by and large _good_. Better in some cases than equivalent foss is now.

The markets reward quality in the short term; the problem is that once you've won, you largely stay won

Google won for good reasons. Adobe won for good reasons. Microsoft maybe not but a lot of people seemed to think their software was good back then.

Hell, discord largely won on quality. Quality that was subsidized by the investment class to lure people in, but they were genuinely better than most mainstream alternatives as a platform five, ten years ago.

@OpenComputeDesign @kabel42 @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Seems even FOSS is not very resistant to the problem of "Oh good, I'm the default now; that means it's time to start _sucking_"

@pixx @OpenComputeDesign @kabel42 @AnachronistJohn

But the course of enshittification is always to start with quality and value and turn things crappy and user-hostile over time.

So does it matter that Adobe and Google admittedly started out as amazing companies, if they're ugly, bullying behemoths now?

@pixx @OpenComputeDesign @kabel42 @AnachronistJohn

I dunno, I still think it's better to offer standard interfaces than have every piece of hardware work in a bespoke way.

@rl_dane @pixx @OpenComputeDesign @AnachronistJohn
You could do both, but you'd need to find someone to negotiate that standard. For GPUs the last one was, iirc, VESA, so the BIOS and DOS can output stuff. Those don't let you use a lot of the hardware.
USB has a lot of standard interfaces, but the implementations vary a lot.

@kabel42
I think this makes a lot of sense.

We could have standard interfaces that provide _partial_ functionality like video decode, modesetting, etc., and then a full driver when you want to run multiple shaders in parallel with different memory spaces and permissions and whatnot

@rl_dane @OpenComputeDesign @AnachronistJohn

@rl_dane

Yeah but the hardware _is_ pretty custom, and very complex, already. Pretending it's not doesn't help.

Even something as simple as "reset" requires either knowing about every block that needs resetting, or having effectively a CPU on the device that performs that operation.

If you have software for it anyway - and you *will* - I'd much rather that software be in the OS as source code than on the device as a blob

@OpenComputeDesign @kabel42 @AnachronistJohn

@pixx @rl_dane @kabel42 @AnachronistJohn

Concept:
_Force_ on-device software to follow open standards as well.

Lots of drivers, even for open source operating systems, still rely on binary blobs, both written to ROM in the hardware, and embedded in the drivers. Debian not installing such drivers by default for many years was _highly_ problematic, but it sure did give a fantastic impression of just how many drivers rely on proprietary blobs.

@pixx @rl_dane @kabel42 @AnachronistJohn

And besides, sure, there _are_ lots of open source drivers. But that's _despite_ the efforts of hardware vendors. As far as big companies are concerned, open source drivers are a bug, not a feature.

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn that might have been true a few decades ago, but a lot of drivers in the linux kernel are now partly written by the manufacturer

@kabel42 @pixx @rl_dane @AnachronistJohn

Yeah, but every single OS now has to either borrow Linux's honestly pretty crappy drivers, or go through the exact same process as Linux did, with 30 years of fighting hell.

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn
yes, you can use the linux drivers directly, or as documentation, or reverse engineer it yourself. What other option do you think there could be?

@kabel42 @pixx @rl_dane @AnachronistJohn

Well, using Linux drivers as documentation would be a lot easier if it wasn't for the fact that Linux stuff, in general, seems to have pretty poor code readability. And also, lots of Linux drivers _still_ have proprietary blobs at their core.

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn
You sure about that? I've heard people referring to the linux kernel as an example of how to manage large code bases.

@kabel42 @pixx @rl_dane @AnachronistJohn

Well, to be fair, large code bases just kinda are the very definition of nightmares. "Kernel" and "large code base" ideally would not be related terms at all.

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn
That's why the gnu project is developing a microkernel and using Linux only as a temporary solution until that is usable

@kabel42 @pixx @rl_dane @AnachronistJohn

Hasn't it been working on that since before Linux was even a thing?

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn but it's nearly finished, only a few more years

@kabel42 @pixx @rl_dane @AnachronistJohn

Sure sure :P

One of the things that keeps me from flat out writing my own OS is remembering how the Gnu kernel is going

@OpenComputeDesign @pixx @rl_dane @AnachronistJohn yeah, the only kernel i even come close to understanding is Amazon AWS IoT FreeRTOS
@OpenComputeDesign @pixx @rl_dane @AnachronistJohn and i'm kinda afraid the first three words mean it will soon be enshittified

@OpenComputeDesign
No no no

That's totally wrong

Plenty of hobby kernels progress waaaay faster than hurd

Gnu is just bad at everything lol

@kabel42 @rl_dane @AnachronistJohn

@OpenComputeDesign @kabel42 @pixx @AnachronistJohn

Do what #SerenityOS did: target QEMU as your "hardware."

Solves a lot of problems. Linux kernel as HAL.

@rl_dane @kabel42 @pixx @AnachronistJohn

This is simultaneously viscerally terrifying and very appealing

@kabel42 @OpenComputeDesign @pixx @AnachronistJohn

Hurd is nearly finished? Really finished, or "GenAI singularity any moment now, just another trillion USD PUHLEEEEAAAZE"—finished? XD

@kabel42
Lmao

Hurd predates linux and also uses linux drivers iirc

@OpenComputeDesign @rl_dane @AnachronistJohn

@kabel42

"Temporary solution"

My temporary solutions are for days not decades 🤣

@OpenComputeDesign @rl_dane @AnachronistJohn