Is this why modern software feels like garbage?

For 20 years it was reasonable to expect computers to be twice as powerful every two years.

So we built things that sort of worked on modern machines, but would work really well in two years.

And then growth slowed.

@ajroach42 I feel like that's not quite the full story.

This graph shows you what the high end was, and it shows you transistor density, not performance.

But what about the low end?

On the low end, in 1978, when that graph started, the average home computer had a 1 MHz 6502 or a 1.8 MHz Z80.

Move ahead 15 years, to 1992.

The bottom end of the *NEW* home computer market had a 1-2 MHz 6502 or a 3.5 MHz Z80, and there was a massive install base of similarly-performing computers.

@bhtooefr so we’d gotten really good at stretching the performance of that hardware.

@ajroach42 Even in the PC world, a 4.77 MHz 8088 runs about as fast as a ~1 to 1.2 MHz 6502 in real world code. It had a lot more memory, of course, but that's beside the point.

A lot of real-world applications needed to run acceptably on that hardware in 1992.

Sure, there was a lot of stuff that barely ran on a 486DX2-66 (the fastest x86 available in 1992, roughly 70x faster than that 4.77 MHz 8088), but DOS applications were expected to run on that 8088 unless they had a damn good reason not to.

@ajroach42 Note that Windows 3.x, from 1990-1992, started to redefine what a bottom-end IBM PC compatible meant.

In 1990, a Turbo XT clone - usually 8-10 MHz - was a perfectly acceptable low-end machine.

By 1992, you really needed a 16 MHz 386SX, because that Turbo XT could not reasonably run Windows 3.0, and couldn't run Windows 3.1 at all. Even a 286 had memory management issues.

Then, Windows 95 really wanted a 486 or better, even though it supported the 386.

@ajroach42 With the move to Windows, the bottom end of the market was forced onto the upgrade treadmill.

(Outside of IBM PCs... the people clinging to 8-bit platforms started having to jump around this point. Apple II users had the Macintosh as an option, Acorn users had the Archimedes and RiscPC as a very natural option, but everyone else's 32-bit platforms had died and everyone jumped to the PC or clones.)

@ajroach42 In any case, the move to multitasking GUIs meant that some performance tricks that applications used in the past no longer worked, and the increased complexity of having to learn new ways of doing things meant that optimization took a back seat to figuring out how to do things in a GUI. And, the increased minimum requirements just to run the GUI meant that you had more hardware anyway, so who cares.

@ajroach42 And then, the Internet happened.

Keep in mind that personal computers were certainly a *thing* into the mid 1990s, but they weren't universal.

The Internet was the killer app for personal computers. It made them universal.

So now, you had a massive install base of computers that were newly sold in the late 1990s.

In 5 years, your baseline performance moves from a decade old 4.77 MHz 8088 to a brand new 166 MHz Cyrix MediaGX.

@ajroach42 And, of course, you had some stuff that was just absolutely dreadful for performance. I distinctly recall Java and Shockwave being the performance nightmares that we think of JavaScript being today - this is the mindset of developing for what's coming, not what's available now.

Of course, by 1998, the top end of the x86 market is a 450 MHz Pentium II... and then the GHz war hits.

@ajroach42 Already, the mindset of "CPU and RAM are cheap now, we don't have to save them" has set in hard, but the GHz war just reinforces it, with massive increases in performance in a very short time.

The Pentium III launched in early 1999, at 500 MHz.

One year later, performance had more than doubled, with the 1 GHz model.

In 2000, a low-end machine probably had a 500ish MHz Celeron, performing similarly to that year old top-end P3.

And, of course, computer adoption is still increasing.

@ajroach42 This means that the baseline keeps getting pushed up higher and higher, following the high end closely.

Even in the Pentium 4 era, low-end machines still got faster, quickly, as the low-end Durons/Semprons and Celerons clocked up and caught up with their high-end Athlon and Pentium counterparts. (We're now in what I consider the modern era, and why I posted this from this account, not @[email protected].)

Core 2 happens in 2006, and pushes the high end up significantly.

@ajroach42 But then, in 2007, netbooks happen, and push the bottom end *DOWN* temporarily - a 900 MHz Dothan Celeron M or a 1.6 GHz Atom is not a fast CPU, but netbooks are now using them, and things need to be performant on them.

Developers eventually did respond for some things, and I'd argue that this creates the plateau. It's not the stagnation in performance from 2011 to present, it's the stagnation in minimum reasonable requirements from 2008 to 2015 or so that creates it.

@ajroach42 However, the first generation of netbooks died out around 2010-2011.

By about 2012, Windows XP starts dying, and with it, IE 6. Now, websites can start using ✨ New Web Technologies ✨ (🤮), because they don't have to support IE 6 (or even IE 8) any more. Everything still seems fine, though, as tablets are taking off, and they're using pretty weak ARM chips, or occasionally recycled netbook Atoms.

...but then tablets get fast.

@ajroach42 And, "move fast, break things" became a mindset, so people stopped targeting low-end old devices, they focused on the latest and greatest.

*This* is why performance is so shit nowadays.

In 1992, people targeted old low-end hardware.

In 2018, you're lucky if people target something other than the new MacBook Pro that they're developing it on, and their new iPad.

@bhtooefr @ajroach42

IDK, I think part of the problem is that we don't fully appreciate how inexpensive modern computers are compared to old ones. Unless you're counting realtime graphics, a 6 year old machine is only marginally better than a $30 Raspberry Pi 3.

Maybe it'd be better if we just recycled those old machines on a trade-in for modern equivalent machines?

@endomain @ajroach42 So that is a useful point, too (although I wouldn't say "marginally" better - I'd much rather use a 12 year old Core 2 Duo with 4-8 GiB RAM and a SATA SSD, than a RPi 3, as my primary machine), but part of the problem is that mindsets have changed.

People aren't targeting that RPi as a "low-end machine".

If you're lucky, they're targeting a couple year old iPad, that's much faster than any RPi.

@bhtooefr @ajroach42

Idk, I was totally shocked how solid my RPi3 is. Other than draw performance it doesn't feel at all different from old machines.

@ajroach42 So, I want to touch on a couple of other points while I'm at this.

When Acorn died, the pipeline of new RISC OS hardware utterly stalled; the fastest hardware you could buy was a 233 MHz StrongARM, throttled by a 16 MHz 32-bit-wide bus. There eventually was new hardware made, but with disruptive compatibility issues, so the old hardware still had a huge userbase.

This meant that commercial software development for the next decade had to consider the old hardware.

@bhtooefr I’ve seen this! I followed along as it was happening, but never got to experience the hardware first hand.

@ajroach42 There was an unwritten rule that new software, up to about *2008*, had to run on a 32 MHz ARM7, and it had to run reasonably on a 202 MHz StrongARM. (There were a few things that targeted the 600 MHz XScale-based Iyonix, and only barely ran on the StrongARMs, but they were the exception.)

The upshot? On a Raspberry Pi 1 B, RISC OS stuff FUCKING FLIES. It's SNAPPY. And this is despite RISC OS being a kinda mediocre cooperative multitasking OS.

@ajroach42 (2008 is when the Beagleboard started to become a reasonable platform for running RISC OS software, and therefore development slowly started to move away from the old Acorn-era platforms, which were last updated in 1997.)

@ajroach42 The other point is going to go in a COMPLETELY different direction, and back to the PC platform.

Even after the move to GUI land started happening, there was still an expectation of interoperation with people stuck on DOS for a few years.

HP was able to sell the 200LX palmtop through 1999, and that was basically Turbo XT-class hardware, really only meant to run DOS. You could interoperate with Windows software thanks to file format converters, and just run old DOS software.

@ajroach42 Nowadays, though, you can't run old software reasonably. Everything has an expectation of being networked, and with that comes a requirement of keeping up to date.

And, even if that weren't the case... there's a lot of stuff that just says "no" to old versions, or breaks, or can only import from old to new.

So, I feel like that's a factor, too - everyone has to stay on the bleeding edge, nobody cares about graceful degradation. (And, I mean, just look at the web...)

@bhtooefr that’s a good point.

Backwards compatibility in the modern world is something of a joke.

Heck forwards compatibility isn’t always a guarantee.

And the "always on" and "always real time" expectations of modern networking certainly don't help the situation.

@bhtooefr @ajroach42 this entire thread is beautiful. I know shit all about this history stuff but oh my god please teach my class

@bhtooefr I run a 200LX as part of my regular computer rotation.

@bhtooefr I really need to try riscOS on my pi.

Did they ever add wireless drivers, or am I still going to have to wait until I wire the place?

@ajroach42 That's part of TCP/IP overhaul step 3, they're still collecting money for the bounty for step 2: https://www.riscosopen.org/bounty/polls/29

@bhtooefr @ajroach42

at some point there was a change in ARM from 26 bit to 32 bit addressing (I forget exactly when).

RISCOS itself is maybe older than both of you, I remember it being a thing in my last years of high school (from 1988 to 1990).

More recently I did experiment briefly with it on an RPi (it is indeed very fast) but after 30 years had quite forgotten how to use the UI (or it worked better with 3 button mouse?) so went back to Raspbian..

@vfrmedia @ajroach42 Depends on what you call RISC OS - Arthur 1.20 is older than me, RISC OS 2.00 is younger than me.

And the UI is *extremely* dependent on a 3 button mouse, although there's an application that interprets the Windows key as the middle button in the RPi distribution of RISC OS.

And the 26 to 32 bit addressing change was with ARMv3 (read: ARM6), but it was a gradual change, 26-bit was still supported.

The problem is that when Thumb came about, ARM reused the 26-bit mode bit for it.
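For context on why those bits were up for grabs at all: in 26-bit ARM, R15 packed the word-aligned program counter, the processor mode bits, and the status flags into a single 32-bit register. Here's an illustrative sketch of that classic layout (the `decode_r15` helper is hypothetical, just to show the fields; positions match the ARM2-era layout as I understand it):

```python
# Illustrative decoder for the classic 26-bit ARM R15 layout (ARM2 era).
# Field positions:
#   bits 31-28: N, Z, C, V condition flags
#   bit  27:    I (IRQ disable)
#   bit  26:    F (FIQ disable)
#   bits 25-2:  word-aligned program counter (24 bits -> 64 MiB of address space)
#   bits 1-0:   processor mode
# decode_r15 is a hypothetical helper for illustration, not real ARM tooling.
def decode_r15(r15: int) -> dict:
    return {
        "flags_nzcv": (r15 >> 28) & 0xF,
        "irq_disable": (r15 >> 27) & 1,
        "fiq_disable": (r15 >> 26) & 1,
        "pc": r15 & 0x03FFFFFC,  # mask off the flags and mode bits
        "mode": r15 & 0x3,
    }
```

When ARM moved to 32-bit addressing, the PC needed all of R15, so the flags and mode migrated out into a separate status register (the CPSR); that reshuffling of status bits is the backdrop for the reuse mentioned above.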

@vfrmedia @ajroach42 StrongARM was the last shipping ARM design that didn't have Thumb (technically, ARM9 could be configured without it, but nobody ever shipped it that way, and ARM9 is basically a clone of StrongARM anyway, IIRC).

So that's where the compatibility break happened, after StrongARM, when some ARM9 and XScale-based machines started coming out in the 2000s. (Acorn never successfully 32-bitted the OS, it was on the roadmap though.)

@bhtooefr @ajroach42 I'm fairly certain it would have been one of the original Arthur versions, they had red function keys and the BBC owl on them

Also I remember using the image scanner and the inkjet printer that was available with the Arcs (very rare things back then) to make fake ID with - so this would have been before March 1990, I turned 18 then after which I wouldn't have needed the fake ID 😆

@vfrmedia @ajroach42 Arthur was also not exactly... complete.

And the red function keys and owl were even on the A3000, which shipped with RISC OS 2.00 IIRC...

@bhtooefr @ajroach42 I think we may have had a mixture of these, I remember a single machine being delivered to our schools science department in around 1988, and others turning up around 1989/1990.

They were in a different area of the school and ran office/multimedia applications, unlike the BBC micros in the main building.

I finished high school in summer 1990, and cannot remember exactly what OS was loaded on them, but I do remember it looking distinctly like RISC OS still does today..

@vfrmedia @ajroach42 If it looks like RISC OS does today, then it's at least 2.0

Arthur's desktop was almost a tech demo (basically just a file manager/launcher and some desk accessories). The GUI primitives were there for user applications, too, but AFAIK it was single-tasking.

@bhtooefr @ajroach42

my memories are slightly hazy (it *is* 3 decades ago!) but I remember the (single) Arc issued to the Science Department (ISTR it had some kind of custom software like graphical chemistry software) having slightly different UI to the others - this was in the later part of 1988 whereas the rest of the machines arrived in 1989 (which would match up with the release of RISC OS 2.0)

@vfrmedia @ajroach42 @bhtooefr I also tried it on an RPi. When I checked the RISC OS webpage, basically their main selling argument is that it is a British OS :D also it has non-preemptive multitasking (which traditionally is not a selling point for a multitasking OS)

@wictory @vfrmedia @ajroach42 I have literally seen a post on comp.sys.acorn.advocacy in which someone suggested that preemptive multitasking was bad, because Windows Vista was slow on a Pentium III, and RISC OS was fast on a (much slower) 233 MHz StrongARM.

Also, which RISC OS website? Because there's a couple different forks of the OS, RISC OS Open is the one that's active (and the one that runs on the Pi).

@bhtooefr @ajroach42 @vfrmedia if you want to speed up an OS, start with speeding up the user programs 😆 however, the guy has kind of a point, with emphasis on kind of. Meltdown mitigations basically kill the performance of preemptive context switches.

I honestly don't remember; also, I think they changed the text soon after. BTW, Windows 3.11 was also non-preemptive.

@wictory @ajroach42 @vfrmedia Fun fact: Windows 3.x actually was both preemptive and cooperative!

(Windows programs, however, were always cooperatively multitasked. Preemptive multitasking was for DOS programs. But, the preemptive part of the kernel was actually extended for Windows 95, to preemptively multitask Win32 programs.)

@wictory @ajroach42 @vfrmedia The disgusting one, I find, is Mac OS, actually.

It was designed with limited cooperative multitasking to enable desk accessories... so when they wanted to multitask everything, with MultiFinder? They hijacked the desk accessory mechanisms, basically making everything look like a desk accessory to everything else.

Surprisingly, this worked... OK, it worked badly, but it worked surprisingly well.

@bhtooefr @vfrmedia @ajroach42 I guess the non-preemptiveness is mostly because of the difficulty of implementing efficient concurrent access to the UI. Also, at least the x86 processors back then had terrible context switching support.
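As a side note, the cooperative model that keeps coming up in this thread (RISC OS, Windows 3.x apps, classic Mac OS) is easy to sketch: the scheduler only runs again when a task voluntarily yields, so a single task that never yields starves everyone else. A minimal, purely hypothetical sketch using Python generators (not any real OS's scheduler):

```python
# Minimal cooperative round-robin scheduler sketch using generators.
# Each task must voluntarily yield; a task that never yields (an
# infinite loop, say) starves every other task, which is the core
# weakness of cooperative multitasking.
from collections import deque

def scheduler(tasks):
    ready = deque(tasks)
    while ready:
        task = ready.popleft()
        try:
            next(task)          # run the task until its next yield
            ready.append(task)  # it cooperated; put it back in line
        except StopIteration:
            pass                # task finished, drop it

def worker(name, steps, log):
    for i in range(steps):
        log.append((name, i))
        yield  # hand control back to the scheduler

log = []
scheduler([worker("a", 2, log), worker("b", 2, log)])
# the two workers run interleaved, one step per turn
```

The flip side is why the model can feel so fast on weak hardware: there are no forced context switches, so a well-behaved task runs with almost no scheduling overhead.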

@bhtooefr that’s a good point.

Prodigy and AOL were the reason we got our first PC.

@bhtooefr except that lots of platforms managed to have multitasking GUIs on 80s class hardware.

It was just harder. GeoWorks Ensemble will run on a 286 without issue.

I used a web browser in System 7 on a Mac Plus last night, and even that was bearable. On an SE/30 it's speedy.

But you’re right, Windows put people on the upgrade wagon.

And if your 10 year old computer can run all the latest software, why upgrade? So, naturally, capitalism encourages the upgrade cycle.

@ajroach42 Sure, but there's also the whole, ease of development vs. performance vs. "this hardware is getting cheaper, just upgrade already" factor, too.

I mean, when you could get a cheap 386SX replacement motherboard for your XT and get many times the performance, and not have to get an EMS board to load a big 1-2-3 spreadsheet, and you get to run Windows reasonably, or GeoWorks quickly? Eventually it comes to a point where supporting the old stuff really is unreasonable.

@bhtooefr that’s a good point, and that’s how we got here. But it’s not an option for increased performance now, at least for many folks.