Is this why modern software feels like garbage?

For 20 years it was reasonable to expect computers to be twice as powerful every two years.

So we built things that sort of worked on the machines of the day, but would work really well in two years.

And then growth slowed.

@ajroach42 I feel like that's not quite the full story.

This graph shows you what the high end was, and it shows you transistor density, not performance.

But what about the low end?

On the low end, in 1978, when that graph started, the average home computer had a 1 MHz 6502 or a 1.8 MHz Z80.

Move forward 14 years, to 1992.

The bottom end of the *NEW* home computer market had a 1-2 MHz 6502 or a 3.5 MHz Z80, and there was a massive install base of similarly-performing computers.

@ajroach42 Even in the PC world, a 4.77 MHz 8088 runs about as fast as a ~1 to 1.2 MHz 6502 in real world code. It had a lot more memory, of course, but that's beside the point.

A lot of real-world applications needed to run acceptably on that hardware in 1992.

Sure, there was a lot of stuff that barely ran on a 486DX2-66 - the fastest x86 in 1992, roughly 70x faster than that 4.77 MHz 8088 - but DOS applications were expected to run on that 8088 unless they had a damn good reason not to.

@ajroach42 Note that Windows 3.x, from 1990-1992, started to redefine what a bottom-end IBM PC compatible meant.

In 1990, a Turbo XT clone - usually 8-10 MHz - was a perfectly acceptable low-end machine.

By 1992, you really needed a 16 MHz 386SX, because that Turbo XT could not reasonably run Windows 3.0, and couldn't run Windows 3.1 at all. Even a 286 had memory management issues.

Then, Windows 95 really wanted a 486 or better, even though it supported the 386.

@ajroach42 With the move to Windows, the bottom end of the market was forced onto the upgrade treadmill.

(Outside of IBM PCs... the people clinging to 8-bit platforms started having to jump around this point. Apple II users had the Macintosh as an option, Acorn users had the Archimedes and RiscPC as a very natural option, but everyone else's 32-bit successor platforms had died, and those users jumped to the PC or clones.)

@ajroach42 In any case, the move to multitasking GUIs meant that some performance tricks that applications used in the past no longer worked, and the increased complexity of having to learn new ways of doing things meant that optimization took a back seat to figuring out how to do things in a GUI. And, the increased minimum requirements just to run the GUI meant that you had more hardware anyway, so who cares.

@ajroach42 And then, the Internet happened.

Keep in mind that personal computers were certainly a *thing* by the mid-1990s, but they weren't universal.

The Internet was the killer app for personal computers. It made them universal.

So now, in the late 1990s, you had a massive install base of newly sold computers.

In 5 years, your baseline performance moves from a decade-old 4.77 MHz 8088 to a brand new 166 MHz Cyrix MediaGX.

@ajroach42 And, of course, you had some stuff that was just absolutely dreadful for performance. I distinctly recall Java and Shockwave being the performance nightmares that we think of JavaScript being today - this is the mindset of developing for what's coming, not what's available now.

Of course, by 1998, the top end of the x86 market is a 450 MHz Pentium II... and then the GHz war hits.

@ajroach42 Already, the mindset of "CPU and RAM are cheap now, we don't have to save them" has set in hard, but the GHz war just reinforces it, with massive increases in performance in a very short time.

The Pentium III launched in early 1999, at 500 MHz.

One year later, performance had more than doubled, with the 1 GHz model.

In 2000, a low-end machine probably had a 500ish MHz Celeron, performing similarly to that year-old top-end P3.

And, of course, computer adoption is still increasing.

@ajroach42 This means that the baseline keeps getting pushed up higher and higher, following the high end closely.

Even in the Pentium 4 era, low-end machines still got faster, quickly, as the low-end Durons/Semprons and Celerons clocked up and caught up with their high-end Athlon and Pentium counterparts. (We're now in what I consider the modern era, and why I posted this from this account, not @[email protected].)

Core 2 happens in 2006, and pushes the high end up significantly.

@ajroach42 But then, in 2007, netbooks happen, and push the bottom end *DOWN* temporarily - a 900 MHz Dothan Celeron M or a 1.6 GHz Atom is not a fast CPU, but netbooks are now using them, and things need to be performant on them.

Developers eventually did respond, at least for some things, and I'd argue that this is what creates the plateau. It's not the stagnation in performance from 2011 to the present; it's the stagnation in minimum reasonable requirements from 2008 to 2015 or so that creates it.

@ajroach42 However, the first generation of netbooks died out around 2010-2011.

By about 2012, Windows XP starts dying, and with it, IE 6. Now, websites can start using ✨ New Web Technologies ✨ (🤮), because they don't have to support IE 6 (or even IE 8) any more. Everything still seems fine, though, as tablets are taking off, and they're using pretty weak ARM chips, or occasionally recycled netbook Atoms.

...but then tablets get fast.

@ajroach42 And, "move fast, break things" became a mindset, so people stopped targeting low-end old devices, they focused on the latest and greatest.

*This* is why performance is so shit nowadays.

In 1992, people targeted old low-end hardware.

In 2018, you're lucky if people target something other than the new MacBook Pro that they're developing it on, and their new iPad.

@ajroach42 So, I want to touch on a couple of other points while I'm at it.

When Acorn died, the pipeline of new RISC OS hardware utterly stalled; the fastest hardware you could buy was a 233 MHz StrongARM, throttled by a 16 MHz 32-bit-wide bus. There eventually was new hardware made, but with disruptive compatibility issues, so the old hardware still had a huge userbase.

This meant that commercial software development for the next decade had to consider the old hardware.

@ajroach42 There was an unwritten rule that new software, up to about *2008*, had to run on a 32 MHz ARM7, and it had to run reasonably on a 202 MHz StrongARM. (There were a few things that targeted the 600 MHz XScale-based Iyonix, and only barely ran on the StrongARMs, but they were the exception.)

The upshot? On a Raspberry Pi 1 B, RISC OS stuff FUCKING FLIES. It's SNAPPY. And this is despite RISC OS being a kinda mediocre cooperative multitasking OS.

@bhtooefr @ajroach42

At some point there was a change in ARM from 26-bit to 32-bit addressing (I forget exactly when).

RISC OS itself is maybe older than both of you; I remember it being a thing in my last years of high school (1988 to 1990).

More recently I did experiment briefly with it on an RPi (it is indeed very fast), but after 30 years I had quite forgotten how to use the UI (or maybe it works better with a 3-button mouse?), so I went back to Raspbian.

@vfrmedia @ajroach42 Depends on what you call RISC OS - Arthur 1.20 is older than me, RISC OS 2.00 is younger than me.

And the UI is *extremely* dependent on a 3 button mouse, although there's an application that interprets the Windows key as the middle button in the RPi distribution of RISC OS.

And the 26-bit to 32-bit addressing change came with ARMv3 (read: ARM6), but it was a gradual change; 26-bit was still supported.

The problem is that when Thumb came about, ARM reused the 26-bit mode bit for it.

@vfrmedia @ajroach42 StrongARM was the last shipping ARM design that didn't have Thumb (technically, ARM9 could be configured without it, but nobody ever shipped it that way, and ARM9 is basically a clone of StrongARM anyway, IIRC).

So that's where the compatibility break happened, after StrongARM, when some ARM9- and XScale-based machines started coming out in the 2000s. (Acorn never successfully 32-bitted the OS, though it was on the roadmap.)