Is this why modern software feels like garbage?

For 20 years, it was reasonable to expect computers to be twice as powerful every two years.

So we built things that sort of worked on modern machines, but would work really well in two years.

And then growth slowed.

@ajroach42 I feel like that's not quite the full story.

This graph shows you what the high end was, and it shows you transistor density, not performance.

But what about the low end?

On the low end, in 1978, when that graph started, the average home computer had a 1 MHz 6502 or a 1.8 MHz Z80.

Move ahead 15 years, to 1992.

The bottom end of the *NEW* home computer market had a 1-2 MHz 6502 or a 3.5 MHz Z80, and there was a massive install base of similarly-performing computers.

@ajroach42 Even in the PC world, a 4.77 MHz 8088 runs about as fast as a ~1 to 1.2 MHz 6502 in real world code. It had a lot more memory, of course, but that's beside the point.

A lot of real-world applications needed to run acceptably on that hardware in 1992.

Sure, there was a lot of stuff that barely ran on a 486DX2/66, the fastest x86 available in 1992 and roughly 70x faster than that 4.77 MHz 8088. But DOS applications were expected to run on that 8088 unless they had a damn good reason not to.

@ajroach42 Note that Windows 3.x, from 1990-1992, started to redefine what a bottom-end IBM PC compatible meant.

In 1990, a Turbo XT clone - usually 8-10 MHz - was a perfectly acceptable low-end machine.

By 1992, you really needed a 16 MHz 386SX, because that Turbo XT could not reasonably run Windows 3.0, and couldn't run Windows 3.1 at all. Even a 286 had memory management issues.

Then, Windows 95 really wanted a 486 or better, even though it supported the 386.

@ajroach42 With the move to Windows, the bottom end of the market was forced onto the upgrade treadmill.

(Outside of IBM PCs... the people clinging to 8-bit platforms started having to jump around this point. Apple II users had the Macintosh as an option, Acorn users had the Archimedes and RiscPC as a very natural option, but everyone else's 32-bit platforms had died and everyone jumped to the PC or clones.)

@ajroach42 In any case, the move to multitasking GUIs meant that some performance tricks that applications used in the past no longer worked, and the increased complexity of having to learn new ways of doing things meant that optimization took a back seat to figuring out how to do things in a GUI. And, the increased minimum requirements just to run the GUI meant that you had more hardware anyway, so who cares.

@bhtooefr except that lots of platforms managed to have multitasking GUIs on 80s class hardware.

It was just harder. GeoWorks Ensemble will run on a 286 without issue.

I used a web browser in System 7 on a Mac Plus last night, and even that was bearable. On an SE/30 it's speedy.

But you’re right, Windows put people on the upgrade wagon.

And if your 10-year-old computer can run all the latest software, why upgrade? Nothing new gets sold, so capitalism encourages the upgrade cycle instead.

@ajroach42 Sure, but there's also the whole ease-of-development vs. performance vs. "this hardware is getting cheaper, just upgrade already" factor, too.

I mean, when you could get a cheap 386SX replacement motherboard for your XT and get many times the performance, and not have to get an EMS board to load a big 1-2-3 spreadsheet, and you get to run Windows reasonably, or GeoWorks quickly? Eventually it comes to a point where supporting the old stuff really is unreasonable.

@bhtooefr that’s a good point, and that’s how we got here. But upgrading isn’t an option for increased performance now, at least for many folks.