RE: https://mastodon.social/@stonetoolsblog/116071488450412028

Back in the 1980s, technology was advancing so quickly that every other year seemed to bring new hardware demanding new operating systems and applications, promising never-before-seen capabilities. Today the pace has slowed, if not halted, and it seems worthwhile to go back and figure out what lessons we could learn about solving real problems with a fraction of the storage space or computation speed.

And so I love this blog, where the author makes an honest effort to use these old tools for what they were best at, figure out their strengths and weaknesses, and even gauge how hard it would be to use them seriously in a modern computing environment. The extra context about the history of each software package and the people who made it is the icing on the cake — like if The Digital Antiquarian were about productivity apps rather than games.

@Screwtapello @stonetoolsblog

When I first started working with “big data” I spent some time with papers from the sixties about streaming algorithms (at that time, the stream would run from one tape, through the machine’s kilobytes of core memory, to another tape). The fact that we’d added a lot of zeroes to the I/O bandwidth and memory size since the 1960s didn’t really matter: the size of the data was once more vastly beyond what the hardware could handle, so those old tricks were useful again.
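The thread doesn't name a specific algorithm, but a classic example of the single-pass style those papers describe is Welford's 1962 method for computing mean and variance in one streaming pass with constant memory — the same constraint as a tape-to-tape run through core memory. A minimal Python sketch (the function name and sample data are just illustration):

```python
def welford(stream):
    """One-pass mean and variance in the style of Welford (1962).

    Reads each value exactly once and keeps only O(1) state,
    so it works no matter how large the input stream is.
    """
    count = 0
    mean = 0.0
    m2 = 0.0  # running sum of squared deviations from the mean
    for x in stream:
        count += 1
        delta = x - mean
        mean += delta / count
        m2 += delta * (x - mean)
    variance = m2 / count if count else float("nan")
    return count, mean, variance

# The "stream" can be any iterator (a file, a socket, a tape...);
# here it's a small in-memory list for demonstration.
n, mean, var = welford([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])
# n = 8, mean = 5.0, population variance = 4.0
```

The point is the shape of the loop: no second pass, no buffering the data, just a few scalars updated per item — exactly what a tape-bounded 1960s machine (or a modern job whose data dwarfs RAM) needs.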

@lain_7 @Screwtapello These are precisely the kinds of stories I had hoped to hear when I started the blog. It's a lot of fun reading these first-hand accounts and realizing that the more things change, the more they stay the same.