Ken McLeod

@_the_cloud
246 Followers
168 Following
1.5K Posts
Retired software engineer. ☁️ @ 
"That's a lot of old computers."
GlobalTalk: Cloudbusting
GitHub: https://github.com/thecloudexpanse
Pie: Rhubarb

Bring back naming folders with “ƒ”!!

#VintageApple #RetroComputing

What is your favorite #BSD currently?

FreeBSD: 52.9%
OpenBSD: 39.1%
NetBSD: 6.1%
DragonflyBSD: 0.4%
Other (Comment): 1.5%

Minor update: support for LZW compression has been added, so those StuffIt archives now take up less space on disk.
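For the curious: the gist of LZW fits in a few lines. A toy sketch of the compressor side (nothing like the code in the repo, just the idea):

```python
def lzw_compress(data: bytes) -> list[int]:
    """Toy LZW: emit a code whenever the current run falls out of the dictionary."""
    table = {bytes([i]): i for i in range(256)}  # seed with all single bytes
    next_code = 256
    run = b""
    out = []
    for byte in data:
        candidate = run + bytes([byte])
        if candidate in table:
            run = candidate               # keep extending the current match
        else:
            out.append(table[run])        # emit the longest known run...
            table[candidate] = next_code  # ...and learn the new sequence
            next_code += 1
            run = bytes([byte])
    if run:
        out.append(table[run])
    return out

print(lzw_compress(b"TOBEORNOTTOBEORTOBEORNOT"))
```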

https://github.com/thecloudexpanse/sit

#retrocomputing #VintageApple #VintageMac #VintageMacintosh

Do you have a very old computer hooked up to the internet, and want to browse the web like it's 1996 again? Normally this is practically impossible: a 1996 browser will choke on modern HTML, CSS, and JavaScript, and almost everything is encrypted with TLS 1.3 these days.

Enter the HTTP proxy server that solves both problems: Macproxy Classic.
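The shape of the idea, as a minimal sketch (this is not Macproxy's actual code, just the two-problems-one-proxy concept): the vintage browser speaks plain HTTP to the proxy, and the proxy refetches the page over modern TLS and strips out what the old browser can't digest.

```python
import re
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # A browser configured to use a proxy sends the full URL as the request path.
        url = self.path
        if url.startswith("http://"):
            # Refetch over modern TLS on the vintage machine's behalf.
            url = "https://" + url[len("http://"):]
        with urllib.request.urlopen(url) as resp:
            html = resp.read().decode("utf-8", errors="replace")
        # Crude simplification: drop the scripts and styles a 1996 browser chokes on.
        html = re.sub(r"(?is)<(script|style)\b.*?</\1\s*>", "", html)
        body = html.encode("latin-1", errors="replace")
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), ProxyHandler).serve_forever()
```

Point the old browser's HTTP proxy setting at port 8080 of the machine running this, and the old machine never has to speak TLS at all.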

Well, this is in fact nothing new. The original Macproxy was created by Tyler Hicks-Wright in 2013. I created my fork in 2021. Another super cool fork, Macproxy Plus, emerged in 2024.

What is new today is that I combined the best of all forks, touched it up with some bug fixes and improvements, and polished it to a shine.

It's good stuff, I promise!

tarball: https://github.com/rdmark/macproxy_classic/releases/tag/v25.11.1
container image: https://hub.docker.com/r/rdmark/macproxy

#internetHistory #vintageMac

Like, the sense of dread I have that at SOME point I am going to have to 'upgrade' away from the current macOS to a fisher-price bath-soap toybox version that breaks a pile of shit I rely on is just... exhausting.

I hate that capitalism has made constant change a thing companies are obliged to do.

What is causing my problem?

DNS: 71.4%
Cloudflare: 7.1%
Solar flare: 21.4%

Relevant to our interests, it's possible to proxy TLS connections to iCloud Mail (IMAP/SMTP) in order to access it from Mac OS 9. (Cyberdog, here boi!)

https://old.reddit.com/r/VintageApple/comments/1oz2j5i/got_my_icloud_mail_working_on_mac_os_9_and/

#retrocomputing #VintageApple #VintageMac #VintageMacintosh

I declare that today, Nov. 19, 2025, is the 50th anniversary of BitBLT, a routine so fundamental to computer graphics that we don't even think about it having an origin. A working (later optimized) implementation was devised on the Xerox Alto by members of the Smalltalk team. It made it easy to copy and move arbitrary rectangles of bits within a graphical bitmap. It was this routine that made Smalltalk's graphical interface possible. Below is part of a PARC-internal memo detailing it:
BitBLT was implemented in microcode on the Alto and exposed to the end-user as just another assembly language instruction, alongside your regular old Nova instructions -- this is how foundational it was. And since it was an integral part of the Alto, it enabled all sorts of interesting experimentation with graphics: user interfaces and human/computer interaction, font rasterization, laser printing... maybe a game or three...
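Stripped of all the cleverness, the operation itself is tiny. A toy sketch in Python (the real thing worked on packed 16-bit words with shifts and masks, which is where the hard part lived):

```python
def bitblt(src, dst, sx, sy, dx, dy, w, h, op=lambda s, d: s):
    """Combine a w x h rectangle of bits from src into dst using rule `op`."""
    for y in range(h):
        for x in range(w):
            s = src[sy + y][sx + x]
            d = dst[dy + y][dx + x]
            dst[dy + y][dx + x] = op(s, d)

# The combining rule is what made it so flexible: replace, paint, erase, invert...
xor = lambda s, d: s ^ d  # the classic rule for cursors and rubber-banding

src = [[1, 1], [1, 1]]
dst = [[0] * 4 for _ in range(4)]
bitblt(src, dst, 0, 0, 1, 1, 2, 2, op=xor)
```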

@fvzappa

Do modern GPUs still do blitting?

@argv_minus_one @fvzappa Apparently GPUs themselves do a lot of fast memory block copies via DMA, (kind of like blitting without the XOR operations), but use shader programs to do what blitter hardware used to do for small memory areas on a pixel-by-pixel scale.

@sleet01 @argv_minus_one @fvzappa

> Apparently GPUs themselves do a lot of fast memory block copies via DMA (…) shaders

*me grimacing my face*… kind-of… sort-of…

Okay, first things first: GPUs still do have dedicated hardware that also enables bit blitting. Specifically the part of the raster engine that's responsible for resolving antialiased frame buffers. Graphics APIs still expose this with functions carrying 'blit' in their name:

https://registry.khronos.org/OpenGL-Refpages/gl4/html/glBlitFramebuffer.xhtml

https://docs.vulkan.org/refpages/latest/refpages/source/vkCmdBlitImage.html

@sleet01 @argv_minus_one @fvzappa

Second: There are certain aspects of blitting operations that are outside the scope of shaders. Specifically, raster logic operations munge source and destination values. If you wanted to implement that in a shader, you'd have to feed the destination buffer back into the shader as a source, which technically can be done, but is slooooow.

So for things like alpha blending and ROPs, those are done through the raster engine, which is also a blit engine.

@datenwolf @argv_minus_one @fvzappa Apologies, I wasn't _trying_ to invoke Cunningham's Law ^_^
I'd totally forgotten about the 2D acceleration stuff, since it's mostly mentioned (now, at least) in the context of GUI acceleration.
Thanks for the corrections!

@sleet01 @argv_minus_one @fvzappa

No worries – GPUs are weird beasts and in places kind of counterintuitive. In a way my whole career is founded on other engineers having misconceptions about GPUs. :-)

Alas, the raster engine isn't merely there for 2D acceleration; it also forms a vital part of 3D rendering. Besides blitting and ROPing, it also implements depth testing and blending.

@datenwolf @argv_minus_one @fvzappa Blending sounds reasonable, but depth testing? Is that because it can compare int values quickly, and the depth is stored as a 2D bitmap? I vaguely recall something like that...

@sleet01 @argv_minus_one @fvzappa

Basically every operation that takes a generated fragment (a tuple of pixel values) and merges it in place into the destination framebuffer pixel. If you did that in a fragment shader you'd build a data-path feedback loop, which gets messy if you have multiple elements in a single draw call hitting the same destination pixels. You can use memory barriers to sort the writes, but this is inefficient.

@sleet01 @argv_minus_one @fvzappa

Depth testing is basically an in-place compare-and-select operation. And if the fragment shader doesn't modify the depth value, depth testing is executed before the fragment shader, potentially saving a lot of compute for rejected fragments.
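In rough pseudo-Python, the per-fragment dance looks like this (every name here is illustrative, not any real API):

```python
def depth_tested_write(x, y, frag_depth, frag_color, depth_buf, color_buf):
    """In-place compare-and-select, as the raster engine does per fragment."""
    if frag_depth < depth_buf[y][x]:   # compare against what's already stored
        depth_buf[y][x] = frag_depth   # select the winner, in place
        color_buf[y][x] = frag_color
    # else: fragment rejected; with early-z this happens before shading at all
```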

@argv_minus_one @fvzappa Modern GPUs have fundamentally different challenges, so the solutions from the original computers don't necessarily help on the new ones.

Old computers were slow to run instructions and had limited memory space, but had ample memory bandwidth, so having one function that can do X different actions with few opcodes saves memory space, and you can spend the extra memory bandwidth on reads and writes because it's there to spare.

Nowadays we have ample space and the cores are extremely fast, but the limiting factor is memory bandwidth. I believe a pixel shader with something like fewer than 100 instructions won't saturate the cores at all; most of the time they're just waiting for memory to arrive. A solution that doubles memory bandwidth consumption doesn't help with that.
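A back-of-envelope version of that argument, with made-up but plausible numbers (none of these figures describe any specific GPU):

```python
alu_throughput = 30e12   # assumed: ~30 trillion shader instructions/sec
bandwidth = 1e12         # assumed: ~1 TB/s of memory bandwidth
bytes_per_pixel = 16     # assumed: read a texel, write an RGBA pixel

pixels_per_sec = bandwidth / bytes_per_pixel
print(alu_throughput / pixels_per_sec)  # ~480 instructions per pixel before
                                        # the ALUs, not memory, become the limit
```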


@nina_kali_nina @dascandy @argv_minus_one @fvzappa
Does anything similar exist in Vulkan?
@brouhaha @nina_kali_nina @dascandy @argv_minus_one @fvzappa Don't know if you saw it, but a Vulkan link was posted in another response: https://chaos.social/@datenwolf/115575595195740316
@pianosaurus
Thanks for bringing that to my attention!

@fvzappa

I believe this is slightly misleading. There wasn't really a canonical microcode for Alto. Each language implemented its own VM in Alto bytecodes. It was a bytecode, so you had 255 instructions (I think 0 was reserved? I might be misremembering) that you'd use to implement the common operations for your language. There were Algol, Smalltalk, and a few other VMs.

The Smalltalk bytecode, which included BitBLT, was documented in the Smalltalk Blue Book.

Mostly unrelated, but meeting Dan Ingalls was probably the time in my life when it's been hardest not to make happy fanboy squee noises.

@david_chisnall The microcode built into the Alto's 1K microcode ROM included the BitBLT routine. The Alto's microcode engine was actually specialized for Nova instruction decoding; the "native" instruction set was an extended Nova ISA. The Smalltalk emulator was bytecode-oriented, as was Mesa. But there was no such thing as "Alto bytecode": the Alto's microcode was implemented in a 32-bit horizontal format that directly controlled the datapaths, ALU, memory (and many special functions).
@david_chisnall The later D-machines (Dolphin, Dorado, Dandelion (Star), etc.) were designed to execute bytecodes efficiently (specifically for Mesa, but Smalltalk also took advantage of this). The Dorado could (in theory) execute 16 million bytecodes/sec, which was pretty impressive in 1979.
@fvzappa I was just reading about this, albeit from the GPU standpoint.
I've never read a document detailing such a fundamental piece of functionality - and it's so concise!

@fvzappa

Dan is a Hero

His work was some of the prior art that helped to break the Cadtrak patent back in the day.

https://threadreaderapp.com/thread/1317930223816429568.html


@fvzappa And the optimizations you mention are a great example of on-the-fly (or JIT) code generation, explained by Raymond Chen in https://devblogs.microsoft.com/oldnewthing/20180209-00/?p=97995

The original paper describing the optimizations was written by Rob Pike, Leo Guibas and Dan Ingalls (Unix and Smalltalk people working together!) and can be found at https://pdos.csail.mit.edu/~rsc/pike84bitblt.pdf
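The flavor of the trick, reduced to a Python toy (the real papers generate actual machine code for the inner loop; this just shows why specializing per raster op beats re-deciding it per pixel):

```python
def make_blit(op):
    """'Compile' a row blitter specialized for one raster op, chosen once."""
    def blit_row(src_row, dst_row):
        for i in range(len(src_row)):   # the hot loop: no per-pixel branching
            dst_row[i] = op(src_row[i], dst_row[i])
    return blit_row

xor_blit = make_blit(lambda s, d: s ^ d)  # one specialized loop per op

row_src, row_dst = [0b1010], [0b0110]
xor_blit(row_src, row_dst)
print(bin(row_dst[0]))  # 0b1100
```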


@ssavitzky perhaps relevant to your interests...

@fvzappa

@fvzappa Note the hand-drawn 0,1,2,3 next to the print opcodes 0,4,8,12. In BCPL the integer and the word pointer were unified. On a 32-bit byte-addressed architecture (not saying this was one, but), incrementing an integer had to address the next _word_ in memory. One solution (possibly the one used here): tag all integers with bits 00 at the bottom, so a +1 increment actually becomes a +4 in machine code (and possibly from other languages that didn't use the BCPL convention).
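One way to model the unification (purely illustrative; not actual BCPL or Alto code):

```python
WORD = 4  # bytes per word on the hypothetical 32-bit machine

def word_to_byte_addr(p: int) -> int:
    """A BCPL integer used as a pointer: low two bits effectively 00."""
    return p * WORD

p = 10
# "+1" on the BCPL side...
q = p + 1
# ...lands one whole word further along in byte-addressed memory:
assert word_to_byte_addr(q) - word_to_byte_addr(p) == 4
```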
@fvzappa im not really a nerdy computer person but how is it that xerox literally developed some of the most important things used in modern computing and no one talks about it? (mostly im talking about GUIs) is it odd? it seems odd to me.
@emily_rugburn @fvzappa I think it's a well known fact in nerdy computer person circles and outside them no-one cares.
@rebolek @fvzappa really. they just seemed so ahead of everyone else. i guess hindsight is 20/20?
@emily_rugburn @fvzappa I don't think they're ahead of everyone else; they're just focused on some niche knowledge. Other groups know fascinating facts about other stuff.
@emily_rugburn @fvzappa They wrote a whole book about it: "Fumbling the Future", by Douglas K Smith and Robert C Alexander (1988). tldr: They were a copier company that made money selling replacement paper and toner and had no idea how to enter new markets. Then Apple ran away with it.
@joshsusser @fvzappa ok finding this book now. thank you!  
@emily_rugburn @fvzappa I don't think it's mentioned in the book, but Xerox did actually sell a few Alto computers. They were sold to the US Senate, and were used for "document processing" IIRC - basically text editing and email. Then there were some later models sold to intelligence organizations for use by analysts. But it wasn't enough to build a business on.
@emily_rugburn @fvzappa If you’re under 30, that topic was talked out before you were born. :)
But you are absolutely correct: Xerox PARC invented a bunch of ideas that they put into the Alto and Star, but they never understood the value of what they had until after Apple was shipping Macs and Lisas. They were lacking in visionaries.
@grumpybozo @fvzappa def over 30 haha i just remember watching pirates of silicon valley and was just *shocked* that they developed the gui and no one really understood the purpose of it
@emily_rugburn @fvzappa At the time, the Xerox slogan was “The Document Company,” and that’s all they thought about for the GUI: a tool to create documents.

@grumpybozo @emily_rugburn

Xerox PARC had visionaries in abundance; what it lacked was upper management that was able to actually do something with it.

@fvzappa @emily_rugburn Right. PARC had technical visionaries but Xerox had no business visionaries. People who were willing to imagine Xerox as more than “The Document Company.”
FWIW, the Star was a great document production device. The secretary for the lab I worked in right out of college used one and her memos were memorable.
@emily_rugburn @fvzappa
If you haven't already, read _Dealers of Lightning_, which gives a wonderful history of computer research at Xerox PARC.
There was also, of course, _Fumbling the Future_, but IMO DoL is a much better book.
@fvzappa Wow, that's pretty cool!!
@fvzappa the fact that these operations worked specifically on *bit* planes (or a single bit map in the case of monochrome displays) is a detail worth mentioning. Storing pixel data in byte- or word-sized chunks became a thing only years later.

@fvzappa Most of that should still reside in the Squeak code base as well, in case people are interested in a "historical" implementation.

Also, Dan is one of the nicest and most modest people I've ever had the pleasure of meeting.

@fvzappa Oh, happy birthday, BitBlt! I was a Smalltalker at Xerox in the early 80s (worked on the Star/1108 VM, too). My favorite magic trick with BitBlt was rotating bitmaps with recursive, masked translations of quadrants, though using it to implement Conway's Game of Life was also cool. Using BitBlt to optimize things was so powerful that it often surpassed every other effort, so "you should have just used BitBlt" became something of a punchline on my team.
@fvzappa IIRC folks from PARC pronounced it "bit blip". Kind of how nuclear engineers said "nucular". Sounds wrong, but is actually a shibboleth to show whether you're part of the tribe or not.
@scottknaster you're saying Homer Simpson was pronouncing nucular like an actual nuclear engineer would?
@fvzappa I shared this with Dan, and he was chuffed :)
@fvzappa I remember us doing a graphics card way back in 1984, so BitBLT was already 'widely' known?
@fvzappa bcpl bad computer programming language