I declare that today, Nov. 19, 2025, is the 50th anniversary of BitBLT, a routine so fundamental to computer graphics that we don't even think about it having an origin. A working (later optimized) implementation was devised on the Xerox Alto by members of the Smalltalk team. It made it easy to copy and move arbitrary rectangles of bits within a graphical bitmap. It was this routine that made Smalltalk's graphical interface possible. Below is part of a PARC-internal memo detailing it:
BitBLT was implemented in microcode on the Alto and exposed to the end-user as just another assembly language instruction, alongside your regular old Nova instructions -- this is how foundational it was. And since it was an integral part of the Alto, it enabled all sorts of interesting experimentation with graphics: user interfaces and human/computer interaction, font rasterization, laser printing... maybe a game or three...

@fvzappa

Do modern GPUs still do blitting?

@argv_minus_one @fvzappa Modern GPUs face fundamentally different constraints, so the solution that fit the original machines doesn't necessarily carry over to the new ones.

Old computers executed instructions slowly and had limited memory space, but ample memory bandwidth. So one function that can perform X different actions with only a few opcodes saves precious code space, and the extra memory reads and writes it costs are affordable because bandwidth is plentiful.

Nowadays we have ample memory and extremely fast cores, but the limiting factor is memory bandwidth. I believe a pixel shader of something like fewer than 100 instructions can't saturate the cores at all, so most of the time they're just waiting for memory to arrive. A solution that doubles memory bandwidth consumption doesn't help with that.

glBlitFramebuffer - OpenGL 4 - docs.gl

@nina_kali_nina @dascandy @argv_minus_one @fvzappa
Does anything similar exist in Vulkan?
@brouhaha @nina_kali_nina @dascandy @argv_minus_one @fvzappa Don't know if you saw it, but a Vulkan link was posted in another response: https://chaos.social/@datenwolf/115575595195740316

@sleet01@fosstodon.org @argv_minus_one@mastodon.sdf.org @fvzappa@mastodon.sdf.org > Apparently GPUs themselves do a lot of fast memory block copies via DMA (…) shaders *me grimacing*… kind of… sort of… Okay, first things first: GPUs still do have dedicated hardware that also enables bit blitting: specifically, the part of the raster engine that's responsible for resolving antialiased framebuffers. Graphics APIs still expose this with functions carrying 'blit' in their name: https://registry.khronos.org/OpenGL-Refpages/gl4/html/glBlitFramebuffer.xhtml https://docs.vulkan.org/refpages/latest/refpages/source/vkCmdBlitImage.html

@pianosaurus
Thanks for bringing that to my attention!