Do modern GPUs still do blitting?
@argv_minus_one @fvzappa Modern GPUs face fundamentally different constraints, so the solutions from the original computers don't necessarily help on the new ones.
Old computers executed instructions slowly and had limited memory space, but ample memory bandwidth. So having one function that could perform X different actions with a few opcodes saved memory space, and the extra reads and writes cost memory bandwidth, which was plentiful.
Nowadays we have ample memory space and extremely fast cores, but the limiting factor is memory bandwidth. I believe a pixel shader of fewer than about 100 instructions can't come close to saturating the cores; most of the time they're just waiting for memory to arrive. A solution that doubles memory bandwidth consumption makes that worse, not better.
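A quick back-of-envelope roofline estimate shows why. The peak-compute and bandwidth figures below are assumed round numbers for illustration, not any specific GPU's specs:

```python
# Roofline-style estimate: how many flops per pixel a shader would need
# to be compute-bound rather than memory-bound.
# NOTE: FLOPS and BANDWIDTH are assumed illustrative values.
FLOPS = 30e12      # assumed peak compute: 30 TFLOP/s
BANDWIDTH = 1e12   # assumed memory bandwidth: 1 TB/s

# Machine balance: flops available per byte of memory traffic.
balance = FLOPS / BANDWIDTH   # 30 flops/byte

# A simple pixel shader that reads one RGBA8 texel and writes one
# RGBA8 pixel moves 8 bytes per pixel.
bytes_moved = 8
needed_flops = balance * bytes_moved   # 240 flops/pixel to stay busy

shader_instructions = 100  # the ~100-instruction shader from the text
print(needed_flops)                        # 240.0
print(shader_instructions < needed_flops)  # True -> memory-bound
```

Under these assumptions, a ~100-instruction shader does far less arithmetic per byte than the hardware balance requires, so the cores stall on memory; doubling the bytes moved only doubles the deficit.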
@sleet01@fosstodon.org @argv_minus_one@mastodon.sdf.org @fvzappa@mastodon.sdf.org

> Apparently GPUs themselves do a lot of fast memory block copies via DMA (…) shaders

*me grimacing*… kind of… sort of…

Okay, first things first: GPUs still have dedicated hardware that also enables bit blitting, specifically the part of the raster engine responsible for resolving antialiased frame buffers. Graphics APIs still expose this with functions carrying 'blit' in their name:
https://registry.khronos.org/OpenGL-Refpages/gl4/html/glBlitFramebuffer.xhtml
https://docs.vulkan.org/refpages/latest/refpages/source/vkCmdBlitImage.html
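For anyone unfamiliar with the term: a bit blit is essentially a rectangular block copy between two framebuffers. A minimal CPU-side sketch (function name and framebuffer representation are my own, for illustration; real GPU blit hardware also handles format conversion, scaling, and AA resolve):

```python
def blit(dst, src, dst_x, dst_y, src_x, src_y, w, h):
    """Copy a w*h rectangle of pixels from src into dst.

    Framebuffers are represented as lists of rows, each row a
    list of pixel values. Real blitters move whole rows as block
    copies, which is exactly what the slice assignment below does.
    """
    for row in range(h):
        dst[dst_y + row][dst_x:dst_x + w] = \
            src[src_y + row][src_x:src_x + w]

# Usage: copy a 4x4 source into an 8x8 destination at offset (2, 2).
src = [[1] * 4 for _ in range(4)]
dst = [[0] * 8 for _ in range(8)]
blit(dst, src, 2, 2, 0, 0, 4, 4)
```

The point is that each row is one contiguous copy: the operation is pure memory traffic, with no arithmetic per pixel, which is why dedicated copy/DMA hardware handles it better than shader cores.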