god damn this thing is so fucking cool. I've got it hooked up to my drum machine right now, and the FM drum in particular is pretty good at isolating parts of the impulse response sample. I'm using a short sample from the Nier Automata song "Alien Manifestation" to convolve the drum machine and it sounds *amazing*. It's a shame I can't modulate the drum parameters on this machine, or I'd be doing some really wild stuff with this right now.

some small problems with this system:

1. I've had to turn down the sampling rate so I can convolve longer samples. 22050 Hz works out OK for what I've been messing with so far, though, so maybe it's not that big a deal. Longer samples kinda make things muddy anyway.

2. now I want to do multiple convolutions at once and layer things and that's probably not happening on this hardware XD

I'll probably have to switch to an FFT-based system for non-realtime convolution to make this practical for designing dynamic soundtracks for games that can run on a variety of hardware; otherwise, I'll probably have to opt for actually recording my songs and stitching them together by more conventional means.
this thing is also really good at warming up my laptop XD
idk if I'm done playing around with this prototype yet, but I'd like to explore granular synthesis a bit soon. I think there's probably a lot of cool ways it can be combined with convolution, like having the kernel morph over time.
probably first is reworking this program so I can change out the convolution kernel without restarting it, or at least so I don't have to recompile it each time
anyways, I highly recommend building your own bespoke audio synthesis pipeline from scratch; it's a lot of fun
It occurred to me just now that I might be able to make this faster by rewriting it as a pixel shader. Each pixel in the output is an audio sample. Each PS thread reads a sample from the impulse response and a sample from the audio stream, multiplies them together, and writes out the result. To perform the dot product, the draw is instanced, and the additive blend op is used to combine the results. I've also got a few ideas for variations that might be worthwhile.
Like, having the vertex shader or a mesh shader read the sample from the audio stream, have the PS read the impulse response, and stagger the draw rect. The main snag there is the render target might have to be 512x1 or something like that, or I'll have to do counter swizzling or something.
Also, an FP32 RGBA render target would probably just batch 4 samples together per pixel for the sake of keeping the dimensions lower, I guess.
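
A minimal GLSL sketch of the base idea (everything here is hypothetical: the Tap varying, the buffer names, and the blend setup; it assumes additive blending with ONE/ONE factors, one instance per IR tap, and a vertex shader that forwards gl_InstanceID into Tap):

#version 450
// Each output pixel is one audio sample; each instance contributes one IR tap.
// Additive blending in the ROP performs the summation across instances.
flat in int Tap;                      // instance index, forwarded by the VS
uniform samplerBuffer IR;             // impulse response, one float per texel
uniform samplerBuffer Stream;         // incoming audio, one float per texel
out vec4 OutSample;

void main()
{
    int n = int(gl_FragCoord.x);      // output sample index
    int s = n - Tap;                  // input sample this tap touches
    float a = (s >= 0) ? texelFetch(Stream, s).r : 0.0;
    OutSample = vec4(texelFetch(IR, Tap).r * a);
}
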
I think this is likely to be a lot faster, because I've made a 2D convolution kernel a lot slower by rewriting it as compute in the past 😎 but if any of y'all happen to have inside knowledge on whether IHVs are letting raster ops wither and die because AAA graphics programmers think rasterization is passé now, or something absurd like that, do let me know.

@aeva The actual reason for that was almost certainly memory access patterns. Thread invocations in PS waves are generally launched and packed to have nice memory access patterns (as much as possible); compute waves and invocations launch more or less in order, and micro-managing memory access is _your_ problem.

This really matters for 2D because there's lots of land mines there wrt ordering, but for 1D, not so much.

@aeva To give a concrete example: suppose you're doing some simple compute shader where all you're doing is

cur_pixel = img.load(x, y)
processed = f(cur_pixel, x, y)
img.store(x, y, processed)

and you're dispatching 16x16 thread groups, (x,y) = DispatchThreadID, yada yada, all totally vanilla, right?
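
Spelled out in GLSL, that hypothetical shader is roughly this (f is just a stand-in for whatever the per-pixel work is):

#version 450
layout(local_size_x = 16, local_size_y = 16) in;
layout(rgba8, binding = 0) uniform image2D Img;

// Stand-in for whatever the per-pixel work actually is.
vec4 f(vec4 c, ivec2 p) { return c; }

void main()
{
    ivec2 p = ivec2(gl_GlobalInvocationID.xy);  // (x,y) = DispatchThreadID
    vec4 cur = imageLoad(Img, p);
    imageStore(Img, p, f(cur, p));
}
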

@aeva well, suppose we're working in 32-thread waves internally (totally hypothetical number)

now those 32 invocations get (in the very first thread group) x=0,...,15 for y=0 and then y=1.

Say the image is R8G8B8A8 pixels and the internal image layout stores aligned groups of 4 texels next to each other and then goes to the next y, and the next 4-wide strip of texels is actually stored something like 256 bytes away or whatever.

@aeva so, x=0,..,3 y=0 are all good, these are all adjacent, straight shot, read 16 consecutive bytes, great.

x=0,...,3 y=1 in threads 16..19 are also good, these are the next 16 bytes in memory.

But if we have 256-byte cache lines (another Totally Hypothetical Number), well, those 32 bytes are all we get.

x=4,...,7 for y=0 and 1 are in the cache line at offset 256, x=8,...,11 for y=0,1 at offset 512, x=12,...,15 at offset 768.
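
To make the hypothetical layout concrete, the byte offset of a texel under that scheme would be something like this (assuming 4-wide by 16-tall 256-byte tiles, which matches the numbers above; purely illustrative, not any real GPU's layout):

// RGBA8 texels in hypothetical 4x16 tiles: 16 bytes per tile row, 256 per tile.
uint TexelOffset(uint x, uint y, uint widthInTiles)
{
    uint tile = (y / 16u) * widthInTiles + (x / 4u);
    return tile * 256u + (y % 16u) * 16u + (x % 4u) * 4u;
}
// TexelOffset(0..3, 0, W) = bytes 0..15, TexelOffset(0..3, 1, W) = bytes 16..31,
// TexelOffset(4, 0, W) = 256, TexelOffset(8, 0, W) = 512, TexelOffset(12, 0, W) = 768.
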

@aeva And caches are usually built to have multiple "banks" that each handle a fraction of a cache line. Let's say our hypothetical cache has 16 16-byte banks to cover each 256B cache line.

Well, all the requests we get from that nice sequential load go into the first 2 banks and the rest gets nothing.

So that's lopsided and causes problems, and will often mean you lose a lot of your potential cache bandwidth because you only actually get that if your requests are nicely distributed over mem.
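
With those hypothetical numbers, the bank a request lands in is just its offset within the line divided by the bank width:

// 256B lines, 16 banks of 16B each (the Totally Hypothetical Numbers above).
uint Bank(uint addr) { return (addr % 256u) / 16u; }
// The wave above only touches bytes 0..31 of each of the four lines it hits,
// so every request maps to bank 0 or bank 1 and the other 14 banks sit idle.
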

@aeva long story short, this whole thing with your thread groups being a row-major array of 16x16 pixels can kind of screw you over, if the underlying image layout is Not Like That.

This happens all the time.

Ordering and packing of PS invocations into waves is specifically set up by the GPU vendor to play nice with whatever memory pipeline, caches, and texture/surface layouts it has.

In CS, all of that is Your Job, generally given no information about the real memory layout.

Good luck!

@aeva If you do know what the real memory layout is, you can make sure consecutive invocations have nice memory access patterns, but outside consoles (where you often get those docs), eh, good luck with that.

The good news is that with 1D, this problem doesn't exist, because 1D data is sequential everywhere.

So as long as you're making sure adjacent invocations grab adjacent indices, your memory access patterns are generally fine.

(Once you do strided, you're back in the danger zone.)

@aeva also I want to emphasize that this Purely Hypothetical Example with row-major invocation layout in CS vs. a column-heavy layout in the HW is of course entirely hypothetical and in no way inspired by real events such as https://developer.nvidia.com/blog/optimizing-compute-shaders-for-l2-locality-using-thread-group-id-swizzling/
@rygorous that sounds likely. I don't think I accounted for memory layout of the texture. I assume this is also why Epic seems to be so fond of putting everything in scan line order these days?
@rygorous so, my program as written is two linear memory reads, some basic arithmetic, and some wave ops. I think it should be pretty cache efficient, or at least I don't have any obvious ideas for making it more so. I would think all the extra raster pipeline stuff would not be worth it, but the opportunity to move one of the loads into an earlier shader stage, to effectively make it invariant across the wave, and to use the ROP to implement most of the dot product seems maybe worthwhile?
@rygorous the ROP is, like, free math, right?
@aeva Not really. The "math" may be free, but the queue spots are not, and you'll likely end up waiting longer in the shader to get to emit your output than you would've spent just doing the math directly.
@aeva Looking at the shader you posted yesterday (?) at https://github.com/Aeva/convolver/blob/excelsior/src/convolver.cs.glsl, you're in the Danger Zone(tm)
@aeva the issue is that SliceStart is derived from LaneIndex (the subgroup invocation ID) multiplied by Slice

@aeva I don't know what value Slice has with the sizes you pass in, but it would be really bad if Slice works out to be some medium to large power of 2.

The issue is that the "i" loop goes over sequential samples, but invocation to invocation (which is the dimension that matters), the loads inside are strided to be "Slice" elements apart.

You really want that to be the other way round. Ideally sequential loads between invocations.

@aeva so basically, try making the loop be "for (i = LaneIndex; i < SizeB; i += GROUP_SIZE)" and poof, suddenly those loads are mostly-sequential invocation to invocation instead of always hitting a few dozen cache lines

@aeva separately, you don't want that % SizeA in there. It doesn't have to be bad, but it can be; I don't know how good shader compilers are about optimizing induction variables like that.

might want to keep that as an actual counter and just do (in the modified loop)

j += GROUP_SIZE;
j -= (j >= SizeA) ? SizeA : 0;

(you also need SizeA >= GROUP_SIZE now, but I don't think that changes anything in your case)
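
Put together, the modified inner loop would look something like this (a sketch only: SamplesA/SamplesB and the accumulator are stand-in names, since the exact indexing in the repo may differ):

// Sequential-across-invocations loads plus the incremental wrap.
// Assumes SizeA >= GROUP_SIZE so a single conditional subtract suffices.
float acc = 0.0;
uint j = LaneIndex;                    // tracks i % SizeA without the modulo
for (uint i = LaneIndex; i < SizeB; i += GROUP_SIZE)
{
    acc += SamplesB[i] * SamplesA[j];  // adjacent lanes now read adjacent elements
    j += GROUP_SIZE;
    j -= (j >= SizeA) ? SizeA : 0u;
}
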

@aeva even on a GPU, if you do enough MADs per sample eventually you're going to be compute bound with this approach, but I'd be shocked if you were anywhere close to that right now.

First-order it's going to be all futzing with memory access.

@aeva I mean, you can literally do the math!

If you're on a GPU, then even on a mobile GPU from several years ago, you're in the TFLOP/s range by now for actual math.

So, ballpark 1e12 MADs per second.

48kHz stereo is ballpark 1e5 samples per second.

Math-wise, that means you can in theory do 1e7 MADs per sample, enough for brute-force direct convolution with a >3 minute IR. You're probably not doing that.
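
Spelled out, that back-of-envelope is:

1e12 MADs/s / 1e5 samples/s = 1e7 MADs per output sample
1e7 IR taps / 48000 samples/s ≈ 208 seconds ≈ 3.5 minutes of IR
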

@aeva You can always do better convolution algs, but even for brute-force, the math is just not the problem for IR sizes you're likely using.

But as written in your code, you also have two loads for every MAD, and there's nowhere near that level of load bandwidth available, not even if it's all L1 hits.

Making it sequential across invocations should help noticeably. But beyond that, you'll need to save loads.

@rygorous huh. so is the ideal pattern something like out[0...N] += IR[0] * in[0...N], where the IR[0] is loaded once, and you basically just peel the first MAD for each sample being processed at once, and then do it all again for IR[1] etc. And the += would have to be an atomic add 🤔

@aeva I don't know about ideal, but there is definitely some mileage to be had in loading one of the two into registers/shared memory in blocks, double-buffering the next fetch, and having only one load via L1/tex in the inner loop.
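
A minimal sketch of the blocked version, without the double-buffering refinement (names are stand-ins again, it assumes one output sample per invocation, which may not match the wave-ops structure of the actual shader, and boundary handling is omitted):

shared float IRBlock[GROUP_SIZE];

// Each block: one cooperative load of GROUP_SIZE IR taps into shared memory,
// then the inner loop does a single buffer load per MAD instead of two.
// OutIndex is a stand-in for however invocations map to output samples.
float acc = 0.0;
for (uint base = 0u; base < SizeA; base += GROUP_SIZE)
{
    IRBlock[LaneIndex] = SamplesA[base + LaneIndex];
    barrier();
    for (uint k = 0u; k < GROUP_SIZE; ++k)
        acc += IRBlock[k] * SamplesB[OutIndex - (base + k)];  // out[n] += IR[t] * in[n - t]
    barrier();
}
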

That said the better FFT-based conv kinda nukes that.

Good news: FFT-based conv kinda automatically exploits all the sharing for you!

Bad news: that means you're now down to loading and using each IR FFT coeff exactly once.

@aeva It is work-efficient and gets both your load count and your mul-add count way down, but it also means what's left is kinda BW bound by construction and there's not much you can do about it.

(I mean you can make the block sizes small enough that you're still summing tons of terms again, but that's defeating the purpose.)

@rygorous ok, weird thing happened just now. I gave your suggestion about changing the iteration order another try and did notice the worst-case runs were cheaper while the average was about the same (this is not the weird part), but then I dropped GROUP_SIZE (which sets both the work group size and the required subgroup size) from 32 to 16, and the average time went from 7.7 ms to 6.175 ms and the best recorded time went from 6.8 ms to 1.8 ms.
@aeva this is a Side Question: do you have any idea how much of that is just fixed costs that don't depend on the amount of compute at all? I'm wondering because last time we looked at "should we run our audio processing on Not CPU," the conclusion was a clear nope; latency to dispatch any work in 10 ms batches already kills us. But likely a lot / most (?) of that was tflite not being set up for this type of thing rather than the underlying systems, and we never had time to dig deeper than that.
@halcy I'm seeing round trip times as low as 1 ms with an average of about 6. I'm using a UMA GPU though, which lets me leave memory mapped to both the GPU and CPU. Most of my perf comes down to how much I can juice the compute shader and how far I can bring down the timing variance caused by synchronization scheduler noise. Right now I have to leave 2 ms of headroom or the audio becomes unstable, so my latency is roughly 8 ms.
@halcy the rules are different if you're targeting discrete GPUs, but I'm not sure how much anymore, since bus speeds are fast and we've got stuff like resizable BAR now. Conventional wisdom says readback is a no-no, but I think enough has changed in the last few years to merit more investigation.
@halcy as more useful proof than measured numbers that can be wrong: I've been able to play a live instrument through this.
@aeva @halcy the main problem with readback is usually the way it synchronizes your CPU and GPU, not really that the readback itself is slow (ofc if you're reading something large, that can be slow too). Since GPUs are big async chonkers, if you just naively queue up some work and then wait for it on the CPU, you're likely going to end up waiting for a lot more than you intended. But you can do high priority queues and whatnot these days if you want to juice the latency.
@dotstdy @halcy you do have limited bandwidth for transfers with discrete GPUs, so in theory that can also be a problem depending on how much data you want to read back. I think a lot of the superstition people have about doing any readback at all is a product of bad engine design. Like, you can have everything pipelined perfectly and have multiple readbacks in the middle of the frame; you just need to check a fence to see what data is ready to process, and let it be asynchronous.
@dotstdy @halcy in the case of my audio convolver, the CPU thread that manages all of the vulkan stuff is basically just alternating between waiting for audio to buffer, running a compute dispatch, waiting for the results of the dispatch, bumping an atomic to tell the audio thread what is safe to read now, and then repeating the loop. I can probably shave the latency down a little more there by keeping the GPU hot, but the loop is tight enough that I'm not sure.
@aeva @halcy right, exactly. In an engine, readback often means something like "wait for the downsampled depth buffer from the previous frame before starting to submit work for this frame," which can be a huge issue. Most of the time your CPU and GPU executions are out of sync by a frame or so, and throwing a random sync() in there is disaster. But nowadays you can even have a high-priority compute pipeline that preempts whatever work is currently running (at least, if you can actually access that hw feature...)

@dotstdy @halcy yup. it would be a pain in the ass to retrofit a sufficiently crufty old engine for that, but there's some really cool stuff you could do designing for it.

Also, it's worth pointing out that deep pipelines with full utilization are great if your goal is building a makeshift space heater out of the tears of AAA graphics programmers, but if your goal is juicing latency, frame time, and/or battery life, running the CPU and GPU synchronously can be advantageous.