I've recently been working on a dataflow/node graph API for audio processing in #Zig. The initial prototype is operational, but I'm not happy with the API yet. There are also a lot of hacks and assumptions baked in that I'd like to fix.

Now that I understand what's needed to implement the system, I'm trying to figure out what a good API for this looks like. I've mainly used #PureData before, so I'm reading up on #WebAudio and #miniaudio to find useful concepts to extract.

Also, if anyone has experience with these APIs, I'd be more than happy to hear about the good parts and the bad parts of working with them, especially pain points that are easier to avoid early in the design process.