(I'm not sure everything I write in this toot is accurate, so please feel free to correct me)
When doing real-time #AudioProgramming you usually need to implement a function/procedure that runs once for every block of audio samples. Since that function must finish before a hard deadline (or the audio glitches), it must not allocate memory.
As a corollary, languages with automatic memory management are off the table, because you can't trust them not to allocate memory (or pause for garbage collection) during that procedure.
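To make the shape of that callback concrete, here is a minimal sketch in Haskell using a mutable unboxed array from the `array` package (which ships with GHC). The names are illustrative, not any real audio API, and whether GHC really performs zero allocation in code like this is precisely the open question of this toot:

```haskell
import Control.Monad (forM_)
import Data.Array.IO (IOUArray, getElems, newListArray, readArray, writeArray)

-- Hypothetical per-block DSP callback: runs once per buffer of
-- samples and mutates it in place, so it allocates no new buffer
-- of its own.
processBlock :: Int -> IOUArray Int Float -> IO ()
processBlock n buf =
  forM_ [0 .. n - 1] $ \i -> do
    x <- readArray buf i
    writeArray buf i (x * 0.5)  -- e.g. apply a gain in place

main :: IO ()
main = do
  buf <- newListArray (0, 3) [1, 2, 3, 4] :: IO (IOUArray Int Float)
  processBlock 4 buf
  getElems buf >>= print
```

Even though the buffer itself is reused, the runtime may still allocate behind the scenes (closures, the index list, GC bookkeeping), which is exactly why GC'd languages are suspect here.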
Now there's this thing called #LinearTypes , which, as far as I understand, can be used to control how memory is allocated and freed. And #haskell has them.
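For anyone who hasn't seen them: a minimal sketch of what GHC's `LinearTypes` extension actually gives you (GHC 9.0+). A linear arrow `%1 ->` is a promise that the function consumes its argument exactly once, which is the property libraries like linear-base try to exploit for safe manual memory management:

```haskell
{-# LANGUAGE LinearTypes #-}
module Main where

-- The %1 arrow means the pair is consumed exactly once:
-- the function can neither drop it nor duplicate it.
swapOnce :: (a, b) %1 -> (b, a)
swapOnce (x, y) = (y, x)

-- By contrast, this would be rejected by the type checker,
-- because it uses its linear argument twice:
-- dup :: a %1 -> (a, a)
-- dup x = (x, x)

main :: IO ()
main = print (swapOnce (1 :: Int, "audio"))
```

Note that, on its own, this only restricts how values are *used*; it says nothing about when the GC runs.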
So my question is: could one use Haskell with linear types for real-time audio programming? Or does it not offer the guarantees we need w.r.t. control over memory allocations?