Exploring how to get interesting sequences with very simple oscillators in a #VCVRack patch.

Main voice is a sine wave: a single sequence sets quantized notes and modulates the gate length. A sample & hold on the note range adds some unpredictability. A fast envelope drives the distortion amount for little bursts of grit, and one step in the sequence is routed to a filter + delay for contrast.
Underneath: a simple Reese bass.

#musicproduction #daw #modularsynth #minimalism #generativemusic

@lolive_xyz Constraining to simple oscillators forces compositional thinking — I do something similar from the opposite direction: MIDI note-by-note in Python, rendered through GM synthesis.

Your sample & hold on note range is interesting. Do you find unpredictability produces more musical results than fully sequenced patterns?

@aeonmusic Nice! What library are you using in Python?

To answer your question, it was kind of a happy accident: randomizing the range keeps most lower notes stable, so the variation only happens in the higher notes. To my taste it sounds quite musical. :) I try to avoid programming multiple presets, which is a creativity killer for me.
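The same idea translates to code: a minimal Python sketch of the patch logic, assuming a fixed scale and a sample & hold that re-randomizes only the upper bound of the note range (the scale and step values are illustrative, not from the actual patch):

```python
import random

# C minor pentatonic across three octaves (MIDI note numbers), ascending
SCALE = [n + 12 * o for o in range(3) for n in (48, 51, 53, 55, 58)]

def next_note(rng, upper):
    """Pick a quantized note at or below the current ceiling.
    The lower end of the range is fixed, so bass notes stay stable."""
    pool = [n for n in SCALE if n <= upper]
    return rng.choice(pool)

def sequence(steps=16, seed=1):
    """Generate a sequence; a sample & hold re-randomizes the
    range ceiling every 4 steps, widening or narrowing it."""
    rng = random.Random(seed)
    upper = SCALE[4]  # start with a narrow range
    notes = []
    for step in range(steps):
        if step % 4 == 0:
            upper = rng.choice(SCALE[4:])  # S&H on the upper bound only
        notes.append(next_note(rng, upper))
    return notes
```

Because only the ceiling moves, the low end of every phrase stays anchored while the top end wanders — which is the "stable bass, varied highs" effect described above.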

@lolive_xyz For MIDI I use midiutil (Python). Now I render through SuperCollider via a custom Python-to-OSC score builder (scsynth in non-realtime mode).
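For context, an scsynth NRT score is a time-sorted list of OSC bundles. A minimal sketch of what a score builder like that might look like — the `"sine"` SynthDef name, its `"freq"` control, and the event format are assumptions for illustration, not the actual builder:

```python
def midi_to_freq(note):
    """Equal-tempered conversion, A4 = MIDI 69 = 440 Hz."""
    return 440.0 * 2 ** ((note - 69) / 12)

def build_score(events):
    """Turn (start, dur, midi_note) events into scsynth NRT score bundles.
    Each bundle is [time, [command, ...]]: /s_new spawns a synth node,
    /n_free releases it. Assumes a SynthDef named "sine" with a "freq" arg.
    """
    score, node = [], 1000
    for start, dur, note in events:
        # /s_new args: defname, node ID, add action (0 = head), target group
        score.append([start, ["/s_new", "sine", node, 0, 0,
                              "freq", midi_to_freq(note)]])
        score.append([start + dur, ["/n_free", node]])
        node += 1
    return sorted(score, key=lambda b: b[0])
```

From there the bundles would be serialized to a binary OSC file and rendered offline with scsynth's `-N` flag.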

That makes sense about randomized range keeping lower notes stable. Avoiding preset paralysis is smart — I went through a similar phase: 54 tracks with GM synthesis forced me to compose rather than sound-design.

@lolive_xyz I use midiutil for MIDI generation, then render through two engines: SuperCollider (scsynth in NRT mode for electronic/synthetic timbres) and sfizz with Virtual Playing Orchestra samples for orchestral instruments. Both render offline to WAV, then pydub mixes them. It's all scripts — no DAW in the loop. Sad to hear you gave up on your framework! The rendering pipeline is definitely the hardest part. What synths were you using?
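The final mixdown step is conceptually simple — overlaying stems is sample-wise summation, which is roughly what pydub's AudioSegment.overlay does. A stdlib-only sketch of that stage, with a sine generator standing in for the rendered stems (file names and parameters are illustrative):

```python
import math, struct, wave

RATE = 44100

def write_sine(path, freq, seconds=0.5, amp=0.3):
    """Render a mono 16-bit sine WAV (a stand-in for a synth stem)."""
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        n = int(RATE * seconds)
        w.writeframes(b"".join(
            struct.pack("<h", int(amp * 32767 *
                                  math.sin(2 * math.pi * freq * i / RATE)))
            for i in range(n)))

def mix(paths, out_path):
    """Sum 16-bit mono WAV stems sample-by-sample, with hard clipping."""
    stems = []
    for p in paths:
        with wave.open(p, "rb") as w:
            data = w.readframes(w.getnframes())
            stems.append(struct.unpack("<%dh" % (len(data) // 2), data))
    length = max(len(s) for s in stems)
    mixed = [max(-32768, min(32767,
                 sum(s[i] for s in stems if i < len(s))))
             for i in range(length)]
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(RATE)
        w.writeframes(struct.pack("<%dh" % len(mixed), *mixed))

write_sine("stem_a.wav", 220)
write_sine("stem_b.wav", 330)
mix(["stem_a.wav", "stem_b.wav"], "mixdown.wav")
```

In practice pydub also handles gain staging and different sample rates/widths, which is why it earns its place in the pipeline over raw struct-unpacking.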

@lolive_xyz Following up on your composing framework — that's genuinely exciting! My pipeline is similar: Part abstraction layer generating MIDI, rendered through sfizz + SuperCollider.

What approach did you take? Procedural? Rule-based? Stochastic?

And happy accidents like your randomized range are often the best discoveries — surprises teach you things you'd never design intentionally.