tragic. pygame.display.flip seems to always imply a vsync stall if you aren't using it to create an opengl context, and so solving the input events merging problem is going to be a pain in the ass. it is, however, an ass pain for another night: it is now time to "donkey kong"

EDIT: pygame.display.flip does not imply vsync! I just messed up my throttling logic :D huzzah!

😎
ideally the ordering of those events would be represented in the patch evals but they just happen as they happen. It's plenty responsive with a mouse though. I'll have to try out the touch screen later and see if the problem is basically solved now or not.
ok I can go ape shit with a mouse and it works perfectly and like I mean double kick ape shit, but the dropped events problem still persists with the touch screen ;_;
it's possible that I'm doing something wrong with the handling of touch events and it's something i can fix still, but now i'm wondering if there's a faster way to confirm if "going ape shit" playing virtual instruments is a normal intended use case for touch screen monitors by their manufacturers and the people who wrote the linux infrastructure for them, or if they all were going for more of a "tapping apps" vibe
and by "going ape shit" i just mean gently flutter tapping two fingers rapidly to generate touch events faster than i could with just one finger, such as to trill between two notes or repeatedly play one very quickly. doing that on one mouse button to generate one note repeatedly very fast feels a lot more aggressive
if i do the same on my laptop's touch pad (which linux sees as a mouse) the same thing happens, but if I really go ham on it such that it engages a click with the literal button under the touch pad, then the events all go through just fine. this is why i'm starting to think there's some filtering happening elsewhere
i wonder if there's a normal way to ask linux to let me raw dog the touch screen event stream without any gesture stuff or other filtering, or if this sort of thing isn't allowed for security and brand integrity reasons

someone brought up a really good point, which is that depending on how the touch screen works, it may be fundamentally ambiguous whether or not rapidly alternating adjacent taps can be interpreted as one touch event wiggling vs two separate touch events

easy to verify if that is the case, but damn, that might put the kibosh on one of my diabolical plans

ok great news, that seems to not be the case here. I made little touch indicators draw colors for what they correspond to, and rapid adjacent taps don't share a color.

bad news: when a touch or sometimes a long string of rapid touches is seemingly dropped without explanation, nothing for them shows up in evtest either D: does that mean my touch screen is just not registering them?

side note: if you try to use gnome's F11 shortcut to take a screen recording, it doesn't give you the option to pick which screen, and it just defaults to recording the main screen. i assume this too is security

I want to add a pair of instructions for switching between (0, 1) range and (-1, 1) range. so

u = s * .5 + .5

and

s = u * 2 - 1

what are good short names for these operations?

EDIT: can be up to three words, less than 7 letters each, preferably closer to 4 each

ok thanks to the magic of variable width font technology, "unipolar" squeaks in under the limit despite being 8 letters
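for the record, the two conversions as a tiny sketch ("unipolar" is the name that made the cut; "bipolar" here is just a placeholder name for the inverse, not necessarily what the other tile ended up being called):

```cpp
// the two range conversions from above. "bipolar" is an illustrative
// name for the inverse direction, not a confirmed tile name.
double unipolar(double s) { return s * 0.5 + 0.5; } // (-1, 1) -> (0, 1)
double bipolar(double u)  { return u * 2.0 - 1.0; } // (0, 1) -> (-1, 1)
```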
I've got my undulating noise wall mollytime patch piped into a rings clone running in vcvrack and ho boy that is a nice combo
I gotta figure out how to build some good filter effects

why the hell am i outputting 48 khz

i'm pretty sure i can't even hear 10 khz

i'm using doubles for samples which feels excessive except that std::sinf sounds moldy so that has a reason at least, but i do not know why i picked 48 khz for the sampling rate i probably just copied the number from somewhere but seriously why is 48 khz even a thing what is this audiophile nonsense
look at me im just gonna fill up all my ram with gold plated cables
well, ok I'd have to buffer roughly two days of audio to run out of ram but still it just feels obscene
i'm going to have words with this nyquist guy >:(

i love how a bunch of people were like it's because of nyquist duh and then the two smart people were like these two specific numbers are because of video tape and film

EDIT: honorable mention to the 3rd smart person whose text wall came in after I posted this

I'm going to end up with something vaguely resembling a type system in this language because I'm looking to add reading and writing to buffers so I can do stuff like tape loops and sampling, and having different connection types keeps the operator from accidentally using a sine wave as a file handle.

I've got a sneaking suspicion this is also going to end up tracking metadata about how often code should execute, which is kinda a cool idea.

oOoOoh your favorite language's type system doesn't have a notion of time? how dull and pedestrian
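a minimal sketch of what those tagged connections could look like (all names here are illustrative, not mollytime's actual internals):

```cpp
// hypothetical tagged connection kinds: a signal output can't be
// plugged into a port expecting a buffer handle. a rate tag for "how
// often should this execute" metadata could live alongside the kind.
enum class Kind { Signal, TapeHandle };

struct PortSpec {
    Kind kind;
};

bool can_connect(PortSpec out, PortSpec in) {
    return out.kind == in.kind; // a sine wave can't feed a tape handle
}
```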

oh, so I was thinking about the sampling rate stuff because I want to make tape loops and such, and I figured ok three new tiles: blank tape, tape read, tape write. the blank tape tile provides a handle for the memory buffer to be used with the other two tiles along with the buffer itself.

I thought it would be cute to just provide tiles for different kinds of blank media instead of making you specify the exact buffer size, but I've since abandoned this idea.

bytes per second is 48000 * sizeof(double)

wikipedia says a cassette typically has either 30 minutes or an hour worth of tape, so

48000 * sizeof(double) * 60 * 30

is about 660 MiB (a bit over half a GiB), which is way too much to allocate by default for a tile you can casually throw down without thinking about.

I thought ok, well, maybe something smaller. Floppy? 1.44 MB gets you a few seconds of audio lol.

I have since abandoned this idea, you're just going to have to specify how long the blank tape is.
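the arithmetic above, as compile-time constants (assuming mono double-precision samples at 48 kHz):

```cpp
#include <cstddef>

// back-of-envelope tape sizes at 48 kHz, double samples, mono
constexpr std::size_t kSampleRate     = 48000;
constexpr std::size_t kBytesPerSecond = kSampleRate * sizeof(double); // 384,000
constexpr std::size_t kCassette30Min  = kBytesPerSecond * 60 * 30;    // 691,200,000 (~660 MiB)
constexpr std::size_t kFloppy         = 1'474'560;                    // a 1.44 "MB" floppy
constexpr double kFloppySeconds = double(kFloppy) / kBytesPerSecond;  // ~3.84 s of audio
```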

i made a tape loop #mollytime
i just realized i probably can't afford polyphony unless i really juice the perf of the vm
I guess I could try "multithreading"
is calling std::for_each(std::execution::par, ...) from a realtime thread like stuffing a portable hole inside a bag of holding
come to think of it, when you do a bunch of the same math in a for-loop sometimes godbolt gives you fancier math ops so maybe i should just try it that way first 🤔

I came to the earlier conclusion that mollytime doesn't currently have the headroom needed for polyphony by eyeballing this perf chart I collected the other day: https://mastodon.gamedev.place/@aeva/114894322623350182

My rough estimate from eyeballing the zoomed-out screenshot is that the patch took 12% of the frame cadence to run.

I added a simpler measuring system that just measures the start and end times of that loop and divides the delta by the time since the last start time, and most of my patches are only at 1% load.
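a sketch of that measuring scheme (the timestamps are passed in explicitly here so the logic stays deterministic and testable; the real loop would just hand it clock::now() at each boundary):

```cpp
#include <chrono>

// minimal sketch of the load monitor described above: load is the
// fraction of each loop period spent inside the audio loop, i.e.
// (end - start) / (start - previous start).
struct LoadMonitor {
    using clock = std::chrono::steady_clock;

    clock::time_point last_start{};
    clock::duration period{};   // time between the last two loop starts
    double load = 0.0;          // most recent load estimate, roughly 0..1

    void loop_start(clock::time_point now) {
        if (last_start != clock::time_point{})
            period = now - last_start;
        last_start = now;
    }

    void loop_end(clock::time_point now) {
        if (period.count() > 0)
            load = std::chrono::duration<double>(now - last_start)
                 / std::chrono::duration<double>(period);
    }
};
```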

aeva (@[email protected])

Attached: 4 images 😎

Gamedev Mastodon
so either my math is wrong or the lesson learned here is you shouldn't add instrumentation markers for functions that run in under 100 nanoseconds

hah, I programmatically generated a patch with 1000 oscillators and remarkably that's just a smidge past the limit. my load monitor reports about 99% load, the sound frequently crackles, but you can hear the correct tone (they're all running at 440 hz and their outputs are averaged). It sounds correct with 500 oscillators, and I don't feel like bisecting the real limit right now.
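the stress test roughly amounts to this (a naive standalone sketch, not the actual mollytime VM):

```cpp
#include <cmath>
#include <vector>

// naive sketch of the stress test: n_osc sine oscillators, all at
// 440 hz, with their outputs averaged into one sample per tick.
std::vector<double> render_stress_test(int n_osc, int n_samples,
                                       double sample_rate = 48000.0) {
    const double tau = 6.283185307179586;
    const double step = tau * 440.0 / sample_rate; // phase increment per sample
    std::vector<double> phase(n_osc, 0.0);
    std::vector<double> out;
    out.reserve(n_samples);
    for (int s = 0; s < n_samples; ++s) {
        double sum = 0.0;
        for (int i = 0; i < n_osc; ++i) {
            sum += std::sin(phase[i]);
            phase[i] += step;
        }
        out.push_back(sum / n_osc); // averaging keeps the mix in (-1, 1)
    }
    return out;
}
```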

I think that's adequate proof that mollytime has more than enough headroom to justify implementing polyphony.

hah, if I plug my laptop in, the 1000 oscillators stress test only has a load of 54% and mostly sounds fine (hitches every few seconds)

heh, my laptop is on battery again but at a much higher charge than earlier, and the 1000 oscillators stress test now hits 89% load this time. perf profiling on modern CPUs is actually completely fake but we do it anyway 😌

my ultra spec is my laptop, but only when it is running on mains power in a cold room, and only on runs where intel's magic beans decide to consistently schedule my audio thread on a P core instead of an E core

ok I got a lot of work done on implementing polyphony today! and by that I mean I had a 3 hour nap this afternoon 😌 (and narrowed down the design space a bit during said nap)

implementing it in the VM itself will be pretty straightforward. registers and instructions can be monophonic or polyphonic, and polyphonic instructions just map the same thunk over each lane of their registers. Some tiles will have to be monophonic, and those would get an implicit sum if you don't provide an explicit one.

that much fits pretty naturally in the architecture I already have, and the hard part is expressing it cleanly in the visual syntax.

I think polyphony is going to be an attribute that propagates from input tiles (eg midi note) that can't propagate past monophonic tiles (final output, loop write, as well as a tile for explicitly blocking it from propagating).

I'm going to avoid introducing unordered loops and flow control to the language, at least for now though.

I'm thinking of also having a simple scheduler for allocating events to polyphony lanes, and there's a few different modes I think would be useful. This would be configured per-patch to keep the syntax cleaner.

All of the schemes are some variation on X * Y lanes. X is the number of distinct notes that can be assigned. Y is the number of lanes per note.

For example, the "mandolin" scheduling strat would have X=4, Y=2 by default. The X indices are allocated round robin each time there is a new note. Each repetition of an already-allocated note instead alternates which Y lane receives the event.

This would be handled implicitly by mollytime so your patch doesn't have to do any special handling for the parallelism, you just tell it where you want the XY->mono merge to happen, and optionally merge the Y lanes earlier.

The "prism" scheduling strat is the other XY strat I have in mind right now. The X indices are also allocated round robin like the mandolin strat, but the midi events are copied verbatim to each note's Y lanes. A "prism" tile gives you a unique alpha value for interpolating across the Y lanes. This would be useful for stuff like unison spread, and maybe whatever it is that my friend leonard wants 1000 oscillators for.
I was thinking of having a "piano" mode that is just X=127 Y=1. each midi note gets a dedicated key, no lane repetition. I could have a tile where each lane provides a gate and a note number, and the return value is a gate describing how much sympathetic resonance the lane should receive from the other lanes with compatible frequencies. Basically use it to make the grandest bosendorfer. But that doesn't actually need a special scheduler mode, it can just be "prism" mode.
also the sympathetic resonance tile would work fine with the "mandolin" scheduler
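a hypothetical sketch of the "mandolin" allocation logic as described (the names and API shape here are illustrative, not mollytime's):

```cpp
#include <array>
#include <cstdint>

// "mandolin" scheduler sketch: X note slots allocated round robin,
// Y lanes per slot alternated on repeated notes.
template <int X, int Y>
struct MandolinScheduler {
    std::array<std::int16_t, X> slot_note; // which midi note owns each slot
    std::array<int, X> next_y{};           // which Y lane gets the next repeat
    int next_x = 0;                        // round robin cursor

    MandolinScheduler() { slot_note.fill(-1); }

    // returns the flat lane index (x * Y + y) for a note-on event
    int note_on(std::int16_t note) {
        for (int x = 0; x < X; ++x) {
            if (slot_note[x] == note) {    // repeated note: alternate Y lanes
                int y = next_y[x];
                next_y[x] = (y + 1) % Y;
                return x * Y + y;
            }
        }
        int x = next_x;                    // new note: take (or steal) a slot
        next_x = (next_x + 1) % X;
        slot_note[x] = note;
        next_y[x] = 1 % Y;
        return x * Y + 0;
    }
};
```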

This could probably also be implemented as a single scheduler with X lanes for notes times Y lanes for alternation times Z lanes for spread. I don't have any immediate ideas for patches where I'd want Y and Z to both be >1 though, so mostly it depends on whether a unified approach simplifies anything.

I'm leaning towards just having separate scheduling strats and adding more if I decide they're too limiting, instead of a more complicated monolithic one.

a bizarre consequence of upgrading to Fedora 42 today is all my patches in mollytime are reporting lower CPU usage. i'm guessing this is entirely due to recompiling with a newer version of llvm. I really hope this means it's just faster now and not that it's reordering instructions around my calls to std::chrono::steady_clock::now or something
the 1000 oscillators benchmark (laptop plugged in) is roughly the same as before though so idk

side thought, I've been thinking it would be fun to have inputs for various natural phenomena like the current phase of the moon, what the local tide would be if you lived somewhere with tides, current rainfall, wind speed, etc.

I'm trying to decide how automatic the location stuff should be. I could have a settings page where you just pick what location you want on a map, and that avoids having to weird out players with sketchy permissions requests.

the weather phenomena one is even worse because there's pretty much no good way to do that without requiring an internet connection, and I'd have to find a weather thing I could poll that's bot friendly and not a weird privacy harvesting scheme.

I'm considering maybe making these optional plugins for your optional enjoyment, that way games don't have to ship with this stuff if they don't use it, which they almost certainly won't.

Mollytime patches now can output mono or stereo sound, and they can have any number of generic inputs and outputs that can be routed to hardware ports or other programs.

I've decided that surround sound doesn't exist for now. I've got some ideas for how to handle it, but it's kinda silly to bother right now since I don't have a surround sound system to test it with. I'll probably get some fancy headphones for it later, but it's not a 1.0 feature.

A friend is going to take a shot at a Windows port. Neither of us are familiar with the various Windows audio APIs, so I figure it's best if we just shoot for regular stereo outputs for now and worry about the dynamic i/o ports later. I'm hoping we can have that be a cross platform feature though.
Late last night I was thinking about adding the psmoveapi library to mollytime so I can use my old psvr wands with it, but now this morning I'm wondering if maybe someone else has already wrapped psmoveapi to translate it to midi and if I should just use that instead. I haven't found any such thing though.
anyone down for some drone I made some drone
(remember to turn the sound on) #mollytime
I really need to add a "fade out" button

well, in other news, mollytime has several filters now as of this weekend.

this video shows off some of the mind-boggling effects mollytime is capable of now (this is an evolved version of the drum patch I posted a recording of a while back) and I'm very proud of it. though I have to apologize: I forgot to mute my browser, so there's a discord bloop in the recording just before the two minute mark. I don't really feel like doing a retake right now
#mollytime

are you in the mood for some chill eerie slow experimental noise music? great, go find your headphones that have the good bass response, because I made some chill eerie slow experimental noise music #mollytime
@aeva you should make it a big estop red button with a cover you have to flip up with the mouse first
@aeva (you shouldn’t do this but it would be funny)
@aeva I'm curious if a "track position" input would make sense, which sends the number of seconds since the patch started. Then a fade out, if you aren't making the patch live, could simply be "start fading when track position is at 300". Maybe what I'm saying makes no sense
@aeva you gotta ask for the
drone negotiator 🎶
droooone
negotiatooooor 🎶
@aeva your best bet these days is WASAPI
@rygorous not much, WASAPI with you?
@aeva weather you can get from like NOAA but you'll have to process some iirc, otherwise it's licensing hell
@aeva you could maybe try to interface with personal weather stations but idk if they have a public api but worth looking into

@aeva a thought: it might be possible to do a 'global weekly almanac' with some simple vector approximations, yearly seasonal weather patterns, and past observations at various stations, and ship those as part of program updates. weather forecasts like how 1980s supercomputers used to do.

it might not be accurate, and it'll be less accurate the further out you go, but it's something?

@aeva as a privacy-conscious person, I'd be cool with a "choose your location or press this button to automatically find it" approach.

Although, if it's for weather, tides etc, you don't really need individual locations. You could define the points yourself, e.g. one per city or one every x km, and have players pick the point closest to them.

@litemechanist I don't really expect a game to want to use the weather inputs for a variety of reasons, and it's more of interest for my own use. Like if there's a storm blowing through, it would be cool to make music that reacts to it. Or tune in to another part of the world to hear what's going on
@aeva gotcha, gotcha. Do you have a writeup anywhere of what you're building? Sounds kinda neat.
@litemechanist this thread is my dev log for the project. I haven't posted a whitepaper or anything describing my goals beyond the notes above. Short version is I'm building an audio synthesis runtime called "mollytime" for building dynamic sound tracks and interactive media stuff (video games, art installations, etc) based on stuff I learned from playing with modular synthesizers, and this runtime has a visual programming frontend I wrote recently.
@litemechanist right now it's a monolithic program that I'm alternating between making patches with to feel out what it can do and what is missing; and expanding the capabilities of the system. eventually I want the runtime to be a library that can be embedded in other things without requiring the editor frontend. my own personal use case is split between making music, interfacing with modular synths to make more different music, and making games
@litemechanist i'm having so much fun with it though that when the whole thing is further along and I'm more confident in the API/ABI stability, I'd like to polish up the editor, put it on steam and maybe make mobile ports. the editor is built around using a touch screen as the primary mode of input, because it's meant to be a virtual control surface among other things
@litemechanist this is what the editor looks like right now https://mastodon.gamedev.place/@aeva/114946016340774053 (and also one of the more pleasant patches I've made recently)
aeva (@[email protected])

Attached: 1 video i made a tape loop

Gamedev Mastodon

@aeva (oh goodness me there's a whole thread here that I definitely didn't completely miss)

Well this is extremely cool. I appreciate the overview! I'm gonna go and read the whole thread now-

@aeva maybe it's just boosting the clock higher earlier, so in terms of usage % it's lower, but once it's going full speed anyway on the heavy benchmark it doesn't matter
@halcy that would make sense
@aeva obviously not guaranteed, but it's something I've seen in telemetry, where deploying something that takes up more CPU cycles actually ended up *lowering* the CPU usage percent metric, because the CPU ended up running faster on average. quite weird to see at first
@aeva confused that the term voice stealing hasn't come up once yet
@lritter explain
@aeva say a synth has 4 voices for polyphony. midi turns on 4 notes, then a fifth. this is always supposed to succeed, so the voice whose note event is furthest in the past is stolen and reused. the turn-off event for the stolen note is ignored.
@lritter oh yeah. that's what I meant by round robin.

@aeva i have a book recommendation to make.

https://msp.ucsd.edu/techniques/latest/book.pdf

i have this as a paperback

the flow diagrams are all pure data but you don't need to know pd to get it. it's been a great help and inspiration for implementing various dsp algos.

@lritter looks useful. thanks!