working on an idea to livecode Mandelbrot set fractals. using python3 for string concatenation within a bash script that orchestrates programs from my mandelbrot-numerics library written in C

one example so far:

./m.sh '"0100"*5+"1"+("0100"*4+"1")*5' 10 90
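for the record, the quoted argument is just a python expression that expands to a binary address string; roughly what happens before the C tools get called (a sketch — the real plumbing inside m.sh may differ):

```python
# the shell passes the quoted argument to python3, which evaluates it
# as an expression over string repetition and concatenation
expr = '"0100"*5+"1"+("0100"*4+"1")*5'
address = eval(expr)  # eval is fine here: the input is my own livecoded expression
print(address, len(address))
```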

ran into a smol issue: with more complicated expressions the shell sometimes reports "file name too long". guess i need to split into chunks and create a directory tree...
so there's a path length limit as well as a file name length limit, and I hit it. workaround: use hash function sha512sum to make a fixed-length path and hope I don't have any collisions
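the hash trick in a few lines (the layout is my choice, all that matters is determinism). the sha512 hex digest is 128 chars, comfortably under the usual 255-byte file name limit, and splitting off a couple of prefix components keeps any one directory from getting enormous:

```python
import hashlib
import os

def cache_path(expr: str, root: str = "cache") -> str:
    """map an arbitrarily long expression to a fixed-length path.
    sha512 hex digest is 128 chars; fan out on the first two pairs
    so flat directories stay small."""
    h = hashlib.sha512(expr.encode()).hexdigest()
    return os.path.join(root, h[:2], h[2:4], h[4:])
```

collisions are theoretically possible but at 512 bits I'm not going to worry about it.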

once again the bottleneck in my mandelbrot exploration is the O(n^2) asymptotics for tracing external rays: estimated 30mins for period 15000 or so (AMD 2700X CPU), even after using perturbation techniques to reduce the constant factors.
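the quadratic blow-up in concrete numbers (crude cost model, constants ignored: the ray step at depth k costs ~k iterations of the polynomial):

```python
def ray_work(n: int) -> int:
    # total iteration count for tracing a ray down to period n:
    # sum_{k=1}^{n} k = n*(n+1)/2, i.e. O(n^2)
    return n * (n + 1) // 2

# doubling the period roughly quadruples the time
print(ray_work(15000) / ray_work(7500))
```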

the work is entirely sequential, no parallelism opportunities other than tracing multiple rays, which isn't always what I want to do

made a new component for the system in 165 lines of C using OpenGL via SDL2 and glew libraries: an image viewer controlled by text commands on standard input. two commands so far:

0 foo.png

load foo.png into image slot 0

0

display image slot 0

the image dimensions and number of available slots are set as startup arguments. loading pipes raw rgb data from the imagemagick convert program, which resizes to the buffer size. the limit on the number of slots is at least 256 (opengl array texture).
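the protocol is simple enough that the controlling side fits in a couple of lines; a sketch of a parser for the two commands (function name is mine — the actual loading shells out to imagemagick, something like `convert foo.png -resize WxH! rgb:-`, details elided):

```python
def parse_command(line: str):
    """parse viewer stdin commands:
       "N path" -> ("load", N, path)
       "N"      -> ("show", N)"""
    parts = line.strip().split(maxsplit=1)
    slot = int(parts[0])
    if len(parts) == 2:
        return ("load", slot, parts[1])
    return ("show", slot)
```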

this is much more performant and useful than my previous method, which involved setting xfce4 desktop background, which decoded the png each frame (possibly multiplied by number of workspaces).

tried implementing tempo/beat tracking from scratch but failed miserably, so now I'm using libaubio and can report success, after figuring out how to use it (hopefully) correctly w.r.t. overlapped blocks, which is not at all clear from the documentation.

aubio gives me timestamp of last beat and tempo, from which I can construct a beat phasor, which I can then multiply or subdivide at will. the overall phase offset of the result doesn't align with musical sections (and the detected beat alignment is often off too), so I plan to add a manual button I can hit to say "this is the start of a section, phase should be 0 now".
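the phasor construction itself is tiny; a sketch (names are mine, aubio just supplies the last-beat timestamp and the bpm):

```python
def beat_phasor(t: float, last_beat: float, bpm: float,
                mult: float = 1.0, offset: float = 0.0) -> float:
    """phase in [0,1): elapsed beats since the last detected beat,
    scaled by mult (to multiply/subdivide the beat) and nudged by
    the manual section-start offset, wrapped into one cycle."""
    beats = (t - last_beat) * bpm / 60.0
    return (beats * mult + offset) % 1.0
```

hitting the section button would just set offset so the phase reads 0 at that instant.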

currently it runs offline, but should be relatively trivial to port to a live callback-based audio input API (for which I'll probably use SDL2 as I'm already using SDL2 in the image viewer).

then the next step will be hooking the phase into the image viewer I made previously to have animations synchronized to audio.

#aubio #libaubio

merged the viewer with the audio part, now playing images synchronized to audio input:

warning flashing images: https://media.mathr.co.uk/mathr/2026-toot-media/mathr%20-%202026-05-11%20-%20mandeltron%20demo-%201280x720p60.mp4 6MB 1280x720p60 with sound, short seamless loop

I don't have any live-coding features in it at the moment, just a proof of concept.

got the live coding part working again

it's a bit fiddly with too many windows, and the output window is currently tiny.

i have a plan though (terminal with transparent background on top of full size output window)...

also my dsl for controlling the image sequencing (after rendering) is too ad-hoc/tiny, need a byte-beat style thing or maybe something uzu....

embedded barry into the viewer...

https://media.mathr.co.uk/mathr/2026-toot-media/mathr%20-%202026-05-12%20-%20mandeltron%20demo%20-%20960x600p30.mp4 960x600p30 1.6GB 45mins with sound, OBS recording of me live coding the mandelbrot set.

thoughts:

- bytebeat is not the best way to approach animation sequencing, something uzu might be more human-friendly

- the latency from thoughts to visible results is very high (minutes rather than seconds in the case of designing new fractal sequences)

- even though I rendered the fractals at 640x360 with 4 samples/pixel, image quality seems ok

- variety is lacking, maybe i should make more colouring presets?

- want to do embedded Julia sets, but aligning them will require two addresses per location

- want to do Misiurewicz similarity loops, which will require one address but a different approach to rendering animations

- want to do generalized Feigenbaum loops, might need a different rendering technique to peel off the hairs?

- all these complications mean i might be better off interfacing the mandelbrot-numerics stuff in the language i'm using for coding the addresses (e.g. have python call out to processes instead of printing instructions for the wrapper shell script to process)

- for some reason libaubio detected tempo as 115bpm, which was a 3/2 factor out

- I left the mouse cursor in the middle of the screen

#Fractals #MandelbrotSet #LiveCoding

wrote some python to interface with mandelbrot-numerics using subprocess and gmpy2. can now render embedded julia set views quite comfortably (the current toy renderer doesn't support rotation, need to switch to writing toml settings for fraktaler-3)

added rotation support to mandelbrot-graphics m-render program (breaking change, set the extra positional argument to 0 (in degrees) for previous behaviour)

with my mandeltron python library extended with a couple of features, I can now make embedded Julia set loops with a few lines of code

```python
b = 8
for n in range(1, 1 << b, 2):
    render(format((n - 1) // 2, "03d") + ".png",
           ejs(3, T(("011", "100"), (B(b, n), ""))[0] + "01"),
           rgb=0)
```

next step is writing .f3.toml files and piping to fraktaler-3, because the m-render program only does double precision without perturbation, which means it doesn't go deeper than about 10^13 zoom factor before pixelating
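a sketch of what the toml writer might look like — the key names below are placeholders I haven't checked against fraktaler-3's actual settings schema, and the coordinates stay as strings because deep-zoom centres exceed double precision (hence gmpy2 upstream):

```python
def f3_toml(re_str: str, im_str: str, zoom_str: str) -> str:
    # hypothetical key names, not fraktaler-3's real schema;
    # values kept as strings to preserve arbitrary precision
    return (
        "[location]\n"
        f'real = "{re_str}"\n'
        f'imag = "{im_str}"\n'
        f'zoom = "{zoom_str}"\n'
    )
```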

I deleted a bunch of now-redundant things from my wrapper shell script and hooked up the python to fraktaler-3-wip batch mode

conclusion: fraktaler-3-wip is *incredibly* slow for shallow mandelbrot set zooms compared to the CPU-based m-render from mandelbrot-numerics (~0.3 seconds per frame, vs ~1 second total for all 128 frames).

it's almost certainly the overhead of excessive CPU<->GPU back and forth in the tile-based rendering:

- calculate reference and linear approximation tables on CPU
- upload reference orbit and linear approximation tables to OpenCL
- calculate tile on GPU with OpenCL
- download raw image data (per tile) from OpenCL
- upload raw image data (per tile) to OpenGL
- colour tile on GPU with OpenGL
- download colour image data (per tile) from OpenGL
- accumulate tiles into image on CPU
- upload combined colour image to OpenGL for display (per however many tiles completed in one frame)
- save final combined colour image from CPU to PNG

on desktop, valgrind --tool=callgrind reports 40% of total CPU runtime of fraktaler-3-wip in libpng png_write_image()

on phone:
- 16secs to find 128 locations from specifications
- 28secs to convert .f3.toml to m-render arguments with a bash script
- 13secs to render 128 locations to png
- valgrind/termux got killed for using too many resources

m-render spends 13% of CPU time in libpng, via cairo_surface_write_to_png, which uses PNG_INTERLACE_NONE, while fraktaler-3-wip spends 40% of CPU time (indicating less CPU time was used for actually rendering the images) and uses PNG_INTERLACE_ADAM7. recompiling fraktaler-3-wip without ADAM7 made no measurable difference.

meanwhile the PNG compresses the images to around 36% of the uncompressed size, maybe I should try uncompressed images to see if that is faster. most of the time within libpng seems to be inside zlib, so maybe I can tune it to be faster and less compressy

found png_set_compression_level(pngptr, level) and ran some experiments. time is total time to render and save 128 fractal images. a better experiment would subtract the rendering time but I was sloppy.

with zlib level 0, size is basically uncompressed (85MB). zlib level 9 gets down to 34MB, but takes a lot longer.
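libpng's compression is just zlib underneath, so the level/size/speed tradeoff can be previewed with python's zlib module on any blob of pixel-ish data — this is essentially what png_set_compression_level toggles:

```python
import time
import zlib

# ~1MB of mildly repetitive bytes, standing in for raw image rows
data = bytes(range(256)) * 4096

for level in (0, 1, 6, 9):
    t0 = time.perf_counter()
    out = zlib.compress(data, level)
    dt = time.perf_counter() - t0
    print(f"level {level}: {len(out)} bytes, {dt * 1000:.1f} ms")
```

level 0 is stored (slightly bigger than the input, due to block headers), and time generally grows with level.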

conclusion: I should really make png/zlib compression level a user settable option in fraktaler-3...

@mathr I wonder if a custom wavelet-based image codec could be the ideal way to compress renderings of fractals--by working in image space, it would be faster than recomputing the image, but with knowledge of the fractal it could potentially avoid encoding what's redundant across scales