Ported my #FractalFlame renderer to #OpenCL. Even on #GPU the #DensityEstimation is way too slow to be useful; it's quicker to get the same visual quality by plotting a zillion points.

The attached has around 8k samples per pixel, taking around 1min/frame (1 hour total): 256k subframes for motion blur, each being a single path of 16k iterations. Only plain simple #MoebiusTransformation stuff, without xaos control.
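(Sanity check on those numbers, assuming every iteration lands in frame: 256k subframes × 16k iterations = 2^18 × 2^14 = 2^32 ≈ 4.3 billion plotted points per frame; at ~8k samples per pixel that works out to a ~512k-pixel framebuffer, e.g. 1024×512.)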

I switched to a ulong accumulation buffer with 48.16 linear-light fixed point - it should be safe enough against overflow, and ulong has native atomics instead of needing a cmpxchg loop with uint/float unions.

I need to check whether AMD have implemented OpenCL device fission yet, because without it the renderer hogs my desktop session, making it unusable (interestingly, ssh -X sessions from outside work fine).

I implemented motion blur with a power law weighting time within the shutter-open interval (for a directional effect). The shutter can also be made much longer than the frame time for interesting trail effects.

The video upthread was using this image as a 2D palette:

https://ichef.bbci.co.uk/news/660/cpsprodpb/999F/production/_111472393_hi060858488.jpg

but the auto-white-balance part of the code seems to have made it more purple. The hi-vis is still vaguely visible.

Experiment: set target colour dependent on output coordinates instead of transformation id. Makes sort of a fractal blur of the palette image.

This one is using NASA's April Blue Marble Next Generation
https://visibleearth.nasa.gov/images/74017/april-blue-marble-next-generation/74019l
