@t36s Thanks, Daniel! The "depth" is just an illusion and comes from a creative approach to visualizing cell ages. It's all one-and-a-half D only, though... 🙂 2.5D would in principle be possible too, but somewhat harder to visualize/appreciate the interesting structures forming. Could be an animation or in 3D, handled like in the attached images, but even there a lot of the interesting internal structures often get lost once a certain complexity is reached (see info in alt text)...
Links to the respective projects/workshops:
- https://github.com/learn-postspectacular/sac-workshop-2013
- https://www.flickr.com/photos/toxi/albums/72157604724789091
#CellularAutomata #ReactionDiffusion #3D #Visualization #Processing #Generative
I wrote up the problem and my planned solution for the Reaction Diffusion Toy's multitasking. Check it out.
https://github.com/kbob/Reaction_Diffusion_Toy/blob/7fee5ce827931905a7ccf2f39de4bf4a29c757bb/rdembed/SYNC_NOTES.md
Now I just have to translate pseudocode into running code.
(Bumping @lkundrak 'cause I think he likes this stuff.)
🧵 22/N
The breakthrough is that I can reduce resolution arbitrarily. I could even draw a tiny 100x100 animation on my already tiny 1.69 inch (43mm diagonal) display. Or scale it to whatever size gives a reasonable frame rate.
So it's not how fast it can run, but how big it can run. 🙂
Anyway, maybe soon I'll post my design documentation. Since this is hard (for me), I'm writing it out in great detail before I code.
No eye candy today, sorry.
🧵 21/N
I mentioned upthread in 🧵 14 that it needs to compute one pixel every 100 clocks minus overhead. And I added a lot of overhead with the buffering scheme. And it has to do two 3x3 convolutions every pixel.
And the ESP32's vector instructions are fine for basic DSP but extremely limited in load/store capabilities.
But enough whining, I had a breakthrough today...
🧵 20/N
Anyway, I've got it all pseudo-coded, and I've got the locking 99% worked out so memory doesn't get recycled too soon and work doesn't get blocked. (Four tasks and one interrupt across two CPU cores.)
That just leaves the performance problem. Today I had a breakthrough on that.
🧵 19/N
I've come up with a too-convoluted way to keep 1.1 copies of the simulation data. The simulation grid is divided into horizontal bands, and the two simulation threads work from top to bottom. As they finish reading each band to calculate the next sim step (and the screen driver also finishes with it), they repurpose it to hold the bottom of the next sim step. I only have to keep about 2.2-2.4 bytes per pixel instead of 4.
But it's insanely complicated.
🧵 18/N
The S3 has 512 KB of internal RAM. (It also has 8 MB of slow PSRAM.) The R-D simulation needs 4 bytes per pixel (4 arrays of uint8_t), or 262.5 KB. And at least 12 KB of I/O buffer for the screen.
The problem is that the internal RAM is about half used by hardware caches, vectors, ISRs, FreeRTOS core functions, etc. I disabled a bunch of stuff and made it all fit with about 6K free, but that didn't include my app, input drivers, task stacks...
So...
🧵 17/N