#reactiondiffusion

Chris Keegan @chriskeegan
2026-02-04

A couple of frogs I created using reaction diffusion in TouchDesigner.

Some Bits: Nelson's Linkblog @somebitslinks@tech.lgbt
2025-11-24

Reaction-Diffusion Playground: A class of generative art systems
jasonwebb.github.io/reaction-d
#reactiondiffusion #generative #turing #art #+

Some Bits: Nelson's Linkblog @somebitslinks@tech.lgbt
2025-11-24

Multi-Scale Turing Patterns: Interesting art generated by algorithms
flickr.com/photos/jonathanmcca
#reactiondiffusion #generative #art #+

Karsten Schmidt @toxi@mastodon.thi.ng
2025-11-17

@t36s Thanks, Daniel! The "depth" is just an illusion and comes from a creative approach to visualizing cell ages. It's all one-and-a-half D only, though... 😉 2.5D would in principle be possible too, but somewhat harder to visualize/appreciate the interesting structures forming. Could be an animation or in 3D, handled like in the attached images, but even there a lot of the interesting internal structures often get lost once a certain complexity is reached (see info in alt text)...

Links to the respective projects/workshops:

- github.com/learn-postspectacul
- flickr.com/photos/toxi/albums/

#CellularAutomata #ReactionDiffusion #3D #Visualization #Processing #Generative

2D Cellular Automata visualized as smoothed 3D voxel mesh, rendered in LuxRender (2013). Each vertical slice of this structure is a single generation of the 2D simulation/animation...

2D Reaction diffusion visualized as smoothed 3D voxel mesh, rendered as depth image (2008). Each depth slice of this structure is a single generation of the 2D simulation/animation. The front is the start of the simulation, branches forming recursively over time...

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-06

I wrote up the problem and my planned solution for the Reaction Diffusion Toy's multitasking. Check it out.
github.com/kbob/Reaction_Diffu

Now I just have to translate pseudocode into running code.

(Bumping @lkundrak 'cause I think he likes this stuff.)

🧵 22/N

#ReactionDiffusion #ESP32

I'm trying to understand how the pieces can fit together. I want to keep the data flowing with minimal overhead.
The hard parts are:

* keeping both cores busy running the reaction-diffusion simulation.

* making everything fit: there isn't enough memory.

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

The breakthrough is that I can reduce resolution arbitrarily. I could even draw a tiny 100x100 animation on my already tiny 1.69 inch (43mm diagonal) display. Or scale it to whatever size gives a reasonable frame rate.
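A hypothetical sketch of that size-for-speed trade, using the ~103 clocks/pixel figure from the envelope math in 🧵 14; the 30% driver overhead is an assumption taken from the same posts, not a measured number:

```python
# Sketch: trade grid size for frame rate under a fixed per-pixel budget.
# clocks_per_pixel and overhead are assumptions from the thread, not
# measurements of the actual firmware.
def max_fps(width, height, clocks_per_pixel=103, cores=2, mhz=240, overhead=0.30):
    usable = cores * mhz * 1_000_000 * (1 - overhead)  # clocks/second left for the sim
    return usable / (width * height * clocks_per_pixel)

print(f"100x100: {max_fps(100, 100):.0f} fps")  # a tiny grid has lots of headroom
print(f"240x280: {max_fps(240, 280):.0f} fps")  # full screen is much tighter
```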

So it's not how fast can it run, but how big can it run. 😉

Anyway, maybe soon I'll post my design documentation. Since this is hard (for me), I'm writing it out in great detail before I code.

No eye candy today, sorry.

🧵 21/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

I mentioned upthread in 🧵 14 that it needs to compute one pixel every 100 clocks minus overhead. And I added a lot of overhead with the buffering scheme. And it has to do two 3x3 convolutions every pixel.
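For scale, here is what one of those 3×3 convolutions looks like: a discrete Laplacian, the diffusion term of each chemical. This is a generic pure-Python sketch; the kernel weights are one common choice, not necessarily the project's:

```python
# A 3x3 Laplacian convolution -- the core per-pixel work of a
# reaction-diffusion step (one per chemical, hence two per pixel).
# Weights sum to zero, so a uniform region diffuses nowhere.
LAPLACIAN = [[0.05, 0.2, 0.05],
             [0.2, -1.0, 0.2],
             [0.05, 0.2, 0.05]]

def laplacian(grid, x, y):
    h, w = len(grid), len(grid[0])
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            # wrap at the edges (toroidal grid)
            total += LAPLACIAN[dy + 1][dx + 1] * grid[(y + dy) % h][(x + dx) % w]
    return total
```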

And the ESP32's vector instructions are fine for basic DSP but extremely limited in load/store capabilities.

But enough whining, I had a breakthrough today...

🧵 20/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

Anyway, I've got it all pseudo-coded, and I've got the locking 99% worked out so memory doesn't get recycled too soon and work doesn't get blocked. (Four tasks and one interrupt on two CPU cores)

That just leaves the performance problem. Today I had a breakthrough on that.

🧵 19/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

I've come up with a too-convoluted way to keep 1.1 copies of the simulation data. The simulation grid is divided into horizontal bands, and the two simulation threads work from top to bottom. As they finish reading each band to calculate the next sim step (and the screen driver also finishes with it), they repurpose it as the bottom of the next sim step. I only have to keep about 2.2-2.4 bytes per pixel instead of 4.

But it's insanely complicated.

🧵 18/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

The S3 has 512 KB of internal RAM. (It also has 8 MB of slow PSRAM.) The R-D simulation needs 4 bytes per pixel (4 arrays of uint8_t) or 262.5 KB. And at least 12K of I/O buffer for the screen.
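The budget from the post, spelled out:

```python
# Back-of-envelope RAM budget: a 240x280 grid with 4 one-byte arrays
# per pixel, plus the 12 KB screen I/O buffer, against 512 KB of
# internal RAM (before system overhead claims its share).
W, H = 240, 280
sim_bytes = W * H * 4                  # 4 arrays of uint8_t
io_bytes = 12 * 1024                   # screen I/O buffer
print(sim_bytes / 1024)                # 262.5 (KB)
print((sim_bytes + io_bytes) / 1024)   # 274.5 (KB)
```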

The problem is that the internal RAM is about half used by hardware caches, vectors, ISRs, FreeRTOS core functions, etc. I disabled a bunch of stuff and made it all fit with about 6K free, but that didn't include my app, input drivers, task stacks...

So...

🧵 17/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-10-04

I am still working on the Reaction Diffusion Toy, though I haven't updated the thread in two weeks.

There hasn't been any visible progress (i.e., eye candy) so no strong reason to post. Instead I've been head down working through how to make it fit in available RAM and run in the available time.

I did write some assembler that uses the ESP32S3's vector instructions to implement memcpy, and I know my way around the vector instructions a little.

🧵 16/N

#ReactionDiffusion #ESP32

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-21

No project news today, so here's some more eye candy. This is running on my Mac in Rust/wgpu. Same code as upthread in 🧵 7, just different reaction diffusion parameters and colormap.

These parameters are much less stable, so the simulation is only sped up 5X, not 30X.

🧵 15/N

#ReactionDiffusion #ESP32 #WaveShare

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-20

Basic envelope math: 2 cores × 240 MHz / (240 × 280 pixels × 69 fps) = 103 clocks/pixel.

That's not a lot of time, especially when the ESP32 spends 30% (?) of its time in drivers. Each pixel gets a 3×3 convolution, among other things. The ESP32S3 has vector instructions, but I don't know how to invoke them.
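The envelope math above, spelled out with the post's own numbers:

```python
# Total clocks per second across both cores, divided by pixels
# drawn per second, gives the per-pixel budget.
cores, hz = 2, 240_000_000
pixels_per_frame = 240 * 280
fps = 69
clocks_per_pixel = cores * hz / (pixels_per_frame * fps)
print(int(clocks_per_pixel))  # 103, matching the post's estimate
```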

🧵 14/N

#ReactionDiffusion #ESP32 #WaveShare

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-20

I've spent some time instrumenting, tuning, and yes, debugging the screen update code. Here are the numbers. This is all on a single ESP32S3 core.

I'm thinking I'll keep the LCD code on one core and have the other core process the touch screen. (Touch code is not written yet). I'll split the reaction diffusion simulation between cores, have one core start at the top and the other start at the bottom and meet in the middle.
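A sketch (not the project's code) of that split: two workers share the rows, one sweeping down from the top and one up from the bottom, meeting in the middle. Python threads stand in for the two ESP32 cores here:

```python
# Split a grid's rows between two workers that meet in the middle.
# step_row is whatever per-row work the simulation does.
from threading import Thread

def run_split(height, step_row):
    mid = height // 2
    def top():                                   # rows 0 .. mid-1, top down
        for y in range(mid):
            step_row(y)
    def bottom():                                # rows height-1 .. mid, bottom up
        for y in range(height - 1, mid - 1, -1):
            step_row(y)
    workers = [Thread(target=top), Thread(target=bottom)]
    for t in workers:
        t.start()
    for t in workers:
        t.join()

done = []
run_split(7, done.append)
print(sorted(done))  # every row stepped exactly once
```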

🧵 13/N

#ReactionDiffusion #ESP32 #WaveShare

A macOS terminal window displays the following text.

STRIPE_COUNT = 7
StripeBuffer = 3840 bytes
the_stripe_buffers = 26880 bytes
SPI actual frequency = 80000 KHz

2: 30 frames, 68.9178 fps
3: 69 frames, 68.9157 fps
4: 68 frames, 68.9144 fps
5: 69 frames, 68.9143 fps
6: 69 frames, 68.9137 fps
7: 69 frames, 68.9139 fps
8: 69 frames, 68.9142 fps
9: 69 frames, 68.9225 fps
10: 69 frames, 68.9229 fps
drawing : 1701745 (17.0%)
waiting : 5131363 (51.3%)
flushing : 1369279 (13.7%)
command : 117215 ( 1.2%)
enqueueing: 122845 ( 1.2%)

11: 69 frames, 68.8135 fps
12: 69 frames, 68.9137 fps

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-18

Here's more on the color banding. The right half of each color bar has a 4x4 ordered dither applied. The left half is undithered. (The pink sprite is not dithered.) The dither definitely improves the red, blue, and gray bars; the green is just different. TBD whether I'll have CPU power to apply dither in the final application.

This is a "retina" display at 218 dpi (8.6 pixels/mm). The dither screen should look okay at that size.
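For reference, here is what a 4×4 ordered dither does per pixel, assuming the standard Bayer matrix (the post doesn't say which matrix the project uses). This sketch quantizes an 8-bit channel to 5 bits, as when packing into rgb565:

```python
# Ordered dither with the classic 4x4 Bayer matrix: a position-dependent
# threshold decides whether to round the quantized value up or down, so
# the error averages out over each 4x4 tile instead of banding.
BAYER4 = [[ 0,  8,  2, 10],
          [12,  4, 14,  6],
          [ 3, 11,  1,  9],
          [15,  7, 13,  5]]

def dither5(value, x, y):
    threshold = (BAYER4[y % 4][x % 4] + 0.5) / 16.0  # 0..1
    level = value / 255.0 * 31                       # target 5-bit scale
    q = int(level) + (1 if level - int(level) > threshold else 0)
    return min(q, 31)
```

Averaged over a 4×4 tile, the dithered output approximates the true fractional level, which is why the gradients look smoother.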

🧵 12/N

#ReactionDiffusion #ESP32 #WaveShare

A watch-face-sized LCD display shows a pink rectangle in front of four vertical color bars: red, green, blue, and gray. Each bar is a gradient from bright at the bottom to black at the top. The colors are banded. The banding is more noticeable in some places than others.

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-17

I replaced the LCD library with my own optimized, DMA-driven, interrupt-based, multibuffered thing. The frame rate went from 14 to 68 FPS, which is close to the ESP32S3's SPI limit. So that's good. This is not a hardware sprite; I'm redrawing the whole screen every frame.

I'm using rgb565, so the color banding is obvious. There is no sync; tearing is obvious with some other patterns, though not here.
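The banding comes straight from the bit layout: rgb565 keeps only 5/6/5 of the 8 bits per channel. A small sketch of the packing and of how the lost bits are conventionally re-expanded:

```python
# Pack 8-bit RGB into 16-bit rgb565: truncate to 5/6/5 bits. Everything
# within the same 8-value (or 4-value for green) band collapses to one
# color -- hence the visible banding in smooth gradients.
def pack_rgb565(r, g, b):
    return ((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3)

def unpack_rgb565(p):
    # replicate high bits into the low bits to re-expand to 8 bits
    r = (p >> 11) & 0x1F
    g = (p >> 5) & 0x3F
    b = p & 0x1F
    return ((r << 3) | (r >> 2), (g << 2) | (g >> 4), (b << 3) | (b >> 2))
```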

🧵 11/N

#ReactionDiffusion #ESP32 #WaveShare

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-16

After two solid days of wrestling with toolchains, I can now write to the screen. I feel like this is a major accomplishment. Screen refresh is noticeably slow, though.

I'm using VSCodium, PlatformIO, ESP-IDF, and an ST7789 driver of unknown provenance (probably Waveshare). That's not the first combo I tried, nor the fourth.

🧵 10/N

#ReactionDiffusion #ESP32 #WaveShare #LVGL #VSCodium #PlatformIO

Kernel Bob :progress_pride: @kbob@chaos.social
2025-09-14

I'm reviving this project.

This is the ESP32 board I'm using. It has an ESP32S3, 8 MB PSRAM, 16 MB flash, touch screen, yada³. Nice little kit.
waveshare.com/wiki/ESP32-S3-To

I am trying to decide whether to use a high level toolkit like LVGL or roll my own optimized SPI driver for the display and eke out all the performance I can. I'm leaning toward the latter, because that's what I always do.

🧵 9/N

#ReactionDiffusion #ESP32 #WaveShare #LVGL

Kernel Bob :progress_pride: @kbob@chaos.social
2025-06-26

Karl Sims, of course, is The Man when it comes to reaction diffusion eye candy. Try his RD tool, and click on the Example button a few times to get an idea of the visual effects it can produce.

karlsims.com/rdtool.html

He's also got a great tutorial. I relied heavily on this while writing mine.
karlsims.com/rd.html
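Following the update rule in Karl Sims's tutorial, a minimal pure-Python Gray-Scott step looks like this. The parameter values are typical defaults, not settings from this thread:

```python
# One Gray-Scott reaction-diffusion step on a wrapped (toroidal) grid:
#   u' = u + (Du*lap(u) - u*v*v + feed*(1 - u)) * dt
#   v' = v + (Dv*lap(v) + u*v*v - (kill + feed)*v) * dt
def gray_scott_step(U, V, Du=1.0, Dv=0.5, feed=0.055, kill=0.062, dt=1.0):
    h, w = len(U), len(U[0])
    K = [[0.05, 0.2, 0.05], [0.2, -1.0, 0.2], [0.05, 0.2, 0.05]]
    def lap(G, x, y):  # 3x3 Laplacian convolution with wraparound
        return sum(K[dy + 1][dx + 1] * G[(y + dy) % h][(x + dx) % w]
                   for dy in (-1, 0, 1) for dx in (-1, 0, 1))
    U2 = [[0.0] * w for _ in range(h)]
    V2 = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            u, v = U[y][x], V[y][x]
            uvv = u * v * v  # the reaction term
            U2[y][x] = u + (Du * lap(U, x, y) - uvv + feed * (1 - u)) * dt
            V2[y][x] = v + (Dv * lap(V, x, y) + uvv - (kill + feed) * v) * dt
    return U2, V2
```

Seeding a small patch of V into a field of U and iterating this step is all it takes to get the patterns in the eye candy above.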

🧵 8/N

#ReactionDiffusion #KarlSims

Kernel Bob :progress_pride: @kbob@chaos.social
2025-06-26

It runs!

It's slow though. This demo, sped up for the short-attention-span crowd (that's you), runs at 1800 simulation steps per second. It uses > 60% of one CPU core. I'm certain the ESP32 will be a lot slower.

I believe I can find another factor of two through optimization.

And there's near infinite scope to make it prettier/more interesting.

🧵 7/N

#ReactionDiffusion
