@wtrmt I have written two different programs to create slit-scan films from original footage, but so far they have remained at the experimental stage. In terms of processing power and memory requirements, this is even more demanding than the film-to-still process.

But in my opinion, the real problem is the aesthetics. Procedural methods that produce convincing results in a still image do not work well visually in the film-to-film process.

slit scan + original
footage superimposed.

#slitscan

Zebra

Polar-transformed slit-scan photo, Stuttgart, Germany, July 2025

#slitscan #polartransform #computationalart #abstract #urbanphotography #stuttgart #germany

@wtrmt This type of #slitscan is done as a post-process, converting a movie (x/y/time data) into a single image.

First I record the footage: high frame rate (60–240 frames per second) and low or medium resolution (1280x720 or 1920x1080 pixels), using a second-hand iPhone 8 or an X100VI camera.

The effect is created with this open-source software: https://gitlab.com/metagrowing/slitscan It is very computationally intensive.

These four images were created from the same film, using different algorithms.
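The post-process idea can be illustrated in a few lines of NumPy (a minimal sketch of the general technique, not the linked tool's actual code): each frame contributes one pixel column, and stacking those columns turns the video's time axis into the still image's width.

```python
import numpy as np

def slit_scan(frames, slit_x):
    """frames: array of shape (time, height, width, channels);
    slit_x: the column index sampled from every frame."""
    # frames[:, :, slit_x, :] has shape (time, height, channels);
    # transpose so time runs along the output image's width.
    return np.transpose(frames[:, :, slit_x, :], (1, 0, 2))

# Synthetic stand-in for real footage: 60 small RGB frames of noise.
frames = np.random.randint(0, 256, (60, 120, 160, 3), dtype=np.uint8)
image = slit_scan(frames, slit_x=80)
print(image.shape)  # (120, 60, 3): height x time x channels
```

Real variants scan a moving slit, average several columns, or remap the result (e.g. the polar transform above), which is where the heavy computation comes in.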


Three glass cubes containing colored foils are rotating.

#slitscan #photography

A sea anemone, fish swimming by, and red algae. No aliens in this image!

#slitscan #photography

One yellow fish in the aquarium.

#slitscan #photography

Brain Activity

March 2026 - A video (https://doi.org/10.1101/649822) of a brain MRI is slit-scanned with color dispersion and fed to the YOLO object-recognition model. Activations in YOLO's 7th backbone layer modulate alpha transparency and luminance in a 3D render of the slit-scanned video as a volume (width x height x time). The accompanying music is made by injecting embeddings from the CLIP image-description model, run on the resulting video, into the conditioning pathway of Facebook's MusicGen generative music model.
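The modulation step can be sketched like this (a hedged illustration only: the layer-hook code is framework-specific and omitted, and the normalization scheme is an assumption, not the author's pipeline). An activation map from some backbone layer is rescaled to [0, 1] and applied per voxel as both alpha and a luminance gain:

```python
import numpy as np

def modulate(volume, activation):
    """volume: RGB voxels, shape (t, h, w, 3), floats in [0, 1];
    activation: layer output resampled to (t, h, w), arbitrary range."""
    a = activation - activation.min()
    alpha = a / (a.max() + 1e-8)                 # normalize to [0, 1]
    rgb = volume * alpha[..., None]              # luminance scaled by activation
    return np.concatenate([rgb, alpha[..., None]], axis=-1)  # RGBA voxels

volume = np.random.rand(8, 16, 16, 3)            # toy video volume
activation = np.random.randn(8, 16, 16)          # toy activation map
rgba = modulate(volume, activation)
print(rgba.shape)  # (8, 16, 16, 4)
```

Strongly activated regions (where the network "sees" something) stay bright and opaque; the rest of the volume fades toward transparency.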

#slitscan #computationalart #brain #generativeart #deeplearning #abstract #videoart #musicgen #yolo #clip #MRI
Holographic

Slit-scan video of people moving in Amsterdam Centraal railway station in August 2021, rendered as a volume; the transparency of each pixel is determined by activations of a layer of the Ultralytics YOLO model run on the video. The music is generated by injecting activations of a neural layer of OpenAI's CLIP image-description model into Facebook's MusicGen generative model for 3-second chunks of this video.

#generativeart #computationalart #slitscan #videoart #abstractstreet #amsterdamcentraal #amsterdam #netherlands #deeplearningart
Timeslices

People walking in front of Tokyo's Shinagawa station, January 2026.

Video rendered as a width x height x time volume and cut into 100 semitransparent slices.

#videoprocessing #computationalart #slitscan #3d #tokyo #abstractstreet #japan #video #videoart