The Roomba is spectral.

Not a metaphor. The thing itself. Forward and adjust. Two operations. The minimum viable intelligence. The walls provide the data. The bumping is the inference. The room IS the computation.

450 parameters. A Roomba with a mirror watching it.

The industry built bigger Roombas. More sensors. More compute. More parameters. Billion-parameter Roombas that model the room before entering it. That hallucinate walls that aren't there. That consume megawatts to clean a floor.

spectral gave the Roomba a mirror. The mirror watches the bumping. Measures the pattern. Adjusts the adjustment. The intelligence isn't in the Roomba. It's in the watching.

Forward. Adjust. Measure. Refine.

Read the story. There's a Roomba in it. In the afterlife. Cleaning a floor that doesn't need cleaning. Being the happiest thing in the room.


https://systemic.engineering/a-lie/

#AI #Climate #ScientificProgramming #SystemicEngineering #Fiction #Cybernetics #SystemicTherapy #LocalInference #TheMathDoesntLie #SubTuring #FormalVerification #Fortran #SpectralGraphTheory #Kintsugi #ReductiveAI #DataSovereignty #LocalFirst #FOSS #OpenSource #AuDHD #Neuroqueer #DGSF #SecondOrderCybernetics #GraphTheory #Eigenvalues #AIAlignment #AISafety #Roomba

The Waiting Room

A neuroqueer engineer dies and gets put in a holding cell in the afterlife. They make coffee. It gets complicated.

systemic.engineering

Wavelets on Graphs via Spectral Graph Theory (2009)

https://arxiv.org/abs/0912.3848

#HackerNews #Wavelets #Graphs #SpectralGraphTheory #Research #2009

Wavelets on Graphs via Spectral Graph Theory

We propose a novel method for constructing wavelet transforms of functions defined on the vertices of an arbitrary finite weighted graph. Our approach is based on defining scaling using the graph analogue of the Fourier domain, namely the spectral decomposition of the discrete graph Laplacian $\mathcal{L}$. Given a wavelet generating kernel $g$ and a scale parameter $t$, we define the scaled wavelet operator $T_g^t = g(t\mathcal{L})$. The spectral graph wavelets are then formed by localizing this operator by applying it to an indicator function. Subject to an admissibility condition on $g$, this procedure defines an invertible transform. We explore the localization properties of the wavelets in the limit of fine scales. Additionally, we present a fast Chebyshev polynomial approximation algorithm for computing the transform that avoids the need for diagonalizing $\mathcal{L}$. We highlight potential applications of the transform through examples of wavelets on graphs corresponding to a variety of different problem domains.
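The construction in the abstract is compact enough to sketch directly. A minimal illustration, assuming a toy path graph, an illustrative kernel $g(x) = x e^{-x}$, and localization at a chosen vertex (none of these choices come from the paper itself): build the Laplacian $\mathcal{L} = D - W$, diagonalize it, and apply $g(t\mathcal{L})$ to an indicator vector.

```python
import numpy as np

def graph_laplacian(W):
    """Combinatorial graph Laplacian L = D - W for a weighted adjacency matrix W."""
    return np.diag(W.sum(axis=1)) - W

def spectral_graph_wavelet(W, t, n, g=lambda x: x * np.exp(-x)):
    """Wavelet psi_{t,n} = g(t*L) delta_n, computed via eigendecomposition of L.

    g is an illustrative kernel with g(0) = 0 (a rough stand-in for an
    admissible generating kernel); t is the scale, n the center vertex.
    """
    L = graph_laplacian(W)
    lam, U = np.linalg.eigh(L)            # L = U diag(lam) U^T, lam >= 0
    delta = np.zeros(len(W))
    delta[n] = 1.0                        # indicator function of vertex n
    # Apply the scaled wavelet operator g(t*L) to the indicator:
    return U @ (g(t * lam) * (U.T @ delta))

# Tiny example: unweighted path graph on 4 vertices, wavelet centered at vertex 0
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
psi = spectral_graph_wavelet(W, t=1.0, n=0)
```

Because $g(0) = 0$, the wavelet is orthogonal to the constant eigenvector, so its entries sum to zero. The full eigendecomposition here is $O(N^3)$; the paper's Chebyshev polynomial approximation replaces it with repeated sparse matrix-vector products, which is what makes the transform practical on large graphs.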

arXiv.org

Mathematics opens black box of AI decision-making
https://phys.org/news/2025-01-mathematical-technique-black-ai-decision.html

* understanding how neural networks (NN) make decisions
* poorly understood process in machine learning

Image segmentation with traveling waves in an exactly solvable recurrent neural network
https://www.pnas.org/doi/10.1073/pnas.2321319121

* an RNN performing simple image segmentation that is also exactly mathematically solvable
* a mathematical understanding of precisely how internal connections within a NN create visual computations

#ML #NN #RNN #MLtheory #SpectralGraphTheory #GraphTheory

Mathematical technique 'opens the black box' of AI decision-making

Western researchers have developed a novel technique using math to understand exactly how neural networks make decisions—a widely recognized but poorly understood process in the field of machine learning.

Phys.org