Pekka Väänänen

@pekkavaa@mastodon.gamedev.place
264 Followers · 160 Following · 326 Posts

Avid reader, computer graphics fan and atmospheric jungle beats enjoyer.

Demoscene: cce/Peisik.

Website: https://30fps.net/

This documentation page on #Blender's internal mesh data structure is really good: https://developer.blender.org/docs/features/objects/mesh/bmesh/ It has very thoughtful comparisons to half-edges.

I stumbled upon it when researching mesh libraries for Python that support N-Gons. Perhaps I'll try reimplementing BMesh myself.
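For flavor, here's a minimal BMesh-style structure in Python, as I understand it from the docs (class and field names are my guesses, not Blender's actual code): each face owns one loop per corner, and each edge keeps a "radial" list of the loops that use it, so N-gons come for free.

```python
class BMVert:
    def __init__(self, co):
        self.co = co          # (x, y, z) position
        self.link_edges = []  # edges touching this vertex

class BMEdge:
    def __init__(self, v1, v2):
        self.verts = (v1, v2)
        v1.link_edges.append(self)
        v2.link_edges.append(self)
        self.link_loops = []  # radial list: one loop per face using this edge

class BMLoop:
    """One face corner: a (vertex, edge) pair belonging to a face."""
    def __init__(self, vert, edge, face):
        self.vert, self.edge, self.face = vert, edge, face
        edge.link_loops.append(self)

class BMFace:
    def __init__(self, verts, edges):
        # any number of corners, so N-gons work naturally
        self.loops = [BMLoop(v, e, self) for v, e in zip(verts, edges)]

class BMesh:
    def __init__(self):
        self.verts, self.edges, self.faces = [], [], []

    def add_vert(self, co):
        v = BMVert(co)
        self.verts.append(v)
        return v

    def add_face(self, verts):
        # find or create each boundary edge, then build the face from loops
        edges = []
        for a, b in zip(verts, verts[1:] + verts[:1]):
            e = next((e for e in a.link_edges if b in e.verts), None)
            if e is None:
                e = BMEdge(a, b)
                self.edges.append(e)
            edges.append(e)
        f = BMFace(verts, edges)
        self.faces.append(f)
        return f
```

Two faces sharing an edge simply show up as two loops in that edge's radial list, which is the part the docs contrast with half-edge structures.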


I compared my implementation of Iterative Online K-Means Clustering to the author's C code (Amber Abernathy) and learned that they (a) used a Sobol sequence for random sampling, and (b) assigned each pixel to the closest color at the end. Result: no more noise :)
The same algorithm implemented in two programming languages. Guess which one is C 😱
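The final cleanup step (b) could look something like this in numpy (a sketch with my own names, not Abernathy's code): once the palette has converged, every pixel is remapped to its nearest palette color, so none of the random-sampling noise survives.

```python
import numpy as np

def assign_nearest(pixels, palette):
    # pixels: (N, 3) RGB rows, palette: (K, 3) palette colors
    px = pixels.astype(float)
    pal = palette.astype(float)
    # squared distance from every pixel to every palette entry
    dist = ((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1)
    return dist.argmin(axis=1)  # palette index per pixel
```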

Measured a 256-shade greyscale ramp through my capture card, with both my laptop and the Nintendo 64 (via RetroTink) as sources. The response is linear in both (phew!), but I had to mess with the capture card's brightness & contrast settings to bring the N64 closer to the input. It still clips the whites too early.

#n64 #n64dev

Today I was scratching my head over why k-means didn't seem to reduce the Mean Squared Error. The clustering seemed fine. Is the error computation broken? How can it be, when it's this simple:

def compute_mse(a, b):
    return np.mean((a - b) ** 2)

Well, arguments 'a' and 'b' were images with 8 bits per channel, so...
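The punchline, spelled out: uint8 arithmetic wraps modulo 256, so both the subtraction and the square silently overflow. Casting to a wider type first gives the real MSE:

```python
import numpy as np

a = np.array([200], dtype=np.uint8)
b = np.array([140], dtype=np.uint8)

# the squared difference 60**2 = 3600 wraps modulo 256 down to 16:
print(np.mean((a - b) ** 2))                    # 16.0 - wrong!
# widen the dtype before subtracting to get the true error:
print(np.mean((a.astype(np.int64) - b) ** 2))   # 3600.0
```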

#numpy

✨ New blog post: "Sharing everything I could understand about gradient noise"

https://blog.pkh.me/p/42-sharing-everything-i-could-understand-about-gradient-noise.html

I had a lot of fun making the WebGL demos, but it took me weeks of work. Boosts are really appreciated if you enjoy it.

#glsl #noise #demomaking #shader #blog #programming

Another example of how mean squared error (MSE) doesn't predict final image quality. I implemented Variance Cut Color Quantization in two ways: on the left with greedy split-plane optimization, and on the right with a few rounds of global k-means, which yields the lower MSE.

I actually prefer the image on the left side since it's more detailed.

The algorithm is pretty neat though. It doesn't need a separate initialization step and gradually adds new clusters. Easy to implement too. I tried adding an extra refinement step at the end but it still left some stray "firefly" pixels. Of course it could be a bug in my implementation too, hehe :)
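The "gradually adds new clusters" part could be sketched like this (my reading of the greedy variant, not the paper's exact code): keep splitting the cluster with the largest squared error along its highest-variance channel until the palette is full.

```python
import numpy as np

def variance_cut_palette(pixels, k):
    """Sketch of greedy variance cut: no separate init step needed,
    clusters are added one split at a time."""
    clusters = [pixels.astype(float)]
    while len(clusters) < k:
        # pick the cluster contributing the most squared error
        sse = [((c - c.mean(axis=0)) ** 2).sum() for c in clusters]
        c = clusters.pop(int(np.argmax(sse)))
        axis = int(c.var(axis=0).argmax())   # channel with the largest variance
        thresh = c[:, axis].mean()           # split plane at the mean
        left = c[c[:, axis] <= thresh]
        right = c[c[:, axis] > thresh]
        if len(left) == 0 or len(right) == 0:
            clusters.append(c)               # degenerate split, give up
            break
        clusters += [left, right]
    # palette = mean color of each cluster
    return np.array([c.mean(axis=0) for c in clusters])
```

A real implementation would also pick the split plane more carefully than "mean along one axis"; this is just the shape of the algorithm.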

I tried Incremental Online K-Means (IOKM) for color quantization based on a 2022 paper, but it seems like it also adds random dithering as a side effect, so it's not really useful for compressing textures.

The paper: https://web.archive.org/web/20240921154745/https://faculty.uca.edu/ecelebi/documents/ESWA_2022.pdf

Left: libimagequant, Right: IOKM.
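For context, the core online k-means update is per-sample (this is the textbook incremental rule, not necessarily the paper's exact IOKM scheme): the nearest center moves toward each sample with a shrinking 1/n learning rate, which is where the dithering-like randomness comes from.

```python
import numpy as np

def online_kmeans_step(centers, counts, sample):
    # find the nearest center and nudge it toward the sample
    i = int(((centers - sample) ** 2).sum(axis=1).argmin())
    counts[i] += 1
    centers[i] += (sample - centers[i]) / counts[i]  # learning rate 1/n
    return i
```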

Working on a color quantization experiment. I have integrated the PIL and exoquant libraries so far. The method PIL uses by default is "fast_octree". The "_km" variants do one round of k-means clustering to fine-tune the colors. It does help with the poorest results, but has no effect on exoquant.
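One round of that k-means fine-tuning could be sketched like this ("_km" is my label from the experiment; the function below is an illustration, not the libraries' code): assign every pixel to its nearest palette entry, then move each entry to the mean of its assigned pixels.

```python
import numpy as np

def kmeans_refine(pixels, palette):
    """One k-means round over an existing palette."""
    px = pixels.astype(float)
    pal = palette.astype(float)  # copy, so the input palette is untouched
    # nearest palette entry per pixel
    dist = ((px[:, None, :] - pal[None, :, :]) ** 2).sum(axis=-1)
    labels = dist.argmin(axis=1)
    # move each entry to the centroid of its pixels
    for i in range(len(pal)):
        members = px[labels == i]
        if len(members):
            pal[i] = members.mean(axis=0)
    return pal
```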