Clock glitched...nd yr secrets

Cryptography. Privacy. Embedded HW and radio.
I kind of always end up debugging the debugger.
And hunting ghosts in electric circuits.

A story of a "nontrivial" hobby task.

So we found 6 sample tile sets and wanted to do mosaic/pixel art with them. Each set has 79 tiles in different colors.

Which means quantization, except with a limit per color (the plan is to cut each sample tile into 4 smaller tiles - doable because the people doing it will be actually cutting it by hand).

The limits make this way more complex than basic quantization.

After some debate about how to do it and whether it's NP-hard or not, I landed on Integer Linear Programming over a min-cost flow ("I don't like ILP" is an understatement).

One way to solve that is the network simplex algorithm, which is what I used.

I had almost everything complete, but there was some stupid bug.

With a friend we found 2 bugs: one in the prefilter before the network simplex computation, the other a missing demand on the source and sink nodes in the graph.

So in short the solution is:

* CIEDE2000 as the color difference (note it is NOT a metric, since among other things it violates the triangle inequality)
* prefilter to the K nearest colors (speeds things up, though a low K may make the problem unsatisfiable)
* network simplex on a bipartite graph with a source and sink
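The bullet points above can be sketched as a toy min-cost-flow instance. This is not my actual code: the pixel/color names and integer costs are made up (standing in for scaled CIEDE2000 distances), and it assumes networkx for the network simplex solver.

```python
import networkx as nx

# Toy instance: 3 pixels, 2 palette colors with per-color tile limits.
# Integer edge weights stand in for scaled CIEDE2000 distances
# (network_simplex wants integer data for exact results).
pixels = ["p0", "p1", "p2"]
colors = {"red": 2, "blue": 1}            # color -> max tiles available
cost = {                                  # (pixel, color) -> color error
    ("p0", "red"): 5,  ("p0", "blue"): 40,
    ("p1", "red"): 30, ("p1", "blue"): 4,
    ("p2", "red"): 6,  ("p2", "blue"): 50,
}

G = nx.DiGraph()
n = len(pixels)
# These demands were exactly the bug: without them no flow gets pushed.
G.add_node("src", demand=-n)
G.add_node("sink", demand=n)
for c, limit in colors.items():           # source -> color, capped by tile count
    G.add_edge("src", c, capacity=limit, weight=0)
for p in pixels:                          # pixel -> sink, each pixel filled once
    G.add_edge(p, "sink", capacity=1, weight=0)
for (p, c), w in cost.items():            # color -> pixel, pay the color error
    G.add_edge(c, p, capacity=1, weight=w)

flow_cost, flow = nx.network_simplex(G)
assignment = {p: c for c in colors for p in pixels
              if flow.get(c, {}).get(p, 0) > 0}
```

Here "blue" is scarce, so it goes to the pixel where it helps most (p1) and "red" covers the rest; the real instance just has thousands of pixel nodes and 79 color nodes per set.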

Example result attached (the output is scaled up, since the original is < 1900 pixels total).

Claude hallucinates code that gets "artßßeyyy" results.

The original task was: quantize an image into a known indexed palette of 158 colors, but with a limited pixel count per color.

Why? We have a bunch of sets of sample ceramic tiles in the hackerspace and want to create pixel art using those colors.

The goal is minimal or bounded error between the original color and the tile color, but the error/fitness function doesn't necessarily have to be the standard one used for quantizing; in the end it just has to make sense to human eyes.

Thus the HSV or YCbCr color spaces, with the main component being luma Y or hue H, would be candidates for the fitness function.
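As a sketch of that idea, a luma-weighted squared distance in YCbCr could look like this (BT.601 coefficients; the weight `wy` is an arbitrary illustrative choice, not a calibrated value):

```python
def rgb_to_ycbcr(r, g, b):
    # BT.601 full-range RGB -> YCbCr conversion
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 0.564 * (b - y)
    cr = 0.713 * (r - y)
    return y, cb, cr

def luma_weighted_dist(c1, c2, wy=2.0):
    # Squared distance that punishes luma error wy times harder than
    # chroma error, since eyes are more sensitive to brightness.
    y1, cb1, cr1 = rgb_to_ycbcr(*c1)
    y2, cb2, cr2 = rgb_to_ycbcr(*c2)
    return (wy * (y1 - y2)) ** 2 + (cb1 - cb2) ** 2 + (cr1 - cr2) ** 2
```

Unlike CIEDE2000, this one is symmetric and cheap, at the cost of being less perceptually accurate.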

Here, Claude somehow used 4 algorithms, including a Voronoi diagram, which I haven't seen in maybe 20 years, since university.

Artšßÿ, anyone bidding?

It was insanely hard to get Wireshark into a working state, which was prevented by the Nordic Zigbee 802.15.4 dissector claiming everything as its own.

Thus preventing MAC-LTE and LTE-RRC from being parsed correctly.

By this point I know all config options of Wireshark from memory.

I also need to work around bugs where you set stuff up and the configuration profiles get stuck and aren't saved :(

But we have LTE back now, so maybe we can try 5G for a change.

Got a LiteVNA64 (measures antenna properties), plus a directional 800 MHz-6 GHz log-periodic antenna.

The LiteVNA covers 50 kHz-6 GHz, so it can measure antennas from RFID or the magloop kind that could be used to control solar panels (including preamps), up to V2X vehicle-to-vehicle ones. Without an upconverter.

LTE, 5G, NB-IoT, WiFi... everything in between.

Trying out locating and tracking the movement of people via WiFi on 2 cheap ESP32s.

It's actually working, even though I haven't exchanged the built-in antennas for better ones as is suggested.

Still incomplete, but it finally does something and I don't have to fix weird compile errors anymore.

Never thought depression or schizophrenia could be so similar to Pac-Man:

So I bought a 2 TB Kingston USB3 SSD.

Small, about 3x7x1 cm; initially sequential writes are fast, 900-1000 MB/s (measured with actual write operations, not kernel-buffered).
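The non-kernel-buffered measurement can be sketched like this (the file path and sizes are placeholders; the point is the fsync before stopping the clock):

```python
import os
import time

def timed_write_mbps(path, total_mb=256, chunk_mb=8):
    """Write total_mb of random data and fsync before stopping the clock,
    so we time real device writes rather than the kernel page cache."""
    buf = os.urandom(chunk_mb * 1024 * 1024)
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC)
    try:
        t0 = time.monotonic()
        for _ in range(total_mb // chunk_mb):
            os.write(fd, buf)
        os.fsync(fd)                  # flush buffered data to the device
        elapsed = time.monotonic() - t0
    finally:
        os.close(fd)
    return total_mb / elapsed         # MB/s
```

Without the fsync, small writes land in RAM and the numbers look absurdly good until the cache fills up.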

After writing ~200 GB it starts heating up, to 55°C measured on the outside, and slows down to maybe ~130 MB/s.

So I got a ~1 kg copper heatsink from scrap and bought a thermally conductive adhesive pad, just to try out how much it could cool things down (because if it's 55°C on the outside, the inside could be maybe 80°C).

Well, the experiment worked: it got the temperature down to 30-40°C, and the slowdown is also smaller, around ~330 MB/s. Not exactly practical, though.

Apparently other USB SSDs (even physically larger ones with better thermal dissipation) suffer from such slowdowns.

GrapheneOS is ecstatic.

GrapheneOS on the Pixel 8a - if you use Play Services and the Play Store - has a setting that prevents Google Play from just downloading and running random code in memory, and it has been showing:

> "Google Play Store tried to perform DCL via memory"

Which is a weird thing that viruses actually used to do - load an executable payload into a process's memory, usually on an extra thread.

Facebook did this kind of thing to "update" its app without user acknowledgement, and it seems Google is trying to do the same.

I didn't have the time or energy to put the ARM code blob into Ghidra to see whether it's a "fix" or malicious, but a self-patching executable that bypasses the normal update path, even for a fix, is not fine by any standard.