Don't know who needs this, but I recently learned a lot of _stuff_ about YUV/chroma downsampling and how the color metadata in video material works. So I braindumped a big part of that here:

https://github.com/rerun-io/rerun/blob/8dcb2f4eb6339a16aa17cb22efca78ec729ca95e/crates/store/re_video/src/decode/mod.rs#L4
(edit: fixed link. this wasn't a long-lived branch)

maybe this will become a blog post 🤔

@wumpf nice! And yeah, then you wander into HDR territory and everything gets 10x more complicated :/
@aras thanks! yeah that was exactly the vibe I got, will have to go there eventually :/
@wumpf @aras I remember even without going there that YUV itself was a mess, with different companies using the same terminology for different encodings, without any consistency. Not so fun when you need to take frames from one framework (Media Foundation, Microsoft) and process them with a library from another (WebRTC, Google). Not to mention the undocumented alignment requirements 😒
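A concrete illustration of that mess: the exact same YUV bytes decode to different RGB colors depending on which conversion matrix (BT.601 vs BT.709) the metadata claims, or the framework silently assumes. This is a hypothetical sketch using the standard full-range conversion coefficients, not code from any of the libraries mentioned above:

```rust
#[derive(Clone, Copy)]
enum YuvMatrix {
    Bt601, // SD-era matrix, often assumed by older codecs/frameworks
    Bt709, // HD matrix, the usual default for HD video
}

/// Convert one full-range 8-bit YUV sample to RGB (kept as f32, unclamped).
fn yuv_to_rgb(y: f32, u: f32, v: f32, matrix: YuvMatrix) -> (f32, f32, f32) {
    // Center the chroma channels around zero.
    let (u, v) = (u - 128.0, v - 128.0);
    match matrix {
        YuvMatrix::Bt601 => (
            y + 1.402 * v,
            y - 0.344_136 * u - 0.714_136 * v,
            y + 1.772 * u,
        ),
        YuvMatrix::Bt709 => (
            y + 1.5748 * v,
            y - 0.187_32 * u - 0.468_12 * v,
            y + 1.8556 * u,
        ),
    }
}

fn main() {
    // Identical bytes in the frame buffer...
    let (y, u, v) = (128.0, 64.0, 192.0);
    // ...but visibly different colors if the matrix is guessed wrong:
    println!("BT.601: {:?}", yuv_to_rgb(y, u, v, YuvMatrix::Bt601));
    println!("BT.709: {:?}", yuv_to_rgb(y, u, v, YuvMatrix::Bt709));
}
```

If one side produces frames tagged (or implicitly assumed) as BT.601 and the other decodes them as BT.709, you get the classic "slightly off" reds and greens with no error anywhere.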