📝 I wrote a thing about color spaces in general and oklab()/oklch() in particular.

https://ericportis.com/posts/2024/okay-color-spaces/

Okay, Color Spaces — ericportis.com

@eeeps An excellent read, thank you.
@mia That means a ton and thank *you,* particularly for some key conversations over the years that helped me see just how provisional and patchy my understanding of all of this stuff was (and is [color is fake]).
SPAAAAAACE! - Portal 2

@knowler lol. I was thinking about https://www.youtube.com/watch?v=reBzU8E_Ajk, but this also works!
PIGS IN SPACE!!!

@eeeps Okay, incredible post! 👏👏👏
@matthiasott Thank you! It took some time (I learned a lot about three.js!) but a fun thing about blogging is there are no deadlines and you don’t have to cut corners.
@eeeps Oh yes! Only on your personal website™!
I love both the interactive parts but also how you use very clear and simple language to explain things that often appear super complex. It’s also wonderful to follow along with what feels very much like your own learning journey. ✨
Huetone • Make colors accessible: use the LCH color space to come up with predictable and accessible color palettes
@matthiasott @ardov No! Those are awesome!! They know *way* more about three.js than me!!! (lol) Thanks for sharing!
@eeeps this is fantastic, thank you for the time to write such a clear explanation of colours! (i loved oklab before, but this is a real clear explanation)
@v So glad you got something from it. Thanks for the kind words!
@eeeps Fantastic post. Thank you for taking the time to write that up.
@spiralganglion Thanks for taking the time to read it! (4,000 words is no joke!)

@eeeps Great blog post! I learned a ton. One question. You wrote:

> An implication of this is that there are many times more photons shooting out of the blue side of that gradient, than there are from the green side, even though every swatch on the gradient has the same apparent lightness. Weird!

Isn't it rather that our eyes have non-uniform perception of the same number of photons at different wavelengths? Or am I missing something?

@trs Of all the statements I make that's one of the ones I'm shakiest on – so I might have things wrong. But I think we're on the same page; if we're 8x more sensitive (or whatever) to that green than that blue, and they have the same apparent lightness, that means there is 8x more light being emitted by the blue swatch. I think. https://en.m.wikipedia.org/wiki/Luminous_efficiency_function
Luminous efficiency function - Wikipedia
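A minimal sketch of the arithmetic in that reply (my own illustration, not from the thread): for monochromatic light, luminous quantities scale as 683 lm/W times the photopic luminosity function V(λ), so two patches with equal luminance at different wavelengths must emit radiant power in inverse proportion to their V(λ) values. The V(λ) readings below are rough, hand-picked values, and `radiant_power_for_luminance` / `photon_rate` are hypothetical helper names for the sketch:

```python
H = 6.626e-34  # Planck's constant, J*s
C = 2.998e8    # speed of light, m/s

def radiant_power_for_luminance(luminance, v_lambda):
    """Radiant power (per unit area/solid angle) needed to reach a given
    photopic luminance: luminance = 683 lm/W * V(lambda) * radiant power."""
    return luminance / (683.0 * v_lambda)

def photon_rate(power_w, wavelength_m):
    """Photons per second carried by `power_w` of monochromatic light."""
    energy_per_photon = H * C / wavelength_m
    return power_w / energy_per_photon

# Approximate readings of the CIE photopic luminosity function V(lambda):
V_GREEN, GREEN_NM = 0.86, 530e-9   # near the peak of V(lambda)
V_BLUE,  BLUE_NM  = 0.06, 460e-9   # well down the blue tail

L = 100.0  # same apparent luminance (cd/m^2) for both patches
p_green = radiant_power_for_luminance(L, V_GREEN)
p_blue  = radiant_power_for_luminance(L, V_BLUE)

print(f"radiant power ratio (blue/green): {p_blue / p_green:.1f}x")
print(f"photon ratio (blue/green): "
      f"{photon_rate(p_blue, BLUE_NM) / photon_rate(p_green, GREEN_NM):.1f}x")
```

With these rough V(λ) values the blue patch needs on the order of ten times the radiant power, and still roughly that many more photons per second even though each blue photon carries more energy, which is the direction of the claim in the post.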

@eeeps thanks for sending me down this color rabbit hole! Excellent stuff.
@mttkng It is *such* a rabbit hole (hence, all the links). Thanks!
@eeeps one of the best articles I've read on color space so far. I'm glad that I came across this article 🍻
@nrk9819 Thanks for the kind words!
@eeeps This is an excellent post!
@keithjgrant Thanks!! It was a fun one to write.
@eeeps I love the interactive examples, especially the one that shows the shape of P3 in Oklab, which is something I’ve been wondering about. It “should” be a distorted cube, but the cross section becomes more triangular towards black. Cool stuff.
@foolip Thanks! And yeah there are many interesting things about its shape; even the basic stuff like “if you want a *really* chromatic dark color in P3 it’ll have to be blue, magenta or red; if you want a chromatic light color it’ll have to be green, cyan or yellow.” Also, @matthiasott just introduced me to https://hueplot.ardov.me/ which lets you take different sorts of cross-sections and compare gamuts by viewing their outlines simultaneously – check it out!
Hueplot • A tool to visually explore the world of color spaces and gamuts
@eeeps @foolip @matthiasott thanks for the tip, that tool is gorgeous!

@eeeps Outstanding, Eric!

I'm an interaction designer prototyping "Google Maps for Color and Light" and better UIs for understanding color and light more generally.

Also incorporating HDR and display-adaptivity into that work, inspired by Rafal Mantiuk's research.

https://www.linkedin.com/company/display-adaptive/
https://www.cl.cam.ac.uk/~rkm38/publications_area.html

Display-Adaptive | LinkedIn | Network of consultants that do applied research and evangelism at the intersection of computer graphics and display tech

@stl8k I’m still just starting to wrap my head around how HDR integrates with all of this. Thanks for the links!

@eeeps very fun read! This caught my eye:

> Worse, the more experiments people did, the clearer it became that *no* three-dimensional space could ever be perceptually uniform

That reminded me of this paper showing that 3D rotations cannot be represented in a continuous 3D space (because 360º wraps to 0), but *can* be represented in a continuous *5D* space. Wouldn't it be excellent if there was a higher-dimensional space out there that captured perceptual uniformity? 🤯

https://arxiv.org/abs/1812.07035

On the Continuity of Rotation Representations in Neural Networks (arXiv.org): shows that all representations of 3D rotations are discontinuous in real Euclidean spaces of four or fewer dimensions, but that continuous representations exist in 5D and 6D.
@jni that’s what all of those color appearance models (like CAM16, which OKLAB is in part derived from) are trying to do: adding all of the viewing conditions and context as extra dimensions. https://en.m.wikipedia.org/wiki/Color_appearance_model
Color appearance model - Wikipedia

@eeeps this page crashes Firefox about once every ¾ minute or so…
@mirabilos hm! I’m not by a computer but I’ll try to replicate later. Thanks for the tip.
@eeeps I tried a private window, but it still crashed, after using about 50% CPU for a while. The restore tab functionality let me see through it, as links+ wasn’t able to show most of the illustrations.