Most folks with "normal" colour vision will say they perceive yellow, green, red and blue pegs hanging on a line here.

Yet the pixel samples shown along each peg illustrate that not a single point on each peg is "diagnostic" of its perceived body colour.

How do we perceive the pegs as solid body colours despite so much variation in the samples?

Nobody knows, but we can speculate that the visual system somehow integrates the samples to produce a percept of body colour.

An even more challenging question is how we perceive different parts of each peg to be translucent, opaque, shadowed and dirty from the image variations.

@TonyVladusich Oliver Sacks speculated that colour was perceived in the brain completely separately from other kinds of visual perception, such as shape or light.
@TonyVladusich And then there are people like me, who can't tell the difference between the "yellow" and "green" pegs here. (Partially colourblind.)

@sennoma

Yeah, I thought about stipulating folks with "normal" colour vision. Will update the post.

@TonyVladusich Asking GIMP to "Colors -> Components -> Extract component -> Hue" gives me this image. Doesn't seem that hard? The wraparound in red is because the channel is defined as starting at red and ending at red, so they are adjacent colors still.
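
For anyone who wants to reproduce this outside GIMP, here's roughly the same hue extraction sketched in Python/NumPy (the function name is my own; assumes `img` is an H x W x 3 float sRGB array in [0, 1]):

```python
import numpy as np

def extract_hue(img: np.ndarray) -> np.ndarray:
    """HSV hue channel in [0, 1), rendered as a greyscale map."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    cmax = img.max(axis=-1)
    delta = cmax - img.min(axis=-1)
    hue = np.zeros_like(cmax)
    ok = delta > 0                       # grey pixels have no defined hue; leave 0
    rmax = ok & (cmax == r)
    gmax = ok & (cmax == g) & ~rmax
    bmax = ok & ~rmax & ~gmax
    hue[rmax] = ((g - b)[rmax] / delta[rmax]) % 6.0   # red sector wraps around
    hue[gmax] = (b - r)[gmax] / delta[gmax] + 2.0
    hue[bmax] = (r - g)[bmax] / delta[bmax] + 4.0
    return hue / 6.0
```

The wraparound shows up here as red landing near both 0.0 and 1.0 in the output.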

@dascandy

Yeah, it's possible to achieve some level of separation of the chromatic components, although I think it can get messy across colour boundaries, depending on the precision of your system.

For example, in the attached images I've used my own (now defunct) "Colors" macOS app to separate the hues somewhat. But even here I'd argue that perceptually our visual system "collapses" across variations in chromatic purity and luminance in a manner not captured by any existing algorithm. We somehow see unitary "body" colours despite these variations (we don't misperceive those variations as differences in body colour, such as texture).

And, of course, I think you'd agree no existing algorithm can parse an image into objects with surface and illumination "layers", such as object translucency, shadow, shading, gloss, dirtiness etc, as does our visual system?

@TonyVladusich @dascandy Echoing Tony’s point, try to find stimuli whose forms cross through the achromatic centroid, yet still appear as a chromatic cognition.

It is really echoing Vlad’s point in spades.

@troy_s @TonyVladusich @dascandy

Here's the image broken down to Lightness, a&b, a only and b only...(the a and b are additive over a #555 grey) (Krita Lab)...
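
(If anyone wants to reproduce the split outside Krita, here's a rough scikit-image sketch; the function and the L* = 36 stand-in for the #555 grey are my assumptions, not Krita's exact compositing:)

```python
import numpy as np
from skimage.color import rgb2lab, lab2rgb

def split_lab(img):
    """img: H x W x 3 sRGB floats in [0, 1] -> (L-only, a&b, a-only, b-only)."""
    lab = rgb2lab(img)
    L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
    zero = np.zeros_like(L)
    mid = np.full_like(L, 36.0)   # roughly the L* of #555 grey

    L_only  = lab2rgb(np.stack([L,   zero, zero], axis=-1))
    ab_only = lab2rgb(np.stack([mid, a,    b   ], axis=-1))
    a_only  = lab2rgb(np.stack([mid, a,    zero], axis=-1))
    b_only  = lab2rgb(np.stack([mid, zero, b   ], axis=-1))
    return L_only, ab_only, a_only, b_only
```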

It appears to me the majority of the variation comes from lightness differences, with mostly stable hue. (This reminds me of the yellow dot checkerboard illusion.)

This perhaps makes sense, considering that saturation relates the perception of chroma to luminance.

If we think about it, aren't "translucency, shadow, shading, gloss, dirtiness" largely luminance? By gloss I assume we mean specular or anisotropic qualities?

I'm not sure Lab is the best model for this deconstruction, but it's readily available in a few image apps.

In the second group, it's a&b only against black, mid grey, light grey, and white. From this it seems there is still a lightness component in the ab channels. Or is it a transparency component?

Does transparency belong with chrominance? I'll venture that this is becoming more relevant with HDR.

@Myndex @dascandy *Not* “lightness”. There is no correspondence between luminance or luma and lightness.

@Myndex @dascandy One cannot assign “lightness” by way of luminance, only local frameworks under a segmentation-decomposition lens can work.

The same follows for “chroma”, which again requires segmentation-decomposition frameworks.

@troy_s @dascandy

Sorry Troy, I wasn't clear: where I used the term "lightness" I should perhaps have said L* instead, as I meant specifically the "lightness" correlate of Lab.

It probably seemed off as I'm talking about both the Lab model and luminance and chroma.
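
(For reference, the standard CIE 1976 definition of L* from relative luminance, in Python; nothing here is specific to this thread:)

```python
def cie_lstar(Y: float) -> float:
    """CIE 1976 L* from relative luminance Y in [0, 1] (white point Y_n = 1)."""
    eps = (6 / 29) ** 3        # ~0.008856, cutoff for the linear segment
    f = Y ** (1 / 3) if Y > eps else Y / (3 * (6 / 29) ** 2) + 4 / 29
    return 116 * f - 16
```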

@Myndex @dascandy Yes, but you understand that Lab is nonsense, yes? Luminance does not correlate to “lightness” or “brightness” for obvious reasons.

Look at my example; every code value corresponds to “white” and “black” hues.

We need to abandon these nonsense constructs.

@troy_s @dascandy

Well, yeah, I know; that's been a focus of my work in recent years...!

And I agree Lab is weak; it just happens to be an available way to separate into approximated opponent channels.

I only used it to illustrate the effect of luminance on color perception (i.e. yellow vs. brown is the result of luminance).

@Myndex My issue with L*whatever is less the “whatever” and more that it forces models that carry a polemic of some cognitive facet like “lightness” or “brightness”, when the driving forces are the neurophysiological gradients, and the steepness thereof.

Case in point is yellow vs olive or orange vs brown; it is rather clearly a segmentation-decomposition problem, and *not* a monotonic scale of luminance-derived metrics.

@troy_s

YES! My favorite example is DLyon's take on the checkerboard illusion, adding the yellow dots.

@Myndex Yes. My silly example is a riff on Adelson too.

Hence it all seems enmeshed in a superstructure of segmentation-decomposition computations.

E.g.: we segment-decompose the multiplicative component, leading to a “steeper” decrement gradient than in the lower example.

Seems challenging to discuss without paying attention to some global / local polarity interactions.
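
A toy sketch of that multiplicative segment-decompose step (illustrative only, and it assumes the shadow region is already segmented, which is of course the hard part):

```python
import numpy as np

def discount_shadow(img: np.ndarray, shadow: np.ndarray) -> np.ndarray:
    """img: 2-D luminance array; shadow: boolean mask, True inside the shadow."""
    gain = img[~shadow].mean() / max(img[shadow].mean(), 1e-6)
    out = img.copy()
    out[shadow] *= gain   # divide out the estimated multiplicative component
    return out
```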

@troy_s

These examples probably demonstrate more of what is wrong with the Lab model.

Equal Chroma Dots:

I modified the orange dot examples, setting the a and b channels so that all dots in each row had identical chroma (with the L* channel off).

When combined with L*, there are visible shifts; I think these are likely related to the transforms out of Lab to sRGB.

In the second image, the left side uses only the a channel, and the right only the b channel.
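
One quick way to test the transform hypothesis is a round-trip check: hold a* and b* fixed, sweep L*, convert out to sRGB and back, and see where the values don't survive (the function name and approach are mine, just a sketch):

```python
import numpy as np
from skimage.color import lab2rgb, rgb2lab

def gamut_shift(a: float, b: float, steps: int = 21) -> np.ndarray:
    """Round-trip error Lab -> sRGB -> Lab for fixed a*, b* across an L* sweep."""
    L = np.linspace(0.0, 100.0, steps)
    lab = np.stack([L, np.full_like(L, a), np.full_like(L, b)], axis=-1)
    lab = lab[np.newaxis, ...]        # dummy row axis; skimage expects an image
    back = rgb2lab(lab2rgb(lab))      # lab2rgb clips out-of-gamut values
    return np.linalg.norm(back - lab, axis=-1)[0]   # ~0 means the color survived
```

Large values flag L*a*b* triplets that sRGB cannot represent; those are exactly the rows where clipping will shift the displayed dots.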

The thing I want to explore further is a model with orthogonally independent correlates for luminance, saturation or chroma, and hue, and also "a practical uniformity, consistent with physical light, within a defined range", as with a display.

The third row of the last image makes only the achromatic L* dots darker, without touching a*b*, attempting to match the dots on white. The top, second, and bottom rows all have the same dots as the original. In the third row, some dots ended up black (#000) and we still did not get a match; they would have needed to be darker still.

@Myndex I think the problems with Lab are way, way, way deeper than that; the entire premise is nonsense.

This can be proven.

The *belief* behind these godawful, woeful models is that the model metrics map *bijectively* to cognitions. That is, for every stimulus, there is precisely *one* cognition. If we disprove that, we can disprove all the rest of the garbage. Proof by negation, if you will.

One can look at the Tse demonstration and get a very visceral grasp of only a fraction of the errors of this logic.

If our cognition *modulating* the “configuration” of the forms can impact the colour cognition, we have shown that no such bijective model is possible.

But let’s not stop there…

When we discuss “lightness” and “brightness” using the common nomenclature, we are describing what researchers have identified as a nebulous demarcation between “reflectance” and “direct view emission”, or some such strained nonsense. But frame this from the standpoint of us, the organism.

How do we know which is which?

As best I can tell, under a *discretized scalar* model, no such demarcation is even reasonable. Everything sits on this polemical scale, and according to these models, that is enough.

But let’s use common sense…

Imagine a “white” cup on a table. Is it “white”, and how do we compute such? Surely there are many other interactions that impact the stimulus presented, which in isolation would be “with colour”?

And what if we slide the cup from the table and gradually move it into shade? Is it still white? What about on the cusp of undetectable sensation? Still white?

The **exact same** logical problem emerges with “black”!

The only tenable conclusion? “White” and “black” must be discernible at every possible stimulus presentation radiance.

But these bogus stimulus-derived models are impoverished nonsense, and instead insist on a bijective relationship.

The following pictures illustrate the ridiculous nature of these absurdist model ideologies.

So the first question I have for you would be *What do you mean by “uniformity” or “uniform”?*

If we start there, we may begin to be able to trace the outline of a vastly more tenable framework.

@troy_s

The problem is almost like unifying Newtonian physics and quantum mechanics: there's the easy-to-quantify "big thing", but intractably complex minutiae.

Within a tightly defined environment, looking at low spatial frequency diffuse patches, we can model something with some practical use (Munsell or CIELAB). But to encompass perception of physical reality, the model becomes intractably complex.

A problem with CIELAB, for instance, is that it breaks pretty badly for self-illuminated displays, because a display does not fit the narrow environment of diffuse patches. L* is not even close to uniform for a screen that is emitting light and therefore directly affecting adaptation itself.

But when I say uniform, mostly what I mean is a practical application where a given change in value (say, Lc 15) is perceptually similar, regardless of how light or dark the color pair is overall, and regardless of the adapting field, assuming all are known (what we might call a three-input-minimum theory).
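
(For concreteness, the core of the current two-input APCA looks roughly like this; the constants below are illustrative, from the published apca-w3 reference as best I recall it, and the actual repository is authoritative:)

```python
def srgb_to_y(rgb):
    """Estimated screen luminance from 8-bit sRGB, simple-exponent style."""
    r, g, b = (c / 255.0 for c in rgb)
    return 0.2126729 * r**2.4 + 0.7151522 * g**2.4 + 0.0721750 * b**2.4

def apca_contrast(txt, bg):
    """Approximate Lc from text and background sRGB tuples (illustrative constants)."""
    soft = lambda y: y + (0.022 - y) ** 1.414 if y < 0.022 else y   # soft-clamp blacks
    ytxt, ybg = soft(srgb_to_y(txt)), soft(srgb_to_y(bg))
    if ybg > ytxt:   # normal polarity: dark text on light background
        sapc = (ybg ** 0.56 - ytxt ** 0.57) * 1.14
    else:            # reverse polarity: light text on dark background
        sapc = (ybg ** 0.65 - ytxt ** 0.62) * 1.14
    if abs(sapc) < 0.1:
        return 0.0
    return (sapc - 0.027 if sapc > 0 else sapc + 0.027) * 100.0
```

The design intent is that a fixed step in Lc (like the Lc 15 above) reads as a similar perceptual step anywhere on the scale.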

@Myndex Don’t lump Munsell in with the ridiculous Lab!

Munsell is based on physical energy mixtures, via his Maxwellian disc process.

I reckon we could go a *long* way considering viewing frustum and energy per unit area, and a gradient domain metric under the hood.

@troy_s

I do agree with all of this, especially that a stimulus definition is not valuable when isolated from the context.

L*a*b* assumes the narrow context of the Munsell environment. You might find the circled sentence on page 6 of the 1967 NIST report on the re-notations amusing: http://www.rit-mcsl.org/MunsellRenotation/MunsellRe-renotations.pdf

And this presents the issue of "how many axes do we need".

The next iteration of APCA includes a third color input for the proximal surrounding background. It's not a complete appearance model, but a practical, simple one, tailored to provide useful guidance for designers, aimed at readability.

If we really wanted to be complete, just for a screen/display, then inputs ought to include:

- Stimulus size/thickness, and intensity.

- Immediate proximal background intensity and width.

- Proximal/surround spatial density and contrast value (e.g. surrounding text).

- Surrounding background intensity (screen adapting field)

- Ambient lighting intensity and diffuseness (environmental).

@Myndex I don’t think we need more dimensions. In fact, the colourimetric idea is probably close to the neurophysiological signal, albeit framed incorrectly due to the origin of the Standard Observer model.

I suspect the challenge rests in:
1. Shifting to a gradient domain form.
2. Integrating segmentation-decomposition frameworks.

The former is possible; the latter remains ill-defined.

@troy_s

The contrast matching experiment we've been exploring attempts to define gradient spacing and take segmentation/gain control into account.

@troy_s

Elaborating on this a bit more: look at the version with the solid bars connecting the in-shadow and out-of-shadow dots.

These bars are each the same color throughout, but notice that the grey bar appears to have a gradient where it crosses the shadow.

@Myndex PDE-like mechanics. Can be emulated reasonably well using boundary solvers.
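
A toy version for the curious: relax Laplace’s equation inside a masked region so the interior is determined entirely by its boundary values. A crude stand-in for the filling-in idea, not a model of anything:

```python
import numpy as np

def fill_from_boundary(field: np.ndarray, unknown: np.ndarray, iters: int = 5000) -> np.ndarray:
    """field: 2-D array with known boundary values; unknown: True where to solve.

    Jacobi relaxation of Laplace's equation. Keep the unknown region away from
    the array edges, since np.roll wraps around.
    """
    f = field.copy()
    for _ in range(iters):
        avg = 0.25 * (np.roll(f, 1, 0) + np.roll(f, -1, 0)
                      + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        f[unknown] = avg[unknown]   # each unknown point -> mean of its neighbours
    return f
```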
@TonyVladusich you don't have to do this every time you hang your clothes, Tony

@jinahadam

You just think you've got me all pegged, don't ya?!

@TonyVladusich 🏃‍♀️🏃‍♀️🏃‍♀️