@ttuegel @halcy Current tech, yes, but there is no physics reason you can't have each pixel sample the spectrum from IR to UV with, say, 1000 samples and store all of that. And it's what you'd want if you really care about colour rendering.
But yes, you are very unlikely to do that before encountering (and probably starting to staff your crews with) species with different eyes.
I guess you might start doing it if people start bio engineering replacement eyes with exotic receptors.
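(A minimal sketch of what a spectral pixel like that could look like, with made-up Gaussian receptor curves standing in for real sensitivity data; the stored spectrum is the ground truth, and any species' view is just a projection of it:)

```python
import numpy as np

# Hypothetical spectral pixel: 1000 samples from near-UV (300 nm) out to near-IR (1000 nm).
wavelengths = np.linspace(300, 1000, 1000)                  # nm
pixel_spectrum = np.exp(-((wavelengths - 550) / 80) ** 2)   # the stored radiance curve

def receptor(center_nm, width_nm):
    """Made-up Gaussian receptor sensitivity (a stand-in for real cone response data)."""
    return np.exp(-((wavelengths - center_nm) / width_nm) ** 2)

# A human-ish trichromat vs. a hypothetical alien with four receptors reaching into UV/IR.
human_sensors = np.stack([receptor(600, 50), receptor(550, 50), receptor(450, 40)])
alien_sensors = np.stack([receptor(350, 40), receptor(500, 60),
                          receptor(700, 60), receptor(900, 80)])

# Rendering for any viewer is just "integrate the stored spectrum against their receptors".
human_view = human_sensors @ pixel_spectrum   # 3 numbers, an RGB-ish triple
alien_view = alien_sensors @ pixel_spectrum   # 4 numbers, no lossy conversion needed
print(human_view, alien_view)
```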
@ttuegel @halcy What if you didn't use RGB? Maybe they've all got full-spectrum cameras and can transmit the spectrum data. Or I wonder if there's some fancy signal processing way to produce a synthetic spectrum from an RGB-channels signal so you can convert it to a different-primaries color space.
This'd be a problem even on earth! Wolves don't have the same color receptors as humans, so if I ever somehow got to transition (I won't :<) I'd have issues with every existing display.
@ttuegel @halcy ... Apparently purple is weird. But it sounds like most everything else can be mapped to a wavelength.
I wonder how accurate such a mapping would be. I bet it'd fail for a bunch of stuff, because most of it probably doesn't reflect one single wavelength, just a mix that humans can't distinguish from that single wavelength. Other critters with different eyes would be terribly confused.
Exactly.
"I wonder if there's some fancy signal processing way to produce a synthetic spectrum from an RGB-channels signal."
Can't be done, for mathematical reasons. If you pick only four different wavelengths out of the optical spectrum and amplitude-modulate each independently of the other three, you already have a whole extra dimension of options beyond what can be covered by the three RGB sensors.
Next, use 100 different wavelengths...
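To make the "whole extra dimension" concrete, here's a toy demo with made-up Gaussian RGB sensitivities: add anything from the null space of the sensor matrix to a spectrum and the camera reports exactly the same RGB triple, so the inversion back to a spectrum has infinitely many answers.

```python
import numpy as np

wavelengths = np.linspace(380, 780, 400)        # visible range, nm
def sensor(center, width):                      # made-up Gaussian RGB sensitivities
    return np.exp(-((wavelengths - center) / width) ** 2)

rgb_sensors = np.stack([sensor(610, 60), sensor(550, 60), sensor(460, 50)])  # 3 x 400

# The null space of the sensor matrix: spectral directions the camera literally cannot see.
_, _, vt = np.linalg.svd(rgb_sensors)
invisible = vt[3]                               # any right-singular vector past the first 3

spectrum_a = sensor(570, 40) + 0.5 * sensor(480, 40)                  # one spectrum
spectrum_b = spectrum_a + 0.2 * invisible / np.abs(invisible).max()   # a different one
# (Ignoring that a physical spectrum can't go negative; scale down if you care.)

print(np.allclose(rgb_sensors @ spectrum_a, rgb_sensors @ spectrum_b))  # True: same RGB
print(np.allclose(spectrum_a, spectrum_b))                              # False: different light
```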
But is that truly the case? Couldn't RGB map comprehensively onto any trichromat system? (Cf. space telescopes that work outside of the terrestrial visual range, but can nevertheless be made to produce imagery that makes sense to humans?)
Also, there are many cross-modality mappings that produce interesting & useful results. Print-to-speech being only one example.
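That question is easy to poke at numerically (again with invented Gaussian sensitivities): you can always least-squares-fit a 3×3 matrix from one trichromat's responses to another's, but it's only exact when the target species' sensitivity curves are linear combinations of the source's; for UV- or IR-shifted receptors the residual stays large.

```python
import numpy as np

wavelengths = np.linspace(300, 900, 600)
def sensor(center, width):                      # made-up Gaussian sensitivities
    return np.exp(-((wavelengths - center) / width) ** 2)

human = np.stack([sensor(610, 60), sensor(550, 60), sensor(460, 50)])  # trichromat A
alien = np.stack([sensor(350, 40), sensor(560, 80), sensor(800, 60)])  # trichromat B, UV/IR shifted

# Fit the best 3x3 conversion matrix M with alien ≈ M @ human over random test spectra.
spectra = np.random.rand(600, 2000)             # 2000 random spectra as columns
H, A = human @ spectra, alien @ spectra         # what each species actually measures
M, *_ = np.linalg.lstsq(H.T, A.T, rcond=None)   # least squares for H.T @ M ≈ A.T
M = M.T

residual = np.abs(A - M @ H).mean() / np.abs(A).mean()
print(f"relative error of the best 3x3 conversion: {residual:.1%}")  # nowhere near zero here
```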
@halcy Maybe the systems are just banking on everyone having figured out some product of primes; the systems sling a ton of zeroes back and forth until they figure out a common one, and then things get all weird for a few frames until they can figure out the light and dark sums of channels as things move around in the hi-res signals, and then try to figure out the rest based on how many channels there are.
Meeting a new species is understood as everyone waving their arms around a bit and being weird and purple for a few seconds and then everyone just pretending it never happened. XD
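Half-jokingly: the "light and dark sums of channels" step is actually implementable. A toy sketch, where the interleaved channel count is the unknown being guessed and everything else (sizes, heuristic) is invented for illustration:

```python
import numpy as np

def guess_channels(stream, width, height, candidates=(1, 2, 3, 4, 5)):
    """Guess how many interleaved channels a raw frame has by checking which
    guess yields the spatially smoothest brightness image (toy heuristic)."""
    scores = {}
    for c in candidates:
        if stream.size % (width * height * c):
            continue                                           # this layout can't fit
        frame = stream[: width * height * c].reshape(height, width, c)
        brightness = frame.sum(axis=2).astype(float)           # the "light and dark sums"
        roughness = (np.abs(np.diff(brightness, axis=0)).mean()
                     + np.abs(np.diff(brightness, axis=1)).mean())
        scores[c] = roughness
    return min(scores, key=scores.get)                         # smoothest guess wins

# Fake incoming signal: a 64x64, 3-channel gradient image flattened into a raw stream.
y, x = np.mgrid[0:64, 0:64]
truth = np.stack([x, y, x + y], axis=2).astype(np.uint8)
print(guess_channels(truth.ravel(), 64, 64))                   # prints 3
```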
I would love to see the VFX implied by this. Some skiffy production should really have a go at it. It'd be a marvelous opportunity for some stealth math education.
(Why does Darmok suddenly leap to mind?)
My reference to Darmok is less about the story element of the language decoding than the way the story is structured to bring the audience along while the characters work out the puzzle.
Structuring a story to display the process you describe in the 1st paragraph of the reply I responded to could be a lot of fun for at least one episode. (Though I imagine subsequently it'd just be implicit, in the way the transporter was first introduced, & then later just assumed.) >
There are a lot of critiques that could be (& have been) made about the primary conceit in Darmok.
But as a piece of story-telling that also carries the viewer along through the experience of solving the puzzle, I found it to be a delight.
Oh grumble. Now I really really wanna see somebody do that.
I mean, I love the comms hook-up conceit, but it'd be so much fun from a production standpoint to play with that.
In Star Trek, the Universal Translator translates speech between aliens with entirely different frames of reference (e.g. humans and sentient gas clouds). Presumably it’s also doing some false-colour magic to video streams that come from species with totally different visual perception. Next to that, decoding some data stream that you already know is a video feed seems quite easy.
Okay, this legit made me LOL. I'm surprised I didn't scare the guinea pigs.
See also: "The beautiful thing about standards is that there are so many of them!"
@[email protected] I guess that to an extent stuff like analog #NTSC or even #HiVision + #MUSE is not just trivial to modulate, but also that basically any space-faring civilization will have a decently performant #SDR setup. - I mean, #Hamradio operators have done #SSTV for decades; the only reason they don't do #TV at fluid frame rates is lack of #spectrum to do so. But in space that problem doesn't exist, so quasi-optical links in, say, the 24GHz & 60GHz bands are trivial to set up. - Compared to the challenges of superluminal or spacetime-warping travel, [autonegotiation and frequency selection for a bidirectional audio & video feed](https://www.youtube.com/watch?v=uhIEfxRLiPI&pp=0gcJCRsBo7VqN5tD) [if not compression] is an absolutely trivial problem that I'm confident elementary schoolkids in a *"Star Trek"-esque universe* doodle in their free time the way some folks nowadays do breadboard projects. Also, a lot of #encoding / #decoding & #compression schemes can be broken down into mathematical formulas, and that [is a universal language](https://www.youtube.com/watch?v=yCkD5GOjvx8) to the point that one could assume said communications to be like a sci-fi variant of a [modem handshake](https://www.youtube.com/watch?v=HDhyayQ_Rk0) done within a second, as computational power and speeds should be abundant anyway... https://en.wikipedia.org/wiki/Hi-Vision https://en.wikipedia.org/wiki/Multiple_Sub-Nyquist_Sampling_Encoding https://en.wikipedia.org/wiki/Slow-scan_television
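For context on why #SSTV keeps coming up: the whole scheme really is just "brightness becomes an audio frequency, one scanline at a time", which is why it decodes from a formula rather than a spec sheet. A stripped-down, deliberately non-standard sketch (the tone range and timing here are invented, not any real SSTV mode):

```python
import numpy as np

SAMPLE_RATE = 48_000            # Hz (made up)
PIXEL_TIME = 10e-3              # seconds per pixel (absurdly slow, keeps the demo simple)
F_BLACK, F_WHITE = 1500, 2300   # Hz; black and white map to these tones (invented mode)

def encode_scanline(pixels):
    """Turn one row of 0..1 brightness values into a frequency-modulated audio snippet."""
    samples_per_pixel = int(SAMPLE_RATE * PIXEL_TIME)
    freqs = F_BLACK + pixels * (F_WHITE - F_BLACK)             # brightness -> tone frequency
    inst_freq = np.repeat(freqs, samples_per_pixel)            # hold each tone for one pixel
    phase = 2 * np.pi * np.cumsum(inst_freq) / SAMPLE_RATE     # integrate for continuous phase
    return np.sin(phase)

def decode_scanline(audio, n_pixels):
    """Recover brightness by measuring the dominant frequency of each pixel-sized chunk."""
    samples_per_pixel = len(audio) // n_pixels
    pixels = []
    for chunk in audio.reshape(n_pixels, samples_per_pixel):
        spectrum = np.abs(np.fft.rfft(chunk))
        freq = np.fft.rfftfreq(samples_per_pixel, 1 / SAMPLE_RATE)[spectrum.argmax()]
        pixels.append((freq - F_BLACK) / (F_WHITE - F_BLACK))
    return np.clip(pixels, 0, 1)

row = np.linspace(0, 1, 64)                     # a simple brightness ramp as a test scanline
recovered = decode_scanline(encode_scanline(row), 64)
print(np.abs(recovered - row).max())            # modest error, dominated by FFT bin quantisation
```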
@rrb @starsider @lispi314 @halcy @cubeofcheese Interlaced formats are going to be even more problematic, since they depend mechanically on two-frame encoding and biologically on image persistence.
Your best bet is going to be some form of bitmap, with a significant training/tutorial component.
@starsider @rrb @lispi314 @halcy @cubeofcheese Transmitting circles would be a useful form of tutorial/training.
Assuming all sapients have the same fascination with circles that we inherited from the ancient Greeks. (Note that the Egyptians were kinda into triangles.)
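Something like the Arecibo-message trick would work as that circle tutorial: send a flat bitstream whose length is a product of two primes, so the receiver can only reshape it a couple of ways, and only one of them shows a clean circle. A toy sketch (the dimensions and the "semiprime length" convention are assumptions here, not any real protocol):

```python
import numpy as np

WIDTH, HEIGHT = 73, 101            # both prime, so the bitstream length factors essentially one way

def tutorial_frame():
    """A 'hello' bitmap: a ring, sent as a flat bitstream whose length is a semiprime."""
    y, x = np.mgrid[0:HEIGHT, 0:WIDTH]
    r = np.hypot(x - WIDTH // 2, y - HEIGHT // 2)
    return (np.abs(r - 30) < 1.5).astype(np.uint8).ravel()    # circle of radius 30

def decode(bits):
    """Receiver side: factor the length to recover the only plausible width x height pair."""
    n = bits.size
    for w in range(2, int(n ** 0.5) + 1):
        if n % w == 0:
            return bits.reshape(n // w, w), bits.reshape(w, n // w)   # the two candidate layouts
    raise ValueError("length is prime, no 2D layout at all")

candidate_a, candidate_b = decode(tutorial_frame())
# One reshape shows a clean circle, the other an obviously scrambled mess --
# exactly the kind of self-correcting cue a confused receiver can lean on.
print(candidate_a.shape, candidate_b.shape)
```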
@GalbinusCaeli @starsider @lispi314 @halcy @cubeofcheese Edit accepted and executed.
Thanks.
@GalbinusCaeli @starsider @lispi314 @halcy @cubeofcheese In addition to all this, the shows often make the local and remote viewer look at each other, even when the screen is seen from an angle. To get "don't look at the camera" (whereas on any 2D screen, looking at the camera is exactly what you're guaranteed), the view is then forced to be three-dimensional, and not just stereo, but volumetric and rotatable, which makes all the brainstorming so far useless :S
I love the arbitrary CRT scanlines idea otherwise, sounds workable from a hw implementation point of view (modulo colour)
@halcy not sure about the protocol itself, but the codec shouldn't be a problem - the expanding radio shell will eventually transmit ffmpeg everywhere