What's the best way to encode a video with an alpha channel?
The Internet says the best way is to encode two videos: one holding the YUV colour data and one holding the alpha channel, then recombining them at decode time. Not sure how to interleave them.
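The decode-time recombination step could look something like this, a minimal sketch assuming both videos have already been decoded into numpy arrays, using the alpha video's luma plane as the mask (the function names are mine, not from any library):

```python
import numpy as np

def recombine(color_frame, alpha_frame):
    """Rebuild an RGBA frame from a colour frame (HxWx3) and the
    luma of the matching alpha-video frame (HxW)."""
    return np.dstack([color_frame, alpha_frame])

def composite_over(rgba, background):
    """Alpha-blend the recombined RGBA frame over a background (HxWx3)."""
    a = rgba[..., 3:4].astype(np.float32) / 255.0
    out = rgba[..., :3] * a + background * (1.0 - a)
    return out.astype(np.uint8)

# Tiny demo: one fully opaque red pixel, one fully transparent pixel.
color = np.array([[[255, 0, 0], [0, 255, 0]]], dtype=np.uint8)
alpha = np.array([[255, 0]], dtype=np.uint8)
bg = np.zeros((1, 2, 3), dtype=np.uint8)
frame = composite_over(recombine(color, alpha), bg)
```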
(Or I could just key out one colour, but visionOS's e.g. app icon grid probably won't look good without full alpha...)
@zhuowei I played around with this using OBS and chroma keying out colours while looking at blocks of colour in the sim. Due to the inherent alpha blending of the app icon grid and frosted glass panels it didn't look great. Are you able to isolate the layers pre-composition?
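A toy version of why keying struggles here (not OBS's actual keyer, just a hard distance threshold I made up for illustration): a pixel that was already alpha-blended with the key colour lands between the key and the foreground colour, so a hard key either keeps a green fringe or eats the pixel entirely.

```python
import numpy as np

def chroma_key(frame, key=(0, 255, 0), tol=60):
    """Toy chroma key: pixels within `tol` (Euclidean RGB distance)
    of the key colour become fully transparent, all others fully opaque."""
    dist = np.linalg.norm(frame.astype(np.float32) - np.array(key, np.float32), axis=-1)
    alpha = np.where(dist < tol, 0, 255).astype(np.uint8)
    return np.dstack([frame, alpha])

# A pure key-colour pixel keys out cleanly...
pure_key = np.array([[[0, 255, 0]]], dtype=np.uint8)
# ...but a pixel that was 50% white blended over the green background
# is far from the key colour, so it survives as an opaque green fringe.
blended = np.array([[[128, 255, 128]]], dtype=np.uint8)
```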
@zhuowei OBS made the chroma-keyed stream available as a virtual camera device; I then streamed that over WebRTC to the Magic Leap, which was also sending gyro events. It was laggy and I haven't gone back to optimise.
@keithahern Figured it out: I can just hook RealityKit to prevent it from adding composeSyntheticEnvironment.rerendergraph, and I get a PNG of the visionOS UI without the background scene.
https://notnow.dev/notice/AXTQmrNkveomxqdIrw Zhuowei Zhang: “My visionOS stereo screenshot library can now take screenshots with a transparent background: https://github.com/zhuowei/VisionOSStereoScreenshots/tree/transparent-background Next: stream this t...”