@danilo shared a post about Apple's #VisionPro “Spatial Computing” that resonates with my childhood experience of #PlusLensTheory glasses.
https://visionprototypes.com/resources/why-spatial-computing
Instead of the single "cyberspace" out in front of my body, I had two different views that were vaguely related to reality, but with different sizes and movement patterns. Like two VR screens, one above the other, with even more motion sickness and more vulnerability to tripping over real-world objects. Like the ostrich in the other illustration, I eventually learned to "shove my head in that cave while my ass hung out."
"AR is loaded with cameras and sensors, and if Apple’s approach is to be believed, eye tracking is essential to making the illusions of AR persuasive." I wonder how they will cope with people like me whose sense of visual space is based on wildly distorted proprioception and vestibular awareness. Or even just people who always wear glasses to see "reality" but can't wear them with the AR headset.
Looks like Apple offers prescription inserts for the VisionPro. https://support.apple.com/en-us/HT213965
That could deal with focus and clarity issues, but it would leave the conflicts caused by head and eye movement unresolved.
Is there an initial setup procedure where the headset somehow learns to reproduce the movement distortions the user has incorporated into their visual world? It seems that could become a valuable diagnostic tool for quantifying the kinds of distortions I'm struggling with IRL.
"A sufficiently advanced version of this technology could create persuasive images that left reality feeling lacking, depending on your reality." Maybe it could instead be employed to gradually guide users with distorted vision back toward a healthy perceptual system, where "reality" would become more affirming?