As AI agents enter the physical world via smart glasses and robots, they need to interpret users’ ambiguous references, including imprecise pointing and underspecified language like “what’s that?” in complex real-world settings.

We introduce Uncertain Pointer at #CHI2026 to communicate system uncertainty with AR visualizations and facilitate user disambiguation!

Led by the amazing Ching-Yi Tsai w/ Nicole Tacconi & Andy Wilson

1/4 🧵

We systematically survey prior literature on annotating real-world objects with uncertainty cues and find:

• pointer archetypes: internal, external, boundary, and fill
• common visual cues: size, color, opacity, and text

From this, we derive a set of pointer designs for presenting target uncertainty in situ 📚

2/4

We evaluate these designs in 2 preregistered online studies (n=100) across scenes that vary in target distance and sparsity, and across different numbers of candidate targets.

The result: a set of practical design guidelines for choosing uncertain pointer styles based on scene context and disambiguation needs 🎯

3/4

Check out “Uncertain Pointer: Situated Feedforward Visualizations for Ambiguity-Aware AR Target Selection”

📄Paper: https://arxiv.org/abs/2602.13433
🔗Project page: https://uncertain-pointer.github.io

@hci
4/4

Uncertain Pointer: Situated Feedforward Visualizations for Ambiguity-Aware AR Target Selection

Target disambiguation is crucial for resolving input ambiguity in augmented reality (AR), especially for queries over distant objects or cluttered scenes on the go. Yet visual feedforward techniques that support this process remain underexplored. We present Uncertain Pointer, a systematic exploration of feedforward visualizations that annotate multiple candidate targets before user confirmation, either by adding distinct visual identities (e.g., colors) to support disambiguation or by modulating visual intensity (e.g., opacity) to convey system uncertainty. First, we construct a pointer space of 25 pointers by analyzing existing placement strategies and visual signifiers used in target visualizations across 30 years of relevant literature. We then evaluate them through two online experiments (n = 60 and 40), measuring user preference, confidence, mental ease, target visibility, and identifiability across varying object distances and sparsities. Finally, from the results, we derive design recommendations for choosing among Uncertain Pointers based on AR context and disambiguation technique.