For blind and low-vision (BLV) people to equally participate in data analysis, they must be able to not only consume data representations created by others, but also create their own.

Unfortunately, existing accessible tools assume the existence of a visualization that can then be converted into an accessible representation.

Introducing Umwelt, a tool for interactive data analysis that de-centers visualization by treating visuals, text, and sound as equal modalities:

https://news.mit.edu/2024/umwelt-enables-interactive-accessible-charts-creation-blind-low-vision-users-0327

New software enables blind and low-vision users to create interactive, accessible charts

Umwelt is a new system that enables blind and low-vision users to author accessible, interactive charts representing data in three modalities: visualization, textual description, and sonification.

MIT News | Massachusetts Institute of Technology
Unlike many systems where a user must create non-visual representations by first specifying a visualization, Umwelt decouples the authoring process for each modality. For example, a user can explore the data using text and sound without needing to create a chart first (or at all).
Because its non-visual modalities do not depend on the visualization, Umwelt can express a broader set of non-visual representations. For example, here the sonification applies additional transformations (binning, averaging) that aren't present in the visualization.
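To make the decoupling concrete, here is a minimal sketch (hypothetical, not Umwelt's actual API or spec format): each modality has its own spec over a shared dataset, so the sonification spec can carry transforms (binning, averaging) that the visualization spec omits. Field names and the spec shape are illustrative assumptions.

```python
# Hypothetical sketch: per-modality specs over one shared dataset.
# The sonification applies bin + mean transforms the visualization lacks.
from statistics import mean

dataset = [
    {"month": 1, "temp": 3.1}, {"month": 1, "temp": 4.2},
    {"month": 2, "temp": 5.0}, {"month": 2, "temp": 6.4},
]

visual_spec = {"encoding": {"x": "month", "y": "temp"}, "transforms": []}
sonification_spec = {
    "encoding": {"time": "month", "pitch": "temp"},
    "transforms": [{"bin": "month"}, {"aggregate": "mean", "field": "temp"}],
}

def apply_transforms(data, spec):
    """Run a spec's transform pipeline; here, just bin-by-month then average."""
    if not spec["transforms"]:
        return data  # visualization shows the raw rows
    bins = {}
    for row in data:
        bins.setdefault(row["month"], []).append(row["temp"])
    return [{"month": m, "temp": mean(vals)} for m, vals in sorted(bins.items())]

chart_data = apply_transforms(dataset, visual_spec)          # raw rows
sonification_data = apply_transforms(dataset, sonification_spec)  # binned means
```

Because each spec owns its transform pipeline, changing the sonification's binning never forces a change to the chart, and vice versa.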
Umwelt: Accessible Structured Editing of Multimodal Data Representations

We present Umwelt, an authoring environment for interactive multimodal data representations. In contrast to prior approaches, which center the visual modality, Umwelt treats visualization, sonification, and textual description as coequal representations: they are all derived from a shared abstract data model, such that no modality is prioritized over the others. To simplify specification, Umwelt evaluates a set of heuristics to generate default multimodal representations that express a dataset's functional relationships. To support smoothly moving between representations, Umwelt maintains a shared query predicate that is reified across all modalities -- for instance, navigating the textual description also highlights the visualization and filters the sonification. In a study with 5 blind / low-vision expert users, we found that Umwelt's multimodal representations afforded complementary overview and detailed perspectives on a dataset, allowing participants to fluidly shift between task- and representation-oriented ways of thinking.
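The shared query predicate can be sketched as follows (a hypothetical illustration, not Umwelt's code): one predicate is set once, for example by navigating the textual description, and each modality reifies it in its own way. All function and field names here are assumptions for the sketch.

```python
# Hypothetical sketch: a single shared predicate, reified per modality --
# highlighted marks in the chart, a filtered stream for the sonification,
# and a selection sentence in the textual description.

data = [
    {"year": 2020, "sales": 10},
    {"year": 2021, "sales": 14},
    {"year": 2022, "sales": 9},
]

# The shared predicate, e.g. set by navigating to "year 2021" in the text.
predicate = lambda row: row["year"] == 2021

def highlight_marks(rows, pred):
    """Visualization: indices of the marks to highlight."""
    return [i for i, r in enumerate(rows) if pred(r)]

def filter_for_sonification(rows, pred):
    """Sonification: play only the matching rows."""
    return [r for r in rows if pred(r)]

def describe_selection(rows, pred):
    """Textual description: summarize the current selection."""
    sel = [r for r in rows if pred(r)]
    if not sel:
        return "No records selected"
    return f"{len(sel)} record(s) selected, starting at year {sel[0]['year']}"
```

Keeping the predicate in one place is what lets navigation in one modality update the other two consistently.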

arXiv.org