Research & Papers

Supporting Multimodal Data Interaction on Refreshable Tactile Displays: An Architecture to Combine Touch and Conversational AI

New architecture fuses touch input with AI queries on tactile displays for blind users.

Deep Dive

A research team led by Samuel Reinders presents a multimodal architecture, with an open-source implementation, that combines refreshable tactile displays (RTDs) with conversational AI. The system integrates external touch sensing, enabling deictic queries such as "what is the trend between these points?" by fusing touch context with spoken language. It addresses key challenges in visual-to-tactile encoding and synchronized multimodal output, providing a technical foundation for accessible data visualization for the blind and low-vision (BLV) community.
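The paper's own open-source implementation is the authoritative reference; purely as an illustration, here is a minimal sketch of how touch context might be fused with a spoken query so that deictic references like "these points" can be grounded before the request reaches an AI model. All class and field names below are hypothetical, not taken from the paper's code:

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class TouchEvent:
    """A touch on the tactile display, mapped to a chart element (hypothetical)."""
    element_id: str   # e.g. "point_2020"
    value: float      # underlying data value for that element


@dataclass
class MultimodalQueryFuser:
    """Illustrative sketch: keeps a history of touches and attaches the
    most recent ones to a spoken query so deictic words can be resolved."""
    touch_history: List[TouchEvent] = field(default_factory=list)

    def on_touch(self, event: TouchEvent) -> None:
        self.touch_history.append(event)

    def fuse(self, spoken_query: str, window: int = 2) -> str:
        # Ground deictic references using the `window` most recent touches.
        recent = self.touch_history[-window:]
        context = "; ".join(f"{t.element_id}={t.value}" for t in recent)
        return f"User query: {spoken_query}\nTouched elements: {context}"


fuser = MultimodalQueryFuser()
fuser.on_touch(TouchEvent("point_2020", 14.0))
fuser.on_touch(TouchEvent("point_2023", 21.5))
prompt = fuser.fuse("what is the trend between these points?")
```

The key design point the architecture makes concrete is that touch and speech are fused into a single query context rather than handled as separate channels.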

Why It Matters

Enables blind and low-vision users to explore complex data visualizations through natural touch and conversation, rather than relying on pre-scripted audio descriptions.