Research & Papers

Align-to-Scale: Mode Switching Technique for Unimanual 3D Object Manipulation with Gaze-Hand-Object Alignment in Extended Reality

A new gaze-hand alignment technique removes a key limitation of popular XR interaction models, making 3D object scaling a one-handed task.

Deep Dive

A team of researchers has introduced 'Align-to-Scale,' a new interaction technique designed to make 3D object manipulation in Extended Reality (XR) truly one-handed. The work, led by Min-yung Kim, Jinwook Kim, Ken Pfeuffer, and Sang Ho Yoon, addresses a critical gap in the prevalent 'Gaze + Pinch' interaction model used in devices like the Apple Vision Pro and Meta Quest. While that model allows for one-handed selection, movement, and rotation, scaling an object has remained a two-handed operation, limiting mobility and accessibility. The Align-to-Scale technique solves this by using the spatial alignment between the user's gaze direction and their hand position as a seamless mode switch, enabling scaling with a simple, one-handed pinch gesture.
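The core idea, using gaze-hand alignment as a mode switch, can be sketched in a few lines. The following is an illustrative approximation, not the authors' implementation: the angle threshold, the gain value, and all function names are assumptions, and the scale mapping (hand motion along the gaze ray while pinching) is one plausible reading of the technique.

```python
import math

def _sub(a, b):
    return tuple(x - y for x, y in zip(a, b))

def _dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def _norm(a):
    return math.sqrt(_dot(a, a))

# Hypothetical threshold: the hand counts as "aligned" with the gaze
# when it lies within this cone around the gaze ray.
ALIGN_ANGLE_DEG = 5.0

def is_gaze_hand_aligned(gaze_origin, gaze_dir, hand_pos,
                         threshold_deg=ALIGN_ANGLE_DEG):
    """Mode switch: True when the hand sits inside a narrow cone
    around the user's gaze ray, so a pinch now means 'scale'."""
    to_hand = _sub(hand_pos, gaze_origin)
    cos_angle = _dot(to_hand, gaze_dir) / (_norm(to_hand) * _norm(gaze_dir))
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= threshold_deg

def scale_factor(pinch_start_pos, hand_pos, gaze_dir, gain=2.0):
    """While pinching in aligned mode, map hand displacement along
    the gaze ray to a scale factor (pull toward you to shrink,
    push away to grow). Clamped so the object never inverts."""
    n = _norm(gaze_dir)
    unit_dir = tuple(c / n for c in gaze_dir)
    displacement = _dot(_sub(hand_pos, pinch_start_pos), unit_dir)
    return max(0.1, 1.0 + gain * displacement)
```

For example, with the gaze at the origin looking down +z, a hand at (0.01, 0, 0.5) is inside the 5° cone and triggers scale mode, while a hand at (0.3, 0, 0.5) is not; pushing the pinching hand 0.1 m along the gaze ray then yields a scale factor of 1.2 at the assumed gain.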

Presented at the ACM Symposium on Eye Tracking Research and Applications (ETRA) 2026, the research paper details the design and evaluation of several techniques for unimanual scaling. The team assessed these methods in a compound translate-scale task to measure usability. Their findings confirm that all proposed methods effectively enable one-handed scaling, though each comes with distinct performance trade-offs in speed, accuracy, and user preference. From this analysis, the researchers derived practical design guidelines to help interface developers build the next generation of XR applications.

This advancement is a significant step toward making immersive 3D interaction as flexible and commonplace as using a smartphone. By eliminating the need for a second hand for a core manipulation task, Align-to-Scale paves the way for XR use in scenarios where users are holding an object, are in motion, or have limited mobility. The technique helps fulfill the promise of XR as a ubiquitous, mobile computing platform.

Key Points
  • Solves a key flaw in the Gaze + Pinch model by making 3D object scaling a one-handed operation.
  • Uses gaze-hand spatial alignment as an intuitive mode switch, requiring no extra buttons or menus.
  • Offers empirically derived design guidelines, presented at ACM ETRA 2026, to inform future XR interface development.

Why It Matters

Enables truly mobile and accessible XR interactions, critical for adoption in real-world scenarios where users' hands are occupied.