iOCT Sonification Turns Eye Surgery Visuals into Audio, Lifting Event Detection by 23 Points
83.4% event detection vs 60.6% baseline — surgeons hear retinal deformation in real time.
Subretinal injection demands millimeter precision to avoid puncturing the retinal pigment epithelium (RPE). Surgeons currently rely on intraoperative OCT (iOCT) cross-sections, but splitting attention between the microscope and OCT display increases cognitive load. A team led by Luis D. Reyes Vargas (Johns Hopkins, TU Munich) created a structured sonification framework that translates iOCT B-scan data into physics-inspired acoustic feedback. As the needle moves and the retina deforms, the system generates corresponding sounds — allowing surgeons to 'hear' tool position and tissue strain without constant visual checking.
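To make the idea concrete, here is a minimal sketch of what such a mapping could look like. Everything in it is illustrative, not the authors' actual model: the function name `sonify_frame`, the 0-2 mm depth range, the 220-880 Hz pitch band, and the linear strain-to-loudness gain are all assumptions chosen for the example. It maps a deeper needle position (closer to the RPE) to a higher pitch and larger retinal deformation to a louder tone, producing one short audio chunk per B-scan frame.

```python
import numpy as np

SAMPLE_RATE = 44_100  # audio sample rate, Hz

def sonify_frame(needle_depth_mm, strain, duration_s=0.05):
    """Hypothetical per-frame mapping (not the paper's model):
    deeper needle (closer to the RPE) -> higher pitch,
    larger retinal deformation (strain) -> louder tone."""
    # Assume a 0-2 mm working range for needle depth.
    depth_norm = np.clip(needle_depth_mm / 2.0, 0.0, 1.0)
    freq_hz = 220.0 + 660.0 * depth_norm          # pitch spans 220-880 Hz
    amp = 0.1 + 0.9 * np.clip(strain, 0.0, 1.0)   # loudness spans 0.1-1.0
    t = np.arange(int(SAMPLE_RATE * duration_s)) / SAMPLE_RATE
    return amp * np.sin(2.0 * np.pi * freq_hz * t)

# One 50 ms chunk per B-scan frame; a real system would stream
# these chunks to the audio output as frames arrive.
chunk = sonify_frame(needle_depth_mm=0.5, strain=0.3)
```

A continuous version of this idea, driven by segmented retinal layers rather than two scalars, is closer to what the paper describes as a physics-inspired acoustic model.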
In a controlled study (n=34), the sonification method outperformed a state-of-the-art baseline in overall event identification (83.4% vs 60.6%, p < 0.001), with gains driven primarily by detection of injection-induced retinal deformation. Four expert surgeons evaluated the system and confirmed its intraoperative applicability. The approach establishes iOCT sonification as a viable complementary modality for real-time surgical guidance, potentially reducing perforation risks and improving outcomes in delicate vitreoretinal procedures.
- Physics-based acoustic model uses iOCT-segmented retinal layers and needle motion as excitation inputs for real-time auditory feedback.
- Achieved 83.4% event identification accuracy vs 60.6% baseline in a 34-participant user study (p < 0.001).
- Four expert surgeons validated clinical relevance, highlighting potential for reducing cognitive load during subretinal injection.
Why It Matters
Surgical sonification could make microsurgery safer by shifting part of the surgeon's monitoring from the visual to the auditory channel, lowering the risk of RPE perforation during subretinal injection.