A Semi-Automated Framework for 3D Reconstruction of Medieval Manuscript Miniatures
A new pipeline pairs Hi3DGen with SAM to convert medieval manuscript miniatures into tactile, interactive 3D assets, validated on a dataset of 69 figures.
A team of researchers has published a novel framework that semi-automates the conversion of 2D illustrations from medieval manuscripts into detailed 3D digital models. The study evaluated seven leading image-to-3D AI methods—including TripoSR, Wonder3D, and Hi3DGen—on a dataset of 69 figures drawn from two major collections, among them the Vatican Library's Decretum Gratiani. The researchers found that Hi3DGen, with its normal-bridging approach, offered the best balance of volumetric expansion and geometric fidelity, making it an ideal starting point for expert artists to refine in tools like ZBrush.
This pipeline, which integrates the Segment Anything Model (SAM) for initial segmentation and AI-based texturing for surface detail, is designed for practical cultural heritage applications. The resulting 3D assets are not just for digital archives; they are built for extended reality (XR), enabling WebVR experiences and AR overlays that can superimpose reconstructions onto the original physical manuscripts. Crucially, the models are also suitable for tactile 3D printing, creating new avenues for accessibility and engagement for visually impaired users and the general public.
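The stage ordering described above (segmentation, image-to-3D reconstruction, expert refinement, texturing and export) can be sketched as a simple orchestration. This is an illustrative outline only: the `MiniatureAsset` container, stage names, and `run_pipeline` function are hypothetical stand-ins, and each stub marks where the real model calls (SAM for segmentation, Hi3DGen for reconstruction) would occur.

```python
from dataclasses import dataclass, field

@dataclass
class MiniatureAsset:
    """Hypothetical container tracking one figure through the pipeline."""
    figure_id: str
    log: list = field(default_factory=list)

def segment(asset):
    # Stand-in for SAM: isolate the painted figure from the folio background.
    asset.log.append("segmented")
    return asset

def reconstruct(asset):
    # Stand-in for Hi3DGen: normal-bridged image-to-3D reconstruction.
    asset.log.append("reconstructed")
    return asset

def refine(asset):
    # Manual step: expert cleanup of topology, e.g. in ZBrush.
    asset.log.append("refined")
    return asset

def texture_and_export(asset):
    # AI-based texturing, then export for WebXR, AR, or 3D printing.
    asset.log.append("exported")
    return asset

def run_pipeline(figure_id: str) -> MiniatureAsset:
    asset = MiniatureAsset(figure_id)
    for stage in (segment, reconstruct, refine, texture_and_export):
        asset = stage(asset)
    return asset

print(run_pipeline("decretum_fig_042").log)
# ['segmented', 'reconstructed', 'refined', 'exported']
```

The semi-automated character of the framework lives in the `refine` stage: everything else can run unattended, while topology cleanup stays in the hands of an expert artist.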
- Tested 7 AI models (including Hi3DGen, TripoSR, and Wonder3D) on 69 manuscript figures from two major collections.
- Hi3DGen identified as the best base model for balancing topological quality with surface detail.
- Final pipeline produces models for WebXR, AR overlay, and tactile 3D printing for the visually impaired.
Why It Matters
This tech bridges historical preservation with modern accessibility, creating interactive and tactile experiences from fragile 2D art.