Positioning radiata pine branches for pruning with drone-mounted stereo vision
A new AI pipeline uses stereo cameras and deep learning to map branches for robotic forestry.
A research team from Victoria University of Wellington and Scion has published a paper detailing a novel AI system that enables drones to autonomously identify and locate tree branches for pruning. The system, designed for radiata pine forestry, uses a drone-mounted stereo camera (a ZED Mini) to capture a custom dataset of 71 stereo image pairs. The core innovation is a two-stage pipeline: first, it segments branches in the images using models such as YOLOv8, YOLOv9, and Mask R-CNN; second, it estimates the 3D position of each segmented branch.
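The two-stage flow can be sketched as a small orchestration function. This is an illustrative outline, not the authors' code: `segment`, `match_disparity`, and `triangulate` are hypothetical stand-ins for the trained segmentation model, the stereo-matching network, and the positioning step described in the paper.

```python
from dataclasses import dataclass

@dataclass
class BranchDetection:
    mask: list         # pixel coordinates belonging to one branch
    position_m: tuple  # estimated (x, y, z) in the camera frame, metres

def locate_branches(left_img, right_img, segment, match_disparity, triangulate):
    """Sketch of the two-stage pipeline: instance segmentation on the
    left image, dense stereo disparity across the pair, then a
    per-branch 3D position from mask plus disparity."""
    masks = segment(left_img)                          # stage 1: e.g. a YOLOv8/v9 seg model
    disparity = match_disparity(left_img, right_img)   # stage 2: e.g. RAFT-Stereo
    return [BranchDetection(m, triangulate(m, disparity)) for m in masks]
```

Keeping the stages behind plain callables like this is one way to swap segmentation or disparity backbones (YOLOv8 vs. Mask R-CNN, SGBM vs. a learned matcher) without touching the rest of the pipeline.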
For the critical depth estimation, the researchers rigorously compared traditional computer vision methods against modern deep learning. They found that deep learning models—including PSMNet, ACVNet, and RAFT-Stereo—produced more coherent and accurate depth maps than the traditional Semi-Global Block Matching (SGBM) approach. A custom centroid-based triangulation algorithm then fuses each segmentation mask with the disparity map to estimate the distance to that branch, using a Median Absolute Deviation (MAD) filter to reject outlier disparities. Qualitative tests at 1–2 meter distances confirmed the system's viability, marking a significant step toward fully autonomous robotic pruning in forestry operations.
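The distance step above reduces to standard stereo geometry, Z = f·B/d, applied to a MAD-filtered disparity estimate. The sketch below is a minimal illustration, not the paper's implementation: the focal length and baseline are placeholder calibration values (the 63 mm baseline is roughly the ZED Mini's nominal spec), and the filter threshold `k` is an assumed choice.

```python
import statistics

def branch_depth(disparities, focal_px=700.0, baseline_m=0.063, k=3.0):
    """Estimate branch distance (metres) from the disparity values
    sampled inside a branch's segmentation mask.

    A MAD filter rejects disparities far from the median before the
    final depth is computed, so stray matches on background foliage
    do not skew the estimate.
    """
    med = statistics.median(disparities)
    mad = statistics.median(abs(d - med) for d in disparities)
    # Keep values within k MADs of the median (keep all if MAD is 0).
    kept = [d for d in disparities if mad == 0 or abs(d - med) <= k * mad]
    d_hat = statistics.median(kept)
    # Stereo triangulation: depth = focal length * baseline / disparity.
    return focal_px * baseline_m / d_hat
```

For example, a mask whose disparities cluster near 20 px with one stray 5 px sample would have the outlier rejected, yielding a depth of about 2.2 m under these placeholder calibration values.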
- The system uses a two-stage AI pipeline: branch segmentation with models like YOLOv9 and depth estimation with models like ACVNet.
- It was trained and tested on a custom dataset of 71 stereo image pairs captured with a ZED Mini camera on a drone.
- Deep learning-based disparity mapping outperformed traditional SGBM methods, enabling accurate branch positioning for autonomous pruning.
Why It Matters
This research paves the way for automated, precision forestry, reducing labor costs and improving the efficiency of large-scale tree maintenance.