Occlusion Handling by Pushing for Enhanced Fruit Detection
A new robotic system physically moves branches to reveal hidden fruit, boosting detection accuracy for automated harvesters.
A team of researchers from Monash University and LIRMM has developed a novel robotic system that physically interacts with its environment to solve a major problem in agricultural robotics: finding fruit hidden behind leaves and branches. Presented at IROS 2024, their method combines a deep learning model, which estimates the shape of occluded fruit from RGB-D camera data, with classical image processing that determines where to push. The core innovation is a 3D extension of the 2D Hough transform, which detects straight line segments directly in a point cloud and thus identifies the specific branch causing the occlusion.
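The paper does not spell out its exact voting scheme, but the idea of a Hough-style line detector in 3D can be sketched with a randomized variant: sample point pairs, form the line through them, and vote in a discretized parameter space of (unit direction, anchor point closest to the origin). The function name, bin sizes, and synthetic data below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def detect_line_3d(points, n_samples=2000, dir_bin=0.1, pos_bin=0.05, seed=0):
    """Randomized 3D Hough voting for the dominant straight line in a cloud.

    A line is parameterized by a unit direction d (sign-canonicalized so d
    and -d share a bin) and the anchor p, the point on the line closest to
    the origin. Each sampled point pair casts one vote in a quantized
    (d, p) accumulator; the fullest bin wins.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    n = len(points)
    for _ in range(n_samples):
        i, j = rng.choice(n, size=2, replace=False)
        d = points[j] - points[i]
        norm = np.linalg.norm(d)
        if norm < 1e-9:
            continue  # degenerate pair: coincident points
        d = d / norm
        k = np.flatnonzero(np.abs(d) > 1e-9)[0]
        if d[k] < 0:  # canonical sign so opposite directions match
            d = -d
        # anchor = foot of the perpendicular from the origin to the line
        p = points[i] - np.dot(points[i], d) * d
        key = (tuple(np.round(d / dir_bin).astype(int)),
               tuple(np.round(p / pos_bin).astype(int)))
        votes[key] = votes.get(key, 0) + 1
    d_key, p_key = max(votes, key=votes.get)
    d = np.array(d_key, dtype=float) * dir_bin
    return d / np.linalg.norm(d), np.array(p_key, dtype=float) * pos_bin

# Usage on a synthetic cloud: 200 noisy points on a line plus 100 clutter points.
rng = np.random.default_rng(0)
d_true = np.array([1.0, 1.0, 0.0]) / np.sqrt(2.0)
t = np.linspace(-1.0, 1.0, 200)
line_pts = t[:, None] * d_true + rng.normal(scale=0.005, size=(200, 3))
clutter = rng.uniform(-1.0, 1.0, size=(100, 3))
cloud = np.vstack([line_pts, clutter])
d_est, p_est = detect_line_3d(cloud)
```

A deterministic (non-randomized) 3D Hough transform would instead enumerate a fixed grid of directions on the sphere and vote per point, trading sampling variance for a larger accumulator.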
Once the offending branch is located, the system commands a robot arm to push it aside, clearing the line of sight to the fruit. This active perception approach, in which the robot moves in order to see better, was validated on real data across different lighting conditions and on several fruits, including apples, lemons, and oranges. In practical demonstrations, the robot improved fruit visibility and enabled subsequent picking operations. The hybrid technique pairs the predictive power of deep learning for fruit appearance with the geometric precision of classical computer vision for branch manipulation, yielding a more robust and effective agricultural robot.
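The article does not describe how the push direction is chosen, but one plausible geometric rule, offered here purely as a hedged sketch and not the authors' policy, is to push perpendicular both to the branch axis and to the camera-to-fruit viewing ray, oriented to move the branch away from the line of sight. All names and the scene setup below are hypothetical.

```python
import numpy as np

def push_direction(branch_dir, branch_point, camera, fruit):
    """Hypothetical rule: push perpendicular to the branch axis and to the
    camera->fruit ray, signed so the branch moves off the line of sight."""
    ray = fruit - camera
    ray = ray / np.linalg.norm(ray)
    d = branch_dir / np.linalg.norm(branch_dir)
    push = np.cross(d, ray)  # perpendicular to both axis and viewing ray
    n = np.linalg.norm(push)
    if n < 1e-9:
        raise ValueError("branch parallel to viewing ray; direction ambiguous")
    push = push / n
    # Perpendicular offset of the branch from the line of sight; pushing
    # along it increases the branch's clearance from the ray.
    offset = branch_point - (camera + np.dot(branch_point - camera, ray) * ray)
    if np.dot(push, offset) < 0:
        push = -push
    return push

# Toy scene: camera at the origin looks along +z at a fruit; a horizontal
# branch crosses the view slightly on the +y side.
push = push_direction(branch_dir=np.array([1.0, 0.0, 0.0]),
                      branch_point=np.array([0.0, 0.05, 0.5]),
                      camera=np.zeros(3),
                      fruit=np.array([0.0, 0.0, 1.0]))
```

In the toy scene the computed push points in +y, nudging the branch further to the side it already occupies rather than dragging it across the fruit.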
- Uses a deep learning generative model to estimate the occluded parts of fruit in depth space from RGB-D input.
- Introduces a novel 3D Hough transform to detect straight branches in point clouds for targeted pushing.
- Successfully demonstrated on a real robot arm, clearing occlusions for apples, lemons, and oranges in varied lighting.
Why It Matters
This active perception approach could significantly increase the success rate and efficiency of fully automated fruit harvesting robots.