Computer Vision-Based Vehicle Allotment System using Perspective Mapping
A new AI system merges four camera views to create a real-time 3D map of available parking spots.
A team of researchers has proposed a novel AI-driven solution to urban parking congestion, detailed in their arXiv paper "Computer Vision-Based Vehicle Allotment System using Perspective Mapping." The system, developed by Prachi Nandi, Sonakshi Satapathy, and Suchismita Chinara, leverages computer vision to overcome the limitations of traditional sensor-based smart parking. Using the YOLOv8 object detection model, it accurately identifies vehicles and vacant spaces in real time. The core innovation lies in its use of inverse perspective mapping (IPM), a technique that merges the visual feeds from four strategically placed cameras into a single, coherent top-down view of the parking area.
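At its core, IPM applies a planar homography: four known pixel locations on the ground plane in a camera view are mapped to their positions on a shared top-down map, and the resulting 3x3 transform warps the rest of the view. The paper does not publish its calibration code, but the underlying math can be sketched with a direct linear solve; the calibration point pairs below are hypothetical placeholders.

```python
import numpy as np

def homography_from_points(src, dst):
    """Solve for the 3x3 homography H mapping src -> dst from four
    point correspondences (direct linear method, with H[2,2] fixed to 1)."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Apply homography H to a 2D point (with homogeneous divide)."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return np.array([x / w, y / w])

# Hypothetical calibration: where the ground-plane trapezoid appears in one
# camera image, and where those corners sit on the shared top-down map.
src = [(100, 300), (540, 300), (620, 480), (20, 480)]
dst = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography_from_points(src, dst)
print(warp_point(H, (100, 300)))  # maps to approximately (0, 0)
```

With one such homography per camera, the four warped views land in a common top-down coordinate frame and can be composited into the single stitched map the paper describes. In practice a library routine such as OpenCV's `getPerspectiveTransform`/`warpPerspective` would do this per-pixel warp.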
This stitched image is then used to simulate a dynamic 3D model of the parking environment. Available spots are represented as points on a 3D Cartesian plot, providing a clear, visual guide for users. The approach is designed to be cost-effective and easier to implement than infrastructure-heavy sensor grids, as it relies on standard cameras and adaptable software. The researchers argue that this vision-based method offers superior accuracy and the flexibility to adapt to changing parking lot layouts, presenting a significant step forward for scalable smart city infrastructure aimed at reducing traffic and improving urban mobility.
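Since the stitched map is a planar overhead view, turning vacant spots into 3D plot points mostly reduces to a scale change: pixel centres become metric (x, y) coordinates, with z = 0 on the ground plane. The paper does not specify this mapping, so the scale factor and detections below are assumed placeholders.

```python
import numpy as np

# Hypothetical scale: metres per pixel on the stitched top-down map.
METRES_PER_PIXEL = 0.05

def spots_to_cartesian(vacant_px, origin_px=(0, 0)):
    """Convert vacant-spot centres (pixels on the top-down map) into
    3D Cartesian points (x, y, z) in metres; the lot is planar, so z = 0."""
    pts = (np.array(vacant_px, float) - np.array(origin_px, float)) * METRES_PER_PIXEL
    return np.hstack([pts, np.zeros((len(pts), 1))])

vacant = [(120, 80), (120, 240), (360, 80)]  # hypothetical detections
points = spots_to_cartesian(vacant)
print(points)  # one (x, y, 0) row per free spot
```

A 3D scatter of these rows (for instance via matplotlib's `Axes3D.scatter`) would produce the kind of visual guide the paper describes.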
- Uses YOLOv8 for real-time vehicle and space detection with high accuracy.
- Applies Inverse Perspective Mapping (IPM) to merge four camera views into a unified top-down map.
- Generates a navigable 3D Cartesian plot of available parking spots to guide drivers visually.
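The pipeline above implies a matching step between detector output and spot geometry: YOLOv8 yields vehicle bounding boxes, and each mapped spot is free or occupied depending on whether a vehicle lands inside it. The paper does not spell out the matching rule, so the centre-in-rectangle heuristic, spot names, and boxes below are illustrative assumptions.

```python
def box_center(box):
    """Centre of an (x1, y1, x2, y2) bounding box."""
    x1, y1, x2, y2 = box
    return (x1 + x2) / 2.0, (y1 + y2) / 2.0

def occupancy(spots, vehicle_boxes):
    """Mark a spot occupied if any detected vehicle's centre falls inside
    the spot's (x1, y1, x2, y2) rectangle on the top-down map."""
    status = {}
    for name, (sx1, sy1, sx2, sy2) in spots.items():
        occupied = any(
            sx1 <= cx <= sx2 and sy1 <= cy <= sy2
            for cx, cy in map(box_center, vehicle_boxes)
        )
        status[name] = "occupied" if occupied else "free"
    return status

# Hypothetical spot rectangles and detector output, both in map pixels.
spots = {"A1": (0, 0, 100, 200), "A2": (100, 0, 200, 200)}
boxes = [(20, 40, 90, 180)]  # one detected car, sitting inside A1
print(occupancy(spots, boxes))  # {'A1': 'occupied', 'A2': 'free'}
```

In a real deployment the `boxes` list would come from the detector (e.g. ultralytics YOLOv8 inference on each frame), after being warped into the shared top-down coordinates.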
Why It Matters
Offers cities a scalable, vision-based alternative to expensive sensor networks for reducing traffic congestion and driver frustration.