Autonomous Safe-Landing Drone with Machine Learning-Based Detection and Navigation
This autonomous drone identifies safe landing zones in real time, analyzing terrain from a single downward-facing camera with an object detection model and avoiding hazardous surfaces such as water, trees, and shrubs. A pixel-to-world coordinate transformation combines the drone’s altitude, the camera’s field of view, and image geometry to compute real-world landing positions and guide the drone toward firm, stable terrain such as concrete or grass. Built on a Raspberry Pi with a Navio2 flight controller, the system integrates machine learning, computer vision, and flight control to enable autonomous landings in complex environments, laying the groundwork for future planetary exploration missions (like Dragonfly) in which drones must land safely on unstructured terrain without human guidance.
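The pixel-to-world step can be illustrated with a flat-ground pinhole projection. The sketch below is a minimal, hypothetical version of that calculation, assuming a nadir-pointing camera and level terrain; the function names, frame conventions, and field-of-view values (typical of a Raspberry Pi camera module) are illustrative assumptions, not the project's actual code.

```python
import math
from dataclasses import dataclass


@dataclass
class CameraModel:
    """Simple pinhole model for a nadir (straight-down) camera."""
    image_width_px: int
    image_height_px: int
    horizontal_fov_deg: float
    vertical_fov_deg: float


def pixel_to_ground_offset(u: float, v: float, altitude_m: float,
                           cam: CameraModel) -> tuple[float, float]:
    """Project pixel (u, v) onto flat ground directly below the drone.

    Returns the (forward, right) offset in meters from the point beneath
    the camera, assuming the camera looks straight down, image +x points
    to the drone's right, and image +y points toward its tail.
    """
    # Ground footprint covered by the full image at this altitude.
    ground_w = 2.0 * altitude_m * math.tan(math.radians(cam.horizontal_fov_deg) / 2.0)
    ground_h = 2.0 * altitude_m * math.tan(math.radians(cam.vertical_fov_deg) / 2.0)

    # Meters spanned by one pixel in each direction.
    m_per_px_x = ground_w / cam.image_width_px
    m_per_px_y = ground_h / cam.image_height_px

    # Offset of the pixel from the image center (image y grows downward).
    right_m = (u - cam.image_width_px / 2.0) * m_per_px_x
    forward_m = -(v - cam.image_height_px / 2.0) * m_per_px_y
    return forward_m, right_m


def body_offset_to_world(forward_m: float, right_m: float,
                         drone_north_m: float, drone_east_m: float,
                         yaw_deg: float) -> tuple[float, float]:
    """Rotate a body-frame (forward, right) offset by the drone's yaw and
    add its position to obtain a north/east landing target."""
    yaw = math.radians(yaw_deg)
    north = drone_north_m + forward_m * math.cos(yaw) - right_m * math.sin(yaw)
    east = drone_east_m + forward_m * math.sin(yaw) + right_m * math.cos(yaw)
    return north, east


if __name__ == "__main__":
    # Example: center of a detected "grass" bounding box at pixel (400, 150),
    # drone hovering 12 m above ground and heading due east (yaw = 90 deg).
    cam = CameraModel(640, 480, horizontal_fov_deg=62.2, vertical_fov_deg=48.8)
    fwd, right = pixel_to_ground_offset(400, 150, altitude_m=12.0, cam=cam)
    north, east = body_offset_to_world(fwd, right, 0.0, 0.0, yaw_deg=90.0)
    print(f"body offset: forward={fwd:.2f} m, right={right:.2f} m")
    print(f"landing target: north={north:.2f} m, east={east:.2f} m")
```

In a real flight the altitude, yaw, and position would come from the flight controller's telemetry, and the resulting north/east target would be sent to the autopilot as a guided setpoint; the example values here are placeholders.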