Leveraging Large Language Models for Robotic Control
The goal of this project is to allow a non-technical operator to issue natural-language movement instructions to a ground robot. For operation planning, a simulation models the robot in its environment prior to deployment. The system integrates on-board sensors, such as LIDAR and a depth camera, with edge compute for real-time situational reporting and navigation.
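The core pipeline, translating a free-form movement instruction into a structured robot command, can be sketched as follows. This is a minimal illustrative stand-in, not the project's implementation: the real system would use an LLM for interpretation, whereas this sketch uses a hypothetical keyword map producing (linear, angular) velocity pairs.

```python
import re

# Hypothetical mapping from movement keywords to
# (linear_velocity_m_s, angular_velocity_rad_s) commands for a ground robot.
# In the actual project, an LLM would perform this interpretation.
COMMAND_MAP = {
    "forward": (0.5, 0.0),
    "backward": (-0.5, 0.0),
    "left": (0.0, 0.5),
    "right": (0.0, -0.5),
    "stop": (0.0, 0.0),
}

def parse_instruction(text: str) -> tuple[float, float]:
    """Map a natural-language movement instruction to a velocity command."""
    for keyword, command in COMMAND_MAP.items():
        # Whole-word match so e.g. "leftover" does not trigger "left".
        if re.search(rf"\b{keyword}\b", text.lower()):
            return command
    return (0.0, 0.0)  # unrecognized instruction: stay put
```

For example, `parse_instruction("Please move forward slowly")` yields `(0.5, 0.0)`; swapping the keyword map for an LLM call keeps the same command interface downstream.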
Interns: Krishna Gawandi, Sai Chandra Sekaran, and Daphne Wen
Mentors: Erika Yu, Michael Vilsoet, and Urjo Nahid (AOS)