SCS Undergraduate Thesis Topics
|Hatem Alismail||Brett Browning||Exploring Visual Odometry for Mobile Robots|
A key task for an autonomous mobile robot is navigating an unknown environment. An interesting challenge arises when neither a map of the environment nor the robot's location within it is available. In such cases, the robot must localize itself and map the environment simultaneously (SLAM). The SLAM problem has been studied extensively using LIDAR as the main sensor. Recently, however, attention has shifted from LIDAR-based SLAM to vision-based SLAM, or vSLAM. Nonetheless, the unbounded accumulation of error caused by image noise prevents vSLAM from scaling when only visual inputs are used. Most research in this area therefore integrates inputs from other sensors, such as IMUs or wheel odometry, to improve navigation scalability. In this work, we focus on the development of a scalable robot navigation system that uses visual inputs only. Two approaches are proposed: (1) visual odometry, i.e., motion estimation from visual input alone, to provide accurate geometric estimates over limited distances, and (2) a topological, graph-like map of the environment that connects the limited visual odometry segments and allows the system to scale. A series of evaluation and testing experiments will be carried out on video streams from a camera mounted on a mobile robot navigating indoor and outdoor environments.
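The second idea above, connecting short, locally accurate visual odometry segments through a topological map, can be illustrated with a minimal sketch. This is a hypothetical illustration, not the thesis implementation: the class and method names (`TopologicalMap`, `add_vo_segment`, `compose_path`) and the planar (dx, dy, dtheta) motion model are assumptions made for clarity.

```python
import math

class TopologicalMap:
    """Illustrative sketch: nodes are recognizable 'places', and each
    edge stores the relative planar motion (dx, dy, dtheta) estimated
    by visual odometry over one short segment. Metric error is bounded
    per segment; the graph, not raw odometry, provides global scale."""

    def __init__(self):
        # (src_place, dst_place) -> (dx, dy, dtheta), expressed in src's frame
        self.edges = {}

    def add_vo_segment(self, src, dst, dx, dy, dtheta):
        self.edges[(src, dst)] = (dx, dy, dtheta)

    def compose_path(self, path):
        # Chain per-segment VO estimates along a sequence of places,
        # rotating each relative step into the accumulated global frame.
        x = y = theta = 0.0
        for src, dst in zip(path, path[1:]):
            dx, dy, dtheta = self.edges[(src, dst)]
            x += dx * math.cos(theta) - dy * math.sin(theta)
            y += dx * math.sin(theta) + dy * math.cos(theta)
            theta += dtheta
        return x, y, theta

# Usage: drive 2 m forward then turn left at the door, then 3 m straight.
m = TopologicalMap()
m.add_vo_segment("hall", "door", 2.0, 0.0, math.pi / 2)
m.add_vo_segment("door", "lab", 3.0, 0.0, 0.0)
x, y, theta = m.compose_path(["hall", "door", "lab"])
```

The design point is that each edge is only as accurate as one short visual odometry run, so drift never compounds beyond the path being queried, which is what lets a vision-only system scale.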