Tuesday, December 5, 2017 - 1:00pm to 2:30pm
Location: Reddy Conference Room 4405, Gates Hillman Centers
Speaker: DANNY ZHU, Ph.D. Student
Augmented Reality Visualization for Autonomous Robots
We believe that it is essential to be able to analyze the reasoning of autonomous robots as it relates to their behavior, and to display that reasoning in a quantitatively correct manner. Videos of robots are a natural aid for replaying and demonstrating robot performance, but a plain video contains no information about the robots' internal processing or behavior; such videos can be enhanced by combining them with that extra information. Overall, the goal of this thesis is to combine real systems of mobile robots with tools for visualizing algorithms, so that the behavior of complex autonomous agents can be displayed in tandem with the real world. Concretely, the thesis investigates the addition of visualizations onto initially plain, uninformative videos.
We focus on the creation of augmented reality visualizations that explain the reasoning of three autonomous robot systems -- a quadrotor, a robot soccer team (with separate discussions of the offense and defense aspects), and a robot soccer automatic referee (autoref) -- and on how to extend those visualizations to general robot domains. After motivating the work with a detailed explanation of how to build a visualization of reasoning for an example quadrotor domain, we explain how to generalize the concepts introduced in the process. We contribute a specification for storing a set of graphical information, together with corresponding times and additional organizational information, which we use to record sets of time-varying spatial drawings and combine them with videos to create visualizations. We also demonstrate a working implementation of the whole procedure, which we have already used with those robots.
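The abstract does not give the specification itself; as an illustrative sketch only, a store of time-stamped spatial drawings with organizational tags might look like the following (all class, field, and method names here are hypothetical, not taken from the thesis):

```python
from dataclasses import dataclass, field

@dataclass
class Drawing:
    """One time-stamped spatial primitive (e.g. a line or circle)."""
    start: float   # time the drawing becomes visible (seconds)
    end: float     # time it disappears (seconds)
    layer: str     # organizational tag, e.g. "plan" or "sensing"
    shape: dict = field(default_factory=dict)  # geometry payload

class DrawingLog:
    """Stores drawings and returns those visible at a given video time."""
    def __init__(self):
        self._drawings = []

    def add(self, drawing):
        self._drawings.append(drawing)

    def visible_at(self, t, layers=None):
        """Drawings active at time t, optionally restricted to some layers."""
        return [d for d in self._drawings
                if d.start <= t < d.end
                and (layers is None or d.layer in layers)]

log = DrawingLog()
log.add(Drawing(0.0, 5.0, "plan", {"type": "line", "pts": [(0, 0), (1, 1)]}))
log.add(Drawing(2.0, 3.0, "sensing", {"type": "circle", "c": (0, 0), "r": 1}))

print(len(log.visible_at(2.5)))            # → 2 (both drawings active)
print(len(log.visible_at(2.5, {"plan"})))  # → 1 (filtered to one layer)
```

A renderer would call something like `visible_at` once per video frame and overlay the returned shapes onto that frame.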
An important part of the contribution is the ability to dynamically change which objects are visualized for a given plan. We can show multiple levels of detail of the plan, or filter the visualizations corresponding to different portions of the plan, according to a viewer's choices. These capabilities extend the idea of layered disclosure, which we have already used heavily for text, to these visualizations.
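To illustrate the detail-level aspect of layered disclosure, here is a minimal sketch (the annotation texts and the `disclose` function are hypothetical examples, not from the thesis): each annotation carries a detail level, and the viewer picks a threshold that controls how much of the robot's reasoning is rendered.

```python
# Hypothetical annotations, from coarse (level 1) to fine (level 3).
annotations = [
    (1, "current navigation target"),
    (2, "candidate paths considered"),
    (3, "per-cell cost values"),
]

def disclose(annotations, max_level):
    """Return the annotations at or below the viewer's chosen detail level."""
    return [text for level, text in annotations if level <= max_level]

print(disclose(annotations, 1))  # → ['current navigation target']
print(len(disclose(annotations, 2)))  # → 2
```

Raising the threshold progressively reveals more of the underlying reasoning, mirroring how layered disclosure works for textual logs.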
After introducing the general contributions, we return to the specific robots; for each algorithm, we give a description of the algorithm itself (brief for the offense, detailed for the other two) followed by an exploration of how to instantiate the general visualization principles for that particular algorithm. We demonstrate that we can apply our general methods of visualization across multiple diverse robot systems.
Manuela Veloso (Chair)
Stefan Zickler (iRobot)