Research Areas - Robotics Research in the Computer Science Department at Carnegie Mellon
Robotics explores the problem of how a machine can interact with the physical world as effectively as humans and other living creatures. We typically divide this ability into three categories: perception, cognition, and action. Some research efforts focus on just one of these areas, but an important thread in robotics is to study an entire integrated system. Carnegie Mellon has strong efforts of both types: studies of machine perception and artificial intelligence in particular, and also studies of integrated robotic systems working in key application areas. We have particular strengths in learning robots and human-robot interaction. Much of the work on perception and cognition is addressed in other sections of this web site: Graphics; Human Computer Interaction; Vision, Speech and Natural Language; Artificial Intelligence; and Machine Learning. This section focuses on cognitive robotics, robots in teams, robots collaborating with humans, manipulation, and humanoids.
1 Cognitive Robotics
David Touretzky’s Cognitive Robotics project is developing a high-level approach to robot programming that draws inspiration from cognitive science. The idea is to provide an integrated set of primitives for perception, mapping, navigation, and manipulation that work well on a real robot, at least in circumscribed domains. We are using the Sony AIBO at present, but hope eventually to move to a humanoid. A current touchstone task is playing a tabletop game of tic-tac-toe by pushing colored game pieces around. With the right set of primitives, an undergraduate should be able to program a tic-tac-toe player with minimal effort. Perceptual primitives based on Ullman’s notion of “visual routines” and Paivio’s “dual coding theory” of mental representations make it easy to visually parse the game board. And drawing on Gibson’s notion of “affordances,” we represent objects in terms of the operations that can be performed on them, tying perception to action.
We have created an application development framework for the AIBO called Tekkotsu (Japanese for “framework,” literally “iron bones”). The current Tekkotsu release, in use at about 20 schools around the world, provides mid-level facilities such as a hierarchical state machine formalism, an event-passing architecture, an inverse kinematics solver, and a collection of GUI-based remote monitoring and logging tools. In the coming year, we will release more abstract cognitive primitives for Tekkotsu in support of the new undergraduate Cognitive Robotics course that Touretzky first taught in January 2006.
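The flavor of an event-driven state machine formalism can be pictured with a small sketch. The Python below is a greatly simplified, flat illustration; the class names, the `on`/`post` methods, and the tic-tac-toe events are hypothetical stand-ins for exposition, not the actual Tekkotsu (C++) API.

```python
# Illustrative sketch of an event-driven state machine in the spirit of
# Tekkotsu's formalism. All names here are hypothetical, not Tekkotsu's API.

class StateNode:
    def __init__(self, name):
        self.name = name
        self.transitions = {}          # event name -> target StateNode

    def on(self, event, target):
        """Register a transition fired when `event` is posted."""
        self.transitions[event] = target
        return self

class StateMachine:
    def __init__(self, start):
        self.current = start

    def post(self, event):
        """Deliver an event; follow a matching transition, if any."""
        target = self.current.transitions.get(event)
        if target is not None:
            self.current = target
        return self.current.name

# Skeleton of the tic-tac-toe touchstone task as a behavior loop.
look = StateNode("LookForBoard")
plan = StateNode("ChooseMove")
push = StateNode("PushPiece")
look.on("board_seen", plan)
plan.on("move_chosen", push)
push.on("piece_placed", look)

sm = StateMachine(look)
sm.post("board_seen")
sm.post("move_chosen")     # -> "PushPiece"
```

A full hierarchical version would let each `StateNode` contain a nested machine of its own; this flat skeleton only shows how behaviors and events compose.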
2 Multi-Robot Systems in Adversarial Domains
The problems of robotic collaboration are both scientifically interesting and vital to the practical use of robotics. Manuela Veloso is leading several projects addressing integrated systems deployed across a team of robots, working in the presence of an adversarial team of robots.
Learning Four-Legged Motion. Developing fast gaits for legged robots is a difficult task that requires optimizing parameters in a highly irregular, multidimensional space. In the past, walk optimization for quadruped robots, namely the Sony AIBO robot, was done by hand-tuning the parameterized gaits. In addition to requiring a lot of time and human expertise, this process produced sub-optimal results. Several recent projects have focused on using machine learning to automate the parameter search. Algorithms utilizing Powell’s minimization method and policy gradient reinforcement learning have shown significant improvement over previous walk optimization results. We have developed a new algorithm for walk optimization based on an evolutionary approach. Unlike previous methods, our algorithm does not attempt to approximate the gradient of the multi-dimensional space. This makes it more robust to noise in parameter evaluations and avoids prematurely converging to local optima, a problem encountered by both of the algorithms mentioned above. Our evolutionary algorithm matches the best previous learning method, achieving several different walks of high quality. Furthermore, the best learned walks represent an impressive 20% improvement over our own best hand-tuned walks.
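The gradient-free evolutionary search described above can be sketched as follows: a population of gait-parameter vectors is evaluated under noise, the best performers are kept, and mutated copies refill the population. The fitness function below is a stand-in for timing an actual walk on the robot, and the population size, elite fraction, and mutation scale are illustrative assumptions, not the values used in the real system.

```python
import random

# Toy sketch of evolutionary gait-parameter search without gradient
# estimation. IDEAL and noisy_speed are stand-ins for a real walk trial.

IDEAL = [0.5, -0.2, 0.8]   # hypothetical "best" parameter setting

def noisy_speed(params, rng):
    """Stand-in for measuring walk speed on the robot (a noisy process)."""
    fitness = -sum((p - t) ** 2 for p, t in zip(params, IDEAL))
    return fitness + rng.gauss(0.0, 0.01)

def evolve(pop_size=20, generations=50, sigma=0.1, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in IDEAL] for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(pop, key=lambda p: noisy_speed(p, rng), reverse=True)
        elite = ranked[: pop_size // 4]          # keep the best quarter
        pop = list(elite)
        while len(pop) < pop_size:               # refill with mutated elites
            parent = rng.choice(elite)
            pop.append([g + rng.gauss(0.0, sigma) for g in parent])
    return max(pop, key=lambda p: noisy_speed(p, rng))

best = evolve()
```

Because selection only compares candidates, a single noisy evaluation cannot corrupt a gradient estimate, which is the robustness property the paragraph above points to.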
Purposeful Perception and Multi-Fidelity Behaviors. Robots perceive their environments through sensory data. Processing a complete stream of sensory data, including visual data, is overwhelming and intractable. The cognitive behavioral states of the robot focus and filter the perceptual data down to only the information relevant to the robot’s current state. We have developed multi-fidelity behaviors that allow a robot to achieve a given goal in a variety of different ways as a function of its confidence in its perceptual data.
Sensor-Resetting Localization. Small robots, particularly in adversarial environments, are exposed to a variety of actions that cannot be fully modeled a priori. For example, a robot may be picked up and “teleported” to some new position, pushed, or charged by an opponent, all actions that are unexpected. Although we could attempt to detect each of these situations through sensory data, we recognize that there will always be situations that are not correctly detected. We developed a novel probabilistic robot localization algorithm, sensor-resetting localization, that allows a robot to detect model failure and use current sensor information to reinitialize its localization belief.
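The sensor-resetting idea can be illustrated with a one-dimensional particle-filter sketch: when the average observation likelihood collapses, suggesting the robot was kidnapped or pushed, some particles are reseeded directly from the sensor reading. The sensor model, noise values, and threshold below are illustrative assumptions, not the published algorithm’s parameters.

```python
import math
import random

# 1-D toy sketch of sensor-resetting localization. All models and
# constants are illustrative assumptions.

def likelihood(particle, obs, noise=0.5):
    """Probability-like weight of a particle given a range-style reading."""
    return math.exp(-((particle - obs) ** 2) / (2 * noise ** 2))

def srl_update(particles, obs, rng, threshold=0.1):
    weights = [likelihood(p, obs) for p in particles]
    avg = sum(weights) / len(weights)
    # Reseed a fraction of particles that grows as the average likelihood
    # falls below the level expected for a well-localized robot.
    reset_frac = min(1.0, max(0.0, 1.0 - avg / threshold))
    n_reset = int(reset_frac * len(particles))
    keep = len(particles) - n_reset
    survivors = rng.choices(particles, weights=weights, k=keep) if keep else []
    # Replacement particles are drawn from the sensor model itself.
    reseeded = [obs + rng.gauss(0.0, 0.5) for _ in range(n_reset)]
    return survivors + reseeded
```

When tracking is healthy the average likelihood stays high and the update reduces to ordinary importance resampling; after a teleport the likelihood collapses and the filter snaps to the sensor reading instead of slowly diffusing there.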
Multi-Robot Multi-Hypothesis State Estimation. In addition to its own individual sensors, a robot in a team can exchange information with its teammates. Therefore a robot in a multi-robot system faces the problem and opportunity of merging information coming from multiple sources. We have developed several algorithms for this multi-source state estimation to effectively address the real-time needs of adversarial domains.
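As a minimal illustration of the information-merging problem, the sketch below fuses independent Gaussian position estimates from teammates by inverse-variance weighting. This is a textbook baseline chosen for exposition, not the specific multi-hypothesis algorithms developed here.

```python
# Textbook inverse-variance fusion of independent Gaussian estimates,
# shown only to illustrate the multi-source estimation problem.

def fuse(estimates):
    """estimates: list of (mean, variance) pairs from self and teammates."""
    inv_sum = sum(1.0 / var for _, var in estimates)
    mean = sum(m / var for m, var in estimates) / inv_sum
    return mean, 1.0 / inv_sum

# Two equally trusted observers of the ball disagree; fusion splits the
# difference and is more certain than either source alone.
mean, var = fuse([(2.0, 1.0), (4.0, 1.0)])    # -> (3.0, 0.5)
```

The fused variance is always smaller than the smallest input variance, which is the "opportunity" side of teammate communication; the adversarial setting adds the harder problem of deciding when a teammate’s report should be distrusted.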
Detecting Environmental Change from Sensory Data. Robots constantly encounter changing environmental conditions. These changes vary from the dramatic, such as the dynamic change of obstacle layout, to the very subtle, such as one burnt out light bulb among many bulbs. We use sensory data to automatically identify and adapt to the environmental changes. Sensory readings are aggregated into probability distributions and associated online with the state of the environment. The Sony AIBO robots have used the algorithms effectively to adapt to lighting and carpet changes in particular.
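One simple way to realize the aggregate-and-compare scheme is sketched below: readings are binned into histograms online, and a divergence between the baseline and recent distributions flags a change such as new lighting. The bin layout, divergence choice, and threshold are illustrative assumptions, not those of the deployed algorithms.

```python
import math

# Toy sketch of distribution-based change detection over sensor readings.
# Bins, ranges, and the threshold are illustrative assumptions.

def histogram(samples, bins=10, lo=0.0, hi=1.0, eps=1e-6):
    counts = [eps] * bins                       # eps avoids log(0) below
    for s in samples:
        i = min(bins - 1, max(0, int((s - lo) / (hi - lo) * bins)))
        counts[i] += 1
    total = sum(counts)
    return [c / total for c in counts]

def divergence(p, q):
    """Symmetrized KL divergence between two normalized histograms."""
    kl = lambda a, b: sum(x * math.log(x / y) for x, y in zip(a, b))
    return kl(p, q) + kl(q, p)

def changed(baseline, recent, threshold=1.0):
    return divergence(histogram(baseline), histogram(recent)) > threshold
```

A dramatic change (a rearranged obstacle field) moves most of the histogram mass and trips the threshold immediately, while a subtle one (a single burnt-out bulb) shifts the distribution slightly and requires more aggregated readings before it becomes distinguishable from noise.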
AIBO Course. For the last two fall semesters we have taught CMRoboBits, a course on creating an intelligent AIBO robot, with about 20 students in each offering. Students studied legged motion, robot sensors, and vision, with special emphasis on behaviors, cognition, learning, and multi-robot systems.
3 Human-Robot Interaction and Collaboration
Reid Simmons is leading a project to investigate issues of human-robot social interaction—how to get robots to behave more naturally around people. Key issues are modeling and recognition of user intent, detecting level of interest, acting in a socially acceptable manner, and personalizing interaction with people. These issues are explored using two robots: Grace and Valerie. Grace is a multi-institutional effort to attempt to solve the AAAI Robot Challenge (to get an autonomous robot to attend a conference). Valerie is a “robot receptionist” and is meant to investigate how to maintain long-term engagement with a robot. It is a joint project with the Carnegie Mellon Drama Department, which is responsible for developing the robot’s character, personality, and story lines.
The project is also investigating “sliding autonomy” between robots and humans. The idea is to make the boundary between full autonomy and teleoperation more fluid, and to give both humans and robots the ability to decide when to cede control to the other. The work uses a testbed called “TRESTLE”, which employs multiple heterogeneous robots (a crane, a roving eye, and a mobile manipulator) to do large-scale assembly.
4 Manipulation

Manipulation is the study of how a robot can rearrange the world around it. Manipulation shares all the problems of robotics, including perception and robot architecture, but focuses on the physical interactions between the robot and the objects being manipulated. Typically a robot interacts with other objects through frictional contact, so modeling of frictional contact is key to all aspects of manipulation, including perception, control, and planning. Recent progress in modeling of frictional contact includes a new formulation that allows dynamic and kinematic issues to be cast into a common framework (Erdmann, Mason). In some instances, such as the problem of rolling a planar object in a palm, optimal plans can be constructed efficiently. Some of the work employs a mobile manipulator platform, called the “mobipulator,” which uses its wheels both for locomotion and manipulation. Other experiments use Adept 550 industrial manipulators.
We are also pushing into challenging new task domains, such as manipulation of deformable bodies. Our most recent accomplishment is work on robotic origami. Origami spans a range of manipulation skills, so we believe it is an excellent testbed for measuring progress in robotic manipulation. We have constructed a taxonomy of manipulation skills, and developed analysis, simulation, automatic planning, and robotic implementation for a variety of skills at the simpler end of the spectrum (Mason).
5 Humanoid Robotics

We have recently begun an SCS-wide initiative on humanoid robotics. There is a core group of 13, including Hodgins, Kanade, and Pollard in the Computer Science Department. There are many reasons for focusing on humanoids. The key scientific reason is that we expect to gain new perspectives on vision, learning, planning, and control theory. The key application reason is that humanoids share features with humans that we believe will lead to a new generation of consumer and assistive technology. Humanoids research has for several years been centered in Japan, where groups have a considerable advantage, especially in hardware. We are focusing our efforts on software, especially human-robot interaction, where we believe our special strengths enable us to have an immediate impact. There is also a strong synergy with our computer graphics group. Already we have a million-dollar equipment grant (Hodgins) for humanoid robot hardware, a 1.4 million dollar NSF grant for research on humanoids (Atkeson (RI), Hodgins, Kuffner (RI), Pollard), an ASIMO provided by Honda (Kanade, Hodgins), and a strong collaboration with Japan’s Digital Human Research Center (Kanade, Kuffner (RI)).