
Research Areas - Research in AI: Planning, Knowledge Representation, and Game Theory in the Computer Science Department at Carnegie Mellon

 

CSD faculty: Avrim Blum, Emma Brunskill, Jaime Carbonell, Michael Erdmann, Scott Fahlman, Eugene Fink, Alan Frieze (Math), Carlos Guestrin, Anupam Gupta, Matt Mason, Tom Mitchell, Ariel Procaccia, Tuomas Sandholm, Reid Simmons, Danny Sleator, David Touretzky, Raul Valdes-Perez (Vivisimo), Manuela Veloso, Eric Xing (ML/LTI)

1 Introduction

Carnegie Mellon has been a leader in AI research since the field was founded. Herbert Simon and Allen Newell played a crucial role in the birth of AI, pursuing grand scientific goals that included understanding and reproducing in a computer human problem solving through symbolic reasoning, knowledge representation, and learning. Building on these remarkable roots, over the last 15 years Carnegie Mellon faculty have led the field of AI through an expansion across several dimensions.

Artificial Intelligence research at Carnegie Mellon spans many units of SCS, including not only the Computer Science Department but also the Robotics Institute, the Language Technologies Institute, and the Machine Learning Department, with some activity in the Human-Computer Interaction Institute and the Entertainment Technology Center. We consider it a great strength of Carnegie Mellon that people can collaborate across these departmental boundaries without any problem.

First, the field has evolved to address problems of a probabilistic and numeric nature, leading to the incorporation of approaches from mathematics, engineering, operations research, and economics. AI researchers build scientifically upon each other's work, as well as on work from these other disciplines, producing analytical results and highly systematic experimental work. The resulting AI systems are able to deal with uncertainty, both in an objective sense and in the sense that arises in AI applications involving non-trivial perception (which requires signal-to-symbol grounding).

Second, with the explosion of computerized information on the Internet, in data warehouses, and from physical sensory artifacts, AI faces very large-scale problems. Carnegie Mellon has been at the forefront of that progress, with pioneering innovations in intelligent digital libraries, data representations and algorithms, market clearing technologies, and complex task-embedded robotic applications.

Third, Carnegie Mellon faculty are recognized for their leadership in expanding beyond monolithic AI to the area of multiagent systems: systems where the key challenge is the intelligent interaction of multiple parties. Fourth, Carnegie Mellon faculty have led AI from the abstract (“Can one build a thinking machine?”) to a host of concrete applications and techniques, many of which are significantly improving welfare in the world.

The CSD AI faculty's scientific leadership is well recognized. Mitchell was President of AAAI for 2002-2003. In the last four years, four CSD AI faculty were elected AAAI Fellows: Moore (2005), Fahlman (2003), Simmons (2003), and Veloso (2003). Sandholm received the prestigious Computers and Thought award in 2003 "for his contributions to computational economics and the theory and practice of negotiation and coalition formation among computationally bounded agents," and the inaugural ACM Autonomous Agents Research Award in 2001. Veloso was program co-chair of AAAI-05 and Sandholm of AAMAS-03. Moore will be program co-chair of ICML-06 and Veloso of IJCAI-07. CSD AI faculty participate consistently and strongly in the editorial boards of AI journals and the program committees of the major AI conferences. See the list of all the AAAI Fellows within SCS.

2 Multiagent systems and game theory

CSD is recognized for leading strengths in many areas of multiagent systems, from building large-scale systems to developing game-theoretic foundations.

Robot soccer

Veloso is one of the pioneers of robot soccer, a fascinating multiagent research problem. Robot soccer has offered challenges on the real-time integration of perception, cognition, and action in highly uncertain adversarial domains. She has contributed several new algorithms, including (i) layered learning for effective control across low-level and high-level information and decision making; (ii) robot localization with poor motion and environment models; (iii) robust multi-robot path planning and coordination; (iv) automatic detection of and adaptation to environment changes; (v) tree-based state abstraction reinforcement learning; and (vi) coaching as learning and using environment and agent models for advice. Veloso has used simulation and several robot platforms to ground her contributions, namely small-size wheeled robots built by her group with offboard perception and control, Sony four-legged AIBO robots (Figure 1), and Segway RMPs. Her algorithms are robust across these different platforms and also apply to more general complex multiagent problems, as briefly described next within the Radar project.
Figure 1: Multi-robot research: robot soccer teams of Sony AIBO robots

Mechanism design

The aggregation of conflicting preferences is a central problem in multiagent systems, whether the agents are human or software. The key difficulty is that the agents may report their preferences insincerely. Mechanism design is the science of designing the rules of the game (e.g., auction or voting) so that the agents are motivated to report their preferences truthfully and a desirable outcome is chosen. The desirability objective can be, for example, social welfare, seller's revenue, fairness, or some tradeoff among these. Sandholm has shown that computational concepts can enhance mechanism design in novel ways and has made fundamental contributions in: automated mechanism design; mechanisms that consider agents' bounded rationality; mechanisms that use preference elicitation (with A. Blum, Simmons); protocols for executing mechanisms while preserving privacy; and safe exchange mechanisms.
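A classic textbook illustration of a truthful mechanism is the Vickrey (second-price) sealed-bid auction: the highest bidder wins but pays only the second-highest bid, which makes bidding one's true value a dominant strategy. The sketch below is this standard example, not code drawn from Sandholm's systems.

```python
def second_price_auction(bids):
    """Vickrey auction: allocate to the highest bidder, charge the
    second-highest bid. Because the winner's payment does not depend
    on the winner's own bid, truthful bidding is a dominant strategy.

    bids: dict mapping bidder name -> bid amount (assumes >= 2 bidders).
    Returns (winner, price).
    """
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner = ranked[0][0]
    price = ranked[1][1]   # second-highest bid
    return winner, price
```

For example, with bids of 10, 7, and 3, the highest bidder wins and pays 7; shading the winning bid anywhere above 7 would not change the price paid.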

Peer-to-peer negotiation

Sandholm introduced the idea of automated contracting based on (approximate) marginal cost calculations. He also pioneered the use of combinatorial contract types in peer-to-peer negotiation, as well as leveled commitment contracts as a game-theoretically sound backtracking instrument for multiagent negotiation. These ideas have also been applied to task allocation among robots (Stentz, Sandholm, and Simmons). Sandholm also developed a normative model of bounded rationality that enabled the study of bargaining between computationally limited agents. He also derived a strong impossibility result about yielding in bargaining when the parties have private deadlines.

Coalition formation

Sandholm has continued pursuing his long-term agenda of automated coalition formation, with focus on agents that are computationally limited, and recently also on agents that can use pseudonyms such as in anonymous e-commerce.

Solving games

Sometimes it is hard for an agent to determine its best strategy for a game. The best strategy may depend on how others play, so game-theoretic equilibrium analysis is required. In the most common equilibrium notion, Nash equilibrium, each agent best-responds to the others. Sandholm's group presented the first mixed integer programming algorithms for finding a Nash equilibrium (or the best one), and a preprocessing technique for equilibrium finding. They also developed a lossless automated abstraction technique for sequential games of imperfect information. It enabled them to solve Rhode Island Hold'em poker, a recognized AI challenge problem, exactly. Its game tree has 3.1 billion nodes, while the largest poker game solved previously had only 140,000.
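The mixed integer programming algorithms above target general-sum games; for intuition about what equilibrium finding means, the toy solver below computes a Nash equilibrium of a 2x2 zero-sum game, either a pure saddle point or, failing that, the fully mixed equilibrium from the indifference conditions. It is a minimal sketch assuming a nondegenerate game, not the MIP approach described above.

```python
def solve_2x2_zero_sum(A):
    """Nash equilibrium of a 2x2 zero-sum game, A = row player's payoffs.

    Returns (p, q, v): probability the row player plays row 0, probability
    the column player plays column 0, and the game value. Assumes the game
    is nondegenerate when no pure saddle point exists.
    """
    (a, b), (c, d) = A
    # Pure saddle point: an entry that is a row minimum and a column maximum.
    for i, row in enumerate(A):
        for j, entry in enumerate(row):
            if entry == min(row) and entry == max(A[0][j], A[1][j]):
                return (1.0 if i == 0 else 0.0,
                        1.0 if j == 0 else 0.0,
                        float(entry))
    # Fully mixed equilibrium: each player makes the other indifferent.
    denom = a - b - c + d
    p = (d - c) / denom
    q = (d - b) / denom
    v = (a * d - b * c) / denom
    return p, q, v
```

On matching pennies, [[1, -1], [-1, 1]], this returns the familiar (0.5, 0.5) mix with value 0; the MIP formulations generalize this kind of computation to general-sum games, where equilibrium finding is much harder.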

Multiagent reinforcement learning

Another important way of solving games is learning. While it is not as efficient as direct equilibrium-finding algorithms, it can be used in settings where the game is not explicitly given (but needs to be identified by exploration), and it can be used to explicitly exploit opponents that do not play rationally.

Learning in games is fundamentally different from learning in single-agent settings because the other agents' learning makes the environment nonstationary. At least two types of learning problems arise in games: 1) learning the game to be played, and 2) learning to play a known game (the learning is with respect to the other players' behavior). On (1), Sandholm's group introduced a theoretical framework (BL-WoLF) for analyzing the inherent disadvantage to a player that does not know some aspect of the (zero-sum) game being played, and has to learn it. On (2), Veloso's group devised a multiagent learning algorithm that learns to play rationally while converging to an equilibrium, by varying the learning rate as a function of whether the learner is winning or losing during its learning.
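The "vary the learning rate" idea can be sketched as follows for a single-state, two-action game: the learner compares the expected value of its current policy against that of its time-averaged policy, moving cautiously when winning and quickly when losing. This is an illustrative sketch of the win-or-learn-fast principle, not the exact published algorithm.

```python
def wolf_step(pi, avg_pi, Q, delta_win=0.01, delta_lose=0.04):
    """One win-or-learn-fast policy update (single state, two actions).

    pi, avg_pi: current and time-averaged policies (lists of action probs).
    Q: current action-value estimates.
    If the current policy's expected value beats the average policy's,
    the learner is 'winning' and updates slowly; otherwise it updates fast.
    Returns the updated (renormalized) policy.
    """
    current = sum(p * q for p, q in zip(pi, Q))
    average = sum(p * q for p, q in zip(avg_pi, Q))
    delta = delta_win if current >= average else delta_lose
    best = max(range(len(Q)), key=lambda a: Q[a])
    new_pi = [min(1.0, pi[a] + delta) if a == best
              else max(0.0, pi[a] - delta)
              for a in range(len(Q))]
    total = sum(new_pi)
    return [p / total for p in new_pi]
```

Using the smaller step while winning keeps the learner near its successful policy; the larger step while losing lets it adapt quickly, which is what drives convergence to an equilibrium against other learners.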

Playbook adaptation and coaching

Veloso's group has studied playbook adaptation for team strategy in adversarial domains. Her group also developed a coach agent that learns advice to give to a team of distributed agents by externally and globally observing the complete team playing in the presence of an opponent team. Using a predefined, universal coach language, the coach agent can advise any team of agents.

3 Search, planning, and knowledge representation

Another AI focus at Carnegie Mellon CSD is search, planning, and knowledge representation. This work is often intertwined with multiagent systems.

Search algorithms for market clearing

Sandholm pioneered the idea of mediated marketplaces that allow the participants to use highly expressive preference specification languages in order to reach better outcomes. He developed the world's fastest optimal algorithms for clearing such markets. On highly expressive real-world procurement auctions, his group has optimally solved problems with over 2.6 million bids and over 160,000 items to be procured, as well as instances with over 320,000 side constraints. The techniques combine tree search from AI, mixed integer programming from operations research, and dozens of techniques that he developed. Since 2002, he has used these techniques to clear over $20 billion of the most combinatorial procurement auctions ever conducted, creating value in the world (through increased economic efficiency) of over $2 billion. He has applied many of the techniques to other search problems as well.
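The core clearing problem here is winner determination: choose a set of non-conflicting bundle bids that maximizes total value. The brute-force sketch below makes the problem concrete on a tiny instance; it is exponential in the number of bids and purely illustrative, whereas the production engines described above rely on branch-and-bound tree search and mixed integer programming to reach millions of bids.

```python
from itertools import combinations

def clear_auction(bids):
    """Optimal winner determination for a tiny combinatorial auction by
    exhaustive search. Each bid is (set_of_items, price); an item can be
    sold at most once, so accepted bids must cover disjoint item sets.

    Returns (best_total_price, list_of_accepted_bids).
    Exponential in len(bids) -- for illustration only.
    """
    best_value, best_set = 0, []
    for r in range(1, len(bids) + 1):
        for combo in combinations(bids, r):
            items = set()
            feasible = True
            for bundle, _ in combo:
                if items & bundle:        # item already allocated
                    feasible = False
                    break
                items |= bundle
            if feasible:
                value = sum(price for _, price in combo)
                if value > best_value:
                    best_value, best_set = value, list(combo)
    return best_value, best_set
```

For instance, with bids of 5 for {a, b}, 7 for {b, c}, 3 for {c}, and 2 for {a}, the optimal clearing accepts the {b, c} and {a} bids for a total of 9.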

Search for homeland security

Carbonell and Fink have studied techniques for fast identification of matches in multi-attribute exchange markets, which allow fast-paced trading of complex non-standardized goods. They have also applied these matching techniques to a homeland-security project, focused on identifying suspicious and unexpected patterns in massive structured databases. For example, the developed techniques may allow the detection of money-laundering patterns in banking transactions.

Distributed planning

Guestrin is working on efficient distributed multiagent coordination, planning and learning. Using the factored Markov decision processes representation, which exploits problem-specific structure using Bayesian networks, he designed efficient approximate planning algorithms, leveraged by a novel linear programming decomposition technique. The decomposition technique yields efficient distributed algorithms for planning and learning in collaborative multiagent settings, where multiple decision makers must coordinate their actions to maximize a common goal. Guestrin also works on wireless sensor networks using efficient inference methods from probabilistic graphical models.

Probabilistic replanning

Veloso et al. introduced extended rapidly-exploring random trees (E-RRT) as a novel reuse strategy that solves the general replan/reuse question, in which a past plan probabilistically guides a new search. The replan algorithm considers an initial state, a path, and a goal to be achieved; from the initial state, it grows a search tree by extending towards the goal with probability p, towards a point in the path with probability r, and towards a random exploration target with probability 1 - p - r. The past (or failed) plan is effectively used as a bias in the new search, therefore solving the general reuse problem in a probabilistic manner.
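The target-selection step just described can be sketched directly; the function below picks the point the tree extends towards on each iteration. Names, the 2D state space, and the bounds handling are illustrative assumptions, not Veloso et al.'s implementation.

```python
import random

def choose_target(goal, old_path, bounds, p=0.5, r=0.3):
    """E-RRT-style target selection for one tree-extension step.

    With probability p, extend towards the goal; with probability r,
    towards a waypoint of the previous (possibly failed) path; otherwise
    towards a uniformly random point in the workspace.

    goal: (x, y); old_path: list of (x, y) waypoints;
    bounds: ((xmin, xmax), (ymin, ymax)). Requires p + r <= 1.
    """
    u = random.random()
    if u < p:
        return goal
    if u < p + r and old_path:
        return random.choice(old_path)   # bias towards the old plan
    (xmin, xmax), (ymin, ymax) = bounds
    return (random.uniform(xmin, xmax), random.uniform(ymin, ymax))
```

Tuning p and r trades off goal-directedness, reuse of the old plan, and exploration; setting r = 0 recovers ordinary goal-biased RRT.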

Learning domain-specific planners

Instead of hand-writing domain-specific planners to solve large-scale planning problems, Veloso uses example plans to demonstrate how to solve problems in a particular domain, and uses that information to automatically learn domain-specific planners that model the observed behavior. Her group developed the ITERANT algorithm for identifying repeated structures in observed plans and showed how to convert looping plans into domain-specific planners, or dsPlanners. Looping dsPlanners can apply experience acquired from the solutions to small problems to solve arbitrarily large ones. The automatically learned dsPlanners solve large-scale problems much more effectively than state-of-the-art general-purpose planners, handling problems many orders of magnitude larger than general-purpose planners can solve.
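The loop-identification step can be illustrated with a toy stand-in: find an action subsequence that repeats back-to-back in an observed plan, which is exactly the kind of structure a learned planner could replace with a loop. This sketch is not the ITERANT algorithm itself, only a minimal illustration of the idea.

```python
def find_repeated_block(plan):
    """Find the longest action subsequence that occurs twice in a row.

    plan: list of action names. Returns (start_index, block) for the
    longest immediately repeated block, or None if there is none.
    A toy stand-in for repeated-structure identification in plans.
    """
    n = len(plan)
    for size in range(n // 2, 0, -1):          # longest blocks first
        for start in range(0, n - 2 * size + 1):
            if plan[start:start + size] == plan[start + size:start + 2 * size]:
                return start, plan[start:start + size]
    return None
```

On the plan [pick, move, drop, pick, move, drop, home], it identifies the repeated [pick, move, drop] block, which a dsPlanner could turn into a loop over however many objects remain, allowing the learned planner to scale to arbitrarily large instances.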

Knowledge representation

Fahlman is working on Scone, a knowledge representation system. In addition to representing all kinds of knowledge (including "common sense" knowledge), Scone is designed to support efficient inference and search. Compared to some other knowledge-representation efforts, Scone's emphasis is on efficiency, scalability (up to a few million entities and statements about them), and ease of use. Members of the Scone research group are working on a natural-language front-end that will make it possible to add knowledge to Scone and to ask questions using simple English. Scone is intended to be a software component, useful in a large number of other software systems, much as databases are used today. As a longer-term goal, the Scone group is working to develop a flexible declarative representation for episodic memory, i.e., a hierarchical representation of action-types and event-types, along with the entities involved in and affected by each event.
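To make the kind of inference at stake concrete, the toy below stores an is-a hierarchy and lets entities inherit properties from their supertypes. It is a minimal illustration of type-hierarchy inference in general, not Scone's actual representation, algorithms, or API (Scone's point is making such queries fast at the scale of millions of entities).

```python
class TinyKB:
    """A toy is-a hierarchy with property inheritance.

    Illustrative only: real KR systems such as Scone use specialized
    representations to answer these queries efficiently at scale.
    """
    def __init__(self):
        self.parents = {}   # node -> set of direct supertypes
        self.props = {}     # node -> set of directly asserted properties

    def is_a(self, child, parent):
        self.parents.setdefault(child, set()).add(parent)

    def add_prop(self, node, prop):
        self.props.setdefault(node, set()).add(prop)

    def ancestors(self, node):
        """All supertypes reachable via is-a links (handles shared parents)."""
        seen, stack = set(), [node]
        while stack:
            for p in self.parents.get(stack.pop(), ()):
                if p not in seen:
                    seen.add(p)
                    stack.append(p)
        return seen

    def has_prop(self, node, prop):
        """A node has a property if it or any ancestor asserts it."""
        return any(prop in self.props.get(n, ())
                   for n in {node} | self.ancestors(node))
```

Asserting that Clyde is an elephant, elephants are mammals, and mammals are warm-blooded lets the query "is Clyde warm-blooded?" succeed by inheritance, the classic common-sense inference pattern such systems are built around.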
