Research Areas - Networking Research in the Computer Science Department at Carnegie Mellon

 

CSD faculty: Dave Andersen, Mor Harchol-Balter, Robert Harper, Adrian Perrig (ECE), Srini Seshan, Peter Steenkiste, Hui Zhang

 

Over the past few decades, the Internet has grown from a small experimental network that served as a playground for researchers into a global infrastructure that connects hundreds of millions of people. Today, the Internet stands as the largest and most complex computer system ever created.

Complexity is now the dominant constraint on the design, engineering, and management of computer networks and protocols. The first consequence of this complexity is that, even though the Internet and its underlying protocols are engineered artifacts, it has become increasingly difficult to understand how the Internet works and how the various protocols interact with one another. Complexity also contributes directly to the fragility of the Internet: a minor disturbance or change in one part of the network may have large-scale cascading effects, causing widespread failures and performance degradation in applications and critical network services. The chains of causal relationships behind these failures are poorly understood today. The problem is exacerbated by the increasing need to rapidly develop and deploy new capabilities (such as security, multicast, and mobility). The challenge is to support these new capabilities without increasing the complexity of the overall system and, in addition, to understand how new solutions will perform and interact with other components of the network.

Networking researchers in the Computer Science Department at Carnegie Mellon University are engaged in fundamental research to understand, contain, and reduce network complexity. These efforts fall into four broad categories, described below.

 

1 Internet Measurement and Measurement-Driven Protocol Design

Over the past several years, Carnegie Mellon researchers, in collaboration with industrial partners such as Akamai and AT&T, have conducted some of the largest network measurement studies to date. A distinguishing aspect of Carnegie Mellon's Internet measurement research is that our studies are purposely designed to aid the development and evaluation of new protocols, a methodology we call Measurement-Driven Protocol Design.

As an example, consider the multihoming route selection study led by Srinivasan Seshan and Bruce Maggs, in collaboration with IBM and Akamai. First, they identified key links belonging to various carrier ISPs that could directly limit the Internet performance of end-networks. Second, they studied how end-networks can employ a clever route selection technique, called multihoming route control, to avoid these performance bottlenecks and obtain much better Internet performance. Third, they investigated whether improvements in network link capacities over time will eliminate Internet performance problems altogether; they observed that the Internet's topological structure and routing might, in fact, worsen the situation in the future. Finally, they outlined simple changes to the topology of the Internet that can ensure the robustness and efficiency of the future network.
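
To make the route-control idea concrete, the sketch below (a rough illustration, not the actual system studied) probes a set of hypothetical upstream ISP links and steers new traffic over whichever link currently shows the lowest average latency; the link names and the latency "probe" are placeholders for real measurements such as pings through each provider.

    # Hypothetical sketch of multihoming route control: probe each upstream
    # ISP link and prefer the link with the lowest observed latency.
    import random

    UPSTREAM_LINKS = ["isp_a", "isp_b", "isp_c"]   # hypothetical provider links

    def probe_latency(link: str) -> float:
        """Stand-in for an active measurement over one ISP link (milliseconds)."""
        return random.uniform(10.0, 100.0)         # simulated measurement

    def pick_best_link(samples_per_link: int = 5) -> str:
        """Choose the upstream link with the lowest average probe latency."""
        averages = {}
        for link in UPSTREAM_LINKS:
            probes = [probe_latency(link) for _ in range(samples_per_link)]
            averages[link] = sum(probes) / len(probes)
        return min(averages, key=averages.get)

    if __name__ == "__main__":
        print("routing new flows via", pick_best_link())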

Other projects include the bandwidth measurement tool project led by Peter Steenkiste (in collaboration with AT&T), the routing design project led by Hui Zhang (in collaboration with AT&T), the DNS study by Srinivasan Seshan and Bruce Maggs (in collaboration with Akamai), the feasibility study of large-scale End System Multicast (ESM) video streaming led by Hui Zhang and Bruce Maggs (in collaboration with Akamai), the Global Network Positioning (GNP) project led by Hui Zhang and Eugene Ng, and the resilient overlay routing project led by David Andersen. Together, these projects have significantly improved our understanding of the Internet and of research methodologies for designing scalable and robust protocols for the complex Internet environment.

In our research, we also emphasize building measurement tools and data repositories that facilitate this research, both for ourselves and for other researchers. As an example, David Andersen is leading an effort to build a large-scale measurement collection and analysis framework called the Internet Datapository. The Datapository acts as a collection point for data streaming from probe machines located around the world, provides researchers with a unified interface for analyzing and comparing all of this data, and supports mechanisms that let researchers contribute their own data and analysis tools.
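
As a rough illustration of how a probe machine might feed such a repository (the URL, record format, and helper names below are hypothetical, not the Datapository's actual interface), consider:

    # Hypothetical probe client for a shared measurement repository.
    import json
    import time
    import urllib.request

    REPOSITORY_URL = "http://datapository.example.org/submit"   # hypothetical endpoint

    def collect_sample() -> dict:
        """Stand-in for a real measurement (e.g., one ping or traceroute result)."""
        return {"probe": "probe-42", "target": "198.51.100.7",
                "rtt_ms": 37.2, "timestamp": time.time()}

    def submit(sample: dict) -> None:
        """Upload one measurement record to the repository as JSON."""
        body = json.dumps(sample).encode("utf-8")
        request = urllib.request.Request(REPOSITORY_URL, data=body,
                                         headers={"Content-Type": "application/json"})
        urllib.request.urlopen(request)   # response ignored in this sketch

    if __name__ == "__main__":
        try:
            submit(collect_sample())
        except OSError as err:
            print("submission failed (endpoint is hypothetical):", err)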

 

2 Next Generation Network and Protocol Architecture

Due to the phenomenal success of the Internet, most networking researchers today are working on solutions that incrementally improve the Internet, with the implicit assumption that radical new solutions are not needed or have no chance of ever being deployed. IP, the technical foundation of the Internet, is widely regarded, by both the general and the technical community, as the convergence layer for all communication infrastructures and services.

The 100x100 project is exploring an alternative, clean-slate approach to the design of the next-generation network by posing the following question: given the benefit of hindsight and our current understanding of network requirements and technologies, if we were not bound by existing design decisions and could design the network and its protocols from first principles (a clean slate design), what should the design be? The project involves a multi-institution, interdisciplinary team (with Carnegie Mellon as the lead, together with Berkeley, Stanford, Rice, Fraser Research, and Internet2) that includes network researchers, economists, and security researchers.

Network control is the first area we have identified as needing radical improvement. Originally designed to support only best-effort delivery, today's network control system must also support network-level objectives such as traffic engineering, survivability, security, and policy enforcement, in environments ranging from data-center and enterprise networks to service-provider networks. Retrofitting these objectives onto today's box-centric control architecture has led to bewildering complexity, with diverse state and logic distributed across numerous network elements and management systems. This complexity is responsible for the increasing fragility of IP networks and the tremendous difficulty operators face in understanding and managing their networks. Continuing on the path of incremental evolution would only add point solutions that exacerbate the problem. Instead, we advocate re-architecting the control and management functions of data networks from the ground up. The 4D project, led by Hui Zhang in collaboration with AT&T and Princeton, aims to dramatically simplify network control and management by introducing network-wide abstractions and direct control primitives.
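
The following minimal sketch conveys the flavor of network-wide direct control (it is a hypothetical illustration, not the 4D implementation): a single decision element computes shortest paths over a global view of a made-up topology and prints the forwarding entries it would push to each router.

    # Sketch of centralized, network-wide route computation: one decision
    # element with a global topology view derives each router's next hops.
    import heapq

    # Hypothetical topology: router -> {neighbor: link cost}.
    TOPOLOGY = {
        "r1": {"r2": 1, "r3": 4},
        "r2": {"r1": 1, "r3": 1, "r4": 2},
        "r3": {"r1": 4, "r2": 1, "r4": 1},
        "r4": {"r2": 2, "r3": 1},
    }

    def next_hops(source: str) -> dict:
        """Dijkstra from `source`; return {destination: first hop from source}."""
        dist = {source: 0}
        first_hop = {}
        heap = [(0, source, None)]
        while heap:
            d, node, hop = heapq.heappop(heap)
            if d > dist.get(node, float("inf")):
                continue                      # stale queue entry
            if hop is not None:
                first_hop[node] = hop
            for neighbor, cost in TOPOLOGY[node].items():
                nd = d + cost
                if nd < dist.get(neighbor, float("inf")):
                    dist[neighbor] = nd
                    heapq.heappush(heap, (nd, neighbor, hop if hop else neighbor))
        return first_hop

    if __name__ == "__main__":
        # The forwarding state the decision element would push to each router.
        for router in TOPOLOGY:
            print(router, next_hops(router))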

Security is another of our focus areas. The Internet as it stands today is plagued by a wide variety of malicious attacks, such as email viruses, worms, DoS attacks, and DDoS attacks. Much research has been done to improve the accuracy and response time of attack detection. However, this is clearly an arms race in which new attacks will be invented to outwit existing signature-based detection and analysis techniques. In the Dragnet project, led by Hui Zhang and Mike Reiter, we try to identify primitives that, if built into the network architecture, can break this arms race and provide the tools needed to obtain security against attacks initiated remotely across the network. In particular, we are investigating the feasibility and advantages of making auditing and forensic capabilities a fundamental building block of a network security architecture.
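
As one illustration of an auditing primitive (in the spirit of hash-based packet-digest schemes such as SPIE; this is not the Dragnet design itself), a router could store compact digests of the packets it forwards so that investigators can later ask whether a given packet passed through it:

    # Illustrative packet-auditing primitive: keep short digests of forwarded
    # packets at a router to support after-the-fact forensic queries.
    import hashlib

    class PacketAuditLog:
        """Keeps a set of short packet digests observed at one router."""

        def __init__(self):
            self.digests = set()

        @staticmethod
        def digest(packet_bytes: bytes) -> bytes:
            # Digest over packet content, truncated to save space.
            return hashlib.sha256(packet_bytes).digest()[:8]

        def record(self, packet_bytes: bytes) -> None:
            self.digests.add(self.digest(packet_bytes))

        def seen(self, packet_bytes: bytes) -> bool:
            """Forensic query: was this packet forwarded by this router?"""
            return self.digest(packet_bytes) in self.digests

    if __name__ == "__main__":
        log = PacketAuditLog()
        log.record(b"suspicious payload")
        print(log.seen(b"suspicious payload"))   # True
        print(log.seen(b"unrelated packet"))     # False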

We also investigate other fundamental elements that need to be added to the network architecture. For example, Andersen and Seshan have shown in their previous work that giving end-points a choice of paths through the network helps them achieve much higher end-to-end availability and performance. Given these benefits, Andersen is exploring a set of simple primitives to make path selection a fundamental component of the future network architecture.
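
A hedged sketch of what such a path-selection primitive might look like at an endpoint (the relay names are hypothetical and the liveness probe is deliberately crude): prefer the direct path, and fall back to relaying through an overlay node when the direct path appears to be down.

    # Sketch of endpoint path choice via an overlay: use the direct path when
    # it is alive, otherwise relay through an intermediate overlay node.
    import socket

    OVERLAY_RELAYS = ["relay1.example.org", "relay2.example.org"]  # hypothetical nodes

    def path_alive(host: str, port: int = 80, timeout: float = 1.0) -> bool:
        """Crude liveness probe: can we open a TCP connection within the timeout?"""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def choose_path(destination: str) -> str:
        """Prefer the direct path; otherwise fall back to a live overlay relay."""
        if path_alive(destination):
            return f"direct -> {destination}"
        for relay in OVERLAY_RELAYS:
            if path_alive(relay):
                return f"via {relay} -> {destination}"
        return "no working path found"

    if __name__ == "__main__":
        print(choose_path("www.example.com"))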

 

3 Self-Managing Wireless Networks

Until recently, most dense deployments of wireless networks were in campus-like environments, where experts carefully plan cell layout, sometimes using special tools. The rapid deployment of cheap 802.11 hardware and other personal wireless technology (2.4 GHz cordless phones, Bluetooth devices, etc.), however, is quickly changing the nature of deployed wireless networks. Increasingly, the wireless networks found in residential neighborhoods, shopping malls, and apartment buildings are dense. The resulting complexity makes it increasingly difficult even for professionals, let alone non-experts, to plan and manage these networks. As a consequence, these networks often suffer from serious contention, poor performance, and security problems.

To address this challenge, Peter Steenkiste and Srini Seshan are leading a project on the self-management of wireless networks. The research is organized into a set of stages that use increasingly advanced technology. Our research today focuses on auto-configuring the parameters that are accessible in today's wireless systems (channel, power, SSID, operational mode). As emerging technologies such as directional antennas and cognitive radios become available, they will be used to further improve the performance, security, and manageability of wireless networks.
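
As a concrete, simplified example of one such task, channel auto-configuration, the sketch below (hypothetical scan data, not the project's actual algorithm) picks the 2.4 GHz channel with the least observed interference from neighboring access points.

    # Toy channel auto-configuration: scan the non-overlapping 2.4 GHz
    # channels and pick the one with the fewest or weakest neighboring APs.

    # Hypothetical scan results: channel -> list of neighbor RSSI values in dBm.
    SCAN = {
        1:  [-40, -62, -70],
        6:  [-75],
        11: [-55, -80],
    }

    def interference_score(rssi_values: list) -> float:
        """Higher score = more or stronger neighbors on that channel."""
        # Stronger (less negative) signals contribute more interference.
        return sum(100 + rssi for rssi in rssi_values)

    def pick_channel(scan: dict) -> int:
        return min(scan, key=lambda ch: interference_score(scan[ch]))

    if __name__ == "__main__":
        print("configuring AP on channel", pick_channel(SCAN))   # channel 6 here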

A key challenge in studying wireless networks is the difficulty of performing repeatable and realistic experiments. Techniques that have proven successful for wired networks (e.g., testbeds such as PlanetLab) are inadequate for analyzing wireless networks. The reason is that while the physical layer can often be ignored in wired networks, in wireless networks the physical layer fundamentally affects operation at all layers of the protocol stack. Researchers have used many different techniques to evaluate wireless protocols, but none is very attractive. Running experiments with real hardware and software is highly realistic, but this approach faces serious repeatability and controllability challenges. Simulation avoids these problems, but it faces formidable challenges in terms of realism, since the simulator has to recreate all layers of the system. Peter Steenkiste and Dan Stancil (ECE) have developed a new FPGA-based emulation approach that combines the benefits of simulation (repeatability, configurability, isolation from production networks) and real-world experimentation (a high level of realism), opening the door to new, rigorous ways of evaluating wireless network protocols.
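
To see why the physical layer cannot be ignored, consider the textbook log-distance path-loss model sketched below; it is included only as an illustration of the kind of effect an emulator must reproduce, and is far simpler than the emulator's actual channel models.

    # Textbook log-distance path-loss calculation: a small change in distance
    # shifts received power enough to change link-layer and higher behavior.
    import math

    def received_power_dbm(tx_power_dbm, distance_m, ref_distance_m=1.0,
                           path_loss_at_ref_db=40.0, path_loss_exponent=3.0):
        """Received power (dBm) under the log-distance path-loss model."""
        loss_db = path_loss_at_ref_db + 10.0 * path_loss_exponent * math.log10(
            distance_m / ref_distance_m)
        return tx_power_dbm - loss_db

    if __name__ == "__main__":
        # Moving from 10 m to 30 m costs about 14 dB with these parameters.
        print(received_power_dbm(20.0, 10.0))   # about -50 dBm
        print(received_power_dbm(20.0, 30.0))   # about -64.3 dBm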

 

4 Protocol Design

The IP service model has stayed largely unchanged ever since it was invented 30 years ago. There have only been two significant efforts to make changes to the service model: QoS (Intserv and Diffserv) and Multicast (IP Multicast). Carnegie Mellon researchers have made fundamental contributions to the design of both QoS and multicast protocols.

In the area of QoS, most existing QoS resource management architectures require a stateful network, i.e., routers need to maintain per-flow state. There have been concerns, at both the philosophical and the technical level, that it might be too expensive to engineer a stateful network that is highly robust (with respect to network failures) and scalable (with respect to the number of flows, the number of nodes, and link speed). In contrast, the current Internet is based on a stateless architecture. While such a stateless network is usually more robust and scalable than a stateful one, it cannot provide the rich QoS functionality demanded by applications, administrators, and service providers. Ion Stoica (then at Carnegie Mellon, now at Berkeley), Hui Zhang, and Scott Shenker (Berkeley) proposed an architecture that does not require core routers to maintain per-flow state yet can provide QoS services similar to those provided by stateful networks. This was the first architecture to combine the advantages of stateful and stateless networks. Stoica's Ph.D. thesis, which embodied this work, won the 2001 ACM Doctoral Dissertation Award.
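
One concrete instance of this stateless-core idea is the core-router drop rule of core-stateless fair queueing (CSFQ): edge routers label each packet with its flow's estimated rate, and core routers, keeping no per-flow state, drop packets with probability max(0, 1 - fair_share/label). The sketch below uses a fixed fair-share value for illustration; the published algorithm estimates the fair share adaptively.

    # Rough sketch of the CSFQ core-router drop rule. Flows sending above the
    # fair share see proportionally more drops, without per-flow state.
    import random

    def forward_packet(label_rate: float, fair_share: float) -> bool:
        """Return True if the core router forwards the packet, False if dropped."""
        drop_prob = max(0.0, 1.0 - fair_share / label_rate)
        return random.random() >= drop_prob

    if __name__ == "__main__":
        fair_share = 1.0   # Mbps, assumed fair-share estimate at this core router
        for rate in (0.5, 1.0, 4.0):                 # labeled flow rates in Mbps
            kept = sum(forward_packet(rate, fair_share) for _ in range(10000))
            print(f"flow rate {rate} Mbps -> ~{kept / 100:.0f}% of packets forwarded")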

In the area of multicast, prior to Yanghua Chu, Sanjay Rao, and Hui Zhang's work on End System Multicast, it had been taken for granted that multicast functionality should be implemented at the IP level. In the decade after Deering's seminal SIGCOMM '88 paper on IP multicast, hundreds of technical papers and tens of Ph.D. dissertations were written on various aspects of IP multicast: multicast routing, reliable multicast, secure multicast, multicast applications, and so on. These included SIGCOMM and SIGMM award papers, as well as Steve McCanne's thesis, which received the 1997 ACM Doctoral Dissertation Award. In addition, IP Multicast was accepted by the IETF, the standards organization for the Internet, as the standard for both IPv4 and IPv6. Significant investments were made by industry to implement IP multicast in routers (Cisco, Juniper, etc.) and host operating systems (Windows, Linux, etc.).

The End System Multicast (ESM) project at Carnegie Mellon and the Yoid project (by Paul Francis) were among the first to argue that IP Multicast is the wrong approach to supporting multipoint applications in the Internet. Together, they laid the architectural foundation for overlay multicast, which has since evolved into a mainstream research area.
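
The core mechanism is simple enough to sketch: end hosts organize themselves into an overlay tree, and each host re-forwards data to its children using ordinary unicast. The tree below is hypothetical, and the real ESM protocols also handle group membership, tree optimization, and failure recovery.

    # Minimal overlay-multicast sketch: multicast is implemented entirely by
    # end hosts, each forwarding data to its children over unicast.

    # Hypothetical overlay tree: parent host -> list of child hosts.
    OVERLAY_TREE = {
        "source": ["hostA", "hostB"],
        "hostA":  ["hostC", "hostD"],
        "hostB":  ["hostE"],
        "hostC":  [], "hostD": [], "hostE": [],
    }

    def unicast(sender: str, receiver: str, data: bytes) -> None:
        """Stand-in for an ordinary point-to-point transfer (e.g., over TCP)."""
        print(f"{sender} -> {receiver}: {data!r}")

    def multicast(node: str, data: bytes) -> None:
        """Each end host re-forwards the data to its children in the overlay."""
        for child in OVERLAY_TREE[node]:
            unicast(node, child, data)
            multicast(child, data)

    if __name__ == "__main__":
        multicast("source", b"video frame 1")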

 

 
