VENKATESAN GURUSWAMI
Associate Professor, Computer Science

My research interests span several topics in theoretical computer science, including algorithmic coding theory, the role of randomness in computation, pseudorandomness and explicit combinatorial constructions, the theory of probabilistically checkable proofs, and the computational complexity of approximate optimization. Below I briefly describe some of these research directions.

Coding theory: Error-correcting codes provide a systematic way to add redundancy to data so that errors that occur during storage/transmission can be tolerated. In addition to their obvious importance to reliable storage and communication, codes have found many "extraneous" applications in complexity theory and cryptography, and are by now indispensable items in the toolkit of theoretical computer scientists. The basic principle in the design of codes is to ensure that distinct messages get encoded into codewords that are far apart, so that one will not be confused with another even in the presence of some errors. The broad challenges in coding theory are three-fold: (i) formally specifying the noise model and understanding the potential and limits of error-correction for that model; (ii) finding and explicitly specifying good configurations of codewords; and (iii) exploiting the structure of the designed code to efficiently recover the transmitted codeword from a noisy received word. The game thus is to construct codes with low redundancy together with fast algorithms to correct large amounts of noise. I have been working towards this long-term goal for several years, and in particular have extensively studied the algorithmic, combinatorial, and complexity-theoretic aspects of a notion of error-correction called "list decoding," which allows recovery from roughly twice as many errors as traditional error-correction algorithms. I am also excited by the practical potential of list decoding and am interested in experiments to validate its utility for realistic noise models.
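A minimal sketch of the unique-versus-list-decoding distinction, using the classical [7,4] Hamming code and brute-force search (a toy illustration in Python, not one of the efficient algebraic list decoders referred to above):

    from itertools import product

    def hamming(a, b):
        return sum(x != y for x, y in zip(a, b))

    # Encode 4 data bits with the classical [7,4] Hamming code
    # (minimum distance 3, so unique decoding handles 1 error).
    def encode(m):
        d1, d2, d3, d4 = m
        p1, p2, p3 = d1 ^ d2 ^ d4, d1 ^ d3 ^ d4, d2 ^ d3 ^ d4
        return (p1, p2, d1, p3, d2, d3, d4)

    codewords = [encode(m) for m in product((0, 1), repeat=4)]

    def list_decode(received, radius):
        """Brute force: return every codeword within the given Hamming radius."""
        return [c for c in codewords if hamming(c, received) <= radius]

    sent = encode((1, 0, 1, 1))
    noisy = list(sent)
    noisy[0] ^= 1
    noisy[4] ^= 1                          # two errors: beyond the unique radius
    noisy = tuple(noisy)

    print(sent in list_decode(noisy, 1))   # False: radius 1 mis-decodes
    print(sent in list_decode(noisy, 2))   # True: the list still contains sent

Because the [7,4] code is perfect, every word lies within distance 1 of some codeword, so two errors do not merely defeat unique decoding, they silently yield the wrong codeword; decoding at twice the radius returns a short list that still contains the transmitted word.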

Theory of approximation algorithms: Many computational tasks arising in practice can be cast as optimization problems, where the goal is to find a solution subject to some constraints that optimizes a certain objective value. Unfortunately, for most interesting optimization problems, finding an optimal solution is NP-hard. One versatile approach to coping with this intractability is to settle for approximation algorithms that find approximately optimal solutions with provable guarantees on quality (for example, a solution of cost at most 20% more than the optimum). Ideally, for each problem of interest, we would like an algorithm with a provable performance ratio along with a hardness result ruling out a better ratio, thereby identifying the problem's approximation threshold. Tremendous progress has been made on this topic in the last two decades, thanks both to algorithmic techniques such as linear and semidefinite programming and metric embeddings, and to breakthroughs on the hardness side using the machinery of probabilistically checkable proofs (PCPs) and new coding-theoretic, probabilistic, and (Fourier) analytic tools. My own ongoing research has contributed to this subject using a wide range of techniques, proving strong, and often near-optimal, bounds on the approximability of a broad class of fundamental optimization problems, including constraint satisfaction problems, ranking problems, graph coloring and independent sets, disjoint paths and low-congestion routing, and graph partitioning. The power of semidefinite programming and its relation to the Unique Games (and related) problems is an important focus of our current research.
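As a self-contained illustration of a provable approximation guarantee, here is the textbook maximal-matching algorithm for minimum vertex cover (a standard example, not one of the results described above):

    # Greedily pick a maximal matching and take both endpoints of every
    # matched edge. Any cover must use at least one endpoint of each
    # matched edge, so the output is at most twice the optimum.
    def vertex_cover_2approx(edges):
        cover = set()
        for u, v in edges:
            if u not in cover and v not in cover:   # edge still uncovered
                cover.update((u, v))
        return cover

    # A 5-cycle: the optimum cover has 3 vertices (e.g. {0, 2, 4}), so the
    # algorithm is guaranteed at most 6; here it returns 4.
    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
    print(sorted(vertex_cover_2approx(edges)))      # [0, 1, 2, 3]

A matching hardness result would show that no polynomial-time algorithm can guarantee a better ratio (unless P = NP), and together the two pin down the problem's approximation threshold.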

Pseudorandomness: Pseudorandomness is a broad area that deals with efficiently generating objects that exhibit the desirable properties of truly random ones despite being constructed either explicitly or with limited randomness. Such pseudorandom constructions are important in the study of error-correcting codes, complexity theory, combinatorics, cryptography, and high-dimensional geometry. My research in recent years has addressed several of these construction problems and led to good constructions of error-correcting codes, expander graphs, randomness extractors, Euclidean sections, and compressed sensing matrices. Despite the seemingly different definitions and motivations for the study of these objects, much of this progress was based on insights uncovering intimate connections between them. I am interested in strengthening these constructions where possible, and more broadly in expanding this body of work to find explicit constructions of other pseudorandom objects that arise in emerging applications (a salient example is the concept of subspace-evasive sets, which found a surprising application in some of our recent work on list decoding).
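A minimal sketch of the flavor of randomness extraction, using the classical von Neumann trick for a biased coin (much simpler than the extractors referred to above, which handle far weaker sources):

    import random

    # Pair up independent flips of a coin with unknown bias p, drop equal
    # pairs, and output the first bit of each unequal pair. Since the pairs
    # (0,1) and (1,0) both occur with probability p(1-p), the output bits
    # are exactly unbiased, whatever p is.
    def von_neumann_extract(bits):
        return [a for a, b in zip(bits[::2], bits[1::2]) if a != b]

    flips = [1 if random.random() < 0.8 else 0 for _ in range(10000)]
    out = von_neumann_extract(flips)
    print(sum(out) / len(out))        # close to 0.5 despite the 0.8 bias

Modern extractors work from sources with far weaker guarantees than independent biased flips, which is where the connections to expanders and codes mentioned above come into play.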

 
