When the 2000 Nobel prizes were announced, a New York Times article highlighted a theme common to all the science prizes. The physics prize had gone to three scientists who invented microelectronic chips and other computer components. The chemistry prize had been bestowed on researchers who enabled plastics to conduct electricity, a technology that could lead to foldable, low-energy video displays. The physiology prize had been given to researchers who clarified an aspect of how the brain transmits information. "There is no Nobel Prize for computer science. But obliquely and perhaps unconsciously, the judges were using the tools at their disposal to recognize how formidable the notion of information has become, pervading not just the technologies we devise but the way we think about ourselves," George Johnson wrote.
To four theoretical computer scientists at Berkeley, the article was a public affirmation of a vision they had recently put forth in a joint research proposal. "We saw computation as a kind of lens through which to view the world, and we felt that this viewpoint would become increasingly important in the 21st century," says Umesh Vazirani, one of the four.
Independently, each had been using ideas from computation to give a new perspective on major research challenges in other disciplines. Vazirani was attempting to determine the power and limitations of a new model of computation based on quantum physics. Christos Papadimitriou was trying to understand computation's role in economics. Alistair Sinclair was using computation to gain a new perspective on models from statistical physics, and Richard Karp was looking at computation's role in biology (see story on page X). "People have worked for a long time on problems generated by other sciences, but this is a little different," says Sinclair. "We're not just taking some application and coming up with an algorithm for it. We're getting at something deeper: looking at a natural process or mechanism from a computational viewpoint gives a completely different perspective."
The goal of the research proposal, subsequently funded by the NSF and DARPA, was to build on this theme. In 2002, the researchers convened an unusual multidisciplinary conference, showcasing the role of computation in physics, economics, and biology, and sketching a bold research agenda for the future, an effort that has continued to gain momentum. "We believe computer science can transform in a very profound way how other disciplines are going about their business," Papadimitriou says. "That is our rallying cry."
In 1992, Umesh Vazirani stumbled upon a paper written 10 years earlier by the celebrated physicist Richard Feynman. In the paper, titled "Simulating Physics with Computers," Feynman had observed that the familiar model of a computer, in which information is manipulated and stored according to the principles of classical physics, appeared incapable of efficiently simulating a quantum system of even a few particles. If that is so, Vazirani reasoned, the flip side is an exciting prospect: a computer that harnesses quantum physics might be more powerful than a classical computer.
At the time, the gospel of computer science was an axiom about computation developed in the 1960s and 1970s, known as the Extended Church-Turing thesis. The thesis postulates that there is essentially only one general-purpose model of computation: no computer, no matter what its design, can solve problems asymptotically faster than any other. A computer based on quantum physics might violate this thesis, Vazirani realized. Moreover, a quantum computer could have profound implications for quantum physics itself, providing a means of testing an unproved hypothesis of the theory.
"Once I realized there are deep connections between quantum physics and computation, I had to learn quantum physics," Vazirani says, and he did. A year later, Vazirani, together with Ethan Bernstein, his Ph.D. student at the time, showed a quantum computer is digital and can be programmed, and came up with a clever technique for lining up quantum states and measuring them to perform Fourier sampling, a problem thought to be beyond the reach of a classical computer. Their result showed that quantum computation apparently violates the Extended Church-Turing thesis.
A classical bit is typically represented by an electrical current or voltage that is high or low. Bernstein and Vazirani envisioned quantum bits as states of elementary particles, which, by the mysterious laws of quantum mechanics, can be in several classical states at once. The nucleus of a hydrogen atom, for instance, is not necessarily just spin up or spin down, but a linear combination of both. As a quantum system grows, the number of parameters needed to describe it grows exponentially, suggesting that a quantum system can hold vastly more information than its classical counterpart.
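In the standard notation of quantum mechanics (added here for illustration; the article does not spell it out), a single qubit is a weighted superposition of its two classical values, and a system of n qubits is a superposition of all 2^n classical bit strings. The weights below are the exponentially many parameters in question:

```latex
\lvert\psi\rangle \;=\; \alpha\,\lvert 0\rangle + \beta\,\lvert 1\rangle,
\qquad
\lvert\Psi\rangle \;=\; \sum_{x \in \{0,1\}^n} \alpha_x\,\lvert x\rangle,
\qquad
\sum_{x} \lvert\alpha_x\rvert^2 = 1 .
```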
That the description of the state of a quantum system grows exponentially is an essential aspect of quantum theory that has never been tested, says Vazirani. "Physicists have missed the significance of this point, but the computational perspective underscores its importance." Demonstrating that a quantum computer is exponentially faster at some task than a classical one would prove this fundamental hypothesis about quantum physics.
Accessing the extra information in a quantum system, however, involves performing a measurement, which collapses the quantum state into a classical one and loses much of the information. Bernstein and Vazirani's Fourier sampling algorithm featured a clever technique for using a measurement to distill information.
A year later, Peter Shor, now a professor of mathematics at the Massachusetts Institute of Technology, built upon this technique to devise an efficient quantum algorithm for factoring integers, an achievement that could have dramatic practical consequences. The difficulty of factoring on classical computers is the basis of the RSA cryptosystem, which is widely used to secure financial transactions over the Internet. If someone succeeds in building a working quantum computer of sufficient size, such cryptosystems will be broken.
At the same time, several results hinted at limits to quantum computation's power. In 1994, Bernstein, who is now at Microsoft, and Vazirani joined forces with two other researchers to give evidence that the NP-complete problems are unlikely to fall to a quantum computer. With Charles Bennett of IBM's Thomas J. Watson Research Center in Yorktown Heights, N.Y. and Gilles Brassard of the University of Montreal in Canada, they showed that if finding a solution to an NP-complete problem is simply a matter of finding a needle in an exponentially large haystack, then a quantum computer could not help all that much: it would provide only quadratic gains over what a classical computer could achieve. "We were too ambitious," says Vazirani. "We were trying to solve NP-complete problems on a quantum computer, and then we realized that there was a fundamental reason why we kept failing."
Vazirani's current efforts are aimed at finding a function that is easy to compute classically, but for which there is evidence that its inverse, unlike factoring, is hard to compute on a quantum computer. Such a function could be a basis for a public key cryptosystem that would be robust against quantum computation. "We are trying to find a cure to the havoc that a quantum computer would wreak," he says.
The Internet, unlike traditional computing environments, provides a wealth of new arenas for strategic interaction. There is competition for the use of pipelines and routers, for the use of server and processor time, and in the increasingly popular online auctions for advertisements, goods, and services.
Not surprisingly, computer scientists have been adopting concepts from game theory, the study of strategic behavior, to analyze computer networks and online markets, and the Internet as a whole. At the same time, computer scientists are also leaving their mark on economics, introducing new ways of understanding the effects of strategic behavior and efficient algorithms for finding market equilibria. Christos Papadimitriou has been in the vanguard of these efforts, taking key ideas from his areas of expertise -- algorithms and complexity theory -- and using them to yield a powerful new perspective on game theory.
In 1998, Papadimitriou -- together with University of Athens professor Elias Koutsoupias -- combined ideas from economics and computer science to define the price of anarchy, a measure of the loss in efficiency when a system is operated through the competitive interaction of participants, rather than socially optimized by an all-knowing central controller. For example, the system might be a network in which users are choosing routes that minimize their own packets' delays, but slowing the network as a whole. In this case, the price of anarchy would measure how the total network delay incurred by such selfish routing compares to that of the optimal routing, which minimizes overall delay.
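In symbols (a standard formulation, not spelled out in the article), if C measures the social cost of a configuration of the system, the price of anarchy compares the worst Nash equilibrium to the social optimum:

```latex
\mathrm{PoA} \;=\; \frac{\displaystyle\max_{s \,\in\, \mathrm{Nash}} C(s)}{\displaystyle\min_{s} C(s)} \;\ge\; 1 .
```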
The price of anarchy was inspired by standard measures of efficiency for algorithms that run in restricted environments, Papadimitriou says. Online algorithms, for example, have limited access to information, while approximation algorithms assume limited computational resources. Analogously, the price of anarchy measures the worst-case cost of decentralization, a cost that had not been systematically quantified by economists. "Economists never embarked on a systematic investigation of efficiency loss, if only because the social optimum is not an option," Papadimitriou says. On the other hand, "computer scientists often work with engineered systems where the social optimum is the status quo."
For a simple network model of two nodes and parallel pathways of equal link speeds, Papadimitriou and Koutsoupias showed that the price of anarchy is 3/2; that is, selfish routing produces a network that is 3/2 times slower than an optimally managed one. For a network in which the delays along each pipeline are proportional to the flow, the price of anarchy is 4/3, as Cornell researchers showed in 2000. Such results suggest that for network routing, "an all-powerful coordinator might make things a little more efficient, but not enough to be worth the trouble," Papadimitriou says. "A factor of 4/3 is absorbed by the galloping technology in a couple of months." Researchers in electrical engineering and operations research have since looked at the price of anarchy for decentralized market resource allocation, and for decentralized power markets.
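The 4/3 figure can be seen in a tiny two-link network of the kind often used to illustrate proportional delays (a Pigou-style example added here for illustration; it is not drawn from the article). One unit of traffic chooses between a link with fixed delay 1 and a link whose delay equals the fraction of traffic using it; the sketch compares the selfish and the optimal outcome.

```python
# Illustrative two-link network: link A has constant delay 1; link B has
# delay equal to the fraction x of traffic routed through it.

def total_delay(x):
    """Total delay when a fraction x of the traffic takes link B."""
    return (1 - x) * 1 + x * x

# Selfish routing: link B never costs more than link A, so all traffic
# piles onto B and the total delay is 1.
selfish_cost = total_delay(1.0)

# Optimal routing: pick the split that minimizes total delay (grid search).
optimal_cost = min(total_delay(i / 1000) for i in range(1001))

print(selfish_cost / optimal_cost)  # about 1.33 -- the 4/3 price of anarchy
```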
Recently, Papadimitriou addressed a fundamental question about Nash equilibria, a central concept of game theory. A Nash equilibrium is a steady-state configuration of a market in which no agent has the incentive to switch unilaterally to a different strategy. Such equilibria always exist in finite games, and they have long been assumed to be the natural outcome in a competitive environment. Yet, at the same time, there are no known efficient algorithms for computing Nash equilibria, which raises a philosophical question: Is it meaningful to know that a market equilibrium exists if the participants cannot compute it? Papadimitriou thinks not. "If your laptop can't find it, then neither can the market," he says.
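For concreteness (an illustration added here, not an algorithm from the article), the defining condition can be checked mechanically in a small two-player game given by payoff matrices: a pair of mixed strategies is a Nash equilibrium precisely when neither player can do better by switching to any single pure strategy.

```python
import numpy as np

def is_nash(A, B, x, y, tol=1e-9):
    """Check whether mixed strategies (x, y) form a Nash equilibrium of the
    bimatrix game with payoff matrices A (row player) and B (column player).
    Comparing against pure-strategy deviations suffices."""
    row_payoff = x @ A @ y
    col_payoff = x @ B @ y
    best_row = max(A @ y)   # best payoff from any pure row strategy
    best_col = max(x @ B)   # best payoff from any pure column strategy
    return best_row <= row_payoff + tol and best_col <= col_payoff + tol

# Matching pennies: the only equilibrium mixes 50/50 for both players.
A = np.array([[1, -1], [-1, 1]])   # row player's payoffs
B = -A                             # column player's payoffs (zero-sum)
print(is_nash(A, B, np.array([0.5, 0.5]), np.array([0.5, 0.5])))  # True
print(is_nash(A, B, np.array([1.0, 0.0]), np.array([0.5, 0.5])))  # False
```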
Papadimitriou has long been obsessed with understanding the complexity of computing Nash equilibria. Nearly two decades ago, he defined a seemingly esoteric complexity class with Nash equilibria in mind. In the summer of 2005, his approach finally succeeded. Together with his student, Konstantinos Daskalakis, and Paul Goldberg, a professor at the University of Warwick, Papadimitriou showed that finding a Nash equilibrium seems hard.
Typically, showing that something is hard to compute means showing it is NP-complete. But computing Nash equilibria is unlikely to fall into this category: equilibria always exist, whereas instances of NP-complete problems sometimes have no solution. To capture the complexity of computing Nash equilibria, Papadimitriou had to characterize a new class of search problems sharing the same essential structure. "I had to ask the question: 'What is the most basic principle on which a solution to these problems is based?'" Papadimitriou says.
The shared characterization turned out to be the following: For each such problem, the candidate solutions can be viewed as nodes of a graph linked via a directed path with a given starting point and with an actual solution as the endpoint. Papadimitriou called the class of problems of this form PPAD. A growing body of evidence has since indicated that complete problems for PPAD, which include finding fixed points for functions satisfying the hypotheses of Brouwer's famous 1909 fixed point theorem, are likely to be intractable.
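A toy version of that structure (a sketch added here for illustration; the class is formally defined through small circuits that describe an exponentially large graph) is the "end of line" search: follow successor pointers from the designated starting point until the path runs out. The walk must end somewhere, so a solution always exists, yet on an exponentially large graph this obvious walk can take exponentially many steps.

```python
def end_of_line(successor, start):
    """Follow the directed path defined by `successor` from `start` and
    return the last node reached (a node with no successor).  In PPAD the
    graph is given implicitly and is exponentially large, so this
    brute-force walk is not an efficient algorithm -- it only illustrates
    why a solution is guaranteed to exist."""
    node = start
    while successor(node) is not None:
        node = successor(node)
    return node

# A hypothetical instance: the path 0 -> 1 -> 2 -> ... -> 9 on ten nodes.
print(end_of_line(lambda v: v + 1 if v < 9 else None, start=0))  # 9
```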
Papadimitriou, Goldberg, and Daskalakis were able to show that computing Nash equilibria for games with three or more players is PPAD-complete. Subsequently, another group of researchers, which included a former student of Papadimitriou, was able to show that even for two-player games, finding Nash equilibria is PPAD-complete.
Now that finding Nash equilibria is known to be hard, Papadimitriou is trying to resolve the philosophical dilemma in a different way: In real games, perhaps the players don't arrive at a Nash equilibrium; they only get close to one. With this in mind, Papadimitriou is exploring the complexity of finding a situation where each player, by altering his own strategy, is able to improve his lot only by a very small amount -- an approximate Nash equilibrium. "It's a realistic equilibrium concept," he says. "One would hope that it is not plagued with the same intractability problems, but in the complexity business, you never know."
Cool water down to 0 degrees Celsius, and it hardens into ice. Heat up a magnet, and suddenly, its magnetism vanishes. Phase transitions have long been a focus for physicists, but now their computational manifestations have attracted the attention of computer scientists. "Physicists and computer scientists have realized that they are investigating the same questions from different angles, and using complementary techniques," says Alistair Sinclair.
Sinclair works on mixing problems, which include combinatorial questions such as how many times you have to shuffle a deck of cards before every ordering is roughly equally likely. Instead of cards, however, he has focused on the mixing problems of statistical physics, such as shuffling the orientations of atomic magnets in the Ising model of ferromagnetism.
The Ising model describes the behavior of a large array of individually magnetized atoms, each pointing up or down. Atoms like to take on the orientation of their neighbors, and at low temperatures this interaction is so strong that the magnet as a whole tends to be ordered, with nearly all atoms aligned. At high temperatures, however, entropy takes over, and each atom picks a direction more or less at random. At a temperature somewhere in the middle of the scale, there is a phase transition -- an abrupt change from order to chaos. Mysteriously, at that same temperature, natural algorithms for solving the Ising model appear to shift from inefficient to efficient. "Our goal is to turn this apparent connection -- and others like it -- into theorems," says Sinclair.
The Ising model doesn't give the exact configuration of ups and downs a given magnet will adopt; rather, it gives the odds of finding the system in any particular configuration. To "solve" the model is to compute a function of those odds that determines the model's entropy, specific heat, and other thermodynamic properties. Computing this function is known to be hard; indeed, it is hard in a sense analogous to NP-completeness, but for combinatorial counting problems rather than decision problems.
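In standard statistical-physics notation (added here; the article only alludes to it), the Ising model assigns each configuration of spins σ_i = ±1 a probability through the Gibbs distribution, and the normalizing constant Z, the partition function, is the quantity whose computation is hard:

```latex
\Pr(\sigma) \;=\; \frac{e^{\,\beta \sum_{\langle i,j \rangle} \sigma_i \sigma_j}}{Z},
\qquad
Z \;=\; \sum_{\sigma \in \{-1,+1\}^N} e^{\,\beta \sum_{\langle i,j \rangle} \sigma_i \sigma_j},
```

where the sums in the exponents run over neighboring pairs of atoms and β is the inverse temperature. Entropy, specific heat, and the other thermodynamic properties can be read off from Z.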
A quest for algorithms for such problems is what led theoretical computer scientists, starting in the late 1980s, to statistical physics. Physicists had long ago developed a clever technique for getting at some of the properties of their models by producing sample configurations according to the probabilities with which they naturally occur at equilibrium. Computer scientists went further by figuring out how to use such sampling methods to devise approximation algorithms for the hardest counting problems of combinatorics and statistical physics. "We were interested in sampling," says Sinclair, "not because sampling is so interesting in its own right, but because if you can sample you can count."
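The route from sampling to counting works, roughly, by writing the quantity of interest as a telescoping product of ratios, each of which can be estimated from random samples (a standard outline, not spelled out in the article). For a set Ω reached through a nested chain of sets,

```latex
\Omega_0 \subseteq \Omega_1 \subseteq \cdots \subseteq \Omega_m = \Omega,
\qquad
|\Omega| \;=\; |\Omega_0| \cdot \prod_{i=1}^{m} \frac{|\Omega_i|}{|\Omega_{i-1}|},
```

where |Ω_0| is known exactly and each ratio is estimated by drawing samples from Ω_i and recording the fraction that land in Ω_{i-1}.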
To arrive at a sample, the physicists would follow a mixing procedure analogous to shuffling a deck of cards. For the Ising model, for example, the procedure starts with a given configuration and then repeatedly picks an atom and flips its orientation with a probability that depends on the orientations of its neighbors and on the temperature. After a while, this procedure (known as a Markov chain Monte Carlo algorithm) arrives at a typical configuration and is said to have mixed. This process, Sinclair notes, is not only a reasonable way of producing a sample, but also a plausible model for how the actual magnet evolves toward equilibrium. A key question, then, is this: How do you know when you've done enough flips to produce a typical configuration?
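A minimal sketch of such a single-site flipping procedure for a small two-dimensional Ising grid (heat-bath-style updates; the grid size, temperature, and number of steps below are arbitrary choices made here for illustration, and the fixed step count is exactly the kind of ad hoc stopping rule discussed next):

```python
import math
import random

def ising_sample(n=16, beta=0.4, steps=200_000):
    """Run single-site heat-bath updates on an n x n Ising grid with
    periodic boundaries at inverse temperature beta, starting from a
    random configuration, and return the final configuration."""
    spin = [[random.choice((-1, 1)) for _ in range(n)] for _ in range(n)]
    for _ in range(steps):
        i, j = random.randrange(n), random.randrange(n)
        # Sum of the four neighboring spins (periodic boundary conditions).
        s = (spin[(i - 1) % n][j] + spin[(i + 1) % n][j]
             + spin[i][(j - 1) % n] + spin[i][(j + 1) % n])
        # Conditional probability of pointing up, given the neighbors.
        p_up = 1.0 / (1.0 + math.exp(-2.0 * beta * s))
        spin[i][j] = 1 if random.random() < p_up else -1
    return spin

config = ising_sample()
print(sum(sum(row) for row in config))  # net magnetization of the sample
```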
The physicists' criteria for when to stop flipping were often ad hoc. Typically, they would continue flipping until certain observables stopped changing. "This is known to be dangerous," says Sinclair. "You might get something that looks mixed, but isn't."
So, two decades ago, Sinclair, together with Mark Jerrum, his Ph.D. advisor at the University of Edinburgh, set out to place the physicists' sampling methods on a solid mathematical foundation. Together, they wrote a series of seminal papers on mixing, introducing a new technique for showing whether or not a particular sampling algorithm takes polynomial time to mix. Their method is intuitive to visualize: Suppose the sample space is shaped like a barbell, and the probability of crossing the neck is exponentially small. Then, when moving randomly around the space, it's easy to get stuck for a long time on one side of the barbell and never get a representative sample. What Jerrum and Sinclair showed is that this is the only way to get stuck: If there is no bottleneck in the sample space, sampling methods are guaranteed to converge quickly.
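The bottleneck criterion is usually stated in terms of the conductance of the Markov chain. In one standard formulation (paraphrased here, not quoted from the papers), for a reversible chain with transition probabilities P, stationary distribution π, and second-largest eigenvalue λ₂,

```latex
\Phi \;=\; \min_{S \,:\, 0 < \pi(S) \le 1/2}
\frac{\sum_{x \in S,\, y \notin S} \pi(x)\,P(x,y)}{\pi(S)},
\qquad
\frac{\Phi^2}{2} \;\le\; 1 - \lambda_2 \;\le\; 2\,\Phi ,
```

so a set S with tiny conductance is precisely a bottleneck of the barbell kind, while a conductance bounded below by an inverse polynomial forces the chain to converge in polynomial time.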
In a subsequent paper, they introduced a new analytic method for showing that a sample space has no bottlenecks, and used it to devise an efficient algorithm for a classic hard counting problem: approximating the permanent of an important class of matrices. (The permanent is defined much like the determinant but is far harder to compute.) Finding the permanent of a matrix, as it turns out, corresponds to solving the dimer model, a classical physical model of diatomic molecules. In 2001, Jerrum, Sinclair, and Eric Vigoda, a former student of Sinclair's now at the Georgia Institute of Technology, extended the algorithm so that it approximates the permanent of any matrix with nonnegative entries. The three won the 2006 Fulkerson Prize for this work.
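For comparison (standard definitions, added here), for an n × n matrix A the permanent and the determinant are the same sum over permutations, except that the determinant attaches a sign to each term:

```latex
\mathrm{per}(A) \;=\; \sum_{\sigma \in S_n} \prod_{i=1}^{n} a_{i,\sigma(i)},
\qquad
\det(A) \;=\; \sum_{\sigma \in S_n} \mathrm{sgn}(\sigma) \prod_{i=1}^{n} a_{i,\sigma(i)} .
```

Dropping the signs is what separates an easy computation (the determinant, via Gaussian elimination) from a hard one.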
Sinclair's recent work is on the relationship between phase transitions of physical models and the mixing times of the corresponding natural flipping processes. The dimer model does not have a phase transition, but the Ising model does, which makes the behavior of the mixing process more complex. The exponentially slow mixing below the phase transition is due to a bottleneck in the sample space: at low temperatures, each atom exerts a strong influence on its neighbors, so, starting from the all-down configuration, it is difficult to reach the all-up configuration because sizable pockets of up-spins are unlikely to form.
Thus far, researchers have been able to show that this bottleneck suddenly vanishes at the phase transition only for the case of two-dimensional Ising models. Sinclair is exploring the question of whether there is a similar precise correspondence between mixing times and phase transitions for higher dimensions, and for other statistical physics models.
He is also looking at what happens when a model is affected by an exterior environment. For instance, if the Ising model is surrounded by a fixed boundary of upward-pointing atoms, the physical phase transition disappears. Does this mean that the computational "phase transition" also disappears? In recent work with his former student, Dror Weitz, now a postdoc at Rutgers' DIMACS center, and mathematical physicist Fabio Martinelli of the University of Rome, Sinclair showed that the answer is yes, for the Ising model and for several other models, if the underlying graph is a tree.