Christos Papadimitriou, the C. Lester Hogan Professor of Electrical Engineering and Computer Science at the University of California at Berkeley, is this year's recipient of the Katayanagi Prize for Research Excellence. Carnegie Mellon University has cited Dr. Papadimitriou as "an internationally recognized expert on the theory of algorithms and complexity, and its applications to databases, optimization, artificial intelligence, networks and game theory."

We recently spoke with Papadimitriou, delving into, among other topics, the underpinnings of science, the economics of the programming market, the mysterious complexity of the Web, quantum computing, and the computer scientist as popular novelist. Next month, we talk with Dr. Erik Demaine, recipient of this year's Katayanagi Emerging Leader Prize.

**DDJ:** Congratulations on the Katayanagi Award.

**CP:** I didn't know I had been nominated. When the chairman of the committee e-mailed me about the prize, I skimmed the e-mail quickly. She mentioned the previous winner, so I thought someone else won the prize and that I was invited to speak at the ceremony. I replied, "Yeah, okay, let me think about it, give me a week..." She wrote back in astonishment, thinking I was not accepting the prize!

**DDJ:** You certainly cover wide territory in your papers. One that interests me in particular is "On the Complexity of Protein Folding." I was recently discussing the same subject with your co-Katayanagi honoree, Erik Demaine. Have you two worked together?

**CP:** No, we talked briefly after I heard him mention his work in his talk at Carnegie Mellon University.

It's a fascinating problem. It's something people have called "Levinthal's Paradox": How come these stupid amino acids know how to fold in a millisecond, and we can't figure it out in ages with the most powerful computers? Erik and his colleagues have a very nice explanation for this: The proteins get a lot of geometric help from the ribosomes and the chaperones.

I had a different idea: It's a very hard problem for proteins to fold. Nobody can solve it, not even proteins themselves. Except that evolution has selected the proteins that know how to solve it. In simpler organisms, not much precision is needed; in more complex organisms, a great deal. Sometimes proteins require very exact shapes because they are engaged in certain exercises: They stick together, they grab and hold other molecules.

**DDJ:** So when you're talking about "simpler forms of life": You don't mean the difference between microbe and macrobe, you mean perhaps the difference between a microbe and a virus?

**CP:** This is not precisely my field, but I think there is a question as to whether a virus is "alive" or not. A virus doesn't figure in the tree of life as currently described.

**DDJ:** I guess it depends on how you define "life." Does a virus have a soul?

**CP:** It can kill me, but can it recite poetry? [Laughs] Perhaps such categorization is anthropocentric.

**DDJ:** Diverging from the philosophical to the mundane, your work on protein folding touches on an interesting economic phenomenon.

*Dr. Dobb's Journal* is a magazine that addresses working programmers. As we pursue our careers, are we going to be sucked into biotech in large numbers?

**CP:** I would say that, quite generally, computer scientists are going to find themselves interacting more with other fields. I encourage my students to go completely wild in their curriculum, to go out and learn not only that which they think they should learn in order to be good computer scientists—usually mathematics and programming and engineering and so on—but learn about everything else, about psychology, economics, about business, about biology, about the humanities.

I think the future belongs to programmers who are well-rounded people who have diverse interests, who are flexible, who understand deeply other fields and are ready to transform them. Biotech is definitely part of the picture, but it's not unique.

**DDJ:** I looked at your paper "Inferring Tree Models for Oncogenesis from Comparative Genome Hybridization Data".

**CP:** You're fishing up papers I'm very proud of and nobody else knows about! Thank you. That paper grew out of the excellent work at the National Institutes of Health [NIH] of my former Stanford student Alejandro Schäffer.

Today, with relatively low-tech techniques, you can get a glimpse of which regions of the genome are overactive or underactive in cancer cells compared to healthy cells. You get a bunch of regions in the genome where obviously something fishy happens. Cancer is the result of this. When you get lots of such cells, at various stages of cancer development, it's as if there were many species of cancer cells.

Our thought was, "Why not use the techniques biologists use in the field of phylogenetics to construct the tree of life?" In phylogenetics, you get either the genome or the phenotype, the external characteristics of the species you find, and from this you infer what went on. Who was the common ancestor?

So we thought about using the same technique to understand how cancer grows and develops. We conjectured that something goes wrong and that's the root of the tree. Because of that, a bunch of other things go wrong, and so on. Looking at the data, you can infer quite reasonably what happened. A paper by a number of other clever clinicians at NIH led to this work.
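To illustrate the flavor of this inference (this is a deliberately crude toy, not the paper's actual method, which uses a more principled maximum-weight branching construction, and the genomic events and sample profiles below are invented): given binary aberration profiles across tumor samples, attach each event to the more-frequent event it co-occurs with most often, rooted at a synthetic "normal" node.

```python
# Toy sketch of tree inference for oncogenesis (NOT the paper's algorithm):
# attach each genomic event to the more-frequent event it co-occurs with
# most often, on the assumption that more-frequent events happen earlier.
from collections import Counter
from itertools import combinations

samples = [  # hypothetical binary event profiles, one set per tumor sample
    {"+3q", "-8p"},
    {"+3q", "-8p", "+8q"},
    {"+3q", "-5q"},
    {"+3q", "-8p", "+8q"},
]

freq = Counter(e for s in samples for e in s)
cooc = Counter()
for s in samples:
    for a, b in combinations(sorted(s), 2):
        cooc[(a, b)] += 1

parent = {}
for e in freq:
    # candidate parents: strictly more frequent events (assumed earlier)
    cands = [(cooc[tuple(sorted((e, p)))], p) for p in freq if freq[p] > freq[e]]
    # pick the candidate with the highest co-occurrence count
    parent[e] = max(cands)[1] if cands else "normal"

print(parent)  # "+3q" attaches to the root; the rarer events hang below it
```

On this made-up data, the ubiquitous "+3q" becomes the root event and the rarer aberrations attach beneath it, which is the qualitative shape the interview describes: one initial failure, then cascading ones.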

We found tree models for the progress of cancer. There are molecular events that predispose the cell: Either a suppressor is busted or a promoter is overactive. The amazing thing was how hard it was to find data in these days of overabundant data. We could only find data for a few kinds of cancers. People have since used the technique, and I'm sure there are now much more elaborate, sophisticated lab techniques.

**DDJ:** Have there been concrete research results?

**CP:** One of our predictions is that ovarian cancer is really three different diseases with very similar clinical symptoms. I believe there is some support for that now. Our tree model had a root, and the next three nodes were way down, suggesting it was not really one tree. The tree that, according to our model, describes the progression of ovarian cancer is really a forest of three trees.

**DDJ:** What got you started in math?

**CP:** Back home in Greece, my father was a math teacher. Mathematics always fascinated me, but I don't believe I ever spoke about math with my father. His influence was tacit. We never spoke about it because we were both so passionate about the subject, not until I was in college, just before his death, when it was obvious that I had become a better mathematician than he was, so the tension was not there anymore.

**DDJ:** He must have been a wise man, because one thing a father can do is instill in his child a hatred of a subject. Obviously, you speak at least two languages, probably more.

**CP:** I mumble many languages. I speak Greek and English and have working knowledge of German, French, Spanish, Italian, some Turkish.

**DDJ:** Are you a musician?

**CP:** Yes, how did you guess? I am a failed musician! I'm very interested in music but I'm not very good at it.

**DDJ:** I've observed that many computer scientists are also linguists and musicians, and play chess.

**CP:** So you were wrong about the chess part.

**DDJ:** No chess?

**CP:** Actually, all my life I've been next to incredible chess players. Of the two top players in Greece when I was growing up, one was my cousin and the other was my schoolmate.

**DDJ:** So you narrowly missed becoming a chess player!

**CP:** More like, I was so discouraged by my encounters with those two. But I think you are close. I'm very passionate about backgammon. I'm a serious backgammon player. Frankly, I think backgammon is a much more interesting game, much harder to learn.

**DDJ:** Because you have to assess such fine probabilities?

**CP:** It's something much simpler than that. In chess, when you play like an idiot, you always lose, so you learn. In backgammon, you can play 10 games, not play well, and win. So you think you are great but you have made a great number of mistakes. Tragically, life is closer to backgammon, because you can play a perfect game and lose!

**DDJ:** So can chess be solved without exhaustive search, by some model?

**CP:** My most famous book, perhaps, is *Computational Complexity*. Complexity Theory is a mathematical field that tries to find the intrinsic difficulty of problems. "This problem looks difficult, is it because it is really difficult or because I'm stupid?" That's the question. Computational Complexity is a very difficult field, because it's nearly impossible to prove what everyone knows from experience. You have to be very, very lucky. The *P* versus *NP* problem, whether you can telescope exhaustive search, is always the key, because there are instances where we know how to bypass exhaustive search.

Compare chess with Nim. There are many games where, by creative thinking, you can see that they don't have intrinsic difficulty. They can be solved front-to-back. You can look at a position and make a calculation and say it's a win. People have proved that chess is part of the family of games that seem to have intrinsic difficulty, that seem to embody computation and require exhaustive search.
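The Nim contrast can be made concrete. Bouton's classical analysis says that the player to move loses exactly when the XOR (the "nim-sum") of the pile sizes is zero, so any position can be judged, and a winning move found, with a single calculation, with no search at all:

```python
# Bouton's theorem: a Nim position is a loss for the player to move
# exactly when the XOR ("nim-sum") of all pile sizes is zero.
from functools import reduce

def nim_sum(piles):
    return reduce(lambda a, b: a ^ b, piles, 0)

def winning_move(piles):
    """Return a (pile_index, new_size) winning move, or None if losing."""
    s = nim_sum(piles)
    if s == 0:
        return None  # every move leaves the opponent a nonzero nim-sum
    for i, p in enumerate(piles):
        target = p ^ s  # the size that makes the whole position's nim-sum zero
        if target < p:  # legal only if it shrinks the pile
            return (i, target)

print(winning_move([3, 4, 5]))  # (0, 1): shrink the 3-pile to 1
print(winning_move([1, 1]))     # None: the position is already lost
```

This is exactly the "look at a position and make a calculation" situation: Nim carries no intrinsic difficulty, while chess, as far as anyone has shown, does.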

There can be games on a large board that have very suggestive symmetries, but still, they have subtle mathematical properties.

**DDJ:** What are you working on now?

**CP:** I'm always looking at lots of problems in various areas. But in the last 10 years, the larger part of my work is on this new area, trying to understand the Internet and the Web, trying to understand the underpinnings. In computer science, we don't have great mysteries. We want to solve problems, but it's not like we have mysterious objects we don't understand. It's not like Physics, which has the Universe; Economics, which has the Markets; Neuroscience, the Brain; and Biology, the Cell. For us, the computer and its software are huge, complex, powerful, and fascinating, but we constructed them. Intrinsically, there's very little mystery.

In many ways, we did not create the Internet and the Web. They arrived, appeared, emerged. All these other artifacts, software, processors, and so forth, had a designer, a team, an entity that intentionally built them. The Web emerged from an interaction of millions of entities on the basis of deliberately simple protocols. Thus the Internet and the Web are our mysterious objects. Computer scientists are looking at them the way other scientists are looking at their mysterious objects. We have to look at them using the scientific method: observations, measurement, experiments, verifiable theories, applied mathematics.

**DDJ:** Where do you start looking at the Web and the Internet? Link networks? Semantics?

**CP:** The Web and the Internet are two separate subjects, both fascinating and mysterious. Links are important. Content is important. The way routers and ISPs are linking on the Internet is important. They're very organic.

**DDJ:** John Gilmore said, "The Net interprets censorship as damage and routes around it."

**CP:** Very well said! I think we should be proud. The Internet embodies healthy principles of the programmers who came up with and implemented the protocols. If it's democratic and censorship-resistant and open and end-to-end, all these great things that are under attack these days, it's because of ingenious and strategic programming.

**DDJ:** I guess you could model how rumors spread on the Internet.

**CP:** There are a lot of people working on this. How rumors spread and how product preferences spread interests advertisers. Call it "information epidemiology." Say you look at a network and try to decide which leader nodes should get a free sample of your product so that you conquer the market as a result.
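One common model for this kind of spread is the independent-cascade model; the sketch below is purely illustrative (the toy friendship graph, the adoption probability `p`, and the highest-degree "leader" heuristic are all my assumptions, not anything from the interview; serious influence maximization uses a greedy algorithm with approximation guarantees):

```python
# Minimal "information epidemiology" sketch: independent-cascade spread
# on a toy graph, seeded at the highest-degree node as a simple stand-in
# for picking the "leader" who gets the free sample.
import random

graph = {  # hypothetical friendship network (undirected, as adjacency lists)
    "a": ["b", "c", "d"], "b": ["a", "e"], "c": ["a", "e"],
    "d": ["a"], "e": ["b", "c", "f"], "f": ["e"],
}

def cascade(seed, p=0.5, rng=random.Random(0)):
    """One independent-cascade run: each newly active node gets a single
    chance to activate each neighbor with probability p. Returns the
    set of nodes the rumor/product reached."""
    active, frontier = {seed}, [seed]
    while frontier:
        nxt = []
        for u in frontier:
            for v in graph[u]:
                if v not in active and rng.random() < p:
                    active.add(v)
                    nxt.append(v)
        frontier = nxt
    return active

leader = max(graph, key=lambda v: len(graph[v]))  # highest-degree node
print(leader, sorted(cascade(leader)))
```

Averaging `cascade(seed)` over many runs estimates a seed's expected influence, which is the quantity an advertiser choosing leader nodes would try to maximize.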

**DDJ:** Tell us about your paper "Algorithms, Games, and the Internet."

**CP:** This is about the interface between algorithm theory, networking and economics. When I say "games" I don't mean "Guitar Hero." I mean the Game Theory of John Nash, economic games. The paper was basically a call to arms to study this interface. I proposed a few problems. It had a result I am proud of.

In 1950, Nash proved this theorem that every game has an equilibrium. This had completely eluded the other great game theorists up to that time. Von Neumann suspected it was true but didn't know how to prove it. This brought Nash the Nobel Prize almost 50 years later [the 1994 Nobel Memorial Prize]. Ever since Nash's paper, everyone has been curious how you find this equilibrium. This sounds like a job for a computer scientist.

We did try to discover how hard it is to find the Nash Equilibrium. Nash's theorem is an existence theorem, that equilibrium exists. If you want to find that equilibrium, to put it to work, you have to look at the proof. And the proof doesn't help you here. The proof relies on another difficult theorem called Brouwer's fixed-point theorem, which was proved 50 years earlier. Is the dependence on Brouwer intrinsic or is there a quick way to prove the existence of equilibria that would lead to an algorithm?

Three years ago, with my student Constantinos Daskalakis and our colleague Paul Goldberg, we proved that these two great theorems of the 20th century, Brouwer and Nash, are equivalent: You need Brouwer to prove Nash, and you can prove Brouwer from Nash. As a result, finding a Nash equilibrium possesses intrinsic difficulty.
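The hardness result is about general games; to see what "finding an equilibrium" even means, a 2x2 game is still easy to solve by hand. In a fully mixed equilibrium, each player randomizes exactly so as to make the *other* player indifferent between her strategies. A small sketch using the textbook Matching Pennies payoffs (this is an illustrative standard example, not the Daskalakis–Goldberg–Papadimitriou construction):

```python
# In a 2x2 game with a fully mixed equilibrium, each player's mix is
# chosen to make the OTHER player indifferent between her two strategies.
# Matching Pennies: the unique equilibrium is 50/50 for both players.

A = [[1, -1], [-1, 1]]   # row player's payoffs
B = [[-1, 1], [1, -1]]   # column player's payoffs

def mixed_equilibrium_2x2(A, B):
    """Solve the two indifference conditions for a fully mixed equilibrium."""
    # p = Pr[row 0], chosen so the column player is indifferent between columns
    p = (B[1][1] - B[1][0]) / (B[0][0] - B[1][0] - B[0][1] + B[1][1])
    # q = Pr[column 0], chosen so the row player is indifferent between rows
    q = (A[1][1] - A[0][1]) / (A[0][0] - A[0][1] - A[1][0] + A[1][1])
    return p, q

print(mixed_equilibrium_2x2(A, B))  # (0.5, 0.5)
```

What the intrinsic-difficulty result says is that no trick like this indifference calculation scales to arbitrary games: in general, the problem inherits the difficulty of Brouwer's fixed points.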

What a mathematician always dreams of is that he will one day prove a theorem that will teach the world something it doesn't know, that will point in a new direction. But what a mathematician does every day is prove what everybody knows is true but couldn't prove.

**DDJ:** You seem to be enunciating theories that do have an effect on the external world.

**CP:** From my point of view, it's humbling how little and how belated is the effect of anything one does. It's a complex world!

**DDJ:** Tell us about your novels.

**CP:** I have written a novel called *Turing (A Novel about Computation)*. It's a modern love story in which the protagonist is an AI program called "Turing." It started about 18 years ago. I discovered this was inside me and had to come out, so I took time to write it. I couldn't resist it. It was one of the most powerful experiences I ever had. I found myself thinking about myself from every perspective. I worked hard, I cried a lot. I saw things in myself that were perhaps there before but I had never seen them. If I had not done it, I would be a less happy man. I shudder at the thought I might have lived my life without writing this novel.

**DDJ:** Perhaps a way to let out the irrationality after all the rationality of mathematics?

**CP:** Yes. Now I'm writing a graphic novel called *Logicomix*. It's on the history of mathematical logic.

**DDJ:** You encourage your students to read the classics.

**CP:** The classics of computer science. It's very instructive. It happens that my students sometimes realize there is something seriously broken in their field, and they take time and fix it. What is fascinating, reading these papers, is that you can see the author knows it's important, that they are writing history. Graduate students should see this and be inspired. Euler's paper on the Bridges of Königsberg, written when he was quite young, is the beginning of Network Theory: whether you can walk over each bridge exactly once. It says that sometimes geometry is not important; the only thing that matters is what is connected to what.

**DDJ:** Is Bridges of Königsberg a version of the Traveling Salesman Problem?

**CP:** Not quite. It's like the Knight's Tour, with the difference that the Knight visits every square, but in the Bridges you have to cross every bridge; that is, traverse every edge. This turns out to be much easier than the Traveling Salesman. It's one of these problems that look equally hard but have no intrinsic complexity; you can eyeball it.

Euler's classic serves to tell us that not everything that looks intrinsically complex is so.
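Euler's criterion is exactly the kind of "eyeballing" test described above: a connected graph has a closed walk crossing every edge exactly once if and only if every vertex has even degree. A small sketch (the bridge list encodes the historical Königsberg layout as usually drawn):

```python
# Euler's criterion: a connected graph has a closed walk using every edge
# exactly once iff every vertex has even degree. Konigsberg has four
# odd-degree vertices, so no such walk exists -- no search needed.
from collections import defaultdict

def has_euler_circuit(edges):
    degree = defaultdict(int)
    adj = defaultdict(set)
    for u, v in edges:
        degree[u] += 1
        degree[v] += 1
        adj[u].add(v)
        adj[v].add(u)
    if any(d % 2 for d in degree.values()):
        return False  # some vertex has odd degree
    # check that all vertices carrying edges are connected
    start = next(iter(adj))
    seen, stack = {start}, [start]
    while stack:
        for w in adj[stack.pop()]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return len(seen) == len(adj)

# Konigsberg: two banks (N, S), two islands (A, B), seven bridges
konigsberg = [("N", "A"), ("N", "A"), ("N", "B"),
              ("S", "A"), ("S", "A"), ("S", "B"), ("A", "B")]
print(has_euler_circuit(konigsberg))                          # False
print(has_euler_circuit([("a", "b"), ("b", "c"), ("c", "a")]))  # True
```

Contrast this with the Traveling Salesman Problem, where no such simple certificate is known and exhaustive search, in some telescoped form, still seems unavoidable.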

Another paper I like my students to read is where Feynman proposed quantum computers. I'm interested in the mathematics of quantum computing. It's a beautiful question in mathematics, it's a beautiful question in physics, it's a beautiful question in computer science.

You think quantum computing is about powerful computers, but quantum computing may prove to be about testing quantum physics, to find out why we cannot build these powerful computers. Who knows? Maybe there is something fishy with the theory. One of the reasons quantum computing is so interesting is that it looks at some extremely counterintuitive predictions of quantum theory.

If building quantum computers fails on a practical engineering basis, that will be a disappointment, meaning the idea dies of a thousand cuts, it's too difficult, we can't afford it, we stop trying. But what would be fascinating is if there is a theoretical difficulty! There are physicists who see quantum computing as the ultimate test of quantum theory.

This brings us back to where we started this conversation, the Algorithmic Lens. In some sense, Physics looks at itself and its most prestigious theory through the Algorithmic Lens. That's a great triumph of the algorithmic way of thinking.