Philip is a researcher at Avaya Labs (a spinoff of Lucent Technologies). Prior to joining Avaya, he was a researcher at Bell Laboratories. His e-mail address is [email protected]
As the 19th century drew to a close, logicians formalized an ideal notion of proof. They were driven by nothing other than an abiding interest in truth, and their proofs were as ethereal as the mind of God. Yet within decades, these mathematical abstractions were realized by the hand of man, in the digital stored-program computer. How it came to be recognized that proofs and programs are the same thing is a story that spans a century. The story ends with a new principle that is guiding the design of a new generation of programming languages, including mobile code for the Internet.
Modern logic began with Gottlob Frege in the late 1870s. Two aspects of Frege's work reached a culmination a half century later, in Gerhard Gentzen's natural deduction, which captured a notion of proof, and Alonzo Church's lambda calculus, which captured a notion of program, both first published in the early 1930s. As it turns out, there is a precise correspondence between Gentzen's proofs and Church's programs, but that was not uncovered until the late 1960s. As the 21st century begins, that correspondence sits as a cornerstone of modern programming language design.
For my money, Gentzen's natural deduction and Church's lambda calculus are on a par with Einstein's relativity and Dirac's quantum physics for elegance and insight. This article gives a taste of these ideas; for more detailed information on these topics, go to http://www.ddj.com/articles/2000/0013/0013toc.htm.
Gentzen's Natural Deduction
Aristotle formulated his syllogisms in antiquity, and William of Ockham studied logic in the middle ages. But the discipline of modern logic began with Gottlob Frege's Begriffsschrift, written in 1879 when Frege was 31. Frege's logic was already capable of expressing quite complex concepts, as illustrated by the excerpt from Frege's work, shown in Figure 1. Once Frege got the ball rolling, logic grew by leaps and bounds, with contributions from Giuseppe Peano, Bertrand Russell, and Kurt Gödel, among many others.
Among Frege's central contributions were the notion of deduction and the notion of function. We can see examples of each in Figure 1; the sideways "T" towards the middle of the frame marks a deduced fact, and the symbol, repeated throughout, stands for a function. Frege's deductions eventually gave rise to Gentzen's natural deduction, and Frege's functions eventually led to Church's lambda calculus.
Gerhard Gentzen introduced natural deduction in 1934, when he was 25. A small proof written in Gentzen's notation is shown in Figure 2. Many of the ideas are direct descendants of Frege's work. The sideways "T" symbol serves exactly the same purpose as the sideways "T" shape in Frege's work, and the horizontal lines that structure Gentzen's proof were used by Frege in just the same way.
Alonzo Church introduced lambda calculus in 1932, when he was 29 (see Alonzo Church, "The Calculi of Lambda Conversion," Princeton University Press, 1941). Lambda calculus was intended as a new way to formulate logic. That formulation turned out to be flawed, but from the start Church suspected that lambda calculus might have independent interest. In that first paper he wrote, "There may, indeed, be other applications of the system than its use as a logic." Prophetic words!
By 1936, Church had realized that lambda terms could be used to express every function that could ever be computed by a machine. Independently, at about the same time, Alan Turing wrote his famous paper on the machine that bears his name (http://www.alanturing.net/). It was quickly recognized that the two formulations were equivalent, and Turing came to Princeton to study with Church between 1936 and 1938.
Shortly thereafter, Turing returned to Britain. During the war, at Bletchley Park, he worked on machines (early proto-computers) designed to break enemy codes. And within a decade, John von Neumann, who was at Princeton and familiar with the work of Church and Turing, wrote "First Draft of a Report on the EDVAC" (June 1945, Contract No. W-670-ORD-492, Moore School of Electrical Engineering, University of Pennsylvania, reprinted in part in Origins of Digital Computers: Selected Papers, by Brian Randell, Springer-Verlag, 1982), his famous note on the architecture of the stored-program computer.
Church reduced all calculation to the notion of substitution. Typically, a mathematician might define a function by an equation. If a function f is defined by the equation f(x)=t, where t is some term involving x, then the application f(u) yields the value t[u/x], where t[u/x] is the term that results by substituting u for each occurrence of x in t. For example, if f(x)=x×x, then f(3)=3×3=9. Church provided an especially compact way of writing such functions. Instead of saying "the function f where f(x)=t," he simply wrote λx.t. In his notation, the example function is written λx.x×x. A term of the form λx.t is called a "lambda expression."
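In modern terms, Church's λ is just an anonymous function, and his substitution rule is just function application. Here is a minimal sketch in Python, whose lambda keyword descends directly from Church's notation:

```python
# Church's lambda notation corresponds to anonymous functions in modern
# languages; Python's `lambda` plays the role of Church's λ.
square = lambda x: x * x          # Church would write: λx. x × x

# Application is substitution: (λx. x × x)(3) → 3 × 3 → 9
print(square(3))                  # → 9

# The term itself is the function; no name is needed at all:
print((lambda x: x * x)(4))       # → 16
```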
Church introduced a typed version of lambda calculus in 1940. His goal was to avoid paradoxes that beset other logics (including Frege's original logic, which fell prey to Russell's paradox). Just as lambda calculus had applications beyond what Church first intended, so too did types. A small program written in Church's typed lambda calculus is shown in Figure 3.
If you ignore the red part of Figure 3, what remains is identical to Figure 2. This is true not just for the proof and program shown: It turns out that any proof in Gentzen's natural deduction corresponds to a program in Church's typed lambda calculus, and vice versa. What is truly remarkable is that this correspondence was not uncovered until much later, thanks to work by the logicians Haskell Curry and W.A. Howard. Curry outlined some of the key points in 1956 (see http://haskell.org/bio.html), but Howard's paper, setting out the correspondence in full, was not written until 1969, and though widely influential, it was not actually published until 1980. (See Haskell Curry and Robert Feys, Combinatory Logic, Volume I, North-Holland, Amsterdam, 1958, and W.A. Howard, "The Formulae-as-Types Notion of Construction," in J.P. Seldin and J.R. Hindley, editors, To H.B. Curry: Essays on Combinatory Logic, Lambda Calculus and Formalism, Academic Press, London, 1980, pages 479-490.)
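To see the correspondence in miniature, here is an illustration of my own (not drawn from Curry's or Howard's papers), written in Python with type annotations: read a type A → B as the proposition "A implies B," and any program of that type as a proof of it.

```python
from typing import Callable, TypeVar

A = TypeVar("A")
B = TypeVar("B")
C = TypeVar("C")

# Proposition: A implies A.  Proof: the identity function.
def identity(a: A) -> A:
    return a

# Proposition: if A implies B and B implies C, then A implies C.
# Proof: function composition inhabits the corresponding type.
def compose(f: Callable[[A], B], g: Callable[[B], C]) -> Callable[[A], C]:
    return lambda a: g(f(a))

# The "proof" runs as an ordinary program:
inc: Callable[[int], int] = lambda n: n + 1
dbl: Callable[[int], int] = lambda n: n * 2
print(compose(inc, dbl)(3))  # → 8
```

The logician reads compose as a proof that implication is transitive; the programmer reads it as plumbing. The Curry-Howard correspondence says they are reading the same thing.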
The living proof of the Curry-Howard correspondence is the appearance of other double-barrelled names in the literature. Time and again, a logician motivated by logical concerns and a computer scientist motivated by practical concerns have discovered exactly the same type system, usually with the logician getting there first. Most modern functional languages use the Hindley-Milner type system, discovered by the logician Roger Hindley in 1969 and rediscovered by the computer scientist Robin Milner in 1978. And much recent work on type-directed compilers is based on the Girard-Reynolds system, discovered by the logician Jean-Yves Girard in 1972 and rediscovered by the computer scientist John Reynolds in 1974.
Church and Curry lived to see their abstract theories have an impact on practical programming languages. They were guests of honor at the 1980 conference on Lisp and Functional Programming, both highly bemused by what had been done with their work. Gentzen was not so fortunate. Working in Prague during the disorder at the end of World War II, he was sent to prison on May 5th, 1945. Gentzen faced this calmly; he confided to a friend that "he was really quite contented since now he had at last time to think about a consistency proof for analysis." He died in his cell, of malnutrition, on August 4, 1945.
The Impact of Logic
Church's lambda calculus, both typed and untyped, had a profound impact on the development of programming languages. The notion that functions may take functions as arguments and may return functions as results was particularly important. In the 1960s, Christopher Strachey proposed this as a guiding principle for the design of programming languages, under the motto "functions as first-class citizens."
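A brief Python sketch of what "first-class" means in practice: functions may be passed as arguments, returned as results, and stored in variables like any other value.

```python
# `twice` takes a function as an argument and returns a new function:
def twice(f):
    return lambda x: f(f(x))

add3 = lambda x: x + 3
add6 = twice(add3)             # functions are ordinary values
print(add6(10))                # → 16
print(twice(twice(add3))(0))   # → 12 (add3 applied four times)
```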
Lisp used the keyword lambda to define functions, but its definition of a function differs subtly from that of lambda calculus. Languages that took more direct inspiration from lambda calculus include:
- Iswim (Peter Landin, 1966; http://lucy.csc.uvic.ca/534/luswim.html).
- Scheme, a dialect of Lisp which got lambda right (Guy Steele and Gerald Sussman, 1975; http://www.schemers.org/).
- ML, short for "metalanguage" (Gordon, Milner, and Wadsworth, 1979; see http://www.cl.cam.ac.uk/users/lcp/MLbook/general.html), later succeeded by Standard ML.
- Miranda (David Turner, 1985; http://www.cs.nott.ac.uk/~gmh//faq.html#miranda).
- Haskell, named for Haskell Curry (Hudak, Peyton Jones, Wadler, and others, 1987; see http://haskell.org/).
- O'Caml, a French spinoff of ML (Xavier Leroy and others, 1996; see http://pauillac.inria.fr:80/caml/).
Iswim and Scheme are untyped, but the others have type systems based on the Hindley-Milner and Girard-Reynolds systems. Standard ML is noted for its exploration of module types, Haskell for its type classes, and O'Caml for its object-oriented types. Standard ML, Haskell, and O'Caml are all under continuing development, and innovation in their type systems is one of the principal directions of research.
Applications built on top of functional languages, and which themselves use type systems in innovative ways, include: Kleisli, a database language for biomedical data (implemented in Standard ML); Ensemble, a library for building distributed applications and protocol stacks (implemented in O'Caml); Lolita, a natural language understanding system (implemented in Haskell); and XDuce, a language for processing XML (implemented in O'Caml).
Functional programming and types have also had a significant impact on Java. Guy Steele, one of the three principal designers of Java, got his start with Scheme. John Rose, another Scheme programmer, introduced lambda terms into Java in the form of inner classes, where they are used to implement callbacks in the GUI and the event framework. The security model of Java depends heavily on type safety, spurring much new work in this area, including type systems to enhance security. And the next generation of Java may include generic types, based partly on the Girard-Reynolds type system (see my article "GJ: A Generic Java," Dr. Dobb's Journal, February 2000).
Java and Jini provide some support for distributed computing, but it is clear that they are not the last word. A whole new generation of distributed languages is currently under development, including Milner's pi calculus; Cardelli and Gordon's Ambit; Fournet, Gonthier, and others' join calculus; and Lee and Necula's proof-carrying code. All of these languages draw heavily on the traditions of typed lambda calculus.
Hewlett-Packard's Runway multiprocessor bus underlies the architecture of the HP 9000 line of servers and multiprocessors. Hewlett-Packard applied the Higher Order Logic (HOL) theorem prover as part of an effort to verify that caching protocols in Runway did not deadlock. This approach uncovered errors that had not been revealed by several months of simulation (see http://www.cs.bell-labs.com/who/wadler/realworld/hol.html).
The Defense Science and Technology Organization, a branch of the Australian Department of Defense, is applying the Isabelle theorem prover (http://www.cl.cam.ac.uk/research/hvg/isabelle/) to verify arming conditions for missile decoys. The system was used to prove, for example, that the missile cannot be armed when the launch safety switch is not enabled. As a side effect of constructing the proof, some potential errors in the code were spotted.
Both HOL and Isabelle are implemented in Standard ML. Standard ML is a descendant of ML, the metalanguage of the groundbreaking LCF theorem prover, which is in turn an ancestor of both HOL and Isabelle. This circle reflects the intertwined history of theorem provers and functional languages. HOL was developed by Gordon, Melham, and others, with versions released in 1988, 1990, and 1998. Isabelle was developed by Paulson, Nipkow, and others, with versions released in 1993, 1994, 1998, and 1999.
ML/LCF exploited two central features of functional languages -- higher order functions and types. A proof tactic is a function taking a goal formula to be proved and returning a list of subgoals paired with a justification. A justification, in turn, is a function from proofs of the subgoals to a proof of the goal. A tactical is a function that combines small tactics into larger tactics. The type system was a great boon in managing the resulting nesting of functions that return functions that accept functions. Furthermore, the type discipline ensured soundness, since the only way to create a value of type Theorem was by applying a given set of functions, each corresponding to an inference rule. As noted, the type system Robin Milner devised for ML remains a cornerstone of work in functional languages. ML/LCF was developed by Milner, Gordon, and Wadsworth, with the complete description published in 1979.
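The shape of a tactic can be sketched in a few lines. This is a Python mock-up of my own devising, not actual ML/LCF code; the tuple representations of goals and proofs are invented for the example.

```python
# A tactic maps a goal to subgoals plus a justification: a function
# from proofs of the subgoals to a proof of the original goal.

def conj_tactic(goal):
    """Reduce a goal of the form ('and', p, q) to the subgoals p and q."""
    tag, p, q = goal
    assert tag == "and"
    subgoals = [p, q]
    justification = lambda proofs: ("and_intro", proofs[0], proofs[1])
    return subgoals, justification

# Using the tactic: prove ('and', 'A', 'B') given proofs of 'A' and 'B'.
subgoals, justify = conj_tactic(("and", "A", "B"))
proof = justify([("axiom", "A"), ("axiom", "B")])
print(proof)  # → ('and_intro', ('axiom', 'A'), ('axiom', 'B'))
```

In ML, the type system guarantees that a justification can only manufacture a value of type Theorem through the trusted inference rules; the Python sketch, of course, enforces no such discipline.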
HOL and Isabelle are just two of the many theorem provers that draw on the ideas developed in LCF, just as Standard ML is only one of the many languages that draw on the ideas developed in ML. Among others, Coq is implemented in Caml, Veritas in Miranda, Yarrow in Haskell (http://www.cs.kun.nl/~janz/yarrow/), and ALF (http://www.cs.bell-labs.com/who/wadler/realworld/alf.html), Elf, and Lego in Standard ML again. The March 1999 issue of the Journal of Functional Programming (http://uk.cambridge.org/journals/JFP/) examined the interplay between functional languages and theorem provers.
Trust and Security
Trust is essential in computing. Almost all of the code you run is not code that you yourself have written. How can you know it will do what you intend it to do, rather than, say, wipe your hard disk, or e-mail your credit-card numbers to a third party? The Internet has brought this problem into sharper focus, as it increases opportunities for downloading programs while it decreases opportunities to build trust via social interaction. Computer viruses hidden in e-mail documents have brought this problem to public attention, but the problem exists for any program you might run on your machine. Further, not only do you have to trust the programmer not to be malicious, you also have to trust him or her not to violate the security policy by mistake. Such mistakes are in fact quite common, as many who have suffered a machine crash can attest.
There are technical means available to complement the social means of building trust. For instance, an operating system may allocate permission to read or write files to various accounts. (UNIX and NT support this, Windows 98 does not.) You can then arrange for an untrusted program to be given permission to read or write only a small selection of files. (Alas, even on systems that can easily support this, it is not always standard practice to do so.) The policy describing which resources may be accessed is called a "security policy."
The bedrock of any security policy is limiting the areas of computer memory that a program can read or write. Once a program overwrites the kernel of the operating system, all bets are off. Even more simply, many viruses or worms operate by overwriting a return location on the stack, which in turn is achieved by writing into an array with an index outside the array boundary.
Thus, types can be intimately tied to security. For instance, if a program is passed the address of an array, the type system can ensure that the program always writes to addresses within the array boundary. Strongly typed languages (such as Java or Modula-3, http://www.research.digital.com/src/modula-3/html/home.html) guarantee such properties; weakly typed languages such as C or C++ do not. Java's popularity as an Internet programming language is based in large part on the fact that its design addresses these security issues. The foundation of this is that Java bytecodes are verified before they are run; that is, the bytecodes form a typed programming language, and the verifier checks the types.
The simplest way to achieve security is to check each operation each time it is performed, but this can be expensive. To reduce the cost, before running the program, you may wish to check once and for all that certain operations are guaranteed to be safe every time they are performed; that is, you want to prove that certain operations always conform to your security policy, and that is exactly what a type system does. Often, you use a combination of both approaches.
For example, when Java indexes an array, the top two slots on the stack contain an index and a pointer to the array. The verifier guarantees that the slot that is supposed to point to an array really does point to an array (a block of storage containing a length followed by the array contents), so there is no need to check this at run time. However, the code does need to check at run time that the array index is bounded by the array length. (A clever just-in-time compiler may be able to eliminate this check for some loops, such as when the array index is the same as the loop index and the loop index is bounded by the array length.)
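The division of labor can be sketched as follows (a Python illustration, not actual JVM code): the verifier's static guarantee eliminates one check, and the bounds test remains at run time.

```python
# The verifier has already guaranteed (statically, before the code runs)
# that `array` really is an array, so only the index must be checked
# at the moment the access happens.

def checked_load(array, index):
    if not (0 <= index < len(array)):   # the remaining run-time check
        raise IndexError("array index out of bounds")
    return array[index]

data = [10, 20, 30]
print(checked_load(data, 2))  # → 30
# checked_load(data, 3) raises IndexError instead of reading bad memory.
```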
Software Fault Isolation
Of course, Java is not the only game in town. For some applications, you may want the added flexibility of supplying machine code rather than Java bytecodes. One method is to simply trust that the code provided is safe, perhaps using cryptographic techniques to verify that the code is supplied by a known party. When executing machine code, is there any way to supplement trust with tests? There is -- and the solutions span an alphabet of acronyms.
One is SFI, short for Software Fault Isolation. This inserts extra instructions into the machine code, to check that accesses will not exceed set bounds on the memory. SFI introduces extra run-time overhead, and the security policies it implements are cruder than those imposed by a type system. Typically, you can restrict a whole program to access data in a particular region of memory, but not restrict accesses to a particular array to be within bounds. SFI was developed by Wahbe, Lucco, Anderson, and Graham at the University of California at Berkeley in 1993 (see http://www.cs.berkeley.edu/~gribble/osprelims/F95/summaries/sfi.html).
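The core trick can be illustrated as address masking (a simplified Python sketch; real SFI rewrites machine code, and the constants here are invented):

```python
# Before each store, SFI-style inserted instructions force the address
# into the sandbox region by masking its high bits, so no access can
# escape the region, no matter what address the program computes.

SEGMENT = 0x40000000        # hypothetical base of the sandbox region
MASK    = 0x0000FFFF        # hypothetical sandbox size: 64 KB

def sandboxed_address(addr):
    return SEGMENT | (addr & MASK)   # the inserted masking instructions

# Even a wild address is forced back inside the sandbox:
print(hex(sandboxed_address(0xDEADBEEF)))  # → 0x4000beef
```

Note that the mask silently redirects a bad access rather than reporting it, which is one reason SFI's policies are cruder than a type system's.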
Typed Assembly Language
Another is TAL, or Typed Assembly Language. The type system developed for lambda calculus is flexible enough to be applied to a range of other styles of programming languages -- and, remarkably, it can even be extended to assembly language. In Java, the bytecodes are annotated with types that are checked by the verifier before the code is run. Similarly, in TAL the machine code is annotated with types that are checked by the type checker before the code is run. The difference is that while Java bytecodes are especially designed for checking, in TAL the type system is overlaid on the machine code of the Intel x86 architecture. (The TAL researchers display a variant of the familiar "Intel inside" logo modified to say "Types inside," and their motto is "What do you want to type today?") TAL was developed by Greg Morrisett and others at Cornell University in 1998 (http://www.cs.cornell.edu/talc/).
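A toy illustration of the idea, invented for this sketch (real TAL types actual x86 code, with a far richer type system): each instruction is checked against a typed view of the registers before the code ever runs, and ill-typed code is rejected.

```python
# Each register has a type such as 'int' or 'ptr'; the checker walks
# the instructions and rejects any use that disagrees with the types.

def check(program, reg_types):
    for op, dst, src in program:
        if op == "mov":                       # mov copies the type
            reg_types[dst] = reg_types[src]
        elif op == "add":                     # add requires two ints
            if reg_types[dst] != "int" or reg_types[src] != "int":
                raise TypeError("add applied to a non-int register")
            reg_types[dst] = "int"
        elif op == "load":                    # load requires a pointer
            if reg_types[src] != "ptr":
                raise TypeError("load from a non-pointer register")
            reg_types[dst] = "int"
    return reg_types

# Well-typed: load an int through a pointer, then add an int to it.
types = check([("load", "r1", "r2"), ("add", "r1", "r3")],
              {"r2": "ptr", "r3": "int"})
print(types["r1"])  # → int
# Ill-typed code such as ("load", "r1", "r3") would raise TypeError.
```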
Typed Intermediate Language
TAL is a refinement of TIL, or Typed Intermediate Language. The TIL project exploited types as the basis for constructing an optimizing compiler for a functional language. The typed functional language is translated to a lower level typed intermediate language, which is optimized and then compiled to machine code (or, later, TAL). Indeed, most compilers for functional languages now exploit types heavily, in a style similar to that used for TIL. TIL was developed by Greg Morrisett and Bob Harper at Carnegie-Mellon University in 1995 (http://www.cs.cornell.edu/home/jgm/tilt.html). Both TAL and TIL take their inspiration from the Girard-Reynolds type system mentioned earlier, John Reynolds also being on the faculty at Carnegie-Mellon.
Proof Carrying Code
Finally, there is PCC, or Proof Carrying Code. In TAL, as in all type systems, the types are, in effect, a proof that the program satisfies certain properties. In PCC, instead of adding types to the machine code, one adds the proofs directly. The disadvantage of this is that proofs are typically larger than types, and proof checking is slightly harder than type checking; the advantage is that proofs can be more flexible than types. PCC was developed by George Necula and Peter Lee at Carnegie-Mellon in 1996 (Necula moved to Berkeley shortly thereafter; see http://www-nt.cs.berkeley.edu/home/necula/public_html/pcc.html).
The first application of PCC was to packet filters. A packet filter needs to be fast (servers process a lot of packets) and trusted (it is part of the kernel). A special-purpose solution is the Berkeley Packet Filter (BPF), a small interpreted language for packet filters built into the kernel of Berkeley UNIX, which enforces the safety policy by run-time checks. You can avoid the overhead of interpretation by using SFI (which adds its own run-time checks) or PCC (where the proof is checked just once, and there is no run-time overhead). The original PCC paper presented measurements showing that SFI was four to six times faster than BPF, and that PCC was another 25 percent faster than SFI. A drawback of PCC is that it takes a while to check the proof, so SFI is faster than PCC if checking a tiny number of packets, but PCC is faster than SFI if checking more than a dozen packets (the common case). This work won the best paper award at OSDI '96 (http://www.usenix.org/publications/library/proceedings/osdi96/index.html), the Symposium on Operating System Design and Implementation.
Back to Java
Another application of PCC is our old friend, Java. Instead of transmitting bytecodes and using a just-in-time compiler, with the PCC Special J system you can transmit already compiled machine code together with a proof that the code satisfies all the properties that would normally be checked by the bytecode verifier or at run time. Some Java users resort to the native code interface, which gives no security guarantees but provides a way to execute machine code directly; with Special J, you can execute machine code directly while still preserving the security guarantees. Necula and Lee have founded a start-up, Cedilla Systems (http://www.cedillasystems.com/), to pursue commercial possibilities for PCC, and PCC is now patent pending.
One advantage of both TAL and PCC is that the trusted computing base is smaller. Designing a secure system is hard -- a number of bugs have been spotted and corrected in the Java platform since it was first released. If a platform for secure computing is simple and small, then it is less likely to contain errors. In TAL, the only code that needs to be trusted is the type checker for assembly language, and in PCC the only code that needs to be trusted is that of the type checker. In particular, the compiler that generates the code does not need to be trusted. If the compiler contains an error, then it is likely to generate code that does not type check (for TAL), or an incorrect proof (for PCC). Thus, TAL and PCC, as well as enhancing security, also aid in debugging the compiler.
In rare cases, it might be that the compiler generates code that does not do what was expected, but still type checks or has a correct proof. Even in this case, the types or proof will guarantee that the code cannot violate the security policy. For instance, the security policy might ensure that the code never indexes outside the bounds of a given array, but the code might still look at the wrong array index.
What happens if a malicious user changes the code? Then the type or proof will not match the code, and the program will not run. What happens if a malicious user changes the code, and changes the types or proofs to match the new code? Then the types or proofs still guarantee that the new code cannot violate the security policy, even though the code may not behave as expected. (There is not much difference here between a malicious user and an erroneous compiler.) The flip side of this is that neither TAL nor PCC is a panacea. You need to set the security policy in advance, and design the type or proof system to enforce that policy.
TAL and PCC have attracted a great deal of interest in the research community. There is a biannual workshop devoted to typed compilers in the style of TIL, and there is ongoing research on PCC at Princeton, Purdue, and Yale, as well as Berkeley and Carnegie-Mellon.
Both TAL and PCC work in much the same way. They were both designed by researchers in functional languages, and they depend heavily on the logics and type systems with roots that were traced in this paper. As it happens, they were also initiated by researchers working in the same place, Carnegie-Mellon, at the same time, just a few years ago. The similarities were recognized from the start, and research on each has reinforced the other.
As we've seen, types and proofs are strikingly similar, and, in the right circumstances, can even be in precise correspondence. With Gentzen and Curry it took 30 years for the underlying similarities to be spotted; with Girard and Reynolds, the same idea popped up at nearly the same time for very different reasons. So the only surprise with TAL and PCC is that the similarities were recognized from the start.
It is too early to say, but just as physics is underpinned by the work of Copernicus, Galileo, and Newton, one day computing may be underpinned by the work of Frege, Gentzen, and Church.