Microsoft Research: Looking for a RiSE in Developer Productivity

Wolfram Schulte talks about Research in Software Engineering


December 12, 2008
URL: http://www.drdobbs.com/tools/microsoft-research-looking-for-a-rise-in/212500077


In the summer of 2008, the leadership at Microsoft Research Redmond reorganized an existing set of research groups with a refreshed, more encompassing mandate: reinventing all aspects of software development. The revamped area, Research in Software Engineering (RiSE), headed by Wolfram Schulte, includes a diverse set of topics such as experimental software engineering, human interactions in programming, software reliability, programming-language design and implementation, and theorem proving. He recently found time to discuss the new area and the challenges it is addressing:

Q: How is RiSE going to address all of those topics?

Schulte: As our Web site shows, RiSE is organized into 10 broad research areas, ranging from the development process itself all the way down to the underlying runtimes and system software the code eventually runs on. Our research projects span multiple areas. The important thing is to address the set of current challenges we see: global software development, multicore and Web concurrency, reliability, security, and energy-efficient computing. These challenges are interdependent, and so their solutions must be as well.

Q: Concurrency is a hot topic these days. What is RiSE doing in that area?

Schulte: With multicore machines, most of our programs will have to become parallel. Today, we need ways to express parallelism, and we need analysis tools to find errors related to concurrency.

To give .NET programmers a much more reliable way to program multicore machines, Daan Leijen recently developed a library of high-level concurrency abstractions, called the Task Parallel Library (TPL), which will ship as part of the Parallel Extensions to .NET in .NET 4.0.
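
To make that concrete, here is a minimal sketch of what data and task parallelism look like with TPL in C#. It is an illustration rather than code taken from the interview, and it assumes only the Parallel.For and Task.Factory.StartNew entry points in the System.Threading.Tasks namespace.

using System;
using System.Threading.Tasks;

class TplSketch
{
    static void Main()
    {
        var data = new double[1000000];

        // Data parallelism: the library partitions the index range
        // across the available cores for us.
        Parallel.For(0, data.Length, i =>
        {
            data[i] = Math.Sqrt(i) * Math.Sin(i);
        });

        // Task parallelism: fork two independent computations and
        // join on their results.
        Task<double> sum = Task.Factory.StartNew(() =>
        {
            double s = 0;
            foreach (var x in data) s += x;
            return s;
        });
        Task<double> max = Task.Factory.StartNew(() =>
        {
            double m = double.MinValue;
            foreach (var x in data) if (x > m) m = x;
            return m;
        });

        Console.WriteLine("sum = {0}, max = {1}", sum.Result, max.Result);
    }
}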

Now that we can express parallelism, we need to have analysis tools to make sure there are no races, deadlocks, or performance bottlenecks. To address some of these problems, Madan Musuvathi and Shaz Qadeer have developed CHESS, which uncovers data races by controlling the scheduler for the different threads in a program.
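
To see the kind of bug such a tool hunts for, consider a classic lost-update race. The toy C# program below is an illustration, not a CHESS test case: two threads increment a shared counter without synchronization, and how many updates survive depends entirely on how the scheduler interleaves them, which is exactly the nondeterminism a tool that controls the scheduler can explore systematically.

using System;
using System.Threading;

class RaceSketch
{
    static int counter = 0;   // shared, unprotected state

    static void Main()
    {
        // Two threads do an unsynchronized read-modify-write;
        // increments can be lost depending on the interleaving.
        ThreadStart work = () =>
        {
            for (int i = 0; i < 100000; i++)
                counter++;    // data race: ++ is not atomic
        };

        var t1 = new Thread(work);
        var t2 = new Thread(work);
        t1.Start(); t2.Start();
        t1.Join();  t2.Join();

        // Expected 200000; under an unlucky schedule the program
        // prints less. Replacing counter++ with
        // Interlocked.Increment(ref counter) removes the race.
        Console.WriteLine(counter);
    }
}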

But in the long run, our existing concurrency models of programming, with threads and locks, are not adequate. If you think about the way that our abstractions have evolved, we started with assembly language, we went to C, and we have managed languages today. Threads and locks should be seen as the assembly-language level. We need better abstractions and are looking to do more here.

Q: Static analysis has always been a strong point of MSR. What is new in that area?

Schulte: Yes, we have had a lot of success in static analysis. For instance, Tom Ball and colleagues built the Static Driver Verifier toolkit, which helps eliminate blue screens caused by drivers in Windows and improves the reliability of the platform. Manuel Fahndrich contributed to the SAL project, the annotation language for the static checkers PREfix and PREfast, which won Microsoft's Chairman's Award for Excellence in Engineering in 2007. Rustan Leino has led the Spec# project, a research effort that has created the world's leading program-verification system for object-oriented programming.

Now, a spinoff from our previous work is making its way into .NET. At PDC, we will show the Code Contracts project, developed in part by Mike Barnett and Francesco Logozzo. It gives programmers a language-independent way to record their design decisions in any .NET language. We then have tools that mine those contracts to generate proper documentation, to check them at runtime, or even to statically verify code against them. In a sense, code contracts and the associated tools act as a grammar checker for programmers, while today's compilers are like spell checkers.
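
As a sketch of what the contracts look like in C#, the fragment below uses the System.Diagnostics.Contracts API that Code Contracts shipped with .NET 4.0; the Account class is a made-up example, and the exact surface at PDC time may have differed.

using System.Diagnostics.Contracts;

public class Account
{
    private int balance;

    [ContractInvariantMethod]
    private void ObjectInvariant()
    {
        // Holds on entry and exit of every public method.
        Contract.Invariant(balance >= 0);
    }

    public void Withdraw(int amount)
    {
        // Preconditions: the caller must not overdraw.
        Contract.Requires(amount > 0);
        Contract.Requires(amount <= balance);
        // Postcondition: the balance shrinks by exactly 'amount'.
        Contract.Ensures(balance == Contract.OldValue(balance) - amount);

        balance -= amount;
    }
}

The same annotations feed the documentation generator, the runtime checker, and the static checker, which is what makes the approach language independent.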

Q: What does the virtual team for testing work on?

Schulte: Testing is in desperate need of automation: It continues to cost as much as 50 percent of the development effort. We have recently combined automated testing with static analysis based on research Patrice Godefroid started while he was at Bell Labs.

Another PDC presentation will be the Pex project, headed by Nikolai Tillmann and Peli de Halleux. Pex builds on Patrice's ideas and brings them to a new level. It performs an intelligent exploration of software to systematically discover its behavior, thereby forcing it into the possibly faulty corner cases that manual testing is often unable to find. Even without guidance, Pex often automatically reaches high code coverage. As a side effect, Pex immediately generates concrete test cases. The test cases can be used for immediate debugging or as regression suites.
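
The input to Pex is a parameterized unit test. The sketch below assumes the Microsoft.Pex.Framework attribute names ([PexClass], [PexMethod], PexAssume) together with MSTest assertions; the test itself is an invented example of the style, not one of the team's.

using Microsoft.Pex.Framework;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass, PexClass]
public partial class StringSplitTests
{
    // A parameterized unit test: Pex explores the code under test
    // and picks concrete values for 'input' and 'c' that drive
    // execution down every reachable branch.
    [PexMethod]
    public void SplitNeverReturnsNull(string input, char c)
    {
        PexAssume.IsNotNull(input);

        string[] parts = input.Split(c);

        // Pex reports any input for which these assertions fail and
        // emits the failing case as an ordinary unit test.
        Assert.IsNotNull(parts);
        Assert.IsTrue(parts.Length >= 1);
    }
}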

Q: What does the virtual team for experimental software engineering look at?

Schulte: This is an interesting virtual team because it spans two Microsoft Research labs. It includes Nachi Nagappan in Redmond and Brendan Murphy of Microsoft Research Cambridge. They measure Microsoft's internal software-development process to make better predictions on how the next similar project will run. The team analyzes the development process and resulting software artifacts to understand how likely it is that a particular piece of code still has defects. If you do those analyses and have the right statistical models, you can also do better risk estimation and resource allocation. Their focus is not just at the software-system level, but also, more holistically, at the team/project level, for instance, the impact of organizational structure on software quality.
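
The team's statistical models are not spelled out here, but the flavor of the analysis can be sketched: collect a few metrics per component, such as code churn and the number of distinct engineers who touched it, and rank components by a fitted risk score so that testing effort goes where the risk is highest. The C# below is purely illustrative; the metric names and weights are invented.

using System;
using System.Collections.Generic;
using System.Linq;

// Purely illustrative: the metrics and weights below are invented,
// not the team's actual models.
class ComponentMetrics
{
    public string Name;
    public int LinesChanged;     // code churn since the last release
    public int DistinctAuthors;  // breadth of ownership
    public int PastDefects;      // defects found in prior releases
}

class RiskSketch
{
    // A toy linear risk score; a real study would fit such weights
    // with regression against historical defect data.
    static double Risk(ComponentMetrics m)
    {
        return 0.5 * Math.Log(1 + m.LinesChanged)
             + 0.3 * m.DistinctAuthors
             + 0.2 * m.PastDefects;
    }

    static void Main()
    {
        var components = new List<ComponentMetrics>
        {
            new ComponentMetrics { Name = "Parser",  LinesChanged = 1200, DistinctAuthors = 9, PastDefects = 4 },
            new ComponentMetrics { Name = "Printer", LinesChanged = 80,   DistinctAuthors = 2, PastDefects = 0 }
        };

        // Rank components so testing effort goes where the risk is highest.
        foreach (var m in components.OrderByDescending(Risk))
            Console.WriteLine("{0}: risk {1:F2}", m.Name, Risk(m));
    }
}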

Q: This all sounds very applied. Are any teams looking at more foundational issues?

Schulte: Certainly! But foundations often pay off in very applied ways. We are doing amazing work on automated theorem proving. Nikolaj Bjorner and Leo de Moura have created Z3, which has won most of the categories in the worldwide competition for Satisfiability Modulo Theories (SMT) solvers. This is not only a theoretical achievement: Z3 also provides the underlying engine for most of our program-analysis and testing tools.
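
To get a feel for what an SMT solver does, the snippet below poses a tiny constraint problem to Z3 through its .NET bindings (the Context and Solver classes in the Microsoft.Z3 namespace); the constraints themselves are an invented example.

using Microsoft.Z3;

class SmtSketch
{
    static void Main()
    {
        using (var ctx = new Context())
        {
            // Ask Z3: are there integers x, y with x > 2, y < 10, and x + 2*y == 7?
            IntExpr x = ctx.MkIntConst("x");
            IntExpr y = ctx.MkIntConst("y");

            Solver solver = ctx.MkSolver();
            solver.Assert(ctx.MkGt(x, ctx.MkInt(2)));
            solver.Assert(ctx.MkLt(y, ctx.MkInt(10)));
            solver.Assert(ctx.MkEq(ctx.MkAdd(x, ctx.MkMul(ctx.MkInt(2), y)), ctx.MkInt(7)));

            if (solver.Check() == Status.SATISFIABLE)
            {
                // Print one concrete satisfying assignment found by the solver.
                Model m = solver.Model;
                System.Console.WriteLine("x = {0}, y = {1}", m.Evaluate(x), m.Evaluate(y));
            }
        }
    }
}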

A truly foundational result is recent work by Yuri Gurevich: he has settled a logic problem that has been open for more than 70 years by giving a proof of Church's thesis, which tries to rigorously define what computability means.

Q: People in your area are also looking at the human side of programming. What is the research there?

Schulte: Ultimately, our code is all developed by people. If we don't understand how people collaborate with their teammates, we might miss a huge opportunity to help the overall development process.

Previous work by Rob DeLine and his team has had impact by mining all of the information related to a development project to help new developers who have just joined a product team. During PDC, Andrew Begel will show one incarnation of this work, Deep Intellisense.

One interesting future question is: How can we develop a code base when engineers are distributed over different locations and over different time zones? It's the question of global software development. Can we provide tools to enable them to work as if they were in the next room?

Q: Then you have a virtual team looking at programming languages themselves. What is going on there?

Schulte: You can either create a new programming language or impose rules for using an existing programming language. We are working on both.

Programmers appreciate type systems because they provide partial guarantees about a program, for instance, that variables only hold values of the proper type. But the type systems of existing languages such as C# and Visual Basic are very restrictive. We are exploring much more expressive type systems. For instance, we can guarantee that processes obey a certain protocol when they exchange messages with each other. The Singularity project followed this approach with the static analysis enabled by type-system extensions to C#.
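
Singularity's channel contracts are written in Sing#, an extension of C#, and are not reproduced here. The plain C# sketch below only illustrates the underlying idea in a much simplified form: if each protocol state is modeled as its own type, the compiler rejects a message sent out of order.

// Not Sing#: just a plain C# sketch of pushing a protocol
// ("send a request, then read the reply, then the channel is closed")
// into the types, so out-of-order use fails to compile.
public sealed class AwaitingRequest
{
    public AwaitingReply SendRequest(string request)
    {
        // ... transmit the request ...
        return new AwaitingReply();
    }
}

public sealed class AwaitingReply
{
    public Closed ReadReply(out string reply)
    {
        reply = "...";            // ... receive the reply ...
        return new Closed();
    }
}

public sealed class Closed { }

class ProtocolSketch
{
    static void Main()
    {
        var channel = new AwaitingRequest();
        var pending = channel.SendRequest("GET /");
        string answer;
        var done = pending.ReadReply(out answer);

        // channel.ReadReply(...)   // would not compile: no such method
        //                          // before a request has been sent
    }
}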

But we also need to address all the systems already written. In fact, we can improve them a lot just by changing the underlying runtime systems. Using the traditional engineering techniques of redundancy and randomization, Ben Zorn has created RobustHeap, a memory allocator that is resilient against buffer overruns. Trishul Chilimbi approaches program analysis from the performance point of view. He has built various low-overhead profiling and data-relocation tools that help find performance bottlenecks, whether they arise in sequential code or from contention in concurrent code. And he is focused on extending this work to improve software energy efficiency.
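
RobustHeap itself is a native allocator, so the following is only a conceptual C# sketch of the two techniques named, redundancy and randomization: each request is over-provisioned with padding and placed at a jittered offset, so a small overrun tends to scribble on unused padding rather than on a neighboring object.

using System;

// Toy illustration only, not RobustHeap's actual design.
class PaddedPoolAllocator
{
    private readonly byte[] pool = new byte[1 << 20];
    private readonly Random rng = new Random();
    private int next = 0;

    // Returns the offset in Pool where the caller may write 'size' bytes.
    public int Allocate(int size)
    {
        int padding = size / 2 + 16;            // redundancy: over-provision the slot
        int slot = size + padding;
        int offset = next + rng.Next(padding);  // randomization: jittered placement
        next += slot;
        if (next > pool.Length) throw new OutOfMemoryException();
        return offset;
    }

    public byte[] Pool { get { return pool; } }
}

class RobustSketchDemo
{
    static void Main()
    {
        var heap = new PaddedPoolAllocator();
        int a = heap.Allocate(32);
        int b = heap.Allocate(32);

        // A 4-byte overrun past the first object: with high probability
        // the stray writes land in the first slot's padding rather than
        // in the second object.
        for (int i = 0; i < 36; i++) heap.Pool[a + i] = 0xFF;
        Console.WriteLine("second object starts at offset {0}", b);
    }
}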

Q: Another hot topic these days is Web programming. Are you looking into that?

Schulte: We have three great projects that will be presented during PDC. Ben Livshits and Emre Kiciman have developed AjaxView, the first system that provides a view of the overall performance of a Web application. This includes the server, the client, and the latency on the wire in between. They also developed Doloto, which rewrites a JavaScript application so that individual pieces can be downloaded on demand, thus reducing the wait before a Web application can begin running. Finally, there is BAM, developed by Ethan Jackson, which enables cloud applications to be specified as logic programs, independently of implementation technologies and service connectivity. Complete, runnable programs are then generated with the single push of a button.

Q: With the RiSE initiative, how long will it take before you can evaluate what you have accomplished? What are you hoping to see once you get there?

Schulte: We believe we can make major advances in each of the research areas we are looking at. For instance, in concurrency we think that a new programming model will emerge that can guarantee programs will behave correctly by construction. For testing, we think we can extend the impact we have had finding security bugs in codecs to whole applications.

Research isn't on a time schedule. Addressing the right research questions, then coming up with solutions, and then looking for opportunities to harness those ideas -- that's what we would like to do.
