Ajay Sudan, Product Manager, and Alex Torone, Group Program Manager, members of Microsoft's Modeling Team, spoke with Scott Swigart about Visual Studio Team System's modeling tools and Microsoft's thoughts about modeling in general.
SD: Thanks for taking the time to chat about the modeling features in Visual Studio Team System (http://msdn.microsoft.com/vstudio/teamsystem/default.aspx). Before we drill into those, could you take a minute and talk about VSTS in broad terms?
Ajay Sudan: In Visual Studio 2005, we're expanding the product line from just being a developer-centric tool to a full software suite for all members of the software development lifecycle. This includes architects, developers, testers, project managers and so on. At the high end of Visual Studio, there are three editions: Visual Studio Team Edition [VSTE] for Software Architects, for Software Developers, and for Software Testers. And for the first time, there's a server component for Visual Studio called Team Foundation Server [TFS]. This is really an extensible team collaboration server that includes version control, work item tracking, defect tracking, project management, reporting, integration with Microsoft Project, integration with Microsoft Excel and a build system as well.
Microsoft has always provided a development environment and source control, but Microsoft hasn't provided these other kinds of software development lifecycle tools. Why, and why now?
AS: These different roles have always been there, but we haven't always provided optimal solutions for them. Take testers, for example. We haven't typically provided testing tools. Organizations are taking testing more seriously, and we want to provide solutions more customized for testing and other specific roles.
With architecture, you'll see that we're doing a lot of things that further Microsoft's Dynamic Systems Initiative (www.microsoft.com/windowsserversystem/dsi/default.mspx). That's a Microsoft-wide initiative, which is relatively new, and these tools are designed to support that initiative.
A broader thing that we saw when we were putting together our Visual Studio 2005 product plans was that there are a lot more issues involved in software development today than there have been in the past. Software is becoming more and more complex. Teams are becoming more geographically distributed. With Team Foundation Server and the collaboration features that we provide, we're hoping to make team collaboration and communication a lot easier and more effective.
When we talk about modeling, VSTS provides a number of designers, and the designers that you get depend on which version of VSTS you're using, correct?
AS: Correct. The Class Designer is available in Visual Studio Standard edition and above. That's because we see the Class Designer as a developer productivity tool more than the traditional architect's class diagramming tool. It allows developers to visualize code, quickly design classes, refactor code, create documentation and so on. We've seen a lot of interest from all levels of developers in the Class Designer.
The Distributed System Designers, on the other hand, are specific to the Visual Studio Team Edition [VSTE] for Software Architects.
I understand that TFS isn't a released product yet, but there's "go-live" licensing available. Could you talk a bit about that?
AS: We've been "go-live" since Beta 3, which came out in December. What that means is that companies can put TFS into production use. We will support complete migration to the final version of TFS. We've been using TFS internally for quite some time, and as the product has progressed we've been migrating our internal systems from one version to the next, so we have quite a bit of experience with these kinds of migrations. We know we'll be able to deliver tools for customers to do the same if they start out with the Beta 3 version now.
Let's talk about the modeling tools. What kinds of modeling does VSTS let you do? What are the different designers that relate to modeling?
Alex Torone: One of the things we found was that while we have a lot of developer productivity features, our customers have told us that they need better ways to communicate the design and talk about the architecture of a system. In terms of the architect tools, we looked at how we could communicate the architectural structure of the design. The result is a number of modeling tools.
First, we have the Application Designer, which lets you view the high-level application structure. This shows the communication dependencies between services, configuration of endpoints and the like. When an architect starts to design a system, they can think about the messages that will get passed. They can think about the security between applications. They can think about database connections. Today in Visual Studio, you don't get a visualization of this. You have a project system with a number of items, but you don't get an overall view. Developers like to see context. If a developer's building a class library, they like to see the context in which that class fits. Furthermore, you want to leverage the IP of the architect. An architect can span multiple projects, and they don't want to end up recreating the same things over and over again. They especially don't want to find out that their designs weren't implemented as intended.
If we go to the opposite end of the spectrum, getting back to the point of why we built these tools, the real reason applications fail is communication, at multiple levels. The IT pros who run and configure the data center, and who understand the services on each machine, often don't communicate with the architects or the development organizations who know the application's requirements on the services of those machines. What typically happens is that an application is taken out of the development environment and moved into a test or preproduction environment, and all sorts of missed nonfunctional requirements surface: services on those machines aren't available, or they're behind firewalls, or protocols have been restricted. When that communication does happen, it's done in a very unstructured way. You see this in diagrams or other kinds of documentation that aren't rigorous. What we've done with the Logical Data Center Designer is provide a way to say, "Let's describe what the hosting environment should look like, or what it does look like in the data center." It's not an infrastructure diagram or an Active Directory model. It's a description of the runtime environments that are available in the data center.
Applications require a runtime environment, so we've married the two together. We say, "Here's an application. Here's a runtime environment. Let's make sure this application meets the requirements of the logical data center runtime environment." Using the Deployment Designer, you map the application onto the logical data center, and validation rules ensure that your application will actually work in the real environment. This results in two directions of communication: the logical data center has requirements for the application, but the application developer can also specify needs against the data center. In Visual Studio, detected issues appear as tasks, and the development and IT organizations work through the list to determine whether the application or data center will accommodate a given issue.
The last designer we have is the System Designer. We have this because the architect wants to describe the abstractions of larger systems. You can have a system that contains multiple kinds of applications, and you can talk about high-level communications across systems. It's in fact the system that you can configure for deployment. The System Designer lets you nest systems and describe configurations of systems for deployment, whereas the Application Designer is primarily a design surface for your development environment. So we have application design, system design and logical data center design, and they all come together in Deployment Designer.
But when you talk about the Application Designer, this is something that's at a higher level than class diagrams. You're really talking about the applications themselves as blocks, and how the applications connect and communicate with databases and with each other, is that correct?
AT: Correct. The way we think about this in Version 1 of our product is that the application is the atomic unit of deployment, which means it will run inside a process in some hosting environment. The classes, or the implementation of that application, could consist of multiple projects, and that is where the Class Designer would really come in. In our first version, we don't visualize down to the level of implementation classes on the Application Designer.
Right. And so the Logical Data Center Designer lets you, at development time, validate the applications that you're building against the runtime environments that they're going to be deployed into, to ensure that you're not making certain assumptions about what's going to be available. So that you don't find out after the fact that certain ports aren't open, and so on.
AT: Correct. You increase the predictability that your application will meet the requirements of the environment.
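The validation Torone describes, checking an application's declared requirements against a data center's declared capabilities and surfacing mismatches as a task list, can be sketched roughly as follows. All names and the dictionary-based model here are purely illustrative; the actual designers operate over the System Definition Model, not an API like this.

```python
# Illustrative sketch only: the real Deployment Designer validates
# System Definition Models, not plain dictionaries like these.

def validate_deployment(app_requirements, zone_capabilities):
    """Compare an application's requirements against what a logical
    data center zone provides, returning a task list of conflicts."""
    tasks = []
    for req, needed in app_requirements.items():
        provided = zone_capabilities.get(req)
        if provided is None:
            tasks.append(f"Missing capability: {req}")
        elif provided != needed:
            tasks.append(f"Mismatch on {req}: app needs {needed}, "
                         f"zone provides {provided}")
    return tasks

# A hypothetical web app deployed into a zone that only opens port 80:
app = {"protocol": "https", "port": 443, "framework": ".NET 2.0"}
zone = {"protocol": "https", "port": 80, "framework": ".NET 2.0"}
for task in validate_deployment(app, zone):
    print(task)  # Mismatch on port: app needs 443, zone provides 80
```

The key idea is the two-way contract: each side declares what it needs and what it provides, and every conflict becomes a work item rather than a surprise in preproduction.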
When I think about developers versus IT pros, the IT people typically aren't Visual Studio users, and yet they're the ones that have all that knowledge in their heads about the specifics of the runtime environment. How do you envision the logical data center diagrams getting built? Do you envision an architect, who's familiar with Visual Studio, asking lots of questions of the IT pros so that the architect can build the diagram? Do you envision IT pros using Visual Studio to build the diagrams? From a people perspective, who's using the Logical Data Center Designer?
AS: When you look at the tool, it's primarily a development tool, so we thought about how we'd get that communication working. We view it in two senses. We view the architect collaborating with the IT pro, getting the IT pro to describe the environment. So in one sense, the architect could do it. In the other sense, there are a lot of existing diagrams in Visio and other tools, and the IT pro might actually want to describe the environment for the development team. To answer your question, both the architect and the IT pro could do this. The advantage for the IT pro is that he gets to communicate his requirements against the application. He could have multiple models, and when development teams start on a specific type of application, the IT pro could say, "Validate against this model."
If I remember correctly, this tool has some ability to interrogate the server and pull in things like the IIS settings.
AT: Right. We wanted to make sure we have the Web service application and deployment environment fully modeled. We can connect to a server and import the IIS metabase settings. We can get a complete configuration of the Web server. We're planning to extend that in future releases to allow import of all different types of servers.
What are some of the next servers that might be supported down the road, so that those configurations wouldn't need to be entered into a Logical Data Center diagram manually?
AT: I should first say that we needed to support multiple kinds of applications and hosts, so we built an SDK. The SDK allows a customer, third party or partner to model the other kinds of applications or hosting environments that we don't provide in the tool today. Through the Visual Studio 2005 SDK, you could write an add-in to do that importing. In the future, we want to make that an inherent part of the modeling space. So when we add a new server, like SQL Server or BizTalk, we'll have the discovery and importing facilities. Those are the areas we're looking at for future releases. The same goes for the application side. Today with the SDK, we can model any other kind of application. With an add-in, you could write project-level synchronizations, but in future versions, we want to make that better supported, where it's just part of the system and you get the full-fidelity design experience.
Also, as you've seen in the Application Designer, you can define contracts. In future releases we're going to be supporting Windows Communication Foundation and let you launch into those design experiences. What you're going to see is the Application Designer being able to launch into the other designers that other teams in Microsoft are building, so you'll be able to design in context. You'll be able to drill into those application components so you can design inside the services that you see.
In other words, today the Application Designer shows black boxes, but in the future you'd be able to drill into those and design the internals of those boxes and the interface contracts?
AS: I also want to point out that we have a number of partners who've made use of the System Definition Model (SDM) SDK, which is part of the Visual Studio 2005 SDK. Partners such as AviCode have taken that SDK and built models for Microsoft Operations Manager (MOM) servers, so you can drag and drop MOM servers onto your design surface, create a MOM endpoint and configure your MOM management packs. Then, as part of the deployment process, you can actually generate the MOM management packs that can be installed right into MOM.
And this is what you're referring to with the Deployment Designer: connecting your application design with a logical data center design?
AT: Right. The tool produces a deployment report, which is a complete description of the application environment, married to a complete description of the hosting environment, so that you have the full description of the application to be deployed. It's a human-readable report, and there's an XML file associated with it. Script writers, in version 1, could generate scripts to automate deployment. That's what Ajay was mentioning. Macrovision has an MSI builder that's driven from the deployment report.
The primary problem that this report addresses, and that these tools address, is that when applications move from one environment to another, people often think of the application, meaning the DLLs, the binaries, the application configuration. But there's configuration on the machine that's often missed when you move it, and that's what breaks the application. The Logical Data Center Designer takes that missing information and moves it along in a deployment report.
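Torone mentions that the deployment report carries an XML file that script writers can consume. A toy consumer might look like the following; note that the XML schema shown is invented for illustration, since the actual report format is defined by the Deployment Designer.

```python
# Toy consumer of a deployment report. The schema below is invented
# for illustration; the real report's format is tool-defined.
import xml.etree.ElementTree as ET

SAMPLE_REPORT = """
<deploymentReport>
  <application name="OrderService">
    <setting key="authentication" value="Windows"/>
    <setting key="sessionState" value="InProc"/>
  </application>
</deploymentReport>
"""

def deployment_steps(xml_text):
    """Flatten the report into (application, setting, value) steps
    that a deployment script could act on."""
    root = ET.fromstring(xml_text)
    return [(app.get("name"), s.get("key"), s.get("value"))
            for app in root.iter("application")
            for s in app.iter("setting")]

for name, key, value in deployment_steps(SAMPLE_REPORT):
    print(f"{name}: configure {key} = {value}")
```

Because the machine-level configuration travels with the application description, a script built this way carries along exactly the settings that are usually forgotten when an application changes environments.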
One of the terms that I've seen coming out from Microsoft and other companies is "Model-Driven Development." It seems that "(fill in the blank) Driven Development" is in vogue these days. When Microsoft says "Model-Driven Development," what does that mean?
AT: The reason that models are important is when you think about the architectural development space, and the design space, and the deployment space, and the management space, you need to have a common description of what it is that you want to operate on. The reason a model is important is that it gives you a common definition across these spaces. When you look at where we're heading with the Dynamic Systems Initiative (DSI), we want to be able to build manageable applications. Well, what is a manageable application? If you think about a manageable application, you need to be able to describe things like computers, security and hosting environments. The beauty is, with that information in a model, you can run automated analysis over it.
Visual Studio is very much a development environment, but when you look at the management space, people there have a different set of concerns, yet they want to operate on the same information. If you have a model that describes the connectivity, you can now author a health model over it. The management people want to know about things like application connectivity, and "What are the services on this thing?" They don't need to know how the application is implemented. They want to understand how they could author a management pack or a health model.
You could also run other sorts of analysis, such as security validation. These models are often in different domains. You can have a model for business process, a model for requirements or a model for contract design. The difference is that models are often described within their specific domain, so business process will have a different model than application design. The core of this strategy is that you want to be able to transform the modeling information from one domain into another. We are working in that area with our software factories and domain-specific languages and tools.
You just mentioned "Software Factories." Spend a minute on that if you don't mind.
AT: Okay. There's a lot of information on the Web about this concept (http://msdn.microsoft.com/vstudio/teamsystem/workshop/sf/default.aspx). The factory paradigm is based on the idea that every application is unique, but applications in the same domain are similar. There are many different aspects to application development, there are processes for each aspect, and there are tools, templates, patterns, libraries, and other assets used in each process. Some of those tools are DSL designers that focus on one aspect of the domain. Some are wizard and template driven. A factory defines the important aspects of application development for a specific domain, and supplies the processes and supporting assets for each aspect. A developer can use these assets to build an application in the domain. The concept of a factory is the ability to provide domain-specific assets for building that specific kind of application.
You could think of Visual Studio Team Edition for Software Architects as a factory for building Connected Systems. It contains all the tools to allow you to build a service-oriented application. You use the Application Designer, Logical Data Center Designer and validation logic, and those comprise a Connected Systems factory. If you think about a factory, it's composable. A Connected Systems factory might have a specific factory that embodies the language, the guidance and the tools for contract design. Maybe I have another factory for data access. When I combine the two together, I can build a specific kind of application. Factories are very focused on the kind of applications that you build. Usually, companies end up building the same kinds of applications over and over, and what we see is that they end up building tools and processes that streamline the kinds of applications they build. Microsoft isn't going to build individual tools for all these specific domains, but we are providing the ability to create your own designers, models and guidance for very specific kinds of applications.
AS: There was a great blog post by Steve Cook, one of our software architects, about how a software factory differs from the notion of an industrial factory, in that there will always be craftsmen involved in software development (http://blogs.msdn.com/stevecook/archive/2006/01/19/514869.aspx). The analogy in Steve's blog is that if you're building a house, you think about bricklayers and carpenters. You're never automating the entire house-building process, but you are automating the tools that those people use. Today, we're at the stage where the bricklayer makes his own bricks, and the carpenter cuts down his own trees. We're not talking about replacing those people; we're just talking about automating and standardizing the tools that they use.
You mentioned that a lot of this was in the context of Microsoft's larger "Dynamic Systems Initiative." Could you talk for a minute about DSI?
AT: The Dynamic Systems Initiative is a way to describe applications, and how they can be composed, throughout the application life cycle. There's a large IT space in the industry, and they're focused on things like, "How do I get a description of the application that's running in the environment? How do I manage updates? How do I scale that?" The word "Dynamic" represents the kinds of systems that can be added into the environment.
Let me take a minute and describe what's behind all this. At the core of DSI is something called the System Definition Model. And the reason this is core is that you have to have a common model that development environments understand, as well as the management tools, as well as deployment tools. When you have this common model, you can then start to describe the deployment characteristics and management characteristics of an application. The initiative is to create models for all these environments. This includes everything from hosting environments, disk drives, services, Windows and so on. Each one of these can have a model that describes how it connects and communicates, how it can be managed, how it can be composed. Once you have this, think of all these parts that are self-describing. The dynamic part is that I can now use these descriptions and compose applications or compose management services. That is all part of DSI. It's to align the organizations of development and management and push those closer together. What you see in Visual Studio 2005 is the first step in that direction. We are producing system definition models out of our tools, and those models will be consumed by downstream management tools.
AS: The key takeaway, in one sentence, is that DSI is an industry initiative to simplify the design, management and operation of distributed systems. It consists of investments in hardware, software and services. And the underlying technology for this is the System Definition Model, or SDM. There's a lot more information about this at www.microsoft.com/dsi.
When many in the industry think of application architecture modeling, they think of UML. However, Visual Studio Team System doesn't use UML for any of its modeling, even for things like class diagrams. I know the fact that Visual Studio Team System uses something other than UML for its notations has caused a bit of a stir, and "Why doesn't this use UML?" seems to be a frequently asked question. Talk about the advantages that you see in going with a domain-specific language approach instead of something like UML.
AS: First, we continue to think that UML is important and has a place in the development lifecycle. We continue to ship Visio as part of Enterprise Architect. We're also partnering with Sparx Systems, whose Enterprise Architect is a pretty well-known solution that integrates right into Visual Studio 2005, providing UML 2.0 designers. We've chosen to invest in this area of DSLs [domain-specific languages], where we're providing modeling tools that are tailored for very specific application domains. If you take a look at the Distributed System Designers, those are examples of DSLs. We're able to do some very interesting things by having a very formal definition and structure to our modeling language. For example, it makes things like application validation a lot better. A lot of that's not possible when you have a more generic UML diagram. It's very difficult to do a computation over that kind of model if the structure is somewhat arbitrary and the definition isn't as formalized and rigorous. Customers using UML are spending a lot of time adding stereotypes and tweaking the underlying schema for the models. In the end, they've often produced a kind of DSL. If you think about distributed system design, if everyone has customized their UML model in a unique way, it's hard to do any sort of computation over that. We think UML is important, but we're investing in more precise modeling tools.
It also seems like it required a lot of effort to make tools that do really good round-tripping between UML and code, and it would appear that DSLs simplify that problem quite a bit. You're also letting the developer think in the same terms when they're in the model and when they're in their code. There's not a lot of guesswork about what the architect meant, and there's not some entirely new vocabulary that the developer and architect have to use to communicate.
AS: We've simplified the problem a lot, providing more fidelity with the .NET Framework. We're able to represent things like generics and partial classes in our modeling tools, and that would be hard to do in a UML diagram. That doesn't just apply for .NET, but even for Java, or any place where languages continue to evolve. These UML specifications take years and years to go through approval process, and even when they do, it's difficult to represent very specific things in .NET or Java with high fidelity.
And when you're working in the Logical Data Center Designer, you're using terms that are automatically understood by an IT pro, and they don't need to learn some other modeling nomenclature to decipher your visualization.
What you've done, which is interesting, is built a framework for building arbitrary DSLs. Are there other DSLs that Microsoft is working on, or DSLs that partners are building? Are you seeing people take this concept and run with it?
AS: There are definitely many examples of partners building factories for specific problem domains, and DSLs for specific aspects of those factories. Jack Greenfield has been working with some of our partners to put together factories for the healthcare industry, for example. EDS, Unisys and other partners are building factories for other industries. There's definitely a lot of interest.
Something that always occurs to me when I see these kinds of tools is how they handle very large quantities of information. Demos often show something like a class diagram with five classes in it. Are you thinking about ways to really make the meaning stand out when you have hundreds of classes, or thousands of classes? Are there ways to make the "important" classes stand out?
AS: The Class Designer can support hundreds of classes. As you know, the Class Designer can be used to reverse-engineer an unfamiliar code base. We've pointed the Class Designer at the .NET Framework, and you can actually visualize the entire .NET Framework and the relationship between the classes. We're looking at how we can get some of that information up on MSDN to supplement the reference guides that are online, and let developers see class diagrams along with documentation, and drill down.
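Sudan's example of reverse-engineering a code base is, in spirit, a matter of walking type metadata and recovering inheritance relationships. The analogous operation can be sketched with Python's introspection facilities (shown here only because it is compact; the Class Designer itself works over .NET code via reflection):

```python
# Sketch of class-diagram reverse engineering, using Python's inspect
# module as a stand-in for .NET reflection (illustrative only).
import inspect

class Animal: pass
class Dog(Animal): pass
class Cat(Animal): pass

def class_relationships(namespace):
    """Map each class name to the names of its base classes that
    also appear in the namespace -- the edges of a class diagram."""
    classes = {name: obj for name, obj in namespace.items()
               if inspect.isclass(obj)}
    return {name: [base.__name__ for base in obj.__bases__
                   if base.__name__ in classes]
            for name, obj in classes.items()}

print(class_relationships({"Animal": Animal, "Dog": Dog, "Cat": Cat}))
# {'Animal': [], 'Dog': ['Animal'], 'Cat': ['Animal']}
```

The output is exactly the relationship graph a class diagram draws; scaling that graph to the hundreds of classes Sudan mentions is a layout and visualization problem, which is the subject of the next question.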
That's very cool. I haven't seen the diagram, but with a lot of tools, you'd end up with 5,000 boxes that are roughly the same size, and I wonder if you're doing research into how to let the few hundred really important classes stand out. When I think about Avalon, and its ability to scale, zoom and the like, and the research that's gone into visualizing information in general, I wonder if there's a better paradigm than thousands of boxes all drawn to the same scale. I wonder if there's potential there to provide class diagrams that really let the developer quickly derive meaning from very large amounts of information.
AS: Because the Class Designer is so core to the developer experience, it's actually being driven more by the programming language teams, and they might have more specific information about where they're focusing for V2 features.
One final question: What about something like Windows Workflow Foundation? Do you see that integrating in as another DSL modeling tool, or is that really separate from this?
AT: We are looking at integrating that into our designer as well. Remember that earlier I mentioned being able to drill into applications and get to other designers? I can see it being supported there.
Scott Swigart is a consultant specializing in convergence of current and emerging technologies. Contact him at [email protected]