Eric Bruno is a technology consultant specializing in software architecture, design, and development. Write to us at firstname.lastname@example.org.
In considering the history of distributed computing, CIOs should focus on how companies knew that it was the right time to make a big architectural bet on the next wave of technology. This past experience could help CIOs who are being asked to consider a similar move today. In the 1970s, for instance, Reuters introduced Monitor, through which journalists entered information via dumb terminals and a mainframe computer sent it out to readers.
In the early 1990s, about the time I joined Reuters as a developer, it was taking that distributed computing concept further by building a pioneering electronic trading system, Globex, for the Chicago Mercantile Exchange, drawing on a mainframe and Windows-based PCs. As Globex's costs grew with its popularity, CME moved the Globex architecture to an even more distributed model: a pair of mainframe-class computers coupled with about 1,500 workstation-class servers running Linux and Solaris.
Now a new question is arising for CME: What role will cloud computing play in Globex and other platforms?
IT leaders face real pressures in today's economy to push the limits of technologies such as virtualization and cloud computing that extend distributed computing models. The decisions CME and Reuters faced in choosing which distributed model to adopt, and when, are much the same as the ones companies face today. One difference, however, is that economic pressures are forcing companies to consider emerging, often immature, distributed computing technologies.
Distributed computing refers broadly to applications built on the client-server model, clustering, an N-tier architecture, or some combination of the three.
While there are variations on these base models, what they have in common is that they divide computing across multiple computers to achieve greater application scale and availability. Large Web sites like eBay use a combination of these models, with database and app servers clustered within each tier of the design.
With the increasing use of Ajax at the browser level, many sites have added a client-server element to the mix. As a result, large-scale distributed applications such as Google and Yahoo leverage all three computing models.
Web services take the distributed model a step further, sharing the data processing load. Since Web services are based on HTTP, it's straightforward to deploy a single service to multiple servers to share the load. This design lets developers distribute even individual application components, resulting in greater scalability, more code reuse, and reduced costs. Thanks to open standards, message formats and protocols can be defined once in XML and then implemented in languages such as C++ or Java on virtually any platform, reducing costs further.
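The load-sharing idea above can be sketched in a few lines: because each copy of a stateless HTTP service behaves identically, a client (or load balancer) can simply rotate requests across the pool. The service names below are illustrative, not from the article.

```python
import itertools

# Hypothetical pool of identical, stateless service instances.
SERVICE_POOL = [
    "http://svc-a.example:8080/quote",
    "http://svc-b.example:8080/quote",
    "http://svc-c.example:8080/quote",
]

def round_robin(pool):
    """Yield service URLs in rotation, spreading requests across the pool.

    Any request can go to any instance, since each exposes the same
    HTTP interface and holds no per-client state.
    """
    return itertools.cycle(pool)

if __name__ == "__main__":
    rr = round_robin(SERVICE_POOL)
    for _ in range(5):
        # A real client would issue an HTTP request to this URL here.
        print(next(rr))
```

Real deployments would put this rotation in a hardware or software load balancer rather than the client, but the principle is the same.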
But the biggest recent change to the distributed computing model is virtualization, which lets IT divide a physical server into multiple virtual servers. Beyond the oft-cited hardware, energy, and space savings, virtualization helps solve the problem of getting the most value from multicore computers.
For instance, even if individual software components aren't yet written to take advantage of multicore architectures, developers should still consider using virtualization to run multiple software components on one physical computer as though each were running on a separate machine. This preserves the isolation, and thus the security and robustness, of each component while squeezing the most value from multicore hardware.
As a result, individual application components can execute on multiple virtual servers, all running on a single physical server. One approach is to co-locate the applications that communicate with each other most heavily, eliminating the network latency they would incur if separated onto different physical machines.
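That placement heuristic can be made concrete with a small sketch: given measured message volumes between components, greedily co-locate the chattiest pairs on the same physical host. The component names and traffic figures below are hypothetical.

```python
def colocate_chatty_pairs(traffic):
    """Greedy placement sketch: pair the components that exchange the
    most messages onto the same physical host, keeping their traffic
    off the network.

    traffic: dict mapping frozenset({a, b}) -> messages per second.
    Returns co-located pairs, chattiest first; each component is
    placed at most once.
    """
    placed = set()
    pairs = []
    for link, volume in sorted(traffic.items(), key=lambda kv: -kv[1]):
        a, b = tuple(link)
        if a in placed or b in placed:
            continue  # already pinned to a host by a chattier pairing
        pairs.append((a, b))
        placed.update(link)
    return pairs

# Hypothetical measurements: the order and risk engines talk constantly,
# while the audit service sees comparatively little traffic.
traffic = {
    frozenset({"order", "risk"}): 900,
    frozenset({"order", "audit"}): 50,
    frozenset({"risk", "audit"}): 10,
}
```

With these numbers, "order" and "risk" land on the same physical server, while "audit" can be placed wherever capacity allows.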
Combining virtualization with the other models of distributed computing can result in a cost-effective, scalable architecture.
But managing a virtualized environment effectively requires significant administrator oversight, a time-consuming burden that companies should look to minimize using configuration management and other life-cycle tools.
Instead of manually building configurations, for instance, administrators can link templates containing the operating system, build scripts, and applications to a new virtual machine volume. Companies that need to speed up the process also should provide developers with self-service facilities for submitting and acting on virtual resource requests, instead of formal submission processes and the delays they entail.
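The template-linking idea can be illustrated with a minimal sketch: a new virtual machine spec is composed by layering reusable templates rather than configuring each machine by hand. The template contents below are placeholders, not a real provisioning API.

```python
def build_vm_spec(os_template, build_template, app_template):
    """Compose a virtual machine spec by layering linked templates.

    Later layers override earlier defaults, so an application template
    can, for example, raise the memory allocated by the OS baseline.
    """
    spec = {}
    for layer in (os_template, build_template, app_template):
        spec.update(layer)
    return spec

# Illustrative templates an administrator might maintain once and reuse.
os_linux = {"os": "linux", "memory_gb": 2}
build_ci = {"packages": ["gcc", "make"], "memory_gb": 4}
app_matching = {"app": "matching-engine", "ports": [9000]}

if __name__ == "__main__":
    print(build_vm_spec(os_linux, build_ci, app_matching))
```

A self-service portal would sit in front of a function like this, turning a developer's request into a provisioned machine without a formal ticketing round-trip.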
While virtualization is the here and now of distributed computing, cloud computing is its future. Although its full impact has yet to be realized, it's becoming clear that cloud computing will be the service-delivery option of choice for distributed applications, thanks in part to today's CPU, memory, and bandwidth capabilities. Once vendors can combine a high level of developer control, a reasonable and flexible cost model, and compelling services, cloud computing will change the face of distributed computing.
When will that happen for enterprise systems? Some of the most important factors to watch are design decisions for security, data protection and recovery guidelines, and application architectures. For instance, developers working with cloud computing need to contend not only with familiar dedicated security problems such as firewalls and intrusion detection, but with security in a shared cloud environment as well. Heavy lifting in these areas is being done by groups such as the Cloud Security Alliance, which has released its "Security Guidance for Critical Areas of Focus in Cloud Computing," a best-practices document that provides security guidelines for enterprise cloud computing.
Perhaps most importantly, as with other distributed computing models, cloud computing isn't an all-or-nothing proposition. Hybrid models of dedicated/cloud resource implementations may be just the thing for applications with bursty usage patterns.