Database Clustering

Major database vendors are beginning to tout clustering as an alternative to large single servers. With recent releases from Oracle and IBM, is database clustering ready for prime time?


Is it the end of server budget blues?

by Neil McAllister

April 2002

Real Application Clusters (RAC) is the most hotly promoted new feature of Oracle's 9i database software, as anyone who has seen the company's "unbreakable" ad campaign knows. Database clustering technologies like RAC let a group of interconnected computers work together to serve a single database. The idea is that together they can create a fault-tolerant, high-performance, scalable solution that's a low-cost alternative to high-end servers—though, so far, that claim has yet to be proven.

Speaking at a press conference at Oracle's OpenWorld event in December 2001, CEO Larry Ellison was characteristically grandiose in his praise for RAC. "This is certainly the most important thing we've ever done, in terms of actually changing the face of the industry," he insisted. "This affects a lot more than just Oracle."

But how will it affect you? Database clustering is generating a lot of buzz, but empirical data and real-world case studies remain scarce. Is RAC reason enough to switch to Oracle 9i, or is there a compelling reason to ever migrate to a clustered database at all?

Fair Share

Clustering isn't a new idea. Companies like Tandem (now a Compaq subsidiary) have offered specialized server-clustering solutions for decades. However, it wasn't until the mid-1990s that major relational database vendors began experimenting with clustering as a way to make their products more efficient and fault-tolerant.

To properly evaluate the top database clustering products from IBM and Oracle, you must first understand the fundamental differences between their underlying technologies. Currently, there are two common methods of clustering relational databases, known as "shared-disk" and "shared-nothing" clustering.

IBM's DB2 Universal Database (DB2 UDB) uses a shared-nothing approach. In this architecture, each node in the cluster holds only one segment of the database, and the node also handles all of the computational work that corresponds to the data it stores. A master server assesses the task at hand and then parcels it out, distributing a portion of the job to each node that contains data to be processed. The task is then executed by all of the nodes in parallel, and the master server reports the result.
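
To make the shared-nothing idea concrete, here is a minimal sketch in Python. It is purely illustrative: the Node and Coordinator classes and the hash-based row placement are assumptions made for this example, not DB2 UDB's actual internals or API.

# Minimal sketch of shared-nothing query execution. Illustrative only;
# the classes and the partitioning scheme are invented for this example.

class Node:
    """One cluster node: owns a disjoint slice of the data and does its own work."""

    def __init__(self, node_id):
        self.node_id = node_id
        self.rows = []  # only this node's partition lives here

    def insert(self, row):
        self.rows.append(row)

    def partial_count(self, predicate):
        # Each node scans only its local partition.
        return sum(1 for row in self.rows if predicate(row))


class Coordinator:
    """The 'master server': routes rows by hash and merges partial results."""

    def __init__(self, num_nodes):
        self.nodes = [Node(i) for i in range(num_nodes)]

    def insert(self, key, row):
        # Hash partitioning decides which single node stores the row.
        self.nodes[hash(key) % len(self.nodes)].insert(row)

    def count_where(self, predicate):
        # Fan the work out to every node, then combine the partial answers.
        return sum(node.partial_count(predicate) for node in self.nodes)


if __name__ == "__main__":
    db = Coordinator(num_nodes=4)
    for order_id in range(1000):
        db.insert(order_id, {"order_id": order_id, "amount": order_id % 50})
    print(db.count_where(lambda r: r["amount"] > 40))  # merged from four partial counts

The key property is that each row lives on exactly one node, so a query may touch every partition, but each node only ever scans its own slice of the data.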

Oracle, on the other hand, uses shared-disk clustering—a design that's structured around a single large data store (for example, a disk array). Each node on the cluster has equal access to all of the information in the data store. Only the processing work is divided among the nodes, not the data itself. The result is a particularly fault-tolerant database. Even if one or more servers fail, all of the application data remains available to the other nodes. By comparison, if one node on a shared-nothing database crashes, all of the data stored on that node likewise goes offline until a failover system can recover from the fault.
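
The contrast can be sketched just as briefly. Again, this is a hypothetical illustration (SharedStore and ClusterNode are invented names, not Oracle RAC components): every node reads the same store, so losing a node costs compute capacity, not data.

# Minimal sketch of the shared-disk idea. Illustrative only; the names
# are invented and stand in for a real shared disk array and cluster nodes.

class SharedStore:
    """Stands in for the single disk array that every node can reach."""

    def __init__(self):
        self.rows = {}

    def put(self, key, row):
        self.rows[key] = row

    def get(self, key):
        return self.rows[key]


class ClusterNode:
    """A compute node: it holds no private data, only access to the shared store."""

    def __init__(self, name, store):
        self.name = name
        self.store = store

    def read(self, key):
        return self.store.get(key)


if __name__ == "__main__":
    disk = SharedStore()
    disk.put("acct:42", {"balance": 100})

    nodes = [ClusterNode(f"node{i}", disk) for i in range(3)]
    nodes.pop(0)  # simulate a node failure

    # The surviving nodes still see every row, because none of it lived on node0.
    print(nodes[0].read("acct:42"))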

Know Your Application

Another advantage Oracle claims is that its software can run any application in a clustered environment that could run on a single server, whether the application is commercial or homegrown. By comparison, IBM's demonstrations of off-the-shelf software on its DB2 clustering architecture have so far been limited to benchmarks and certain SAP applications, including mySAP.com and SAP Business Information Warehouse.

IBM is quick to point out, however, that it's important to evaluate clustering solutions based on the type of application you want to run. Data warehousing is the easiest application for any clustered environment to manage. Warehousing applications typically involve a large amount of data serving many users, but in a largely read-only capacity.

In contrast, applications that rely more heavily on SQL-based transaction processing (both reading from and writing to the database) are much more difficult for any clustered database to manage. "You're going to hear vendors trot out all of their warehouse successes to try to convince you that their database scales the best for every application, but they're only showing you half the story," says Jeff Jones, director of strategy for IBM's data management solutions group.

On a shared-disk design, where every node has access to the database simultaneously, transaction-based applications require fault-tolerant resource locking and management to avoid data corruption. Although both IBM and Oracle are tight-lipped about real-world performance figures, IBM engineers believe this overhead will significantly limit Oracle 9i's performance for transactional applications.
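
The overhead IBM is alluding to can be illustrated with a toy global lock manager. This is only a sketch under simplifying assumptions; Oracle's actual mechanism in 9i, Cache Fusion, coordinates at the level of data blocks shared between node caches and is far more elaborate than the invented GlobalLockManager below.

import threading

# Toy illustration of cluster-wide write coordination in a shared-disk design.
# Hypothetical code: the names and the locking scheme are invented for the example.

class GlobalLockManager:
    """A single cluster-wide arbiter every node must ask before writing a block."""

    def __init__(self):
        self._locks = {}
        self._guard = threading.Lock()
        self.requests = 0  # every write pays at least this coordination cost

    def acquire(self, block_id):
        with self._guard:
            self.requests += 1
            lock = self._locks.setdefault(block_id, threading.Lock())
        lock.acquire()

    def release(self, block_id):
        self._locks[block_id].release()


def transfer(store, lock_mgr, src, dst, amount):
    # Lock both blocks in a fixed order so two writers cannot deadlock each other.
    first, second = sorted([src, dst])
    lock_mgr.acquire(first)
    lock_mgr.acquire(second)
    try:
        store[src] -= amount
        store[dst] += amount
    finally:
        lock_mgr.release(second)
        lock_mgr.release(first)


if __name__ == "__main__":
    shared_store = {"acct:1": 100, "acct:2": 0}
    mgr = GlobalLockManager()

    workers = [
        threading.Thread(target=transfer,
                         args=(shared_store, mgr, "acct:1", "acct:2", 1))
        for _ in range(50)
    ]
    for w in workers:
        w.start()
    for w in workers:
        w.join()

    print(shared_store)  # {'acct:1': 50, 'acct:2': 50}
    print(mgr.requests)  # 100 lock round-trips for 50 tiny transactions

Even in this toy, every transaction pays two lock round-trips before it can touch shared data; in a real cluster those round-trips cross the interconnect between nodes, which is exactly the cost IBM expects to weigh on transactional workloads.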

Oracle insists that IBM's criticisms of the shared-disk design are disingenuous. Oracle representatives are fond of pointing out—rightly so—that IBM, too, markets a database using a shared-disk clustering architecture: the DB2 for OS/390 and z/OS product, which runs on IBM mainframes. "We think the mainframe guys are right," Ellison quipped in his OpenWorld keynote.

But IBM counters that the shared-disk design of its mainframe database is a unique situation. That architecture was only deemed feasible because of the tight integration that IBM has maintained between its mainframe hardware, operating system, and database software. "Oracle isn't inventing any database-specific hardware," says Jones. "It's partnering with people to do some of that, but we just don't see competition on the upper end."

Mainframe Killers?

In this case, the "upper end" is the mainframe database market, where IBM remains the undisputed leader (Jones claims his company has more than 98 percent market share). The OS/390 version of DB2 can handle as many as a million simultaneous users, making this mainframe solution extremely attractive for high-volume environments.

When comparing the merits of clusters versus mainframes, however, it's important to account for cost. Clusters let you either add low-cost hardware or repurpose existing hardware to scale your database, instead of buying ever-larger single servers (which can be extremely costly).

Oracle's Ellison insists that an Oracle RAC cluster running on low-cost, commodity hardware is a viable, even preferable alternative to expensive servers or mainframes. "It would be faster than an IBM mainframe, more reliable than an IBM mainframe, at less than one twenty-fifth of the cost," he said.

Whether or not you believe the performance claims, there are likely to be hidden support costs for any clustered database. In the case of shared-nothing architectures like IBM's, growing the database may require manually reapportioning the data across newly added servers. Even Oracle has hardware certification requirements for RAC, so not just any old box will do. If you need to scale back your reliance on mainframes without abandoning them entirely, IBM DB2 UDB may be the database of choice. Although its internals are built on a different architecture from DB2 for OS/390, it still offers strong integration with IBM's mainframe product. And because DB2 UDB is built from a single, unified code base for a plethora of OS platforms, it's perhaps the most flexible clustered database solution.
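
The reapportioning cost mentioned above is easy to see under one simplifying assumption, naive modulo hash partitioning (real shared-nothing products use more sophisticated partition maps and tooling, so treat this purely as a sketch): resizing the cluster forces most rows to change owners.

# Hypothetical sketch of how much data moves when a shared-nothing cluster grows.

def owner(key, num_nodes):
    # Plain modulo hash partitioning: each key lives on exactly one node.
    return hash(key) % num_nodes


def rows_that_move(keys, old_nodes, new_nodes):
    # Count how many rows change owners when the cluster is resized.
    return sum(1 for k in keys if owner(k, old_nodes) != owner(k, new_nodes))


if __name__ == "__main__":
    keys = [f"order:{i}" for i in range(100_000)]
    moved = rows_that_move(keys, old_nodes=4, new_nodes=5)
    # With naive modulo hashing, roughly 80 percent of the rows end up on a
    # different node just because one server was added.
    print(f"{moved / len(keys):.0%} of rows relocate when growing from 4 to 5 nodes")

Smarter schemes (consistent hashing, range partition maps) shrink the fraction that moves, but whatever does move still has to be copied across the cluster, which is where much of the hidden support cost comes from.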

Software and Strategy

That flexibility lies at the heart of IBM's database strategy, and ultimately it may expose the most telling contrast between the two companies. IBM's willingness to support heterogeneous environments extends even to the database software itself. "We don't come into a customer's shop and say, for us to help you, you're going to have to move everything you have onto DB2," Jones explains. "We tell our customers, work with us and we will help you manage the data you have, wherever it is."

While IBM remains committed to being a full-service solutions provider, at heart Oracle remains a software vendor. Little wonder, then, that Oracle consistently encourages migrating data to its own software. "IBM's business is to say, take whatever you've got—this morass, this briar patch of computing—we'll just take it over, and we'll raise your prices," Oracle's Ellison scoffs. To Oracle, standardizing on a single database platform is one of the best ways to lower costs and create efficiencies within your IT infrastructure.

Each approach has gained its converts, and each clustering solution is beginning to see real-world deployment. On the high end, IBM reports that a company called NewTech Sciences is running DB2 on a 1,250-node cluster of Linux servers. Another example is Florida International University, which operates a data visualization application called TerraFly using a 13-terabyte clustered DB2 database.

Oracle, on the other hand, boasts such corporate customers as American Airlines and logistics management provider Vector SCM among its RAC client list. Neither company would comment on its experience with RAC, however, as is often the case with large enterprise software customers. As a result, it's difficult to accurately assess market reaction to these emerging technologies.

Slow Adoption

Rich Niemiec, president of the International Oracle Users Group, predicts increasing interest. "Most people think that they need a super server or terabyte database to need and/or benefit from clustering," Niemiec explains. "When more people understand that clustering is not only beneficial but preferable for even the smallest servers, you'll see implementations on a larger scale."

But some analysts disagree. Gartner Research Director Betsy Burton, for example, has predicted that less than 10 percent of Oracle customers will adopt RAC by 2006, citing the software, hardware, and maintenance investments involved as major limiting factors.

Faced with such skepticism from an analyst of Burton's expertise, even Oracle's Ellison is forced to temper some of his rhetoric. "Even 10 percent would be a phenomenal uptake," he told reporters at Oracle OpenWorld. IBM's Jeff Jones, meanwhile, thinks Oracle's enthusiasm oversteps reality. "Clustered databases are still a territory of very large customers," he says.

Based on these assessments, the best advice for now is to approach clustered databases with caution. If you already manage an extremely large data store and are reaching capacity on your existing hardware, clustering may provide a welcome alternative to costly single servers. For most applications and environments, however, clustering is not yet a drop-in replacement for stand-alone database servers, despite the hype.


Neil ([email protected]) is senior technology editor for New Architect.
