Is it the end of server budget blues?
by Neil McAllister, April 2002
Real Application Clusters (RAC) is the most hotly promoted new feature of Oracle's 9i database software, as anyone who has seen the company's "unbreakable" ad campaign knows. Database clustering technologies like RAC let a group of interconnected computers work together to serve a single database. The idea is that together they can create a fault-tolerant, high-performance, scalable solution that's a low-cost alternative to high-end servers, though so far that claim has yet to be proven.
Speaking at a press conference at Oracle's OpenWorld event in December 2001, CEO Larry Ellison was characteristically grandiose in his praise for RAC. "This is certainly the most important thing we've ever done, in terms of actually changing the face of the industry," he insisted. "This affects a lot more than just Oracle."
But how will it affect you? Database clustering is generating a lot of buzz, but empirical data and real-world case studies remain scarce. Is RAC reason enough to switch to Oracle 9i, or is there a compelling reason to ever migrate to a clustered database at all?
Clustering isn't a new idea. Companies like Tandem (now a Compaq subsidiary) have offered specialized server-clustering solutions for decades. However, it wasn't until the mid-1990s that major relational database vendors began experimenting with clustering as a way to make their products more efficient and fault-tolerant.
To properly evaluate the top database clustering products from IBM and Oracle, you must first understand the fundamental differences between their underlying technologies. Currently, there are two common methods of clustering relational databases, known as "shared-disk" and "shared-nothing" clustering.
IBM's DB2 Universal Database (DB2 UDB) uses a shared-nothing approach. In this architecture, each node in the cluster holds only one segment of the database, and the node also handles all of the computational work that corresponds to the data it stores. A master server assesses the task at hand and then parcels it out, distributing a portion of the job to each node that contains data to be processed. The task is then executed by all of the nodes in parallel, and the master server reports the result.
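The scatter/gather pattern described above can be sketched in a few lines of Python. This is a toy illustration of the shared-nothing idea, not IBM's implementation: rows are hash-partitioned across hypothetical nodes, each "node" computes only over the segment it owns, and a coordinator merges the partial results.

```python
# Toy sketch of shared-nothing clustering (illustrative, not DB2's code).
# Each node stores one partition of the data and does the work for it;
# a master distributes the task and combines the partial answers.

NUM_NODES = 4

# Hash-partition the rows: each node owns only its own segment.
nodes = [[] for _ in range(NUM_NODES)]
for row in range(1, 101):                  # pretend rows are just integers
    nodes[hash(row) % NUM_NODES].append(row)

def node_partial_sum(segment):
    """Each node computes over the data it stores (in parallel, in reality)."""
    return sum(r for r in segment if r % 2 == 0)

# The master parcels the job out and reports the combined result.
total = sum(node_partial_sum(segment) for segment in nodes)
print(total)   # sum of the even numbers 1..100 = 2550
```

In a real shared-nothing database the per-node work runs concurrently on separate machines; the point here is only that no node ever touches another node's data.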
Oracle, on the other hand, uses shared-disk clustering, a design that's structured around a single large data store (for example, a disk array). Each node in the cluster has equal access to all of the information in the data store; only the processing work is divided among them, not the data itself. The result is a particularly fault-tolerant database: even if one or more servers fail, all of the application data remains available to the other nodes. By comparison, if one node in a shared-nothing database crashes, all of the data stored on that node likewise goes offline until a failover system can recover from the fault.
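The fault-tolerance argument can be made concrete with another toy sketch (again an illustration of the design, not Oracle's implementation): every node holds a reference to the same shared store, so when one node dies, any survivor can still serve any row.

```python
# Toy sketch of shared-disk clustering (illustrative, not RAC's code).
# All nodes see one shared data store, so losing a node loses no data.

shared_store = {f"row{i}": i for i in range(100)}   # one disk array, all data

class Node:
    def __init__(self, store):
        self.store = store          # every node has access to the whole store
        self.alive = True

    def read(self, key):
        if not self.alive:
            raise RuntimeError("node is down")
        return self.store[key]

cluster = [Node(shared_store) for _ in range(4)]
cluster[0].alive = False            # one server fails...

survivors = [n for n in cluster if n.alive]
# ...but any surviving node can still serve *any* row.
print(survivors[0].read("row42"))   # prints 42
```

Under a shared-nothing layout, by contrast, the dead node's partition would be unreachable until failover recovered it.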
Know Your Application
Another advantage Oracle claims is that its software can run any application in a clustered environment that could run on a single server, whether the application is commercial or homegrown. By comparison, IBM has thus far demonstrated off-the-shelf software on its DB2 clustering architecture only in benchmarks and with certain SAP applications, including mySAP.com and SAP Business Information Warehouse.
IBM is quick to point out, however, that it's important to evaluate various clustering solutions based on the type of application you want to run. Data warehousing is the easiest application for any clustered environment to manage. Warehousing applications typically assume you have a large amount of data serving many users, but in a read-only capacity.
In contrast, applications that rely more heavily on SQL-based transaction processing (both reading from and writing to the database) are much more difficult for any clustered database to manage. "You're going to hear vendors trot out all of their warehouse successes to try to convince you that their database scales the best for every application, but they're only showing you half the story," says Jeff Jones, director of strategy for IBM's data management solutions group.
On a shared-disk design, where every node has access to the database simultaneously, transaction-based applications require fault-tolerant resource locking and management to avoid data corruption. Although both IBM and Oracle are tight-lipped about real-world performance figures, IBM engineers believe this overhead will significantly limit Oracle 9i's performance for transactional applications.
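The locking overhead IBM points to can be illustrated with a minimal sketch, a stand-in for a distributed lock manager rather than anything Oracle actually ships: when two nodes do a read-modify-write on the same shared row, the updates must serialize through a cluster-wide lock, or one of them is silently lost.

```python
# Toy sketch of why shared-disk writes need coordinated locking
# (an illustration, not Oracle's lock manager).
import threading

shared_row = {"balance": 0}
lock = threading.Lock()             # stand-in for a distributed lock manager

def node_deposit(times):
    for _ in range(times):
        with lock:                  # cluster-wide lock: the coordination cost
            shared_row["balance"] += 1   # read-modify-write on shared data

# Four "nodes" update the same row concurrently.
threads = [threading.Thread(target=node_deposit, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(shared_row["balance"])        # 40000; without the lock, updates can be lost
```

Acquiring and releasing such locks across machines is pure overhead that a read-only warehouse query never pays, which is why transactional workloads are the harder test for a shared-disk cluster.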
Oracle insists that IBM's criticisms of the shared-disk design are disingenuous. Oracle representatives are fond of pointing out, rightly so, that IBM, too, markets a database using a shared-disk clustering architecture: DB2 for OS/390 and z/OS, which runs on IBM mainframes. "We think the mainframe guys are right," Ellison quipped in his OpenWorld keynote.
But IBM counters that the shared-disk design of its mainframe database is a unique situation. That architecture was only deemed feasible because of the tight integration that IBM has maintained between its mainframe hardware, operating system, and database software. "Oracle isn't inventing any database-specific hardware," says Jones. "It's partnering with people to do some of that, but we just don't see competition on the upper end."