Modern Engineering: Availability, Capacity, and Volume
A comprehensive review of modern engineering, covering the challenges of volume, capacity, and availability, should include engineering work done for 'The River' and 'The Cloud'. Whether the topic is Mississippi River flood control or cloud computing, there have been important engineering accomplishments and spectacular failures. The Mississippi and the Internet remind us that capacity and volume can define success for a flood-control system or a transaction-processing system.
During the timesharing era, the solution for limiting workloads was a busy signal and the inability to log in. That's not acceptable when high availability is a requirement, such as when substantial revenue is lost whenever a database or website falls over.
'The Cloud' and 'The River'
Clusters and cloud computing gained traction by ensuring the continuous availability of applications and databases. This became important after some highly publicized system outages, such as those that cost Merrill Lynch and eBay millions of dollars. The online transaction processing (OLTP) system at Merrill Lynch could not keep up with a day of heavy trading on the New York Stock Exchange, and the inability to scale resulted in the firm paying millions in fines. The failure at eBay involved prolonged database server outages that cost the website millions in lost revenue. The server problem was eventually traced to human error: the data center staff had failed to apply updates to the Solaris operating system of the database servers.
Today's flood control systems are a successor to efforts to control the Mississippi River that date back to building the first levee at New Orleans in 1718. The U.S. Army Corps of Engineers has played a role in building levees since the 19th century. The levee system along the Mississippi River is longer than the Great Wall of China, spanning from Illinois to the Gulf of Mexico.
Whether we're talking about high-volume web commerce or Mississippi River commerce, the economics of availability can be measured in millions. In addition to lost revenue, when eBay's database server fell over, the price of eBay shares fell. The inability to use the Mississippi River for shipping costs the U.S. economy an estimated $275 million per day.
Clearly, there's a financial incentive to put in the engineering work to ensure capacity and provide availability, even in times of an extraordinarily high volume — of water or transactions.
What Do We Mean By High Volumes?
In an average year, the flow of the Mississippi River at New Orleans is 600,000 cubic feet (17 million liters) per second. Recently the flow was measured at 2.3 million ft³ per second at the Old River Control Structure in Louisiana. Spillways have been opened at three locations along the Mississippi, with the goal of limiting the river's flow between Baton Rouge and New Orleans to 1.5 million ft³ per second. The flood control plan for that segment of the river assumes that, even at 250% of the river's average flow, the levees will hold.
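As a quick sanity check on those figures, the 1.5 million ft³/s target for the Baton Rouge-New Orleans segment is indeed 250% of the 600,000 ft³/s average flow:

```python
# Sanity check on the flow figures from the text:
# the capped flow versus the average flow at New Orleans.

AVERAGE_FLOW = 600_000      # ft³/s, average annual flow at New Orleans
CAPPED_FLOW = 1_500_000     # ft³/s, target between Baton Rouge and New Orleans

ratio = CAPPED_FLOW / AVERAGE_FLOW
print(f"{ratio:.0%}")       # 250%
```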
In the context of processing data and transactions, today's computer systems are also seeing record volumes. Researchers at Yahoo used Hadoop on 3,800 nodes to sort a petabyte of data in 16.25 hours. The 27 instances at Salesforce.com routinely process an aggregate of more than 400 million transactions per weekday against SQL databases. The Teradata data warehouse at eBay has grown to exceed 10 petabytes of data. Organizations such as Wal-Mart, Dell, and Bank of America also have data warehouses whose volumes exceed the petabyte mark.
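To put the Yahoo sort result in perspective, a back-of-the-envelope calculation (assuming a decimal petabyte, 10^15 bytes, which is an assumption on our part) yields the aggregate and average per-node throughput the cluster sustained:

```python
# Rough throughput implied by the Yahoo petabyte sort:
# 1 PB sorted on 3,800 nodes in 16.25 hours (figures from the text).

PETABYTE = 10**15           # bytes (decimal petabyte; assumption)
NODES = 3800
HOURS = 16.25

seconds = HOURS * 3600
aggregate_bps = PETABYTE / seconds       # cluster-wide bytes per second
per_node_bps = aggregate_bps / NODES     # average per-node rate

print(f"aggregate: ~{aggregate_bps / 1e9:.1f} GB/s")
print(f"per node:  ~{per_node_bps / 1e6:.1f} MB/s")
```

The cluster-wide rate works out to roughly 17 GB/s, or only about 4.5 MB/s per node — a reminder that such records come from scale-out parallelism, not fast individual machines.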
Amazon.com and eBay are well-known examples of large-scale OLTP, but there are other high-volume websites. Social networking sites, such as Facebook.com and Twitter, have user populations measured in the hundreds of millions.
Today it's often unacceptable to have systems, networks, or websites that are not available on a 24x7 basis, with 99.95% or better uptime guaranteed by service-level agreements. That calls for web-scale database architecture with the scalability and capacity to ensure a site does not fall over during times of heavy volume. In addition, research has shown that web users quickly abandon websites that are slow and unresponsive.
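A 99.95% uptime guarantee translates into a concrete downtime budget. A minimal sketch of the arithmetic (assuming a 30-day month and a 365-day year for simplicity):

```python
# Downtime budget implied by an uptime SLA.
# For 99.95%, the allowance is the remaining 0.05% of the period.

def downtime_minutes(uptime_pct: float, period_minutes: float) -> float:
    """Minutes of allowed downtime for a given uptime percentage."""
    return period_minutes * (1 - uptime_pct / 100)

MINUTES_PER_30_DAY_MONTH = 30 * 24 * 60   # 43,200
MINUTES_PER_YEAR = 365 * 24 * 60          # 525,600

print(downtime_minutes(99.95, MINUTES_PER_30_DAY_MONTH))  # ~21.6 min/month
print(downtime_minutes(99.95, MINUTES_PER_YEAR))          # ~262.8 min/year
```

In other words, a 99.95% SLA leaves less than 22 minutes of downtime per month; every additional "nine" shrinks that budget by an order of magnitude.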
Availability and responsiveness require web-scale thinking to develop solutions for application workloads, query processing, and database-related work such as data replication. The system architects of high-volume sites, such as eBay, Amazon.com, and Facebook, were pioneers of web-scale thinking. They crossed a new threshold of user activity and data volume that required creative solutions for high availability and scalability, and they looked to architecture, not just application-oriented performance tuning, to reduce query execution time.