Transactional Systems: Real-Time, High-Volume, and Scalability

The demands on transactional systems are becoming more and more -- well, demanding


June 11, 2007
URL:http://www.drdobbs.com/architecture-and-design/transactional-systems-real-time-high-vol/199903006

DDJ: Speaking with us today is Nati Shalom, CTO of GigaSpaces, a company that specializes in real-time, high-volume transactional infrastructures.

Nati, what's driving today's demand for real-time scalable applications?

NS: There are very broad trends that affect almost all industries, and we can see them manifest in specific ways in each one. Generally, a sort of "perfect storm" is gathering. With the wider adoption of the Internet, more and more users rely on it for e-commerce, trading, reservations, gambling, and so on. This in turn increases the volume of transactions and information processing that must be conducted in near-real-time. In addition, organizations are using more and more real-time systems to analyze information on-the-fly and make automated decisions based on that analysis in real time. This is also becoming feasible because of the significant reductions in the cost of computing infrastructure: powerful machines based on commodity x86 architectures, open source operating systems (Linux, Solaris), and so on. With grid technologies (such as GigaSpaces), organizations can aggregate these machines into powerful computer systems previously possible only with expensive mainframes and proprietary SMP machines.

As stated, this phenomenon manifests itself in different ways across industries, for example:

This addresses the "real time" aspect of the question. The other aspect is scalability, which comes into play here in two ways:

DDJ: What kind of demands is this putting on software and software developers?

NS: It is requiring no less than a complete paradigm shift in how we design and implement applications. The existing prevailing approach -- N-Tier architecture -- just doesn’t cut it for these kinds of performance and scalability requirements. Let’s examine why:

In N-Tier architecture the application is built out of tiers of functionality -- the web tier (managed by the web server) for presentation, the business logic tier (typically managed by J2EE application servers), the messaging tier (typically managed by MOM, "message-oriented middleware") and the data tier (typically managed by a database). Each of these tiers is managed by a separate product and runs on a physically separate set of servers. This means there is a lot of latency (messages and data must physically hop across the tiers over the network), a lack of scalability (everything must be recorded in a central, remote database server, which becomes a point of contention and, therefore, a scalability bottleneck) and, most of all, complexity, because each of these tiers needs to maintain its own high-availability (fail-over) model, its own load-balancing and partitioning models, and so on. Even if I manage to scale one tier, scaling the whole system across the many tiers and making them work with each other in a scalable, fault-tolerant manner is nearly impossible. On top of that, developers are expected to write their applications in a way that is service-oriented, event-driven, and based on service level agreements. I'd say this is putting pretty high demands on application developers, without giving them the tools to achieve it.
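The latency cost of tier-hopping can be illustrated with a toy model. The numbers below are purely illustrative assumptions, not figures from the interview: each inter-tier call in a four-tier deployment crosses the network and pays a round-trip cost, while a co-located stack replaces those hops with in-process calls.

```python
# Toy latency model for an N-tier transaction.
# All timing constants are illustrative assumptions, not measurements.
# Each inter-tier call crosses the network, paying a round-trip cost;
# a co-located stack replaces those hops with in-memory calls.

NETWORK_RTT_MS = 0.5       # assumed LAN round trip per tier hop
IN_MEMORY_CALL_MS = 0.001  # assumed cost of an in-process call

def end_to_end_latency_ms(num_tier_hops: int, per_hop_ms: float,
                          work_ms: float = 2.0) -> float:
    """Total latency = fixed processing work + cost of crossing tier boundaries."""
    return work_ms + num_tier_hops * per_hop_ms

# Classic path crosses three boundaries: web -> business logic -> messaging -> data
n_tier = end_to_end_latency_ms(3, NETWORK_RTT_MS)

# Co-located: the same three boundaries, but all inside one process
colocated = end_to_end_latency_ms(3, IN_MEMORY_CALL_MS)

print(f"N-tier:     {n_tier:.3f} ms")
print(f"co-located: {colocated:.3f} ms")
```

The point of the sketch is only that per-hop network cost is additive per transaction, so removing the hops removes a fixed latency tax from every single transaction.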

DDJ: You're releasing something you call "Extreme Application Platform." What is it and how does it address the issues we've been discussing?

NS: The GigaSpaces eXtreme Application Platform is infrastructure software (aka middleware) made to address the demands of real-time, scale-out applications for extreme transaction processing, real-time analytics, and high-performance SOA. How does it do it? By turning N-Tier architecture on its ear. XAP is a single platform that addresses management of data, messaging (events) and business logic. Because all of these tiers are managed in a single product, there is no need to integrate them. Moreover, the tiers are physically co-located and share the same memory -- so latency is at a minimum. By doing this we create what we call a "processing unit" -- a single computer process that can execute the transaction end-to-end. It is completely self-sufficient -- a shared-nothing architecture -- and does not need to interact with other computer processes -- certainly not make hops over the network. This makes the application linearly scalable -- i.e., if one processing unit can handle 1000 transactions/second, two processing units can handle 2000 transactions/second, three units 3000, and so on. The processing unit is managed by an SLA-driven container that handles all fail-over and load-balancing needs for the entire application stack based on service level agreements.
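The shared-nothing idea behind that linear-scaling arithmetic can be sketched in a few lines. This is a generic illustration of key-based partitioning, not the GigaSpaces API; the class and function names are made up for the example. Each unit owns a disjoint slice of the data and executes its transactions without talking to any other unit, so adding a unit adds its full capacity.

```python
# Shared-nothing partitioning sketch (generic illustration; not the GigaSpaces API).
# Transactions are hash-routed by key to independent processing units,
# each owning its own local state, so no unit ever waits on another.

from collections import defaultdict

class ProcessingUnit:
    def __init__(self) -> None:
        self.state = defaultdict(int)  # unit-local data; shared with no one
        self.handled = 0

    def execute(self, account: str, amount: int) -> None:
        self.state[account] += amount  # the transaction runs entirely in-process
        self.handled += 1

def route(units: list, account: str) -> ProcessingUnit:
    """Hash-route a transaction to the unit that owns this key."""
    return units[hash(account) % len(units)]

units = [ProcessingUnit() for _ in range(3)]
for i in range(9000):
    key = f"acct-{i}"
    route(units, key).execute(key, 1)

# Work spreads across the units; every transaction is handled by exactly one.
print([u.handled for u in units], sum(u.handled for u in units))
```

Because units never coordinate, throughput grows roughly with the unit count -- which is the "1000, 2000, 3000 transactions/second" progression described above.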

The architecture is service-oriented and event-driven by nature.

DDJ: Is there a web site readers can visit to find out more information?

NS: Yes, they might start by looking at www.gigaspaces.com.
