AJAX, with its simple use of the XMLHttpRequest object, lets us emulate the event-driven behavior of client-server applications in the Web browser, leading to the design and deployment of real-time, interactive clients that can be accessed via browsers. Suddenly the Web browser isn't so limited as a client platform.
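At the heart of this pattern is the XMLHttpRequest object itself. The following is a minimal sketch of an asynchronous request as it would run in a browser; the URL and handler names are illustrative, not from any particular framework:

```javascript
// Minimal asynchronous request using the browser's XMLHttpRequest object.
// The URL and handler names are illustrative only.
function fetchUpdate(url, onSuccess, onError) {
  var xhr = new XMLHttpRequest();
  xhr.open("GET", url, true);            // third argument: asynchronous
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4) {          // 4 = request complete
      if (xhr.status === 200) {
        onSuccess(xhr.responseText);     // update the page without a reload
      } else {
        onError(xhr.status);
      }
    }
  };
  xhr.send(null);
}
```

Because the call returns immediately and the handler fires later, the browser behaves like an event-driven client rather than a page-at-a-time document viewer.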
But AJAX comes at a price. Traditional client-server applications were designed to run over LANs. The local network is fast and fat: our client-server applications had little or no effect on the network, and vice versa. AJAX, on the other hand, is designed to run over the public Internet, where network speeds and feeds are unknown, uncontrollable, and varied, and where the network and our applications can negatively affect each other.
While it's exciting to roll out that first AJAX-based application, it's important to understand what effect that application may have on the network. Why? So that you can troubleshoot issues that will invariably crop up -- or better still, prevent performance problems by preparing the network and application infrastructure for the rollout.
While AJAX applications can certainly be developed from scratch, most of us have gravitated toward the use of a toolkit -- or library -- that provides us with a consistent, easy-to-use AJAX framework, such as Prototype or the Dojo Toolkit. These frameworks are invaluable for developers as they deal with seemingly inevitable cross-browser issues, and let us concentrate on application logic rather than on the nitty-gritty details of submitting and receiving requests.
But these frameworks also come at a cost. The cost most obvious to users is the increased size of the base application page due to the inclusion of the framework libraries. It's most obvious because it can cause the initial load to take longer than anticipated, especially over lower-speed network connections. If the application in question is one heavily used by remote corporate users, say a CRM (Customer Relationship Management) application, the surge of users first thing in the morning could conceivably flood the network because of the amount of data being requested, causing the application -- and all others -- to perform poorly, if at all.
Whether for customers or remote users, this initial load time is important because it's the first impression the user has of the application. When that first impression is not the best it can be, there is the potential to drive customers away or hinder user adoption of your application. Both can impact the corporate bottom line, not to mention the headaches you'll have trying to solve the problem.
You can, of course, reduce the size of the page by eliminating any unnecessary code in your application as well as in the libraries. This solution has its own issues, as sometimes there is no unnecessary code in your application and eliminating code in a third-party library can be tricky, if not dangerous, as well as time consuming.
Another option is to enable compression on the server to reduce the size of your application. This can significantly reduce transfer time and improve initial load times -- but it can also have a negative impact on the server by increasing its duties. Compression is not free, and in some cases can actually increase processing time when used on small objects, especially those less than 10K in size, and over high-speed (LAN) connections.
A third option is to take advantage of intelligent compression capabilities offered by application acceleration solutions, such as an application delivery controller. These devices ensure that compression is used only when it will have a positive impact, and often do so with the assistance of hardware, making the entire process more efficient.
Also at issue is the security of these third-party libraries. Unless you know for a fact the library has been thoroughly tested for vulnerabilities -- because you did the testing yourself or the developer provides verification -- you're leaving yourself at risk for exploitation. To mitigate this potential risk, consider deploying an application firewall capable of preventing malicious attacks that might result from the use of third-party libraries.
When Less is More
It may seem ironic, but AJAX requests tend to be smaller than their traditional counterparts, especially those involved in real-time updates. While this might at first glance appear to be a very good thing -- after all, smaller requests mean less bandwidth, which should translate into faster responses -- in reality it places a huge burden on your network infrastructure, especially routers.
In the network world, it's well known that packet size has a direct effect on router performance. As packet sizes drop and the number of packets per second increases, routers have to work harder to keep up with the load. Interestingly enough, an increased number of smaller packets (requests) is exactly the traffic profile that's often seen with AJAX-based applications. And it's not just your AJAX requests that are small. Also tiny are the underlying TCP/IP packets acknowledging receipt of those requests. With an abundance of small application and network-layer packets flying back-and-forth between the browser and the server, a router can easily become overwhelmed and begin to process packets more slowly.
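Some quick arithmetic shows why small packets are so costly: every packet carries at least 20 bytes of IPv4 header plus 20 bytes of TCP header, so a tiny AJAX payload is largely overhead. A sketch (the payload sizes are illustrative; 1460 bytes is a typical full TCP segment over Ethernet):

```javascript
// Fixed per-packet cost: 20-byte IPv4 header + 20-byte TCP header
// (the minimum, with no options). Ethernet framing adds more still.
const HEADER_BYTES = 40;

function overheadRatio(payloadBytes) {
  return HEADER_BYTES / (HEADER_BYTES + payloadBytes);
}

// A 100-byte AJAX update spends roughly 29% of its bytes on headers...
console.log(overheadRatio(100).toFixed(3));
// ...while a full 1460-byte segment spends under 3%.
console.log(overheadRatio(1460).toFixed(3));
```

The router does the same per-packet work either way, so a stream of small packets delivers far less application data per unit of routing effort.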
The impact on the performance of your application can be disastrous, resulting in timeouts or slow responses and frustrated users. Users, of course, only know that your application is performing poorly, and don't care why -- they just want it fixed.
Unfortunately, aside from reducing the frequency of requests, there isn't much you can do from an application perspective. It's rare to have control over the TCP/IP stack of your operating system, chosen programming language, or application server, so there's little you can do about this situation -- and there's little the network team can do to help, either.
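Reducing the frequency of requests usually means coalescing updates on the client: instead of firing one XMLHttpRequest per user event, queue updates and send them together. A hedged sketch of that idea (the function names and the flush-at-N rule are invented for illustration; real applications would typically also flush on a timer):

```javascript
// Coalesce many small updates into one larger request.
// sendBatch is whatever transport the application uses (e.g. a single
// XMLHttpRequest); it is injected so the batching logic stays
// transport-agnostic.
function createBatcher(sendBatch, maxBatch) {
  let queue = [];
  return {
    enqueue(update) {
      queue.push(update);
      if (queue.length >= maxBatch) this.flush();
    },
    flush() {
      if (queue.length > 0) {
        sendBatch(queue);   // one request instead of queue.length requests
        queue = [];
      }
    },
  };
}
```

Fewer, larger requests trade a little latency per update for far fewer packets on the wire -- exactly the profile routers handle best.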
The best solution in this case is the deployment of an application delivery controller capable of optimizing the network-layer communications to decrease the number of small packets traveling between the browser and the server. There are a number of RFCs (Request for Comments) that address the inefficiencies of TCP and its impact on the network -- those covering selective acknowledgment and window scaling, for example -- and these are often implemented by application delivery controllers. They improve underlying network performance by optimizing TCP-based communications and relieving much of the burden that causes routers to perform poorly.