Developers caught up in the maelstrom of debate and opinion surrounding programming for, and within, cloud-based computing environments have had a perplexing time of late, thanks to a dearth of standards and accepted best practices. Application latency has been among the most misunderstood and misreported of these concerns.
OS NEXUS CEO Steven Umbehocker suggests that it matters a great deal where the cloud is situated: a cloud provider may have plenty of web bandwidth at a given data center, but if that data center is thousands of miles away, developers will need to accommodate, and program for, significant latency.
"Latency is generally measured as the round-trip time it takes for a packet to reach a given destination and come back, usually measured using a tool called 'ping'. Many applications are written to only send more data after the destination has acknowledged that it has received the last bit of data, so latency can have a significant impact on the effective bandwidth you have to a given cloud. Some file transfer protocols are designed to work fairly well in high latency situations, so it also depends on the applications you're running," said Umbehocker.
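The ceiling Umbehocker describes is easy to quantify: if a sender will only put a fixed window of data in flight before waiting for an acknowledgement, throughput can never exceed that window divided by the round-trip time. The sketch below uses illustrative figures (a 64 KB window, a handful of sample RTTs) rather than measurements from any particular cloud:

```python
# Effective throughput of a window-limited ("send, then wait for an
# acknowledgement") transfer is capped by window_size / round_trip_time,
# no matter how much raw bandwidth the link offers.

def effective_throughput_mbps(window_kb: float, rtt_ms: float) -> float:
    """Upper bound on throughput (Mbit/s) for a fixed in-flight window."""
    window_bits = window_kb * 1024 * 8
    rtt_s = rtt_ms / 1000.0
    return window_bits / rtt_s / 1_000_000

# A 64 KB window over links with different round-trip times
# (same metro, same region, cross-continent):
for rtt in (1, 20, 100):
    print(f"RTT {rtt:>3} ms -> {effective_throughput_mbps(64, rtt):,.1f} Mbit/s")
```

At a 1 ms RTT the window supports roughly 524 Mbit/s, but at 100 ms the same window yields only about 5.2 Mbit/s, which is why a distant data center can throttle an application even when bandwidth is plentiful.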
"As an example, if it's your email server, it is better to have the cloud situated nearby. Also note that major Content Delivery Network (CDN) providers (companies whose main purpose is to serve files, video content, and other media on behalf of a website) have data centers around the world to not only distribute the load but to minimize latency and thereby improve throughput to customers. You generally don't even notice this happening when you download a file from a CDN network, but many will automatically route your download request to the data center that's closest to you," he added.
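The routing behavior Umbehocker attributes to CDNs can be reduced to a very small decision: measure the round-trip time to each candidate data center and send the request to the closest one. The endpoint names and RTT figures below are hypothetical placeholders, not real CDN hosts or measurements:

```python
# Hypothetical sketch of CDN-style nearest-endpoint selection: given a
# table of measured round-trip times, route to the lowest-latency
# data center. Real CDNs typically do this via DNS or anycast rather
# than client-side probing; this only illustrates the selection step.

def nearest_endpoint(rtts_ms: dict[str, float]) -> str:
    """Return the endpoint with the lowest measured RTT."""
    return min(rtts_ms, key=rtts_ms.get)

measured = {
    "us-east.cdn.example.com": 85.0,
    "eu-west.cdn.example.com": 12.0,   # a user located in Europe
    "ap-south.cdn.example.com": 160.0,
}
print(nearest_endpoint(measured))  # -> eu-west.cdn.example.com
```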
Developers working within the cloud computing model of IT service delivery have had to learn fast and, largely speaking, learn "dynamically," as many of the cloud's technologies are still nascent and constantly changing. David Hughes, founder and CTO of data-center-class WAN optimization provider Silver Peak Systems, says that, without doubt, the performance of cloud services is constrained by the underlying WAN infrastructure.
"Cloud services are subject to the same physical laws that have always made running applications over distance problematic. Irrespective of the type of cloud service deployed, all cloud-computing initiatives have one thing in common — data is centralized, while users are distributed. This means that if deployment is not planned carefully there can be significant issues due to the increased latency between the end users and their application servers. All cloud services inherently use shared WANs, making packet delivery — specifically dropped or out of order IP packets during peak congestion — a constant problem in these environments. This results in packet retransmissions which, particularly when compounded by increased latency, lower effective throughput and perceived application performance," said Hughes.
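The compounding effect Hughes describes, where loss-triggered retransmissions hurt more as latency grows, can be illustrated with a commonly cited steady-state TCP throughput approximation (the Mathis model), rate ≈ (MSS / RTT) · √(3/2) / √p. The segment size, RTTs, and loss rates below are illustrative, not figures from the article:

```python
import math

# A commonly cited approximation (the Mathis model) for steady-state
# TCP throughput under random packet loss p:
#   rate ≈ (MSS / RTT) * sqrt(3/2) / sqrt(p)
# Doubling RTT halves throughput; raising loss tenfold cuts it by ~3.2x.

def tcp_throughput_mbps(mss_bytes: int, rtt_ms: float, loss: float) -> float:
    """Approximate achievable TCP throughput in Mbit/s."""
    rtt_s = rtt_ms / 1000.0
    bytes_per_s = (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss)
    return bytes_per_s * 8 / 1_000_000

# 1460-byte segments: compare a nearby cloud, a distant one, and a
# distant one during peak congestion (higher loss).
for rtt_ms, loss in [(20, 0.001), (100, 0.001), (100, 0.01)]:
    print(f"RTT {rtt_ms:>3} ms, loss {loss:.3f} -> "
          f"{tcp_throughput_mbps(1460, rtt_ms, loss):.1f} Mbit/s")
```

The model makes Hughes's point concrete: the same 1% loss rate that is tolerable on a short link collapses effective throughput on a long one, since the two penalties multiply.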
"Fortunately, in parallel with the cloud trend, WAN optimization technology has been evolving to overcome these challenges. WAN optimization helps 'clean up' the cloud in real-time by rebuilding lost packets and ensuring they're delivered in the correct order, prioritizing traffic while guaranteeing the necessary bandwidth, using network acceleration to mitigate latency in long-distance environments, and de-duplicating data to avoid repetition. So with WAN optimization it is possible to move the vast majority of applications into the cloud without having to worry about geographic considerations," he added.
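Of the techniques Hughes lists, de-duplication is the most amenable to a small sketch: split the stream into chunks and, for any chunk the far side has already received, send a short fingerprint instead of the data. Production WAN optimizers use variable-size chunking and persistent dictionaries; the fixed 8-byte chunks and truncated digests here are purely illustrative:

```python
import hashlib

# Minimal sketch of the de-duplication idea: encode a byte stream as
# ("raw", chunk) records the first time a chunk appears, and compact
# ("ref", fingerprint) records for every repeat, so repeated data
# crosses the WAN only once.

CHUNK = 8  # illustrative fixed chunk size

def dedup(data: bytes, seen: set[bytes]) -> list[tuple[str, bytes]]:
    """Encode data as raw chunks or references to already-seen chunks."""
    out = []
    for i in range(0, len(data), CHUNK):
        chunk = data[i:i + CHUNK]
        digest = hashlib.sha256(chunk).digest()[:8]
        if digest in seen:
            out.append(("ref", digest))   # already transferred once
        else:
            seen.add(digest)
            out.append(("raw", chunk))    # first sighting: send in full
    return out

records = dedup(b"ABCDEFGH" * 3, set())
print([kind for kind, _ in records])  # -> ['raw', 'ref', 'ref']
```

Because `seen` persists across calls, a file re-sent to the cloud later would be reduced almost entirely to references, which is the repetition-avoidance Hughes describes.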