Latency and Application Performance: Are They Mutually Exclusive?

As principal architect at Aryaka, I meet with the IT teams of potential customers for in-depth discussions about how to enhance their global application performance and deliver a seamless user experience to their employees around the world.

It’s a significant challenge today, when even mission-critical enterprise applications are commonly hosted in far-off cloud datacenters and are accessed by globally distributed workforces, resulting in higher latencies over the network.

During these discussions, I’ve often noticed how IT teams tend to directly link the latency profile of a network to the application performance it delivers. And why not? Until a few years ago, IT buyers had little choice but to invest in MPLS technology to meet their application performance needs, and the best way to evaluate an MPLS link is by looking at its latency profile.

But the fact is, in today’s connectivity technology landscape, there’s more to performance than just latency numbers.

Low Latency ≠ Better End-User Experience
While it’s true that latency is still an important network performance indicator, latency alone as a measure of application performance can often be misleading. Today, it’s possible for a network link with a latency of, say, 100 milliseconds to deliver a better end-user experience than a network link with a latency of 80 milliseconds.

Pepperoni Pizza Without Cheese and Pepperoni?

Say you’re ordering a pizza. If you optimize only for low latency, the pizza will show up at your door in a few minutes. However, speed alone doesn’t guarantee the quality of the pizza. What if it arrives quickly, but portions of it are missing, like the cheese and pepperoni? Does it matter that it reached your door so fast?

Low Latency but High Packet Loss and Jitter?

The same is true when comparing the quality of data delivery over a network with its latency profile. If a network has very low latency but experiences high packet loss and jitter along the way, it isn’t going to result in an enjoyable, seamless experience for the end user. Along with a low latency profile, a network needs to maintain the integrity of the data in order to truly deliver enhanced application performance.
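
To see why, consider the widely used Mathis approximation, which estimates steady-state TCP throughput as roughly MSS × √(3/2) / (RTT × √loss). The sketch below is a back-of-the-envelope illustration, not a model of our platform, and the link figures in it are purely hypothetical; it simply shows how a lossy 80 ms link can deliver far less throughput than a cleaner 100 ms link.

    # Back-of-the-envelope sketch (not Aryaka's model): the Mathis et al.
    # approximation puts steady-state TCP throughput at roughly
    # MSS * sqrt(3/2) / (RTT * sqrt(loss)). Link figures below are hypothetical.
    import math

    def tcp_throughput_mbps(rtt_s, loss, mss_bytes=1460):
        """Approximate achievable TCP throughput in Mbit/s."""
        bytes_per_sec = (mss_bytes / rtt_s) * math.sqrt(1.5) / math.sqrt(loss)
        return bytes_per_sec * 8 / 1e6

    # "Fast" link: 80 ms round trip, but 1% packet loss (hypothetical)
    print(f"80 ms RTT, 1.0% loss : {tcp_throughput_mbps(0.080, 0.010):.2f} Mbit/s")
    # "Slower" link: 100 ms round trip, but only 0.1% packet loss (hypothetical)
    print(f"100 ms RTT, 0.1% loss: {tcp_throughput_mbps(0.100, 0.001):.2f} Mbit/s")

With these illustrative numbers, the “slower” 100 ms link sustains roughly two and a half times the throughput of the “faster” but lossier 80 ms link.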

The Illusion of Enhanced User Experience
Another factor to note here is when latency effects are actually felt by a user. Latency only begins to have a substantial effect on user experience once it goes beyond a certain point, typically around 60 milliseconds. Below that point, latency is not readily perceptible, which is one of the reasons why using latency as a measure of total network performance can lead to inaccurate conclusions.

This is similar to the concept of persistence of vision: motion picture audiences enjoy a smooth visual experience when the frame rate is around 16-24 frames per second, and a higher frame rate makes little difference to the experience.

Yet, many IT teams I talk to are inclined to choose the network with the lower latency, even when the difference, like 10 ms versus 30 ms, is too small for users to perceive.

So What’s the Missing Piece?
Optimization.
When low latency links are combined with WAN optimization, traffic is shaped for improved throughput, resulting in more consistent and reliable application performance. And this is one area that Aryaka has continuously focused on from the beginning.

When we engineered a global private network by aggregating bandwidth capacity from Tier 1 service providers, we also built a WAN optimization overlay as part of our Software-Defined Network Platform.

The WAN optimization overlay improves data delivery over our network platform in several ways:

  • Compression techniques, which minimize the effects of latency by reducing the amount of data sent over the network (see the sketch after this list).
  • Byte-level deduplication, which eliminates the need to resend duplicate bytes across the network, thereby reducing congestion.
  • Application acceleration proxies, which act on specific kinds of applications to boost their performance by reducing “chattiness,” enabling deduplication, and more.
  • TCP optimization using techniques like window scaling, selective acknowledgement, TCP termination, and more, which effectively supercharge TCP apps for faster delivery.
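
As a rough illustration of the first two points, here is a minimal sketch, not how our overlay is actually built, of per-chunk deduplication combined with compression; the chunk size, hash choice, and sample payload are assumptions made purely for illustration.

    # Minimal sketch, not Aryaka's implementation: per-chunk deduplication plus
    # compression reduces the bytes that actually cross the WAN. Chunk size,
    # hash, and sample payload are illustrative assumptions.
    import hashlib
    import zlib

    seen_chunks = set()  # stands in for a dedup cache kept at the WAN edge

    def bytes_on_the_wire(payload, chunk_size=4096):
        """Return how many bytes would cross the WAN after dedup + compression."""
        total = 0
        for i in range(0, len(payload), chunk_size):
            chunk = payload[i:i + chunk_size]
            digest = hashlib.sha256(chunk).digest()
            if digest in seen_chunks:
                total += len(digest)                 # duplicate: send a short reference
            else:
                seen_chunks.add(digest)
                total += len(zlib.compress(chunk))   # new chunk: compress and send
        return total

    doc = b"quarterly sales report " * 8000          # ~184,000 bytes of redundant data
    print("first transfer :", bytes_on_the_wire(doc), "bytes")
    print("repeat transfer:", bytes_on_the_wire(doc), "bytes")

On a repeat transfer of the same payload, only short chunk references cross the wire, which is exactly the kind of saving that also eases bandwidth costs, as noted below.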

There’s also the added benefit that optimization technology eases the strain on IT budgets by conserving expensive network bandwidth and making more efficient use of it.

The Complete Picture: Low Latency Links + WAN Optimization
With this in mind, it is vital for IT buyers to make informed decisions when evaluating connectivity solutions, and not base their choices on a restricted set of performance parameters that don’t reveal the complete picture.

If they don’t, not only do they risk the performance of their applications and the productivity of their employees, but they might just be spending more than they need to on network costs at the same time!

Hundreds of Aryaka’s customers, who’ve recognized optimization technology as a key determinant of network performance and cost savings, have seen tremendous improvements in application performance over our Software-Defined Network Platform while saving up to 56% in overall network costs.

But you don’t have to take our word for it. You can check it out for yourself by participating in a free trial with us!

Otherwise, you might just be forced to chomp on a pizza with no cheese and no toppings. Food for thought (pun intended).

About the author

Mouli Radhakrishnan
Mouli is Vice President of Product Management at Aryaka Networks, and leads the technology process for strategic customer opportunities and business development partnerships.