How Latency, Packet Loss, and Distance Kill Application Performance

Latency, packet loss, distance, and application performance. What do all these terms have to do with each other?

If you manage IT networks for a global enterprise, it’s important to step back and look at the big picture, so you can more clearly see how they all impact one another.

This may sound like “Networking 101” to some of you, but it’s critical to understand the relationships between these terms and their combined impact on application performance.

Definitions:

  • (Network) Latency is an expression of how much time it takes for a packet of data to get from one designated point to another.
  • Packet loss is the failure of one or more transmitted packets (could be data, voice or video) to arrive at their destination.
  • Distance is the intervening space between two points or, in the context of enterprise networks, between two offices.
  • TCP (Transmission Control Protocol) is a standard that defines how to establish and maintain a network conversation via which application programs can exchange data.
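
To make the latency and distance definitions concrete, here is a minimal sketch (the routes and distances are illustrative, not taken from this article) of the latency floor that distance alone imposes, assuming light travels through fiber at roughly two-thirds of its speed in a vacuum:

```python
# A minimal sketch estimating the round-trip propagation latency implied by
# distance alone, assuming light travels through fiber at roughly 2/3 the
# speed of light. Routes and distances below are illustrative examples.

SPEED_OF_LIGHT_KM_S = 299_792                           # km per second, in a vacuum
FIBER_PROPAGATION_KM_S = SPEED_OF_LIGHT_KM_S * 2 / 3    # common rule of thumb for fiber

def min_rtt_ms(distance_km: float) -> float:
    """Best-case round-trip time in milliseconds for a given one-way distance."""
    one_way_s = distance_km / FIBER_PROPAGATION_KM_S
    return 2 * one_way_s * 1000

for route, km in [("New York - London", 5_570), ("San Francisco - Singapore", 13_600)]:
    print(f"{route}: at least {min_rtt_ms(km):.0f} ms RTT "
          f"before any queuing or routing overhead")
```

Physics alone puts roughly 55 ms of round-trip time between New York and London; real-world paths add routing and queuing delay on top of that.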

The big picture:

When there is distance between the origin server and the user accessing it, the user needs a reliable network connection to complete a task. This network may be private, such as a point-to-point link or MPLS, or public, typically over the Internet. If the network has packet loss, the overall throughput between the server and the user drops sharply as distance increases. In other words, the further the user is from the origin server, the less usable the network becomes.

Why is that?

The main culprit is TCP (Transmission Control Protocol), the standard that defines how to establish and maintain a network conversation via which application programs exchange data.

TCP is the protocol, or mechanism, that provides reliable, ordered and error-checked delivery of data between servers and users across a network. TCP is a good guy and helps with data quality. It is also a connection-oriented protocol, which means you must first establish a connection with a remote host or server before any data can be sent.

Once a TCP connection is established, the next step is flow control, which determines how fast the sender can send data and how reliably the receiver can receive it. Depending on the quality of the network, the flow is governed by window sizes negotiated from both ends. The two ends may disagree if the client and the server see the network’s characteristics differently.

Figure: Flow control for a TCP connection
This has a major impact on application performance!
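
To see why the negotiated window matters, here is a minimal sketch, assuming a single flow on a loss-free path and an illustrative 64 KB window, of the classic rule that a sender can have at most one window of unacknowledged data in flight per round trip:

```python
# A minimal sketch (illustrative values, not measurements) showing how the
# negotiated TCP window and the round-trip time cap throughput on a loss-free
# path: at most one window of data can be in flight per RTT.

def window_limited_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Upper bound on TCP throughput when the window, not loss, is the constraint."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

WINDOW = 64 * 1024  # 64 KB: a common default when window scaling is not negotiated

for rtt in (10, 50, 100, 200):
    print(f"RTT {rtt:>3} ms -> at most "
          f"{window_limited_throughput_mbps(WINDOW, rtt):6.1f} Mbps per flow")
```

Even with zero packet loss, a 64 KB window over a 200 ms path caps a single flow at roughly 2.6 Mbps.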

Certain applications like FTP use a single flow and scale to the maximum available window size to complete the operation. However, Windows-based applications tend to be more ‘chatty’ and need multiple back-and-forth exchanges to complete their operations, as the sketch below illustrates.
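
As a rough illustration, the sketch below contrasts the two behaviors; the round-trip count is a hypothetical figure, not a measurement:

```python
# A minimal sketch (the operation counts are illustrative assumptions) contrasting
# a single bulk transfer with a "chatty" application that needs many small,
# serialized round trips. Each round trip costs at least one RTT regardless of
# how much bandwidth is available.

def chatty_operation_time_s(round_trips: int, rtt_ms: float) -> float:
    """Time dominated by serialized request/response exchanges."""
    return round_trips * rtt_ms / 1000

RTT_MS = 150        # e.g., a long-distance Internet path
ROUND_TRIPS = 200   # hypothetical: opening a file-share document can take hundreds of exchanges

print("Bulk FTP-style flow: limited mainly by throughput, not RTT")
print(f"Chatty app with {ROUND_TRIPS} exchanges at {RTT_MS} ms RTT: "
      f"~{chatty_operation_time_s(ROUND_TRIPS, RTT_MS):.0f} s spent just waiting on the network")
```

At 150 ms of round-trip latency, 200 serialized exchanges spend about 30 seconds doing nothing but waiting on the network, no matter how much bandwidth is available.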

A simplistic model to consider:

Network + Packet Loss + High Latency = Poor Application Performance for TCP Applications.

In fact, looking at the graphic below showing the maximum throughput one can achieve, you wonder how organizations manage any collaboration across long distances at all.

Figure: Maximum TCP Throughput with Increasing Network Distance
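
For readers who want rough numbers like those the chart implies, here is a minimal sketch using a simplified form of the widely cited Mathis et al. approximation for steady-state TCP throughput under random loss; the RTT and loss values are illustrative assumptions:

```python
# A minimal sketch of a simplified Mathis-style approximation for steady-state
# TCP throughput under random loss: throughput <= MSS / (RTT * sqrt(loss)).
# The RTT and loss values below are illustrative, not measurements.

from math import sqrt

MSS_BYTES = 1460  # typical maximum segment size on Ethernet paths

def mathis_throughput_mbps(rtt_ms: float, loss_rate: float) -> float:
    """Approximate ceiling on a single TCP flow's throughput, in Mbps."""
    rtt_s = rtt_ms / 1000
    return (MSS_BYTES * 8) / (rtt_s * sqrt(loss_rate)) / 1_000_000

for rtt in (20, 100, 250):                 # short, medium, and intercontinental paths
    for loss in (0.0001, 0.001, 0.01):     # 0.01%, 0.1%, 1% packet loss
        print(f"RTT {rtt:>3} ms, loss {loss:>6.2%}: "
              f"~{mathis_throughput_mbps(rtt, loss):7.1f} Mbps max")
```

The same 1% loss that barely registers on a short-haul path reduces a single intercontinental flow to well under 1 Mbps.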

Voice and Video perform poorly when there is packet loss, especially over long-distance Internet links. In fact, even minimal packet loss combined with latency and jitter can make a network unusable for real-time traffic. Why? Because these applications run over UDP (User Datagram Protocol).

Unlike TCP, the good guy who polices all interaction, UDP couldn’t care less. UDP is connectionless with no handshaking prior to an operation, and exposes any unreliability of the underlying network to the user. There is no guarantee of delivery.
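
To make the contrast with TCP concrete, here is a minimal sketch (the destination address and port are hypothetical) of UDP's fire-and-forget model:

```python
# A minimal sketch (host and port are hypothetical) of UDP's fire-and-forget model:
# there is no handshake before sending and no acknowledgement afterward, so the
# sender never learns whether the datagrams arrived.

import socket

DEST = ("203.0.113.10", 5004)   # hypothetical media endpoint (TEST-NET-3 documentation address)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
for seq in range(5):
    payload = f"voice-frame-{seq}".encode()
    sock.sendto(payload, DEST)  # returns as soon as the datagram is handed to the OS
    # No retransmission, ordering, or congestion control happens on the sender's behalf;
    # any loss, reordering, or jitter is exposed directly to the application.
sock.close()
```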

Here is the path most organizations with a global user base and growing application performance issues tend to take.

  1. Focus on Internet links. Buy more bandwidth. Throughput typically increases somewhat but not enough to fix the issue.
  2. Upgrade to MPLS links. Wait 6 to 9 months for deployment. Realize that the problem has not been solved for long-distance connections.
  3. Consume more and more and more bandwidth. Deploy QoS to deal with congestion and its impact on real-time traffic. Voice and Video do okay, assuming enough bandwidth is configured.
  4. Realize that you can’t afford to keep buying more bandwidth at this alarming rate.
  5. Add WAN Optimization appliances. With TCP optimization, data compression and application proxies, they do address the throughput issues.
  6. See prices skyrocket to manage and maintain WAN Optimization hardware, and then experience sticker shock when it’s time to refresh those appliances.
  7. Consider your options. Cloud Services? Mobility?
  8. Revisit your entire enterprise network design. Vow to transform that network. Plan for the Cloud and for Mobility. Account for Big Data and your growing needs. Accommodate acquisitions and business changes.

And how would you do that? If you know that the status quo is broken, you also know that the traditional hardware vendors are trying to squeeze every last red cent out of those boxes before their business model becomes completely outdated.

Aryaka is the world’s first and only global, private, optimized, secure and Managed SD-WAN as a service that delivers simplicity and agility to address all enterprise connectivity and application performance needs. Aryaka eliminates the need for WAN optimization appliances, MPLS and CDNs, delivering optimized connectivity and application acceleration as a fully managed service with lower TCO and a quick deployment model.

We invite you to learn more by contacting us today or downloading our latest data sheet on our core solution for global enterprises.

About the author

Shehzad Karkhanawala
Shehzad is Director of Marketing at Aryaka. He leads public and analyst relations globally and demand generation as well as partner marketing activities for the Asia-Pacific region. Shehzad is an SD-WAN evangelist and often participates in webinars, events and discussions on the subject.