
What is Data Latency and How Can it Be Mitigated?

Published May 20, 2024


A look into how data latency impacts companies in Asia Pacific and ways to address it.

What is Data Center Latency and Why is it Important?

Data center latency is the time it takes for data to travel from point A to point B, usually measured in milliseconds (ms). We often talk about two-way, or round-trip, latency: the time from a user sending a request to an application to receiving the response from that application. In layman’s terms, latency is the time it takes for your online shopping order to be processed, or how much buffering you see when streaming your favorite show; the lower the latency, the better.
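
To make the definition concrete, the sketch below times TCP handshakes from a client to approximate round-trip latency. This is a minimal illustration in Python using only the standard library; the host name is a placeholder for any reachable endpoint, and a TCP connect is only a rough proxy for application-level response time.

    import socket
    import time

    def measure_rtt_ms(host, port=443, samples=5):
        # Approximate round-trip latency via TCP handshakes, in milliseconds
        timings = []
        for _ in range(samples):
            start = time.perf_counter()
            # connect() returns once the SYN/SYN-ACK exchange completes,
            # so the elapsed time is roughly one network round trip
            with socket.create_connection((host, port), timeout=2):
                pass
            timings.append((time.perf_counter() - start) * 1000)
        return min(timings)  # the minimum filters out transient queuing delay

    print(f"~{measure_rtt_ms('example.com'):.1f} ms round trip")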

Latency is a natural occurrence in any network: it will always take time for a data packet to travel from its source to its destination, regardless of the distance or the amount of data being sent. What is critical to consider is low latency (preferred) versus high latency, and how we can achieve optimal latency in today’s data-dependent world. A good analogy is your commute from home to the office by car: how long it takes depends heavily on the distance, but it is also affected by other factors such as the number of available lanes, the amount of traffic, the weather and so on.

Critically, any delay in data transit has an adverse impact on the user experience (UX), which is an important component of any business operation. For example, if a website loads slowly, you may be reluctant to visit and interact with the company’s online products. In the financial services industry, where trades worth many billions of dollars are completed in milliseconds and a slight lag in, say, a trading application can cause significant losses, low latency becomes even more critical.

What Factors Impact Latency in a Data Center?

There are numerous factors that impact latency, the most obvious of which is distance, or proximity. This means there are significant advantages to geographically distributed digital infrastructure compared to centralized data centers, and we are gradually seeing the decentralization of hyperscale cloud platforms, especially in emerging markets and in second-tier cities in more mature markets. For AI and other technologies such as the Internet of Things (IoT), low latency is among the most critical aspects of the customer experience, prompting internet and cloud providers to deploy equipment in closer proximity to the end user.
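
A back-of-envelope calculation shows why proximity dominates. Light in optical fiber travels at roughly two-thirds of its speed in a vacuum, about 200,000 km/s, so every 100 km of fiber route adds on the order of 1 ms of round-trip latency before any equipment delay is counted. The Python sketch below uses that figure; the route lengths are rough illustrative values, not measured cable distances.

    FIBER_KM_PER_MS = 200.0  # light in fiber covers roughly 200 km per millisecond

    def min_rtt_ms(route_km):
        # Theoretical minimum round-trip time over a fiber route of this length
        return 2 * route_km / FIBER_KM_PER_MS

    # Illustrative route lengths in km (approximate, for intuition only)
    for route, km in [("Singapore to Jakarta", 900),
                      ("Singapore to Tokyo", 5300),
                      ("Singapore to US West Coast", 13600)]:
        print(f"{route}: at least {min_rtt_ms(km):.0f} ms round trip")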

In addition to distance, other technical factors can lead to higher latency. One is an inefficient router path: while routers function as connectors, every hop adds processing time, so a path that traverses too many routers slows the network down, especially as data moves from one router to another. An example would be using a publicly shared internet connection with indirect routing, instead of a CDN (Content Delivery Network) or, better still, a dedicated data center interconnection.
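
The effect of routing can be layered onto the same propagation model. In the hypothetical comparison below, each router hop is assumed to add a fixed 0.5 ms of one-way processing and queuing delay; real per-hop delays vary widely with hardware and load, so both the hop counts and the delay figure are illustrative only.

    FIBER_KM_PER_MS = 200.0
    PER_HOP_MS = 0.5  # assumed one-way delay added per router hop (illustrative)

    def rtt_ms(route_km, hops):
        # Round trip: propagation over the route plus per-hop processing delays
        return 2 * (route_km / FIBER_KM_PER_MS + hops * PER_HOP_MS)

    direct = rtt_ms(route_km=900, hops=5)      # e.g. a dedicated interconnect
    indirect = rtt_ms(route_km=1400, hops=18)  # e.g. shared transit, indirect routing
    print(f"direct: {direct:.1f} ms, indirect: {indirect:.1f} ms")

Under these assumptions, the longer, hop-heavy path is more than twice as slow, which is why dedicated, directly routed interconnection pays off.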

Other technical factors include network device issues: certain network devices, such as switches or routers, may suddenly cause latency to rise if their CPU is overloaded or they run out of memory. Transmission media can also be problematic. The purpose of transmission media is to connect sender and receiver, allowing them to exchange data by converting it into signals suited to the medium. If the transmission media is not compatible with the hardware or software at either end, it too can cause high latency.

How can Data Center Latency be Optimized in Asia Pacific?

As applications become ever more data-intensive, the negative impacts and associated costs of latency and data transfer are becoming more significant. This makes the case for moving computing capabilities as close as possible to where data is generated, to what is sometimes referred to as the ‘digital edge’, increasingly compelling. Taking proximity into account, along with factors such as investment costs, time to market and allocation of resources, many companies are choosing to partner with a colocation edge data center provider with localized coverage in major metros throughout the Asia Pacific region, such as Jakarta, Manila and Tokyo.

In addition, it’s important to find a data center provider that has the expertise to address the technical factors that impact latency mentioned earlier, with people on the ground and local connectivity partners in key metros throughout the region. Such providers can mitigate latency problems both from the proximity perspective and by providing local market and technical know-how, making it significantly quicker and easier for businesses to enter or expand their digital presence in a new market.

Digital Edge is an edge colocation data center provider specializing in the Asia Pacific region, making us well placed to support businesses in mitigating their latency issues. With our hybrid, carrier-neutral strategy, we can cater to a wide range of customers and their unique needs. We also deploy a range of interconnection products to support businesses with their connectivity solutions, including secure, low latency fiber and copper cross connects, as well as Cross Link™, our high-speed, high-density, low-cost SDN-driven metro connectivity service linking our data centers and other strategic locations within the same metro.


For more information, feel free to reach out to our Interconnect team at peering@digitaledgedc.com.

Horatio Chan
Senior Director, Business Development