Latency: The Variable That Makes the Difference

In a world where everything seems to be measured in gigabytes, where advertisements promise sky-high connection speeds, there is a word that is rarely mentioned but that determines, to a large extent, the user’s real experience when browsing the internet: latency.

While bandwidth takes center stage in discussions about speed, latency, silent and often ignored, carries decisive weight in the equation of good digital service.

“Speed depends more on time than size”

By: Gabriel E. Levy B.

www.galevy.com

For years, the dominant discourse around quality internet revolved around how many megabits per second (Mbps) a provider can deliver. This obsession with bandwidth—understood as the amount of data that can be transferred per second—overshadowed another equally or even more decisive factor: latency. Latency, in simple terms, is the time it takes for a data packet to go from the source to its destination and back. It is measured in milliseconds and represents that brief but crucial interval between when we click and when the system responds. It’s not about how much you can send, but how quickly what you send arrives. In a digital world that rewards immediacy, latency is the true thermometer of fluidity.
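That round-trip definition can be made concrete with a minimal sketch: a loopback echo server and a client that timestamps a small message before sending it and after the echo returns. The loopback address and the tiny payload are illustrative choices, not part of any real deployment; on a real network the same measurement would span the path to a remote host.

```python
import socket
import threading
import time

def echo_server(sock):
    # Accept one connection and echo whatever arrives back to the sender.
    conn, _ = sock.accept()
    with conn:
        data = conn.recv(64)
        conn.sendall(data)

def measure_rtt_ms(host, port):
    # Round-trip time: start the clock, send a small packet,
    # stop the clock when the echo comes back.
    with socket.create_connection((host, port)) as client:
        start = time.perf_counter()
        client.sendall(b"ping")
        client.recv(64)
        return (time.perf_counter() - start) * 1000.0

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

threading.Thread(target=echo_server, args=(server,), daemon=True).start()
rtt = measure_rtt_ms("127.0.0.1", port)
print(f"round-trip latency: {rtt:.3f} ms")
```

On loopback the figure is typically a fraction of a millisecond; across a continent, the same handful of bytes can take tens or hundreds of milliseconds, regardless of how much bandwidth the contract promises.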

The problem is that this delay is not always visible until it becomes an obstacle. In a video call, it can manifest as an awkward silence before the voice of the interlocutor reaches us; in a video game, as that fatal second in which the character fails to react in time. Latency is there, hidden, undermining the experience, even though the internet speedometer promises high figures.

The American engineer Jim Gettys, who coined the term bufferbloat to describe the bottlenecks that are generated in networks overloaded with data, warned that the quality of the internet lies not only in the volume of information that can be transmitted, but in how and when it is delivered. “The unnecessary accumulation of packets on the network causes an artificial inflation of latency,” he explained in a series of technical publications that are now considered a reference. In other words: it doesn’t matter how wide the highway is if all the cars get stuck in an eternal traffic light.
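Gettys’ point about inflated buffers can be illustrated with a toy model, not drawn from his publications: a bottleneck link that drains a FIFO buffer at a fixed rate. Every packet that arrives behind a standing backlog must wait for that backlog to drain before it even starts crossing the link, so a larger buffer translates directly into added delay. The 10 Mbit/s link rate and 1500-byte packet size are assumed, illustrative figures.

```python
# Toy model of bufferbloat: a bottleneck link drains a FIFO buffer
# at a fixed rate. A packet arriving behind a backlog waits
# backlog_bytes / link_rate seconds before transmission begins.

LINK_RATE = 10e6 / 8          # 10 Mbit/s bottleneck, in bytes per second
PACKET = 1500                 # typical Ethernet MTU, in bytes

def queueing_delay_ms(buffered_packets):
    backlog = buffered_packets * PACKET
    return backlog / LINK_RATE * 1000.0

for n in (10, 100, 1000):
    print(f"{n:5d} packets queued -> {queueing_delay_ms(n):8.1f} ms of added latency")
```

A thousand queued packets add over a second of delay on this link: the highway is just as wide as before, but every car now waits at the same eternal traffic light.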

And it is precisely there where a crack opens up between what is promised and what is experienced. While internet service providers compete in a race to offer more megabytes, many users continue to face erratic, interrupted, frustrating browsing. The explanation, almost always, lies in those invisible but decisive milliseconds of latency.

“A data packet can travel at the speed of light, but get lost in the bureaucracy of the road”

To understand why latency matters so much, just look at how the network behaves in different contexts. Latency—measured in milliseconds (ms)—represents the delay between a user action and the server’s response. In applications such as video calls, online gaming, or even basic web browsing, high latency can make the experience unbearable, regardless of whether you have a bandwidth of 300 or 1000 Mbps.
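The arithmetic behind that claim is simple: the time to fetch an object is roughly the round-trip latency plus its size divided by the bandwidth. For the small objects that dominate web browsing, the latency term swamps the transfer term, so jumping from 300 to 1000 Mbps barely moves the total. The 50 kB object size below is an illustrative assumption.

```python
# Total fetch time ≈ round-trip latency + size / bandwidth.
# For small objects, latency dominates and extra bandwidth barely helps.

def fetch_time_ms(size_bytes, bandwidth_mbps, latency_ms):
    transfer_ms = size_bytes * 8 / (bandwidth_mbps * 1e6) * 1000.0
    return latency_ms + transfer_ms

SIZE = 50_000  # a 50 kB web resource (illustrative)
for bw in (300, 1000):
    for lat in (10, 100):
        total = fetch_time_ms(SIZE, bw, lat)
        print(f"{bw:4d} Mbps, {lat:3d} ms latency -> {total:6.1f} ms total")
```

Tripling the bandwidth shaves less than a millisecond off this request, while cutting latency from 100 ms to 10 ms saves ninety: exactly the asymmetry the highway metaphor describes.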

In this sense, the metaphor of the digital highway takes on value: bandwidth would be the number of lanes available, while latency would resemble the state of traffic. You can have a six-lane highway, but if each vehicle takes a long time to react to traffic lights or has to go through unnecessary tolls, the total travel time skyrockets. This is precisely what happens in internet networks that prioritize volume over efficiency.

Countries such as South Korea and Japan, world leaders in connectivity, understood this distinction years ago. In their national strategies, they not only expanded coverage and speed, but also focused on reducing latency through infrastructure closer to the user, low-interference fiber optic networks, and distributed data center architectures.

However, in many countries in Latin America and other developing regions, the race is still focused on offering “more megabytes” without improving the conditions that allow those megabytes to translate into an efficient experience.

“Silence between packets also communicates”

One of the main problems facing latency as a concept is its invisibility in marketing. It cannot be sold as easily as megabytes. There are no promotions that say “guaranteed latency of less than 10 ms,” although such a figure would be decisive for a professional gamer or for a surgeon operating remotely with robotic instruments.

The general public, moreover, tends to confuse speed with download time, without noticing that many of the daily annoyances when using the Internet – messages that take time to send, calls that are interrupted, pages that seem to “hesitate” before loading – are due to high latency, not low bandwidth.

This misunderstanding is also perpetuated in regulation. Quality-of-service rules in many countries require providers to meet a minimum bandwidth, but they rarely set acceptable latency thresholds. This omission allows operators to comply with contractual figures without actually improving the user experience.

Even in corporate environments, where time is money, there have been cases where high-capacity networks generate disappointing results because they don’t consider latency. A fiber network connecting two offices can transfer large files in seconds, but if real-time communication between teams is hampered by 100-millisecond delays, collaboration suffers.

“When every millisecond counts”

In the world of online gaming, latency is no longer a technical term but an everyday concern. League of Legends, Call of Duty, or Fortnite players know that a latency greater than 50 ms can make the difference between victory and defeat. In this ecosystem, the famous ping – a measure of latency – has become part of the common vocabulary, and there is hardly a match in which players do not discuss whether the connection is “slow.”

In Latin America, for example, the term “lag” was popularized to describe those moments when the image freezes just as one is about to make a crucial move. In many of these cases, the problem is not that there is a lack of speed, but that there is too much latency.

Another emblematic case is that of telemedicine. With the rise of virtual consultations and robotic-assisted surgeries, especially after the pandemic, it became critical to maintain ultra-low latency. According to a study published in The Lancet Digital Health, latency greater than 250 ms can seriously compromise safety in remote surgical procedures.

Video calling platforms, such as Zoom or Google Meet, also faced this dilemma. During the peak of the pandemic, many of these platforms invested in regional servers and compression algorithms that reduced latency, given that a simple half-second delay in a conversation can completely desynchronize a meeting.

Not to mention the Internet of Things (IoT), where sensors connected in real time must make decisions instantly, as in an autonomous car detecting a pedestrian. In these types of contexts, low latency can literally save lives.

In conclusion, internet speed does not depend only on bandwidth, but rests, to a large extent, on a variable that is often ignored: latency. In a digital environment that demands instant responses, ignoring it is like building a highway without worrying about traffic lights. Understanding and valuing latency not only improves the user experience, but redefines what it really means to have a good connection.

References:

  • Carr, Nicholas. The Shallows: What the Internet Is Doing to Our Brains. W.W. Norton & Company, 2010.
  • Gettys, Jim. “Bufferbloat: Dark Buffers in the Internet.” ACM Queue, 2011.
  • “Latency in telesurgery: implications for safety.” The Lancet Digital Health, 2020.