
Internet data congestion: a solution to the bottlenecks that slow broadband speeds and video streaming

Developed by engineers at Nokia Bell Labs, BT and Cablelabs, L4S could solve the latency issues that interfere with video streaming or cause financial transfers to freeze

An unfortunate truth of road improvement schemes is the resulting increased efficiency merely accelerates traffic into the next bottleneck.

Many roads alternate clot and flow, shaped just like the proverbial string of sausages. Frequent traffic lights and roundabouts do slow road traffic and improve safety for pedestrians and cyclists. But the staccato of relentless start and stop increases driving stress and literally wastes energy in accelerating and then slowing each multitonne vehicle.

The internet is rather similar. You might have a very nice 25 Mbps broadband service to your home, or a healthy 5G connection to your smart device. However, there are usually numerous bottlenecks on the journey your data undertakes between you and the far side of the internet, wherever the remote website, streaming service or other cloud software you are using happens to be hosted. Sure, you might be able to upgrade, perhaps to a 100 Mbps service from your favourite internet provider, but that will only accelerate your traffic into the first bottleneck on its odyssey. The Stillorgan dual carriageway does a similarly fine job of efficiently delivering traffic to the back of the endless queue into the narrow conduit through Donnybrook.

In the case of the internet, what are these bottlenecks? There are multiple routers and switches, which steer traffic across the internet, as well as gateways which mediate traffic between different network operators, including across international boundaries. Simultaneous traffic for thousands of consumers and businesses can converge through the same pathways, and temporary spikes in demand lead to congestion. Every bottleneck separately adds to the overall delay in getting your traffic across the internet.


Network infrastructure devices not only temporarily queue data if congestion arises but may deliberately discard data under heavy load conditions. For audio and video streams, a small data loss is barely noticeable but, at higher losses, the streaming becomes choppy. For other applications, such as financial transactions, data loss is overcome by detecting gaps and requesting retransmission of the missing traffic. Higher loss rates lead to slower response times.
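The gap-detection idea is simple enough to sketch in a few lines of Python (a toy illustration with made-up names, not actual protocol code): each chunk of data carries a sequence number, and any hole in the numbering tells the receiver exactly what to ask for again.

```python
# Toy sketch of loss detection via sequence numbers (illustrative only,
# not real transport-protocol code). The receiver compares the sequence
# numbers it actually got against the full expected range; the
# difference is what must be retransmitted.

def find_missing(received_seqs, expected_count):
    """Return the sequence numbers that never arrived."""
    return sorted(set(range(expected_count)) - set(received_seqs))

# Chunks 2 and 5 were discarded somewhere along the path:
print(find_missing([0, 1, 3, 4, 6, 7], expected_count=8))  # [2, 5]
```

Each round-trip spent requesting and awaiting a retransmission adds delay, which is why higher loss rates translate directly into slower response times.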

The classic internet control algorithms have barely changed since their original design 40 years ago. They throttle traffic when a sender and receiver infer that congestion is happening somewhere between them. If traffic is flowing without loss from the sender to the receiver, the sender typically increases the rate at which it dispatches data. Equally, if losses are occurring, the sender reduces its transmission rate. Oscillating rates often result, with regular peaks and troughs, rather than a consistent flow. Worse still, the troughs may underestimate the available capacity and bandwidth, while the peaks then overestimate them. The overall result is that the latency – the average time it takes data to cross the internet – can vary considerably, depending on the concurrent traffic loads on the various arteries and intermediary devices in the internet infrastructure.
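That sawtooth behaviour can be captured in a toy simulation (a simplified additive-increase/multiplicative-decrease loop, not real TCP code, with an arbitrary link capacity assumed for illustration): the sender speeds up while no loss is seen, then halves its rate whenever it overshoots the link.

```python
# Toy simulation of the classic congestion response: additive increase
# while traffic flows cleanly, multiplicative decrease on loss.
# The result is the oscillating pattern of peaks and troughs
# described above, never settling on the link's true capacity.

LINK_CAPACITY = 100.0  # arbitrary units for illustration, e.g. Mbps

def simulate(steps=20):
    rate = 10.0
    history = []
    for _ in range(steps):
        if rate > LINK_CAPACITY:  # overshoot: queues overflow, packets are lost
            rate /= 2             # multiplicative decrease on loss
        else:
            rate += 10.0          # additive increase while loss-free
        history.append(rate)
    return history

print(simulate())
```

Running the sketch shows the rate climbing past the capacity, collapsing by half, and climbing again, with the troughs well below what the link could actually carry.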

Sometimes, unsteady oscillating data flow is barely perceptible to us as users. But as internet usage has grown, problems are increasingly surfacing. Latency delays can snarl multiparty video conferencing, disrupt game play in multiplayer online gaming and impact remote control of automated systems.

Internet architects have been working on improvements, but changing core control algorithms implemented on literally millions of devices worldwide, across a couple of hundred national jurisdictions and thousands of network operators, requires intricate intervention. Any adjustment can only be deployed to a comparatively small number of devices at a time, must coexist with older versions of the technologies and – most of all – must “do no harm”.

At last, though, there seems to be a breakthrough. Engineers at Nokia Bell Labs, British Telecom and Cablelabs (a not-for-profit collaborative research lab in Colorado), and others, have worked together for over 10 years on a viable improvement. The innovation, Low Latency, Low Loss, Scalable throughput (L4S), results in the senders of traffic adjusting their transmission rates but with minimal queuing within the network infrastructure. The approach includes a mechanism for downstream infrastructure equipment to explicitly warn senders of congestion. Importantly, it largely eliminates the oscillating rate behaviour of the traditional algorithms, thus matching the data flow to the actual transmission capacity of each network link.
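The key shift can be sketched in a few lines (an illustrative toy with invented names, not the L4S specification itself): instead of dropping packets when a queue builds, the network sets a congestion flag on them, and the sender trims its rate in proportion to the fraction of flagged packets, a small steady correction rather than a drastic halving.

```python
# Illustrative sketch of explicit congestion marking (not actual L4S
# code). The network marks packets instead of discarding them, and the
# sender responds proportionally: a mild trim under light congestion,
# a gentle probe upward when the path is clear.

def sender_adjust(rate, marked_fraction, step=5.0):
    """One control interval: scalable response to congestion marks."""
    if marked_fraction > 0:
        # Reduce in proportion to how many packets were marked,
        # rather than halving at the first sign of trouble.
        return rate * (1 - marked_fraction / 2)
    return rate + step  # no marks seen: probe gently for spare capacity

print(sender_adjust(100.0, marked_fraction=0.1))  # mild congestion: 95.0
print(sender_adjust(100.0, marked_fraction=0.0))  # clear path: 105.0
```

Because the correction is proportional and continuous, the send rate hovers close to the link's capacity instead of sawing above and below it, which is what keeps queues, and therefore latency, low.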

L4S can coexist alongside the traditional internet control mechanisms, without making their performance any worse. Adopting the innovation requires updates to software in the systems connected to the internet – your smartphone, laptop and gaming computer, as well as the cloud and server systems to which they connect. It also requires software updates to the myriad routers, switches and gateways of the internet infrastructure.

However, it can be deployed gradually, and at different rates by different manufacturers and operators. Apple has introduced L4S into its latest iPhone, laptop and desktop operating system releases. Comcast, Deutsche Telekom, Ericsson, Google, Nvidia and Valve have all also announced support.

So, when your broadband operator next tries to convince you to upgrade your package, ask them whether they implement L4S across their entire network.