
Streaming Latency: What Is It And When Does It Matter?

Published on: March 2, 2021
Written by:
Johan Vounckx

Latency, and in particular low and ultra-low latency, is becoming increasingly important in video streaming. New developments like LL-HLS, CMAF-CTE and HESP confirm and support this trend, alongside established streaming protocols such as WebRTC and RTMP. With all this technology and the different notions of latency in play, it can be difficult to see the forest for the trees. So let us start with a few, all equally valid, definitions of latency.

This is a snippet from our "A Comprehensive Guide to Low Latency" guide, which you can download here.

Different definitions of latency

Glass-to-glass or End-to-End Latency


Figure 1 - Latency between broadcaster and viewer

The most straightforward definition of latency is the so-called glass-to-glass latency, or end-to-end latency. That is the time between the moment an action happens in front of the first glass (the camera lens) and the moment a viewer sees that action on their screen (the other glass). This definition of latency is especially useful for the streaming of live and interactive events.
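As a back-of-the-envelope illustration (a hypothetical sketch, not THEO code), glass-to-glass latency can be modelled as the sum of the delays introduced by each stage of the streaming pipeline. The stage names and millisecond values below are purely illustrative assumptions, not measurements:

```python
# Hypothetical model: glass-to-glass latency as the sum of pipeline stage
# delays. All stage names and values are illustrative, not measured data.

PIPELINE_DELAYS_MS = {
    "capture": 30,             # camera sensor read-out and A/D conversion
    "encode": 500,             # encoder lookahead and processing
    "package_and_cdn": 1000,   # segmentation, origin and CDN delivery
    "player_buffer": 4000,     # buffer the player keeps to absorb jitter
    "decode_and_render": 50,   # decoding and display on screen
}

def glass_to_glass_ms(delays: dict[str, int]) -> int:
    """Glass-to-glass latency is simply the sum of all stage delays."""
    return sum(delays.values())

print(glass_to_glass_ms(PIPELINE_DELAYS_MS))  # total end-to-end delay in ms
```

In a model like this, the player buffer typically dominates, which is why low-latency protocols focus so heavily on shrinking it.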

Protocol, Startup and Channel Change (Switch Latency)


Figure 2 - Latency between broadcaster and viewer

A second definition is the so-called protocol latency. That is the latency between the output of the encoder and the actual playback. This latency is interesting for low latency applications where we do not want to compromise on encoder quality. We also have the startup and channel change latency: the time it takes to start a video or to change channels once the command has been given. This is an important parameter for streaming video applications that want to provide a lean-back TV experience, where viewers are used to changing channels instantaneously at the push of a button.
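Startup and channel change latency can be measured from the player side. The sketch below is a hypothetical illustration of that idea: the `issue_play` and `wait_for_first_frame` callables are made-up placeholders standing in for a real player's play command and first-frame event, not an actual player API:

```python
import time

# Hypothetical sketch of measuring startup (or channel change) latency:
# timestamp the moment the play command is issued, then the moment the
# first frame is rendered, and report the difference in milliseconds.
# issue_play and wait_for_first_frame are placeholders for real player hooks.

def measure_startup_ms(issue_play, wait_for_first_frame) -> float:
    t0 = time.monotonic()
    issue_play()           # e.g. the user presses play or changes channel
    wait_for_first_frame() # blocks until the first frame appears on screen
    return (time.monotonic() - t0) * 1000.0

# Usage with a simulated player whose first frame takes ~50 ms to appear:
startup_ms = measure_startup_ms(lambda: None, lambda: time.sleep(0.05))
```

Measuring from the user's command rather than from the network request matters: the viewer's perception of a "lean-back" experience starts at the button press.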

When does latency matter?

In the table below we indicate which latency really matters for five typical use cases:

| Use case | Startup time | Channel change time | Protocol latency | Glass-to-glass latency | Description |
|---|---|---|---|---|---|
| Broadcast | +++ | +++ | +++ | | Protocol latency is important to ensure simultaneous arrival on the main screen and on OTT devices. Startup and channel change times are crucial to ensure a lean-back TV experience and to ensure that people stick with the service. |
| VoD | +++ | | | | Playback needs to start rapidly. VoD user interfaces are designed not to need fast channel changes. Protocol and glass-to-glass latency are not important. |
| Live Events | ++ | (++) | Implicit | +++ | Glass-to-glass latency is crucial. Startup latency is important, as for every video service. Channel change is important for large events with multiple stages or multiple cameras. |
| Video Call | ++ | (++) | Implicit | +++ | Glass-to-glass latency is the major criterion for video calls (and even more so for the audio). |
| Interactive Events | ++ | (++) | Implicit | +++ | Glass-to-glass latency is crucial for interactive events (though the requirement is mostly slightly less strict than for video calls). Channel change is important for setups with multiple concurrent interactive events. |

Table 1 - The Importance of Latency in Different Use Cases

What causes latency?

As the table above shows, the relative importance of the different streaming latencies depends on the use case. To get the full picture, it is important to understand the different factors contributing to these streaming latencies. In our next blog post, we will explore where latency gets introduced into the stream. You can also download the complete version of this topic in our "A Comprehensive Guide to Low Latency" guide here.

Want to talk to us about (low) latency? Contact our THEO experts.

Contact us

Want more information on one of our THEO solutions? Get in contact with us today.