
Streaming Latency: Protocols and Zapping Time

Previously, we explored the different definitions of latency, when it matters, and what causes it. In this blog we will explore how popular streaming protocols fare against each other, and we will discuss one other factor that plays a big role in providing the best viewing experience.

This is a snippet from our "A Comprehensive Guide to Low Latency" guide which you can download here.

Latencies of different protocols

Each streaming protocol comes with a different glass-to-glass (protocol) latency. The figure below gives an impression of the capabilities of the different protocols:

Figure 1 - Streaming Latency Continuum

We see that the use of the traditional DASH and HLS protocols leads to large latencies. These latencies can be reduced by shortening the segments, but they remain high because a segment is handled as an atomic piece of information: segments are created, stored and distributed as a whole. LL-DASH and LL-HLS overcome this problem by allowing a segment to be transferred piece-wise. A segment does not need to be completely available before the first chunks or parts of the segment can be transferred to the client for playback, which significantly improves the latency.
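As a rough illustration of this piece-wise transfer, the sketch below consumes an in-progress segment over HTTP and hands each chunk onwards as soon as it arrives. The URL and the onChunk callback are hypothetical; a real player would resolve the segment URL from the manifest or playlist and append the bytes to a Media Source Extensions buffer.

```typescript
// Minimal sketch: reading an in-progress segment chunk by chunk.
// The URL is hypothetical; real players resolve it from the manifest.
async function streamSegment(
  url: string,
  onChunk: (bytes: Uint8Array) => void
): Promise<void> {
  const response = await fetch(url);
  if (!response.ok || response.body === null) {
    throw new Error(`Segment request failed: ${response.status}`);
  }
  const reader = response.body.getReader();
  // Chunks arrive while the packager is still producing the segment,
  // so playback can begin before the segment is complete.
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(value);
  }
}

// Example: log how quickly data starts flowing.
streamSegment("https://example.com/live/segment-42.m4s", (bytes) => {
  console.log(`received ${bytes.length} bytes`);
});
```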

For ultra-low latency, approaches are needed that allow for a continuous flow of images that are transferred as soon as they are available (rather than grouping them in chunks or segments). This can be done using WebRTC and HESP (HESP Webinar & Whitepaper), THEO Technologies’ next-generation streaming protocol. HESP uses Chunked Transfer Encoding over HTTP, whereby the images are made available to the player on a per-image basis. This ensures that images are available at the client for playback extremely rapidly after they are generated.
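The server-side sketch below shows only the underlying HTTP mechanism, not HESP itself: because the response carries no Content-Length, Node replies with Transfer-Encoding: chunked, and each res.write() pushes one encoded image to the client the moment it exists. The getNextEncodedImage stub is a hypothetical stand-in for a live encoder.

```typescript
import { createServer } from "node:http";

// Hypothetical stand-in for a live encoder: yields one dummy "encoded image"
// every 40 ms (25 fps). A real origin would hand over actual video frames.
async function getNextEncodedImage(): Promise<Uint8Array> {
  await new Promise((resolve) => setTimeout(resolve, 40));
  return new Uint8Array(1024); // placeholder payload
}

// Because no Content-Length is set, Node responds with
// Transfer-Encoding: chunked, so every res.write() is flushed
// to the client as soon as the image has been produced.
createServer(async (req, res) => {
  res.writeHead(200, { "Content-Type": "video/mp4" });
  while (!res.destroyed) {
    res.write(await getNextEncodedImage());
  }
}).listen(8080);
```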

What about zapping time?

The zapping time is also important. For DASH and HLS this is a trade-off with the latency, since playback can only start at segment boundaries. A player therefore needs to choose between waiting for the most recent segment to start, or starting playback of an already available segment. When latency is not critical, this allows for a very fast startup; when latency is critical, there is a penalty on the startup time.
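A back-of-envelope sketch of this trade-off, with purely illustrative numbers:

```typescript
// Back-of-envelope sketch of the zapping/latency trade-off for segmented
// protocols. Numbers are illustrative, not measurements.
function joinOptions(segmentDurationS: number) {
  return {
    // Play the last fully available segment: near-instant startup, but the
    // content shown is at least one full segment behind the live edge.
    playAvailableSegment: { startupDelayS: 0, extraLatencyS: segmentDurationS },
    // Wait for the newest segment to become playable: latency stays minimal,
    // but on average half a segment passes before playback can begin.
    waitForLiveEdge: { startupDelayS: segmentDurationS / 2, extraLatencyS: 0 },
  };
}

console.log(joinOptions(6));
// With 6 s segments: instant start at >= 6 s extra latency,
// or roughly 3 s of startup delay while staying at low latency.
```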

WebRTC allows for a shorter zapping time, but is still bound to GOP (group of pictures) boundaries.

HESP allows for ultra-low start-up and channel change times, without compromising on latency. HESP does not rely on segments as the basic unit to start playback; it can start playback at any image position. As explained in (reference), HESP uses range requests to tap into the stream of images that is made available for distribution as soon as the images are created.
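The sketch below shows the general range-request pattern (the URL, byte offset and decoder hand-off are all hypothetical): an open-ended Range header asks the server for everything from a given byte position onwards, including bytes that only come into existence after the request was made.

```typescript
// Sketch of tapping into a growing resource with an open-ended range request.
// URL and offset are hypothetical; a real player derives the offset from the
// position of the image it wants to start from.
async function tapIntoStream(url: string, byteOffset: number): Promise<void> {
  const response = await fetch(url, {
    headers: { Range: `bytes=${byteOffset}-` }, // open-ended: "from here onwards"
  });
  if (response.status !== 206 || response.body === null) {
    throw new Error(`Range request failed: ${response.status}`);
  }
  const reader = response.body.getReader();
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    feedToDecoder(value);
  }
}

// Hypothetical hand-off to the playback pipeline.
function feedToDecoder(bytes: Uint8Array): void {
  console.log(`decoding ${bytes.length} bytes`);
}

tapIntoStream("https://example.com/live/stream.hesp", 1_000_000);
```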

Low latency and fast zapping are fine, but scalability is equally important, especially for video services reaching tens or hundreds of thousands of concurrent viewers. HTTP-based approaches (DASH, LL-DASH, HLS, LL-HLS, HESP) have an edge over WebRTC: they ensure the highest possible network reach, can tap into a wide range of efficient CDN solutions, and achieve scalability with ordinary file servers. WebRTC, on the other hand, relies on active video streaming between server and client and supports far fewer viewers on an edge server compared to a regular CDN edge cache.

In our next blog post, we will talk about low latency use cases and real-life examples. You can also download the complete version of this topic in our “A COMPREHENSIVE GUIDE TO LOW LATENCY” guide here.

Want to talk to us about streaming latency? Contact our THEO experts.
