Streaming Latency: Low Latency Use Cases and Real-Life Examples

Previously, we’ve explored the different definitions of latency and when it is important, what causes it, as well as how popular protocols fare against each other and the importance of zapping time. Now, we are going to take a look at different use cases for low latency and provide some real life examples to conclude this blog series.


Low latency use cases


VoD

VoD traditionally focuses on delivering the highest possible quality for the lowest number of bits. Fast startup is the only latency metric that really impacts the user experience. Besides adopting the right streaming approach, user interfaces employ latency-hiding techniques to give viewers the impression of instantaneous startup. These include prefetching the video or starting at lower qualities, so that the video is transferred faster and starts playing earlier.
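The "start at a lower quality" technique can be illustrated with a small sketch. The rendition ladder, field names and bitrates below are hypothetical, not taken from any real player API:

```python
# Illustrative sketch: pick the lowest-bitrate rendition for a fast first
# frame, then let adaptive bitrate logic ramp quality up afterwards.
# Rendition list, field names and numbers are hypothetical.

def pick_startup_rendition(renditions):
    """Return the rendition with the lowest bitrate (fastest to download)."""
    return min(renditions, key=lambda r: r["bitrate_kbps"])

ladder = [
    {"name": "1080p", "bitrate_kbps": 6000},
    {"name": "720p",  "bitrate_kbps": 3000},
    {"name": "360p",  "bitrate_kbps": 800},
]

start = pick_startup_rendition(ladder)
# A 4-second segment at 800 kbps is ~0.4 MB, which downloads far faster
# than the ~3 MB 1080p segment, so playback can begin sooner.
segment_bytes = start["bitrate_kbps"] * 1000 / 8 * 4
print(start["name"], int(segment_bytes))
```

The same idea underlies prefetching: the smaller the first chunk of data needed before playback, the shorter the perceived startup time.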


Broadcast

Broadcast traditionally focuses on quality of experience for a large audience. This calls for longer encoding times to ensure the best possible visual quality for a given bandwidth budget, but also for fast start-up and channel-change times and for scalability. Latency is becoming increasingly important as well, driven by the desire to have playback on online devices occur at the same time as the existing broadcast distribution.

As a result, this industry is gradually moving to shorter segment sizes, and to LL-DASH and LL-HLS. This still does not combine well with fast channel change, however, because a trade-off has to be made between start-up and channel-change times on the one hand and latency on the other.
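The segment-size trade-off can be made concrete with a back-of-the-envelope sketch. The buffer depth and segment durations below are illustrative assumptions, not measurements:

```python
# Back-of-the-envelope sketch of the segment-size trade-off.
# All numbers are illustrative assumptions.

def live_latency(segment_s, buffered_segments=3):
    """Rough live-edge latency for classic segmented streaming: players
    typically buffer a few full segments behind the live edge."""
    return segment_s * buffered_segments

for seg in (6, 2):
    print(f"{seg}s segments -> ~{live_latency(seg)}s behind live")
# Shorter segments cut live-edge latency (18s -> 6s in this sketch), but a
# channel change still costs at least one segment fetch plus decoder
# start-up, so start-up time and latency cannot both be minimized freely.
```

This is why LL-DASH and LL-HLS go further than just shrinking segments: they deliver partial segments (chunks/parts) so the player can start closer to the live edge without waiting for a full segment.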

Live Event Streaming

Live event streaming critically depends on low glass-to-glass latency, and traditional HLS and DASH are not satisfactory here. Live event organizers have therefore turned to WebRTC, which works fine for small audiences, but scaling WebRTC is costly. Consequently, organizers targeting mass audiences are looking at LL-DASH and LL-HLS when they can afford the increased latency. HESP brings an answer here: slightly higher latencies than WebRTC, but the same scaling characteristics as any HTTP-based approach, largely outperforming WebRTC on scalability.
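The scaling difference between WebRTC and HTTP-based delivery comes down to caching. A rough sketch of origin load under each model (edge counts and the simplified fan-out assumptions are hypothetical):

```python
# Hedged illustration of why HTTP-based delivery (LL-HLS, LL-DASH, HESP)
# scales more cheaply than WebRTC: with a CDN, origin load grows with the
# number of edge caches, not the number of viewers. Figures are hypothetical.

def origin_streams(viewers, edges=50, webrtc=False):
    """Approximate number of streams the origin must serve."""
    if webrtc:
        # WebRTC media servers still forward roughly one stream per viewer,
        # so server capacity must grow linearly with the audience.
        return viewers
    # HTTP: each edge cache fetches the stream once and serves many viewers.
    return min(viewers, edges)

for audience in (100, 1_000_000):
    print(audience,
          "webrtc:", origin_streams(audience, webrtc=True),
          "http:", origin_streams(audience, webrtc=False))
```

In this simplified model, a million HTTP viewers cost the origin no more than a few dozen, while WebRTC capacity must track the audience size, which is why HESP's HTTP-based delivery keeps CDN economics at sub-second latency.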

Real-life examples

  • THEO reaches between 1 and 3 seconds of latency with LL-DASH and LL-HLS, depending on the player and stream configuration. In the past few months, THEO engaged with several customers worldwide, both for PoCs and for real deployments, reaching latencies of around 2 seconds in real-life conditions.
  • Synamedia, Fastly and THEO set up an end-to-end demonstrator with HESP, reaching a virtually unlimited number of viewers with sub-second protocol latency and zapping times well below 500 ms.

Figure 1 - Synamedia, Fastly and THEO HESP Demonstration


  • WebRTC is being used for video conferencing tools such as Google Meet.
  • YouTube Live brings live content with a delay of several seconds.
  • Wowza has a hybrid system for live events, using WebRTC for a limited number of ultra-low latency critical participants and LL-HLS for the rest (LL-HLS Webinar with THEO and Wowza).


Different applications come with different low latency expectations.  

Existing technologies are typically designed to cover a range of latency needs. For stringent latency requirements (down to a few seconds) we need LL-HLS and LL-DASH. Sub-second latency at scale is made possible by HESP, the High Efficiency Streaming Protocol. WebRTC is capable of achieving even lower latencies, but often at the expense of quality of experience, and falls short when it comes to scalability to a large number of viewers.
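The latency ranges discussed across this series can be summarized as a small lookup. The exact numeric bounds below are approximate, illustrative figures consistent with the article's claims, not benchmark results:

```python
# Approximate latency ranges per protocol family, in seconds.
# Figures are illustrative, consistent with the ranges discussed above.

TYPICAL_LATENCY_S = {
    "HLS/DASH":       (10, 30),   # classic segmented streaming
    "LL-HLS/LL-DASH": (1, 3),     # low-latency extensions, down to a few seconds
    "HESP":           (0.3, 1),   # sub-second at HTTP scale
    "WebRTC":         (0.1, 0.5), # lowest latency, hardest to scale
}

def protocols_meeting(target_s):
    """Return protocol families whose typical lower bound can meet a target."""
    return [p for p, (low, _high) in TYPICAL_LATENCY_S.items() if low <= target_s]

print(protocols_meeting(0.5))
```

A sub-second target narrows the field to HESP and WebRTC, with HESP preferred when the audience is large; a few-seconds target opens the door to LL-HLS and LL-DASH as well.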

Any questions left? Don’t hesitate to reach out to our team. You can also download the complete version of this topic in our “A COMPREHENSIVE GUIDE TO LOW LATENCY” guide here.

Want to talk to us about (low) latency? Contact our THEO experts.
