Webinar

Delivering Reliable Low Latency Live Video


Watch the recording and learn how to deliver reliable low latency video

In this session, THEO Technologies, together with Videon and Mux, share insights about:

  •  Implementing Apple Low Latency HLS in the real world.
  •  Solutions to improve HLS latency at scale.
  •  Camera-to-screen latency nuances.
  •  How to reach all connected devices with low latency HLS.
  •  An exclusive End-to-End Solutions Demo.
 

Webinar transcript

Introduction

Lionel: Good morning, good afternoon, good evening, because we have a truly global audience today. Thanks for joining us for this new webinar about live video and redefining low latency with reliable, globally scalable workflows. And I say globally because, as we were just chatting about, we have people joining from all over the world.

Let's start with some introductions, and you'll see we have a global panel today as well. My name is Lionel Bringuier, I run the product management team at Videon, and I'm really, really happy to host this webinar together with Ashok from Mux and Pieter-Jan from THEO Technologies.

I will just do a very quick intro of Videon: we provide a live edge computing platform, and what we call the edge is the edge close to the source, close to the camera. We have the EdgeCaster, a small-form-factor device where you can connect SDI or HDMI inputs; we do the transcoding, and we can run applications at the edge as well. And we think we have a great solution for low latency in conjunction with our partners at Mux and THEO. I will pass it over to Ashok to tell us what Mux does.

Ashok: Yeah, thank you, Lionel. Hello, everyone, nice to meet you all, and thanks for joining the webinar. My name is Ashok Lalwani, Senior Product Manager at Mux, and I'm super excited to do this webinar. Before I pass on to Pieter, a little bit about Mux and what we do: in a nutshell, Mux is a video infrastructure provider for developers. Our core mission is to democratize video by solving the hard problems for developers, so every company doesn't need to solve the same problems again and again.

And as you all know, it's no surprise that video is in every application today. So much so that watching video is the third most performed activity, after sleeping and working. And yet streaming hundreds of videos to millions of viewers is not easy at all; it's incredibly hard, and Mux is determined to solve that problem. The same way Stripe did it for payments in finance and Twilio did it for messaging, we want to do it for video. And how we do this: you can host, process, and deliver videos all using a single API, and video is ready for playback in seconds. We want to apply that to everything we do, whether it's live streaming or on demand. And since we're talking about low latency today, I'll give you a quick demo of how you can get a live stream and enable it for low latency using a single API with Mux. So that's Mux at a very high level. Let me pass on to Pieter for his introduction.

Pieter-Jan: Yes. Well, hello and welcome everybody as well. THEO, of course, is more on the playback side. So we're not doing the hard stuff on the edge like Videon, or distributing the content at global scale. With THEOplayer, we focus on video playback. Just making video play across all devices is already incredibly hard, let alone getting the video there; making it play across basically every possible device with a screen is what we focus on most. We've done a lot of work on getting playback at the right quality and making sure there's always the highest possible performance, and for us that of course includes making sure it plays at low latency. We've done a lot of work on this over the past years — some of you may have heard me go on and on about low latency in past webinars. So I'm very excited to showcase what we've been doing together with Videon and Mux. Let's get started.

Lionel: All right, thank you everybody. So yeah, let's get into it and talk about the real problem here. It's exactly what Pieter-Jan just said: it's all about latency. We used to watch video mostly on TV screens, and for years and years we were used to a low latency experience: with digital broadcast, end-to-end latency was in the six-to-seven-second range. Moving to connected devices — to video over IP — there have been many phases of getting video to connected screens, and the latest evolution is delivery over HTTP with technologies like HLS (what I would call classic HLS) or DASH, where the latency was much higher than what we were used to with broadcast TV.

So, the whole point is: how can we make that better? Are there technical solutions today that can make latency as good as broadcast TV, or better? The answer is yes, there are, and we will show you that we can achieve that. But we also have to be very careful about how we measure and monitor latency, because as you'll see, there are several steps in the overall streaming chain, and we want to make sure we not only have good performance at every level, but can also monitor that performance and see what needs to be tuned or fixed when you need to troubleshoot.

And there is also a cost. When you stream an event over the internet, it isn't free. Depending on the technology you're using, the workflow, your audience, and the resolution and quality of the streams you want, you will have to manage the balance between higher quality, higher cost, and higher or lower latency. I will show you some details about that during the demo, because we will really build today's webinar around the demo we can do together.

Quick housekeeping notes. If you haven't seen it, we have a chat window in the webinar UI, and we have polls — please answer the polls. It's going to be interesting, and we will show the poll results at the end of the webinar. Please ask questions in the chat window; there are no stupid questions, so ask anything you want. Depending on the time we have, we will answer the questions at the end of the webinar.

Exploring Low Latency HLS (LL-HLS)

Lionel: So, the technology we'll be talking about today is LL-HLS, LL standing for Low Latency, HLS for HTTP Live Streaming. And we're very fortunate here, because Pieter-Jan has been working in that field for years, making HLS faster and lower latency. So, Pieter, I will hand it over to you. Can you tell us a little bit about LL-HLS — what it is and what we use it for?

Pieter-Jan: Yeah, absolutely. I don't remember exactly which year we started looking at low latency HLS itself, but at THEO we've been doing HLS for a very long time. To understand what low latency HLS is, let me first explain a little bit what HLS itself is. HLS is a very simple protocol: you take an entire video feed, you chop everything up into very small parts, and you list them in what used to be a simple playlist file. So you list every piece of video. The downside, of course, was that the size of those chunks greatly impacted the latency, because the spec effectively required you to keep about three of those segments buffered on the player side.

With low latency HLS, Apple actually flipped that around. They announced it back in 2019 — they had already been working on it well before that, because they saw the need to reduce latency. But low latency HLS is actually relatively simple: you have a normal HLS stream, but instead of using segments, which are bigger chunks of video data, you chop it up into smaller parts. And that's about it — that's the most important piece of low latency HLS, at least. There are all kinds of extras: blocking playlist reloads, preload hints that pre-announce which data will become available next. A lot of improvements, each of them shaving off a little more latency, so if you really want to implement it well, you do have to implement all of the parts. But I won't spoil how all of that works, because it's a very interesting specification to read for those of you who want to.
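
To make that concrete, here is a simplified, hand-written sketch of what a low latency HLS media playlist looks like, with two-second segments split into one-second parts and the server-control and preload-hint tags Pieter-Jan mentions; the URIs and timestamps are invented for illustration:

```
#EXTM3U
#EXT-X-VERSION:9
#EXT-X-TARGETDURATION:2
#EXT-X-SERVER-CONTROL:CAN-BLOCK-RELOAD=YES,PART-HOLD-BACK=3.0
#EXT-X-PART-INF:PART-TARGET=1.0
#EXT-X-MEDIA-SEQUENCE:100
#EXT-X-PROGRAM-DATE-TIME:2021-06-01T17:00:00.000Z
#EXTINF:2.000,
segment100.m4s
#EXT-X-PART:DURATION=1.000,INDEPENDENT=YES,URI="segment101.part0.m4s"
#EXT-X-PART:DURATION=1.000,URI="segment101.part1.m4s"
#EXTINF:2.000,
segment101.m4s
#EXT-X-PART:DURATION=1.000,INDEPENDENT=YES,URI="segment102.part0.m4s"
#EXT-X-PRELOAD-HINT:TYPE=PART,URI="segment102.part1.m4s"
```

A player that implements the spec polls this playlist with the _HLS_msn and _HLS_part query parameters, and the server holds the response until the requested part actually exists — that is the “blocking playlist reload” referred to above.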

What's most important today is that low latency HLS is actually quite available. There is very good player support at this point in time. We implemented the first player — we launched a few days before Apple did, when they shipped pretty good official support in their latest iOS versions. There are ExoPlayer versions, and HLS.js and Video.js have betas and experimental flags that let you start doing this as well. And the important thing is what Lionel mentioned: it's not about reducing the latency to some arbitrarily small number. It's usually about beating broadcast latency. There are reasons to go even lower, but low latency HLS gets you into that broadcast area of latency — let's say six to eight seconds, the usual broadcast latencies — and it can go a bit lower than that. It really depends on how you optimize your tool chain, but this is definitely within the realm of possibility at this point in time. And I think that's the great benefit low latency HLS brings: no more half-a-minute kind of latency. That's the most important part here.

Lionel: Yeah, thanks, Pieter-Jan. I think you just nailed it. It's really not about providing the lowest latency achievable; it's about beating broadcasters and being able to do that at very large scale, because it's still all HTTP-based. So you can stream to audiences of millions of concurrent viewers without stressing the CDNs too much, and without paying a fortune for scaling technologies other than pure HTTP.

We asked a poll question about using LL-HLS, and about a third of the audience responded. The answers: 68% have not used LL-HLS yet, 26% have used it, and 6% asked "what's that?" — and I think Pieter-Jan just answered that question.

Analyzing Latency in Streaming Protocols: From HLS to LL-HLS

Lionel: So let me go to the next slide, which tries to visually represent what we're achieving with LL-HLS in terms of latency. HLS was really the very first protocol to enable live streaming over HTTP at very large scale. Then MPEG-DASH arrived a few years later, to provide something that was not fully controlled by Apple, as HLS was in its early days.

The problem is that the end-to-end latency is around half a minute. We're talking about a minimum of 24, 25, 30 seconds from glass to glass — from the camera to the iPhone, iPad, laptop, or connected TV you're using. One of the reasons: if you look at the Apple spec, the recommended HLS segment size is still six seconds, meaning each little chunk of chopped-up video is six seconds long. In the early days of HLS they talked about 10 seconds, which meant even more latency, because you have to produce those chunks of video, deliver them through the CDN, and play them back. And on the player you need a buffer, because it's all IP streaming: HTTP is kind of reliable, but it wasn't designed for real-time or live applications. So it's a bit of a hack — you need buffering on the player, and with a buffer of one or two segments you add 6 to 12 seconds on the player side alone.
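
To make those numbers concrete, the back-of-the-envelope arithmetic (illustrative only) goes like this: with the recommended six-second segments, the packager can only publish a segment once it is complete (+6 s), the player keeps roughly three segments buffered per the classic HLS rule (+18 s), and encoding plus CDN propagation add a couple of seconds more — which is how classic HLS ends up in the 24-30 second glass-to-glass range.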

So, on that slide, you can see the scale runs linearly from 0 to 7 seconds and then becomes a bit logarithmic, because if I represented 24 or 30 seconds linearly it would run off the screen. But again: broadcast TV is, in most cases, about six to six and a half seconds of latency. With standard HLS, we're talking about half a minute. Now, with LL-HLS, we can meet or beat broadcast latency.

And that's really what we're going to show in the demo today. To give you a little overview of where the latency sits on the encoding side: we need to get the source content encoded first, from an HDMI or SDI input. Encoding latency depends on the parameters — the resolution, frame rate, and quality you want to achieve. At Videon, our encoding latency goes from 150 milliseconds to usually 250, with a maximum of 300-400 milliseconds. In today's demo we'll send one 1080p stream over to Mux; Mux will re-encode it and do all the LL-HLS packaging in their Mux Video SaaS application, which also includes the delivery part, the CDN, as part of the service. And then it's sent to THEO's player, which also needs to do all the buffering and decoding.

So that's the overview, to give you an idea of where the most important opportunities to optimize latency are. We have many parameters we can tune at each level, and we'll show you what we can do in a few seconds. Ashok, Pieter-Jan, anything you want to add on this slide?

Pieter-Jan: It's indeed as you mentioned: it's all in the parameters. You can make this as efficient or as inefficient as you want, even with normal HLS. In theory, yes, you can reduce it from the 20-30 seconds you mentioned, but you will never get as low as you can with low latency HLS. And even with low latency HLS, if you configure it wrong, you can be slower than other HLS implementations. So it's really about knowing what you're doing, and about tuning and testing end to end. That's been our biggest lesson, at least.

Lionel: Thank you. So yeah, choose the right solution. As you just highlighted, it's really important to have the right partners and a best-of-breed approach. It's also really important to see how fast you can deploy, because you can always tune a little more, you can always optimize the workflow. Two months ago I was attending an e-sports conference, and what was really interesting is that large e-sports events use hundreds of cameras, and everyone was saying the same thing: there is a huge change in how fast we want to deploy video workflows — in days, not weeks. Sports as a whole has always generated a lot of audience and a lot of revenue, and moving from traditional physical sports to e-sports, we see the same latency challenge, because you don't want to hear your neighbours cheering for a touchdown and only see it 25 or 30 seconds later on your screen. So there is a latency challenge, but there is also a challenge of deploying fast. And that's what we're going to show in the demo today.

Live Demonstration of LL-HLS Workflow

Lionel: The demo is very simple. We will get an HDMI source into the Videon EdgeCaster — again, this little device does the initial transcoding. We will send it over to Mux through RTMP, and I will show you that we have a pre-integrated UI with Mux. Mux will do the LL-HLS packaging and send it over to THEO for playback.

All right, let me go to my demo here. I hope it's not too small on your screen. This is the UI of the EdgeCaster, the LiveEdge OS provided by Videon. I have an HDMI source — I'm sure nobody has seen this movie before, it's called Big Buck Bunny, it's brand new, I've never seen it before, quite funny with a fat rabbit. We could switch to an SDI input if we wanted to. So that's our input, and as you can see, there is a timestamp embedded in the source: the HDMI input has this timestamp burnt in so we can do some latency measurements, and we have external tools to show it as well. Then on the EdgeCaster you define your encoding profile. For this demo it's going to be 1080p, classic full HD resolution, at 30 frames per second, encoded at 5,000 kbps with H.264 (AVC), high profile. We can select all the encoding parameters and choose the best trade-off between quality and latency — basically everything you'd expect on an encoder. We can do the same for the audio track, selecting its bitrate. And basically, we're encoding from the HDMI source to RTMP.

And that is where it gets interesting, because the LiveEdge platform is pre-integrated with multiple technology providers; in this case, we use Mux, which takes an RTMP input. In different workflows, we could do multi-bitrate encoding — a full HLS/DASH ladder — and host the origin server on the LiveEdge unit, or push to a different origin server. We could do UDP TS unicast or multicast, SRT; we have multiple protocols embedded.

But here we use RTMP. As I was telling you, we could use a generic RTMP endpoint, but we have this integration with Mux where we can log into Mux and select all the parameters. I will do that live: if you have your token ID and its secret, it's extremely easy to get access to all the streams, and then I can select the stream I want to use — that's the one we're using here.

Ashok: You can use the one which starts with a K.

Lionel: One which starts with a K. I cannot use it. I cannot select it. I don't know why. Let me reload it.

Pieter-Jan: This is why you never do a live demo, Lionel.

Lionel: I like it when it's dangerous. I like it when we show the real thing. So, I can select KKZZ, which is the one we have; status is streaming. Let me see — I can see the streaming bandwidth getting back to a normal level. This is what we have on the LiveEdge unit: the ability to run applications in Docker containers, which makes it extremely extensible because we can run any kind of Docker application. Here we have a node exporter with a Grafana dashboard. On the Grafana dashboard I can see it's been up for one week, and my bandwidth is five megabits per second, which is what I'm expecting. I can see it streamed 2.44 gigabytes in the past hour, which is about right, because we have five megabits per second times 3,600 seconds. Over the past 24 hours, we have 24 times 2.44 gigabytes — so it's streaming. I stopped the stream here, and I can see the impact, if there was any, on the memory — but it's just a very small stream; you can see the CPU was a little more idle here. And if everything is going right, I should be able to play it back. Reloading it.

Pieter-Jan: But I think you have the wrong streaming endpoint, personally. I don't think that's the right streaming URL yet.

Lionel: All right, so let me fix that. Yeah, I'm streaming on AF, not KKZZ. So let's do that here. You can see it's extremely easy to switch from one stream to another. It's streaming. Seems to be streaming.

Pieter-Jan: And it's actually back if you refresh. So that's good.

Lionel: It's back. Yeah, we're not faking it, it's a real demo. All right. So, we have the stream back, and we can do some quick measurements of the end-to-end latency — I will leave the details to Pieter-Jan. But we are at about five, six seconds if you look at the timestamps and the latency measured here; it starts at 8 seconds and gets lower as it optimizes. I will hand it over to Ashok to show us how it's going in the Mux realm. Can you share the screen, Ashok? Or do you want me to?

Ashok: Yep, give me a sec. Can you all see my screen? All right.

Configuring Low Latency with Mux

Ashok: Good. All right, so before we jump into showing you all the different things Mux is doing behind the scenes, I just want to quickly recap what we're doing for low latency — how low latency is configured with Mux when you create a live stream with it enabled. Right now, as you can see, the low latency feature is available in beta for everyone to try out and give us feedback on; we are constantly tweaking and improving, squeezing out as much latency as possible in our pipeline. So feedback is welcome.

We support the full ABR ladder. As you saw, Lionel pushed a 1080p stream at 30 frames per second; we can go up to 60 frames per second, and the full ABR ladder is available with low latency live streaming as well. As part of our product we deliver the live streams too: we have a multi-CDN setup, so we are always optimizing for availability and reliability, picking the right CDN at the right time for each user so they get the best possible quality of experience.

From a low latency perspective, what we're doing under the hood, as Pieter mentioned, is taking a single segment of video and chopping it further into sub-segments: we create a two-second segment and chop it into one-second parts, or sub-segments, as part of the low latency implementation.

For securely delivering live streams, we support both public URLs and signed URLs, so you can attach a token to a low latency live stream with expiration windows and referrer or domain validations — all of which can be enabled on live streams.
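
As a rough sketch of the signed-URL pattern Ashok describes — the claim names, key handling, and URL shape below are illustrative assumptions, not Mux's documented scheme — a token-protected playback URL is typically built by signing a short-lived JWT and appending it to the stream URL:

```python
import time

import jwt  # PyJWT

# Hypothetical values for illustration only.
PLAYBACK_ID = "abc123"
SIGNING_KEY_ID = "key-id-from-dashboard"
PRIVATE_KEY = open("signing-key.pem").read()

def signed_playback_url(playback_id: str, ttl_seconds: int = 600) -> str:
    """Build a token-protected playback URL (illustrative sketch)."""
    claims = {
        "sub": playback_id,                      # stream this token is valid for
        "exp": int(time.time()) + ttl_seconds,   # expiration window
    }
    token = jwt.encode(claims, PRIVATE_KEY, algorithm="RS256",
                       headers={"kid": SIGNING_KEY_ID})
    # Hypothetical playback host, for illustration.
    return f"https://stream.example.com/{playback_id}.m3u8?token={token}"

print(signed_playback_url(PLAYBACK_ID))
```

Referrer and domain validation, mentioned above, would then be enforced server-side when the token is checked.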

And we do understand that some geos may have issues playing at low latency, and players may run into other challenges. We've seen some open-source players have a hard time keeping up, or, once they drift beyond a certain latency, fall back without doing a great job of staying at low latency.

So we also provide a fallback option: the same live stream you're publishing for low latency is also available in non-low-latency mode, for players that don't support low latency or that have trouble keeping up with it. One way or another, it's available for you to try out.

Now in the demo I'm going to cover a few things. First, how to set up low latency: Lionel showed how to start publishing an RTMP stream to Mux, and I'm going to show how easy it is to create a new live stream with low latency enabled. Then I'll focus the second part of the demo on the monitoring side of things. It's always important when rolling out new features to monitor how they're performing — both how the live stream is being received by Mux and how viewers are experiencing it. So I'll give a quick demo of both. All right, let me just switch over. I hope you can still see my screen.

Lionel: Yeah, we can.

Ashok: Yeah, so this is the Mux dashboard, where you can configure live streams and look at the statuses of different live streams. This is the one Lionel just configured to push to Mux; it's active. But let's see how easily you can create a live stream. I can just go here and set the playback policy for the live stream: whether it should be publicly available, or whether you want a signed playback policy, which means you can add a token with an expiration window and other policies attached. The other setting is about recording — we won't go deep into that, but this section controls how we record and how the recording is made available.

By default we create a standard latency live stream, which gets you anywhere between 25 and 30 seconds of end-to-end latency. To turn on low latency, all you essentially have to do is set the latency mode to low and create the live stream. That creates the live stream and gives you the stream key, which you then configure in Videon or your encoder along with the RTMP endpoint, and then you're ready to play back a low latency live stream. That's it — all that's needed is this one API call. The playback ID is what you use to construct your playback URL and feed it into the player. THEOplayer is already configured with the playback URL Lionel is publishing to, which is basically this stream. If I go to the stream, it has all the details about the live stream: you can see it's active, and we have one viewer playing it, which is probably the demo player playing the live stream right now. And we want to see how we're receiving the live stream, because it's very important to monitor live streams not just on the playback side but also on the ingest side — whether we're receiving the live stream in pristine quality, so that downstream viewers will have a great experience. If there are issues on the ingest side, there are bound to be issues on the playback side as well. So it's important to have monitoring for that.
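
For reference, here is a minimal sketch of that single API call, based on what is shown in the session. The endpoint and auth style follow Mux's public REST API, but the exact field for the low latency flag was in beta at the time of this webinar, so check the current API docs before relying on it:

```python
import requests

# Hypothetical credentials for illustration (a Mux access token ID and secret).
MUX_TOKEN_ID = "your-token-id"
MUX_TOKEN_SECRET = "your-token-secret"

# Create a live stream with low latency enabled and a public playback policy.
resp = requests.post(
    "https://api.mux.com/video/v1/live-streams",
    auth=(MUX_TOKEN_ID, MUX_TOKEN_SECRET),
    json={
        "playback_policy": ["public"],
        "new_asset_settings": {"playback_policy": ["public"]},
        "latency_mode": "low",  # the "latency mode = low" setting from the demo
    },
)
resp.raise_for_status()
stream = resp.json()["data"]

# The stream key goes into the encoder (e.g. the EdgeCaster's RTMP settings);
# the playback ID is used to construct the playback URL fed to the player.
print("stream key:", stream["stream_key"])
print("playback ID:", stream["playback_ids"][0]["id"])
```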

So, this is where we bring in health monitoring for the live stream Lionel is pushing: we're receiving about 4.8 megabits per second, and you can see how the bitrate fluctuates over time depending on how we receive it; the audio bitrate is about 144 kbps, and we're receiving a consistent 30 frames per second. So you can constantly monitor how the live stream is being received on the Mux side — and if we receive a pristine stream, the rest of the pipeline works as expected as well.

Similarly, you can see the encoder settings and the different events we raise, but I won't go into detail on those. The more important point is to monitor your live streams.

Now, what about the playback experience and the latency your users — your viewers — are getting? We have Mux Data for that: you integrate the SDK into your video player, and from that point on it monitors playback performance — startup latency, rebuffering, and so on, for all your viewers. We provide both a real-time view of what's happening right now and a historical view that aggregates these metrics across every view.

For this particular demo, I want to focus on a new metric we recently launched: the live stream latency metric. What it essentially does is calculate end-to-end latency. For us, end-to-end latency is the difference between when we received the live stream and what the player is playing at that moment; we calculate that time difference and report it as end-to-end latency. You can see there are a few sessions ongoing, and we measure across all the different dimensions — browsers, devices, countries, which videos are playing — and break the metric down by those dimensions. In this case I'm breaking it down by browser: in the last 60 minutes there were five views on Safari and two on Chrome, seeing a median of close to six seconds of end-to-end latency. Since we don't have a lot of views happening, the 95th percentile is somewhere around nine seconds on Safari and six seconds on Chrome.

We have a full description of how we calculate it. This is the EXT-X-PROGRAM-DATE-TIME tag available in the manifest — part of the HLS specification — which tells you the timestamp of when the frame was captured by Mux. We compare that to when the frame is actually played on the player, and the difference between the two is what's reported as latency.
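
In code terms, the measurement Ashok describes boils down to comparing the player's wall clock against the absolute capture timestamp of the frame on screen. A minimal sketch, assuming the player exposes its current media-timeline position and the EXT-X-PROGRAM-DATE-TIME anchor of the segment being played:

```python
from datetime import datetime, timezone

def end_to_end_latency_seconds(program_date_time: datetime,
                               segment_start: float,
                               current_time: float) -> float:
    """Latency = wall-clock now minus the capture time of the current frame.

    program_date_time -- EXT-X-PROGRAM-DATE-TIME of the segment being played
                         (should be timezone-aware; the tag carries an offset)
    segment_start     -- media-timeline position (s) where that segment starts
    current_time      -- the player's current media-timeline position (s)
    """
    offset_into_segment = current_time - segment_start
    frame_capture_time = program_date_time.timestamp() + offset_into_segment
    return datetime.now(timezone.utc).timestamp() - frame_capture_time
```

This only works if the ingest stamps accurate program-date-time values and the viewer's clock is reasonably in sync, which is one reason the reported latency fluctuates by a few hundred milliseconds, as Pieter-Jan notes below.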

With that, that's all I had to show from the Mux side. I want to pass it on to Pieter for the player side of things. Take it away, Pieter.

Customizing Latency for Viewer Experience with THEOplayer

Pieter-Jan: Yes, let me set it up on my side as well. That should be working, if everything is working as it should. Let's start generating some data for you.

What we do, of course, is try to play this. We actually launched our first player playing low latency HLS more than two or even three years ago — so pretty long ago — and we've done quite a lot of updates over the years. But the most important thing is, of course, that we try to play at the lowest latency possible for our viewers.

There's a lot of complexity if you want to do low latency. Part of it is: how do you manage your buffer? You have to make sure it's always big enough in case something messy happens on your network. And, this being a live demo, with my network not being super stable right now, let's hope nothing happens.

But there are a few other things which are quite important. For our users and customers, it's not always just about getting low latency. As you can see here, our latency is currently around six seconds — 5.8, 5.9 — fluctuating a little, because of course you can't measure it millisecond-accurately with the EXT-X-PROGRAM-DATE-TIME approach Ashok just explained Mux is using. But with THEOplayer there are a few additional things we do beyond just making sure you can play at the lowest possible latency.

We actually allow our customers to tune whatever latency they want to play at. You can see it here: the latency manager, as we call it in this demo. You can enable it and start moving toward a lower latency — it's the same flag Lionel had turned on. If you enable it, you'll see in the graph that the latency actually starts to go down. Now, I could squeeze this all the way to the bottom; of course, the buffer would run out and it would not be a very good experience. But this kind of approach can be very interesting if you want to get all of your viewers at more or less the same latency. Ashok just mentioned they target between five and ten seconds of end-to-end latency; well, with this setup we're actually at 4.7 seconds, because of how the latency manager is configured. We could try to squeeze it down a bit more — you can monitor the buffer size as well, and we're around 1.5 to 2 seconds. I could probably drag it down to four seconds, three and a half; I know that's feasible if the network is good.
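
As a generic illustration of what such a latency manager does internally — this is a sketch of the common playback-rate-steering technique, not THEOplayer's actual implementation — the player nudges the playback rate slightly up or down to converge on a target latency without stalling:

```python
def steer_playback_rate(measured_latency: float,
                        target_latency: float,
                        buffer_size: float,
                        min_safe_buffer: float = 0.5) -> float:
    """Return a playback rate that drifts latency toward the target.

    Rates stay within a few percent of 1.0 so the speed change is
    effectively invisible and inaudible to the viewer.
    """
    if buffer_size < min_safe_buffer:
        # Too little buffered media: slow down slightly to avoid a stall,
        # even if that temporarily increases latency.
        return 0.97
    error = measured_latency - target_latency
    if error > 0.25:      # too far behind live: catch up
        return 1.03
    if error < -0.25:     # ahead of target: ease off
        return 0.98
    return 1.0            # close enough: play at normal speed

# Example: 5.8 s measured vs. a 4.7 s target with 1.8 s buffered -> speed up.
print(steer_playback_rate(5.8, 4.7, 1.8))  # 1.03
```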

These kinds of things are what we also try to bring: not just playing back at a latency that is low enough, but really optimizing for our customers' use cases. If it's crucial that everyone watches at the same latency, we can do that. If it's crucial that the latency matches broadcast latency, we can do that as well. We've even optimized low latency HLS implementations down to two or three seconds — if your audience is small enough and their network is good enough, those kinds of things are possible with THEOplayer too.

But that's about it. That's what I wanted to show. There's a lot I could show about doing low latency streaming, but well, that would bring us pretty far beyond the scope of this webinar, I think.

Ashok: I want to add one thing. If you haven't noticed, PJ is in Belgium, and the stream was published on the East Coast of the US. So you're seeing four seconds from ingest in North America to playback in Europe.

Pieter-Jan: That's pretty good, right?

Ashok: Yeah, very good.

Q&A

Lionel: Thank you, Pieter. Thank you, Ashok. We have a ton of questions — I'm really happy, so many questions. Can you stop sharing? I will take it over from here. All right, I hope you can see my screen.

So, we have tons of questions. The first one, which is a great transition to the next slide, is: “What are the streaming cost advantages of LL-HLS over WebRTC?” We talked a little about how we can improve further. You saw it in the demo today: we can match or beat broadcast end-to-end latency, but we can also do better. With what Pieter-Jan just presented on the player, you can tune all the parameters at the player level to get to a lower latency — but with a smaller buffer and more constraints you also take more risk, because you become more dependent on network bandwidth and network conditions. Again, he's in Belgium, streaming from the US, so we're not in the best case for this kind of streaming, and you never know what happens when the bandwidth fluctuates. So it's a question of balance, of finding the right trade-off between quality of user experience, bandwidth, and latency.

We can also do things with I-frames. It all depends on the GOP — the structure of how the images in your segment are compressed — and we could do better with more I-frames. An I-frame is basically a full image, without the temporal differences to the previous or next images around the one you're seeing. So we could use lower compression, at a higher bandwidth, to further reduce latency through the compression pattern — but again, that means higher bandwidth.

WebRTC is a protocol designed for ultra-low latency and real-time interaction; you can absolutely achieve half a second of end-to-end latency. What we're doing here with Zoom, and all those face-to-face communication systems, uses WebRTC to ensure ultra-low latency. The problem is that you're very limited in terms of audience: with one WebRTC server you won't comfortably exceed 50 streams, maybe 100. There is no way to serve a million concurrent streams with WebRTC without deploying thousands and thousands of servers, with very complex mechanisms to keep your WebRTC peers properly synchronized.

So, it's really complicated. You can do it, but it's going to cost you a ton of money compared to something purely HTTP-based like LL-HLS.

Pieter-Jan: I think the difference is the architecture, right? With WebRTC, it's more like streaming RTMP the way people did years and years ago. That's exactly why HTTP-based streaming protocols like HLS, DASH, Smooth and all the others were invented: you get the power of the CDN, which allows you to scale to huge audiences far more efficiently than the RTMP or WebRTC approach.

Ashok: And I would add to that: cost is definitely one aspect of it, but you also want to figure out what your use case or application really needs. If it's real-time communication, WebRTC is a good approach. But if it's a broadcast — a sports broadcast, for example — that you want to deliver to thousands or millions of users, you want the scale that HTTP infrastructure provides today. That's why LL-HLS is more suitable for that sort of use case than for a small audience size.

Lionel: Thank you. Again, we have many questions and 11 minutes left, so I'm sorry I won't be able to get through all of them; I will select some. One interesting question is: “Can we use LL-HLS in conjunction with DVR applications?”

Pieter-Jan: Yeah. You can.

Lionel: Ashok, you support that?

Ashok: Yeah, the spec doesn't limit you there — you can use LL-HLS with DVR. From a Mux implementation perspective, we currently support LL-HLS for the true live use case, and we're looking into making LL-HLS available with DVR as well. You can still do DVR with Mux; it's just not available on the low latency side of things yet, but it's something we're looking into right now.

Pieter-Jan: Yeah, and from a protocol point of view there's no difference at all: if you go into DVR mode, you just fall back to the higher-latency profile of HLS. So that's fine.

Lionel: But if it's DVR, it's not live, so you don't really care about latency — it's on demand.

Pieter-Jan: Yeah, absolutely.

Lionel: It works.

Another question we got — I like this one: “Does the ingress to Mux have to be through Videon, or can you use other encoders?” Of course you can use other encoders, but it's better if you use Videon. The reason is that we were the first — maybe still the only, but I know for a fact the first — encoder to have this direct integration with Mux. And I showed you, live and in real conditions, that it works. Ashok, can you confirm whether we're still the only encoder that lets you select Mux parameters from the encoder level?

Ashok: Yeah, from the Mux perspective: with Videon you get all the optimizations they've put in place to reduce latency as they process the incoming live stream. But at the end of the day it's RTMP-based, so any encoder that supports RTMP should be able to provide a low latency live stream on Mux. That said, we definitely appreciate the optimization work Videon has done.

Lionel: Thank you. And there was a question about RTMP: “Does Mux support protocols other than RTMP for the input?”

Ashok: Not today. It's one of the things we're looking at as we see more adoption. We're looking at WebRTC-based ingest, because people want to go live directly from browsers and devices; and on the other end, talking to media customers — broadcasters and other large streaming services — they may be looking at SRT as another input protocol. So we're looking at all of these options and trying to figure out which to launch first and which segment to target. I don't have any timelines to share, but it's something we're absolutely considering.

Lionel: Thank you. Next question. We've shown a best-of-breed approach where you can select the best encoder, the best streaming platform, the best player. There was a question about the CDN — we haven't shown anything specific to the CDN; it's part of the Mux application. Someone was asking whether they have to use your CDN, or whether they can bring their own CDN and connect it to your service, Ashok?

Ashok: Yes, today we package CDNs with our video product, with the streaming product, primarily for a few reasons. One: we've worked with our CDNs to optimize the overall delivery configuration, and we have certain requirements a CDN must support to provide a great quality of experience, which limits how many CDNs we can add to the mix. Second: we're absolutely open to customer feedback on how bringing your own CDN could support certain use cases and what the benefits would be. We're listening to customer feedback on that, so if you have questions, reach out — we're happy to talk and understand more about your use case.

Lionel: Thank you. Next question: “How do we get access to those Videon metrics and host them?”

So, as I was telling you, it's part of the LiveEdge Compute platform — reach out to us if you don't have LiveEdge Compute on your LiveEdge device yet. LiveEdge Compute basically opens the LiveEdge platform to any third-party application: we can run any Linux Docker container on the EdgeCaster itself. That's exactly what I showed you. I didn't give a lot of details, but I'll take five seconds to explain.

It's basically a node exporter service running inside a Docker container hosted on the LiveEdge unit. As you've seen, we do all the encoding in hardware, with the chip we have in our devices, so we have a lot of CPU headroom to run those applications. We export all those metrics to a Grafana server — in this example I just used Grafana's free online account, but you can also deploy your own Grafana servers, or any kind of metrics and data-analysis platform. You select what you want to export and build a dashboard with all the metrics. It's fully extensible — it's an open-source product, so it's very easy to extend. If you want more details, we have a GitHub server, and very soon we're launching the dev community platform so you can share those applications with the rest of the dev community around Videon. Reach out to me and I'll give you more details.
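
For those who want to reproduce this pattern, here is a minimal sketch of the standard node-exporter wiring, assuming default ports and a hypothetical hostname for the unit; the exact container setup on LiveEdge Compute will differ:

```
# Run the exporter as a container (standard image, default port 9100):
#   docker run -d --name node-exporter -p 9100:9100 prom/node-exporter

# prometheus.yml -- scrape the unit, then point a Grafana dashboard at Prometheus:
scrape_configs:
  - job_name: "edgecaster"
    static_configs:
      - targets: ["edgecaster.local:9100"]   # hypothetical hostname
```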

Next question, about scrambling and DRM. First of all: “Is LL-HLS compatible with DRM technologies? Can we do FairPlay over LL-HLS? Can we do rotating keys, and does it add more latency?” Pieter?

Pieter-Jan: Yeah, in general the answer is yes. The nice thing about low latency HLS is that it's just an extension of HLS: everything that's possible with HLS, you can do with LL-HLS as well. That includes FairPlay DRM, and today it even includes PlayReady or Widevine DRM. That's something where we see a lot of interest from the partners we work with, who are trying to move to a uniform stream with FairPlay, PlayReady and Widevine, especially for the low latency profiles.

Another question we sometimes get — and I saw it popping up here in the chat as well — is whether it adds actual latency. It does, but not a lot: usually about one frame duration, that's more or less the order of magnitude to think about. The more important thing is that if you do key rotation or something similar with a small buffer on the player side, you have to make sure your DRM server can issue the new key or license in time for that key to be rotated. That's the only thing you really have to take into account when combining low latency and DRM. And it's a general problem that a lot of higher-latency solutions have as well, so there are definitely solutions for it. It's not the biggest problem; in general, doing DRM or encryption with low latency is definitely possible. But I don't know, Ashok, whether Mux supports it at this point in time — that's probably also a good question.

Ashok: Yeah, we're looking at security as a whole. In some cases DRM is a legal requirement, and in some cases it's about protecting your IP, and there are different solutions to those problems. We're working on building a security offering that prevents unauthorized playback — first at the transport level, and following that we'll work on content protection, which is what DRM provides. So it's on our roadmap for this year and next year.

Lionel: Thank you. Another question — probably the last one, because we have one minute left: “You showed a difference in latency between Safari and Chrome, depending on the browser. Can you expand on that?”

Pieter-Jan: I can. The reason is probably the network, to be honest — I don't know which one was faster or slower. I often get similar questions: is low latency DASH faster, is low latency HLS faster, and so on. It doesn't really matter much. The biggest impact is usually the network, and chance — when exactly you join the stream, because that has a small impact as well. And then, as we showed in the demo, whether you use the synchronization approach — the approach of steering toward a specific target latency. But there's no reason why one browser couldn't hit the same latency as another, given the same network. Of course, if you have a very bad network, our player will be smart about it and increase the latency slightly — if you haven't configured a target latency — to avoid stalls and those kinds of things. But in general, on any platform you can get more or less the same latency. So that's a good thing.

Ashok: And one thing I would add: in our experience with customers, if you use the same player implementation across all your browsers, you should generally see the same latency, as Pieter-Jan mentioned. But what we've sometimes seen happen is that, because Safari natively supports low latency HLS, people use Apple's implementation on Safari and a different implementation on other browsers. Different players behave differently in how they manage latency and buffer, and that's why you sometimes see a difference. But generally, with the same player implementation, you should get the same latency on all browsers.

Closing Remarks

Lionel: All right, it's 10 a.m. Pacific time. I'm sorry we weren't able to answer all the questions, but keep them coming — don't hesitate to follow up with us individually. You can reach out to Ashok, to Pieter-Jan, or to myself with any follow-up question. Thank you very much, Ashok; thank you very much, Pieter-Jan! I was really happy to do this webinar with you guys — you were awesome. And thank you to the audience for all those questions; I really love it when it's interactive and people ask questions.


Speakers


PIETER-JAN SPEELMANS

Founder & CTO at THEO Technologies

Pieter-Jan is the Founder and the head of the technical team at THEO Technologies. He is the brain behind THEOplayer, HESP and EMSS. With a mission to ‘Make Streaming Video Better Than Broadcast’, he is innovating the way video is delivered online, from playback all the way to ultra-low latency streaming. Pieter-Jan is committed to enabling media companies to easily offer exceptional video experiences across any device.


LIONEL BRINGUIER

VP Product at Videon

Lionel Bringuier has 25 years of professional experience in the telecommunications, broadcast and media industry, managing real-time, mission-critical voice and video services. His industry expertise centers on video streaming, where he has launched several successful products over his career for live, VOD and live-to-VOD media applications, both on appliances and in the cloud.


ASHOK LALWANI

Senior Product Manager at Mux

Ashok Lalwani has been a Senior Product Manager at Mux for over two years, building and scaling the Mux Video product. With over 15 years of industry experience, he has helped build many industry-leading products, including Fastly's Media Shield — a first-of-its-kind mid-tier product designed to provide more control and visibility in multi-CDN setups — and Akamai's HD Network for live and on-demand.

Want to deliver high-quality online video experiences to your viewers, efficiently?

We’d love to talk about how we can help you with your video player, low latency live delivery and advertisement needs.