As video streaming grows globally, the challenge is not only to get your content onto your viewers’ screens, but also to deliver it in their language. Providing your application, audio and subtitles in only one language can be a barrier to reaching a wider audience, or can miss a majority of viewers completely. As a content provider, Multi Audio and/or Language Localisation give your content the flexibility it needs to reach audiences who may not speak its original language. Giving viewers the ability to change the language of your application, audio and subtitles/captions instantaneously addresses this. In this blog we discuss the why, how and when of Multi Audio and Language Localisation, as well as how to enable them using our THEOplayer Universal Video Player Solution and its Multi Audio and Localisation features.
When Should You Use Multi Audio?
The most common use cases for Multi Audio and Language Localisation are Video On Demand-based services such as Subscription Video On Demand (SVOD) and Advertising-supported Video On Demand (AVOD), which allow customers to consume their favourite shows or movies in their native language. Multi Audio can also be crucial for use cases such as international conferences live streamed to multiple countries, shareholder or government meetings with international members, e-learning platforms and live news coverage.
This feature allows your audience to easily tune in, via live stream or VOD, regardless of their location and language capabilities, with audio and subtitles in their preferred language. Subtitles render the spoken audio of a video as text on screen. Captions additionally indicate who is speaking (if it isn’t clear) and important non-speech sounds such as music. This can help viewers better understand and enjoy what they are watching. The change in audio and subtitles is seamless, giving your audience the optimal viewing experience. A larger audience reach also means better monetisation opportunities for your content, allowing you to optimise your revenue.
Multi Audio, Language Localisation and subtitles/captions are also opportunities to make your content more accessible to viewers with disabilities. Accessibility simply means that viewers can perceive, understand, navigate and engage with video content equally and without obstacles or barriers, in this case language barriers. Accessibility also benefits all viewers, with or without a disability (e.g. situational limitations, slow internet connections, temporary disabilities, etc.). If you’d like to read more about accessibility for video, check out our blog here.
How Does Multi Audio Work?
In the case of a live streaming event, there is a camera feed and a primary audio track that are transmitted to a central production facility or truck. At the production centre, additional audio commentary, in different languages, is multiplexed onto the transport stream. Increasingly, on-the-fly subtitles/captions generated with machine learning technology are also multiplexed into the transport stream. This master feed is then delivered to a multi-bitrate encoder and presented to a packager, which maps the audio tracks, subtitles and/or captions onto the streaming protocol's specific constructs. The result is made available to video players via a CDN. Based on the specific settings of the player, either the configured default audio track and subtitles/captions are presented, or a customer-specific variant is used.
For a post-processed workflow such as Video On Demand, things are slightly different: in general a movie is shot in different takes, which are edited together to create a mezzanine or master format. During post-production, different audio tracks as well as different subtitles are added to the master file. Subtitles can also be made available as separate subtitle files. This master file is then handed to an offline multi-bitrate transcoder, and the transcoded asset is handed (possibly with the separate subtitle files) to a packager, which maps the audio tracks and subtitles/captions onto protocol-specific constructs, and so on.
Streaming protocols allow you to explicitly indicate which audio, subtitle or caption track should be used by default. Application developers can override this and select the language based on the language settings of the platform, or the customer can select a specific language themselves.
Example of Apple HLS Manifest; Audio Track Section Details
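For illustration, a hypothetical excerpt of an HLS master playlist with two audio renditions and one subtitle rendition might look like the following (group IDs, names and URIs are made up for this sketch; the tags themselves follow the HLS specification, with DEFAULT=YES marking the track the player starts with):

```
#EXTM3U
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="English",LANGUAGE="en",DEFAULT=YES,AUTOSELECT=YES,URI="audio/en/prog.m3u8"
#EXT-X-MEDIA:TYPE=AUDIO,GROUP-ID="audio",NAME="Nederlands",LANGUAGE="nl",DEFAULT=NO,AUTOSELECT=YES,URI="audio/nl/prog.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="English",LANGUAGE="en",DEFAULT=YES,URI="subs/en/prog.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=2500000,RESOLUTION=1280x720,AUDIO="audio",SUBTITLES="subs"
video/720p/prog.m3u8
```

Each EXT-X-MEDIA line declares one alternative rendition, and the EXT-X-STREAM-INF variant references the audio and subtitle groups, letting the player switch languages without changing video renditions.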
THEOplayer Multi Audio and Platform Support
The THEOplayer Universal Video Player Solution supports multiple audio and subtitle/caption language tracks in a single video – both for live and on-demand streaming. The THEOplayer SDKs come with a default User Interface, which allows users to select a different audio language and subtitle track on the fly. Equally, our SDKs expose a rich set of APIs to select the audio, subtitle and caption tracks from within the application code.
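As a sketch of what programmatic track selection could look like, the snippet below defines a small helper that picks a track matching a preferred language from any list of track-like objects. The commented usage lines assume HTML5-style `audioTracks`/`textTracks` properties on a `player` object; they are illustrative assumptions, not a verbatim SDK reference.

```javascript
// Pick the first track whose language matches the viewer's preference.
// Works on any array-like list of objects exposing a `language` property.
function pickTrackByLanguage(tracks, preferredLanguage) {
  for (const track of tracks) {
    if (track.language === preferredLanguage) {
      return track;
    }
  }
  return null; // no match: leave the default track active
}

// Standalone demo with plain objects standing in for real track lists:
const audioTracks = [{ language: 'en' }, { language: 'nl' }, { language: 'fr' }];
const chosen = pickTrackByLanguage(audioTracks, 'nl');
console.log(chosen.language); // "nl"

// Hypothetical usage against a player instance (property names are assumptions):
// const audio = pickTrackByLanguage(player.audioTracks, 'nl');
// if (audio) audio.enabled = true;   // switch the audio language
// const subs = pickTrackByLanguage(player.textTracks, 'nl');
// if (subs) subs.mode = 'showing';   // show matching subtitles
```

Returning `null` when no track matches lets the application keep the manifest's default track rather than forcing a language the stream does not carry.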
Besides this capability, we also offer the possibility to “localise” the default THEOplayer User Interface (UI) of the SDK. Using this feature, it is possible to localise the text-based UI helper messages. All text displayed in the THEOplayer SDK UI can be localised, which ultimately allows your viewers to understand the different options of the player and the content’s capabilities directly within the UI. THEOplayer's localisation solution allows you to automatically adapt the player to different languages.
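Conceptually, UI localisation boils down to looking up each UI string in a per-language table, with a fallback language for untranslated entries. The sketch below illustrates that idea with a hand-rolled table; it shows the concept only and is not the THEOplayer localisation API.

```javascript
// Illustrative string table: maps UI keys to translations per language.
const uiStrings = {
  en: { play: 'Play', settings: 'Settings', subtitles: 'Subtitles' },
  nl: { play: 'Afspelen', settings: 'Instellingen', subtitles: 'Ondertitels' },
};

// Resolve a UI string for the viewer's language, falling back to English
// when the language or the individual key has no translation.
function localise(key, language, fallback = 'en') {
  const table = uiStrings[language] || {};
  return table[key] !== undefined ? table[key] : uiStrings[fallback][key];
}

console.log(localise('play', 'nl')); // "Afspelen"
console.log(localise('play', 'fr')); // "Play" (falls back to English)
```

The fallback keeps the UI usable even for partially translated languages, which matters when new UI strings are added faster than translations.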
The THEOplayer Universal Video Player Solution ensures that you offer multi-audio, language localisation, caption and subtitle support in a consistent manner across all SDKs, so your content reaches your audience on all devices and major browsers, including Chromecast, iOS, Android, Android TV, tvOS (Apple TV), Samsung Tizen, LG webOS, Fire TV and Roku, as well as Google Chrome, Safari, Microsoft Edge and Firefox.
Multi Audio Demo Video with THEOplayer UVP Solution
Check out our Multi Audio demo video below, or check out the full demo here.