On Thursday, March 10, 2022, we hosted Live Stream Captioning: Trends and Solutions. During the live webinar, experts from EEG and Ai-Media shared their insights with attendees seeking to expand the reach of their live streams and events, as well as increase the accessibility of their content.
Featured speakers Eline Henriksen, Mo Sabbar, and Jared Janssen walked through the challenges and opportunities of captioning live streams, as well as the solutions utilized by EEG and Ai-Media customers to reach more audiences and increase engagement.
Live Stream Captioning: Trends and Solutions • March 10, 2022
Look to EEG and Ai-Media for the latest closed captioning news, tips, and advanced techniques. To find out about upcoming webinars, visit here!
Transcript
Regina Vilenskaya: Hi, everybody, and thank you for joining us for today's webinar presented by EEG and Ai-Media, Live Stream Captioning: Trends and Solutions. My name is Regina Vilenskaya and I'm the Marketing Lead at EEG. Today's featured speakers are Eline Henriksen, Mohammed Sabbar, and Jared Janssen. Eline is the VP of EMEA, Mo is the Live Streaming Lead, and Jared is the General Manager of Key Accounts. In this webinar, you'll hear about a variety of topics where live streaming and captioning intersect. The speakers will walk through live streaming trends, solutions, and best practices to deliver better events, meetings, and more to global audiences. Now I'd like to welcome Eline, Mo, and Jared for Live Stream Captioning: Trends and Solutions. Hi, everyone.
Jared Janssen: Thank you, Regina. And thank you to everyone attending with us live and those that will be catching us later on-demand. We really appreciate you taking the time. So I wanted to kick off with live stream trends. As Regina said, I'm Jared Janssen, the GM of Key Accounts in the Americas region. I joined the industry 12 years ago with Caption IT and then joined Ai-Media a little over a year ago with Ai-Media's acquisition of Capital 19. So kicking off into live stream trends: what we call a live stream can go by a number of different names. It could be known as webinars, webcasting, video chats, or just Zoom. But we are firmly in the era of video. We consume video at work, we consume it at home, we consume it when we're not working, and we view it on all of our screens, our mobile phones, our computers, our TVs. It's everywhere and it allows for a global reach. Look back two years: COVID hit, the pandemic started, and we saw this great pivot to work from home.
Well, then there was this big rush to implement video conferencing solutions that were consumer-grade and readily adaptable, right? You could spin them up quickly: Zoom, Teams, the one-to-one, really consumer-grade type of application. You needed to implement this quickly to connect, to communicate, to check in with your teams in one-to-one and small group sessions. Now look at the present day: two years have passed since the pivot to work from home, and we've really entered the era of video. Now you have security teams and IT teams starting to analyze the video solution they have. Is it a fit for our organization? Does it meet our security needs? Does it fit where we want to go? And we're seeing this pivot from quickly adopting consumer-grade tools to moving into enterprise solutions. We see a lot of platforms doing really nice things, where they take consumer-grade video and add scale, add layers, bringing it to a professional, broadcast-type output in the enterprise.
And they're doing this to improve analytics, to improve engagement, to really streamline what they're doing on video. Now, with the rise of video, you have work from home, you have remote, but we're also seeing some return to work, and that creates this new buzzword: hybrid events. What that really means is part onsite, part remote. We see that with corporates having some presence onsite and some team members being remote. So you'll have events where maybe a conference room holds your attendees, and you also have team members or employees remotely tuning in to catch the same meeting. We also see it at conferences and trade shows, where there's an onsite element, attendees onsite, keynotes, but not everyone can travel. So there's still a remote presence of sessions or live streams, and everyone gets an equal, equitable viewing experience to consume the content.
And so with this rise in video, with sessions having a physical presence onsite and a remote presence, accessibility needs have come along with it. We're seeing corporates do a lot of really wonderful things with diversity and inclusion, and captioning fits nicely into that. Accessibility is growing, the need for captioning is growing, and captioning is inherently inclusive, right? It gives an equitable viewing experience to all participants. And now, with the growth of video and the growth of accessibility, there's a security component coming into it. Companies are asking: how can we make our content accessible but keep it secure? What systems are needed? What processes are needed? And can we do it? The answer is yes, we can. And so I will turn it over to El, and she'll touch on some of the live streaming solutions.
Eline Henriksen: Thank you, Jared. And hi, everyone. Just to introduce myself briefly: I'm the VP for the EMEA region, I've been with Ai-Media for nearly four years, and I'm here today because I look after our business and service offerings in EMEA to ensure that our customers and partners have the best solutions possible to help make their content accessible one word at a time, in this case with captions on live streams. It's very interesting, Jared, to hear what you're saying about the trends in the live events space, how live streaming has shaped the way we do things nowadays, and how that has had an impact on how we operate, particularly with the focus on hybrid delivery. In terms of live captioning solutions for live streams, what's worth noting is that the scope just continues to increase, both in terms of what's possible in the streaming space and in regards to the captioning space and captioning requirements. So just backtracking a little bit: with Ai-Media and EEG coming together, what that has really meant is that we are now able to operate in a space that we weren't necessarily in before, and we're able to support our customers and partners with key solutions that best fit their needs, regardless of what their workflow looks like, how they operate, or what their quality requirements are.
And last but not least, often the biggest one: budget constraints. What I mean is that whether you are a media production company looking for a self-service solution or for a fully managed service, we've now brought together the expertise of both companies, which means we can offer either, as well as add open captions or closed captions to your live streams, and not necessarily just in English, but also in other languages. These solutions are really taking things to the next level, and they're compatible with cloud workflows, whether you're using RTMP and hardware encoders, SDI feeds, or other workflows, as well as the major streaming platforms. And I realize I'm jumping around on this slide a little bit, but going back to finding the solution that really fits your needs: you always need to consider which platform or workflow you're working with, because some streaming platforms might use players that don't allow for 608/708 closed captions.
So, therefore, there might be an opportunity for an open caption solution, where the captions are embedded onto the video feed, which in turn means the audience can't turn the captions on and off. So whilst closed captions might be better in some scenarios from an accessibility perspective, open captions also create an opportunity for the captions to be embedded onto the actual stream feed, which is also a good solution for video-on-demand recordings. Then moving on to platform compatibility from an enterprise perspective: we're working with events agencies and virtual events platforms, and they're now taking steps to integrate captioning as part of their offering. The reason they're doing it is to add value to their end customers. What this means is partnering with platforms to provide live captioning solutions that make it easier for them to operate as a one-stop shop, which really leads me to the question of how we are doing it and how you can do it, as a production company, as a media agency, as a corporate.
And that really leads me on to Mo, who will touch upon that in a bit more detail.
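To make the closed-versus-open distinction above concrete, here is a minimal sketch driving ffmpeg from Python. The file names are placeholders and this is a generic illustration, not an EEG or Ai-Media workflow: closed captions travel as a separate track the player can toggle, while open captions are burned into the picture itself.

```python
import subprocess

# Closed captions: muxed as a separate subtitle track that viewers
# can turn on and off in the player.
subprocess.run([
    "ffmpeg", "-i", "program.mp4", "-i", "captions.srt",
    "-map", "0:v", "-map", "0:a", "-map", "1:0",
    "-c:v", "copy", "-c:a", "copy", "-c:s", "mov_text",
    "closed_captions.mp4",
], check=True)

# Open captions: burned into the video frames, so every viewer sees
# them and they carry through to VOD recordings automatically.
subprocess.run([
    "ffmpeg", "-i", "program.mp4",
    "-vf", "subtitles=captions.srt",
    "-c:a", "copy", "open_captions.mp4",
], check=True)
```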
Mohammed Sabbar: Hi, everyone. Thanks, El. I'm Mo, the Live Streaming Lead for EMEA. As El mentioned at the start, she's been with Ai-Media for four years, and I've pretty much been here the same. When she talks about live streaming solutions for our clients, I'm primarily charged with finding solutions for some of our clients based in EMEA to ensure that live delivery for their events is successful. I'm just going to talk about Falcon a bit more. From a very basic standpoint, I always think of Falcon as our Amazon: it can take in an original video source, primarily in RTMP format, and distribute it out to the relevant supported destinations, whether that's YouTube, Facebook, or any other platform. In general terms, there's so much to talk about with Falcon, but I like to define it in five different ways. The first is that Falcon is customer orientated. It's an affordable, cloud-based service that takes the place of hardware encoders; for clients that might find hardware encoders out of their budget range, Falcon is an alternative solution for many streaming productions.
The second is the simplicity of Falcon. Falcon, being a cloud-based approach, is an easy-to-use alternative to SDI video hardware and caption encoding for streaming production; it can embed captions directly into RTMP streams, like I mentioned, or upload them over HTTP. One of the great things about Falcon, especially from my experience, has always been the UI. When Falcon was developed, it was always from a self-service perspective, which basically means that anybody and everybody who uses the platform finds an element of simplicity there: there's nothing too difficult, there are no out-of-the-box quirks. It does what it says on the tin and it does it really well. And the benefit is it's scalable, which is really important. The third thing is easy integration. Falcon and our iCap network don't just allow for human-generated captions; they also allow for connectivity with iCap Translate, which is one of our machine translation tools, sorry, multilingual tools.
Beyond that, there's also Lexi and Smart Lexi, which are our ASR tools; Jared will talk about those later on in this presentation. The fourth is that it's cloud-based: using iCap and Falcon, all connections are encrypted and authenticated, and you get detailed connection history logs that can be retrieved remotely, as well as post-event caption files in whatever format you need, whether SRT or a plain transcript. And finally, logging and monitoring. Having been around the block over the last four years with streaming and using various tools, what you'll find is that working in this field can get very clunky: there are so many monitors, so many tools, so many different views. But with Falcon, the real benefit is being able to monitor everything, whether it's simple output, stream (UNKNOWN), anything along those lines, in one place. And like I mentioned, the core benefit is its simplicity of use. It's cheap, it's cloud-based, it's secure, and it's very much customer orientated and focused.
And one of the elements I spoke about was the easy integration with some of the other tools, Lexi and Smart Lexi, which Jared will talk about a bit more now on the following slide.
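As a rough sketch of the pattern Mo describes, one source in, many destinations out, the snippet below relays an RTMP feed to two platforms using ffmpeg's tee muxer. All URLs and stream keys are placeholders, and this only illustrates the fan-out concept; it says nothing about how Falcon itself is built.

```python
import subprocess

# Placeholder ingest point and destination stream keys.
SOURCE = "rtmp://ingest.example.com/live/source-key"
DESTINATIONS = [
    "rtmp://a.rtmp.youtube.com/live2/YOUR-KEY",
    "rtmps://live-api-s.facebook.com:443/rtmp/YOUR-KEY",
]

# Relay the incoming stream to every destination without re-encoding:
# the one-in, many-out job a cloud service performs in place of a
# hardware encoder.
tee_targets = "|".join(f"[f=flv]{url}" for url in DESTINATIONS)
subprocess.run([
    "ffmpeg", "-i", SOURCE,
    "-c", "copy", "-map", "0",
    "-f", "tee", tee_targets,
], check=True)
```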
Jared Janssen: Thanks, Mo. And to add one more quick comment about Falcon: Falcon, being an IP encoder, creates a familiar caption viewing experience. When, say, you're watching Netflix, you see the captions in the video and you have multilingual options. When you're watching a live stream or webcast, captions were traditionally delivered via a URL, or maybe (UNKNOWN), a separate browser window, or maybe the captions sit below or somewhere within the platform that you have to navigate to find. Well, Falcon puts the captions in the video.
So it creates that same viewing experience whether you're on your phone, your computer, or your TV, one really nice feature and something that I think a lot of viewers really enjoy. So touching on Lexi: Lexi is an ASR solution rolled out by EEG a number of years ago. It's available directly in your EEG cloud account, and really it's the low-touch, budget-friendly, cloud-hosted ASR captioning solution. A really nice feature is that it utilizes machine learning workflows to achieve higher accuracy than baseline, out-of-the-box ASR.
And Lexi is highly scalable. It's very flexible, it's quick to implement, and it has a number of really nice built-in features. Not only does it have machine learning, it has Topic Models, so there are baseline dictionaries already applied. It has Lexi Vision: if there's a banner on the lower third, or names or text that pop up, Lexi Vision is a nice technology that will adjust the captions so they do not cover any text on the screen. And Lexi has advanced schedules, so you can set start times and end times, and you can set timeouts if you want it to turn off after, say, 30 minutes or an hour with no audio. So again, it's a solution we're seeing immense take-up on, really in all industries, both in broadcast and in the enterprise live streaming space. And if you go on to the next slide, I'm going to touch on Smart Lexi. Smart Lexi is a new offering from Ai-Media, something we developed and rolled out to achieve a higher accuracy, to be the next level up from Lexi.
And so it takes everything that's great about Lexi and adds the human element. It's a semi-automated captioning solution that achieves higher accuracy yet is still a budget-friendly option. Ai-Media builds human-curated custom dictionaries and styles. It still has that really nice machine learning automation, and it's a full-service captioning and support offering that's booked and managed through Ai-Media. Again, it's highly scalable and flexible, quick to implement, and well-suited to broadcast and enterprise live streaming environments. So, Mo, if you go on to the next slide. Here's a breakdown where you can see the three live captioning solutions: Lexi, Smart Lexi as the middle tier, and our premium human-delivered captions. What we've done is give clients options to meet any budget, and different clients will have different needs. Maybe Lexi is a fit, or Smart Lexi, or maybe the preference is for human-delivered captions. And we have clients that utilize all three.
And again, it goes back to what the need is and what type of event it is. Is it something that's put together at the very last minute? Is it something with an open-ended end time? A lot of times your local governments will have open-ended meeting structures. Each type of event has its own live captioning needs, and we're always happy to talk to anyone about what might be the best fit for their events. And then, not to mention, we have multilingual options, which I'll hand over to El to talk more on.
Eline Henriksen: Thank you, Jared. So beyond the standard English captioning solutions we have available, there's also been an increasing focus on global reach and on reaching audiences with other native languages. What's really interesting with live streaming solutions is that there's scope to do everything nowadays. Things have progressed and shifted over the last 18 months to two years, and there's been a shift in focus from just making your content accessible with captioning solutions to also making sure the content reaches audiences in the global space in other languages. What I mean by that is adding multi-language solutions to live streams to enable that global reach for audiences in other countries. And why are we doing this? Because we're now in a time where we've shifted away from having to be physically present in a room to consume content: you can sit at home in London or Stockholm, in Tokyo, New York, wherever you may be, and consume the same content by having it made accessible in the source language, or in the target language you require.
So what this actually means is something you might have seen on some live streams and thought, oh, I wish I could do this: having one language spoken and watching real-time captions translated into another language. I myself am Norwegian, and if I were to speak Norwegian right now, you would all be able to follow along with English captions, or the other way around. That's actually possible nowadays, and it's really, really exciting; it's enabling reach to a global audience from anywhere. And it's not just important for those who rely on speech-to-text services; it also really helps with improving overall comprehension. And, Mo, if I could just ask you to skip to the next slide, please. It's very important for comprehension, specifically for those who don't have the source language as their native language, to be able to comprehend the content, which in turn also leads to higher engagement.
So to ensure there's a solution for everyone out there, with EEG and Ai-Media coming together we have flexible, versatile solutions with tiered options in relation to quality, budget, and the requirements associated with that, ranging from machine-translated solutions to human-translated captions from one language to another, or including multiple language pairs. And when I say one language to another, this isn't just limited to captioning and speech-to-text; there are also many other solutions out there for live streams that really help take the live stream and the event to the next level, which I'll hand over to Mo to speak a bit more about.
Mohammed Sabbar: Thanks, El. So as El mentioned, with live multilingual streaming and services, what we've found over the last year in particular is that clients are looking at a global reach, but beyond that, they're really trying to make their events as inclusive and as accessible as possible. One of the services that has really seen an uptake is not just conventional live streaming with English captions or a single language; clients are looking at multilingual, like El said, but also sign language overlay, which is a service we now offer, and live audio interpretation. There have been a number of events over the last year and a half in particular where this has been a growing service and a growing demand, because organizations, educational institutions, and government institutions want to go that extra level in providing access, so audiences around the world are able to comprehend what's happening during an event.
So it's very much a trend that's picking up globally. And we'll just go on to the next slide, back to El.
Eline Henriksen: Thank you. So I think that's really exciting, and it really helps bring that event, that live stream, to the next level. Then the question is around best practices for live stream captioning: what can people and organizations do to simplify the captioning process for live streams, and what do you need to think about while doing it? In regards to best practices, the first is simply having captions on your live stream. An example: 85% of Facebook video is watched without sound, so if you don't have captions, you're missing out on opportunities to connect and engage with your audience. But presuming you already have captions, or you're looking to add them, a good practice is always making sure that the audience you're working with receives the quality they require for good comprehension of the content. A good example: if you're translating from one language to another, instead of having two tools that are AI-based, work with a semi-automated solution that at least includes the human aspect on one side or the other of that translation piece.
And then I'd just like to touch upon a couple of quick wins for best practices, and the biggest one is really syncing captions to the audio. This really helps with comprehension and makes it easier for the audience to follow along with the content. In terms of how this can be done: it can be done with our CCMatch module with Falcon and EEG hardware encoders, and it can also be done if you're using an Ai-Media-managed workflow. Secondly, repurpose the live content for video on demand. If your session or live stream is being recorded, make sure those captions follow through, or that you add captions to the recording afterward, to keep it accessible. And then, as I touched upon briefly earlier, ensure that the quality matches the required standards for the audience, for the content, and for the message, so the message is relayed as you intend.
And then lastly, add that reach to the global audience. So overall, EEG's and Ai-Media's background and expertise really tie everything together in terms of the how, and allow our customers and partners out there to offer unique solutions to their audiences, regardless of their requirements around quality, budget, language pairs, or workflow specifications. So that's just a couple of quick wins in terms of best practices for live stream captioning. Thank you. I'll hand over to Regina.
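One of those quick wins, repurposing live captions for video on demand, is easy to picture in code. The sketch below turns timed caption segments (invented here for illustration) into a standard SRT sidecar file that can accompany the recording.

```python
# Minimal sketch: convert timed caption segments saved from a live
# session into an SRT file for the VOD recording.

def fmt(seconds: float) -> str:
    """Format seconds as the SRT timestamp HH:MM:SS,mmm."""
    ms = int(round(seconds * 1000))
    h, ms = divmod(ms, 3_600_000)
    m, ms = divmod(ms, 60_000)
    s, ms = divmod(ms, 1_000)
    return f"{h:02}:{m:02}:{s:02},{ms:03}"

# (start, end, text) tuples; example data only.
segments = [
    (0.0, 2.4, "Welcome, everyone, to today's session."),
    (2.4, 5.1, "We'll start with a quick overview."),
]

with open("recording.srt", "w", encoding="utf-8") as srt:
    for i, (start, end, text) in enumerate(segments, 1):
        srt.write(f"{i}\n{fmt(start)} --> {fmt(end)}\n{text}\n\n")
```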
Regina Vilenskaya: Thank you. We've now reached the live Q&A portion of today's webinar. We've received a lot of great questions, so we'll try to get through as many as possible. The first question is: what languages are supported for live captions, and how are multi-language events supported?
Mohammed Sabbar: I can take that. In terms of which languages are supported for live captions: if you're looking at human-generated captions, there's a certain number of languages we offer, the more common ones being English, French, Spanish (both EU and Latin American), and Brazilian Portuguese; we can do German, Italian, some of the more common languages. Beyond that, in terms of machine-translated captions, we can do pretty much any language out there. As for the second part of the question, how multi-language events are supported, my understanding is you mean how Ai-Media supports these events as they're taking place live. We have someone from our team on hand to support you throughout these events, providing assistance and explanations of how things work, and while the event is being delivered, someone is on hand to support you through any issues or concerns you might have at the time.
Regina Vilenskaya: Thanks, Mo. And what can you offer in the way of translation from Quebec French to Canadian English and the other direction as well?
Eline Henriksen: I can take that. In terms of translation, there are many ways to do it, and we operate with a couple of different solutions. One of them is simultaneous interpretation: using audio interpretation from one language to another and embedding that onto the live stream. A second solution is a machine translation workflow, where the source language is captioned with human-generated captions and a machine translation tool is layered on top to produce the target language or languages. And lastly, we also have a fully human-translated captioning solution, which utilizes interpreters and captioners in the target language. So there are different quality tiers around this. And just jumping back to how multi-language events or deliveries are supported, in terms of how it actually works from an Ai-Media perspective: the stream is sent to Ai-Media, and the captions, the overlay, the interpretation are added.
The video feed is then held up for a couple of seconds to sync the audio and the caption output, and then pushed out to the stream destination.
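A conceptual sketch of that hold-and-sync step: the video is buffered for roughly the captioning latency so the captions line up before the combined output is pushed to the destination. The two-second figure and the frame-based model are assumptions for illustration only.

```python
from collections import deque

FPS = 30
CAPTION_LATENCY_SECONDS = 2.0                  # assumed; varies by workflow
DELAY_FRAMES = int(FPS * CAPTION_LATENCY_SECONDS)

buffer = deque()

def on_video_frame(frame, emit):
    """Hold each frame until the caption text for that moment exists."""
    buffer.append(frame)
    if len(buffer) > DELAY_FRAMES:
        # Captions produced ~2 s ago now refer to this frame, so the
        # overlay or caption track can be attached before the frame
        # is sent on to the stream destination.
        emit(buffer.popleft())
```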
Regina Vilenskaya: Someone says that they're interested in learning ways to provide captions to Zoom and YouTube simultaneously. Could you please let us know how that can happen?
Mohammed Sabbar: Yeah, again, I'm happy to take this. There are a few different ways we can do this. Probably the easiest and least confusing is captioning directly into Zoom via our API integration, and then using the custom live streaming tool within Zoom itself to distribute that out to YouTube; that's the more commonly used one. But again, we can follow up on this afterward and provide you with the different potential solutions and workflows for this.
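For the Zoom leg of that workflow, Zoom's third-party closed captioning integration exposes a caption URL that the meeting host copies from Zoom; caption text is POSTed to it with an incrementing seq parameter. The sketch below shows the general shape; the URL is a placeholder, and the exact parameters should be checked against Zoom's current documentation.

```python
import requests

# Placeholder: the token URL copied from Zoom's closed caption settings.
CAPTION_URL = "https://wmcc.zoom.us/closedcaption?id=...&ns=...&token=..."

def send_caption(text: str, seq: int) -> None:
    """POST one caption line to the Zoom meeting's caption endpoint."""
    resp = requests.post(
        CAPTION_URL,
        params={"seq": seq, "lang": "en-US"},  # seq must increase each call
        data=text.encode("utf-8"),
        headers={"Content-Type": "text/plain"},
        timeout=5,
    )
    resp.raise_for_status()

send_caption("Welcome to the webinar.", seq=1)
```

Zoom's custom live streaming service then forwards the meeting, captions included, to YouTube, which is configured in the Zoom UI rather than in code.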
Regina Vilenskaya: Matt asks, "Are there any solutions that help in dealing with jargon and acronyms specific to an industry? Automatic captions do not seem to deal with that very well."
Eline Henriksen: It's a really good question, and the answer is yes, with both the Lexi solution and the Smart Lexi solution that Jared touched upon earlier. Lexi, our automatic captioning solution, allows you to add a dictionary to the workflow to help recognize specific words. Our Smart Lexi solution is a groundbreaking, innovative solution that achieves greater accuracy than the standard Lexi model; it brings together the expertise of Ai-Media and the technology expertise of EEG, and utilizes human-curated dictionaries and topic models developed by the specialist teams here at Ai-Media. So yes, that's absolutely possible.
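As a toy illustration of why such dictionaries help, the post-processing pass below corrects jargon that a generic ASR engine tends to mangle. The entries are invented, and note that real topic models (as in Lexi and Smart Lexi) work inside the recognition step rather than as a find-and-replace afterward, so this is only a conceptual stand-in.

```python
import re

# Hypothetical corrections for industry jargon a generic ASR engine
# might mis-hear; a real dictionary would be curated per event/topic.
DICTIONARY = {
    "eye cap": "iCap",
    "falcon encoder": "Falcon encoder",
    "c e a 608": "CEA-608",
}

def apply_dictionary(text: str) -> str:
    """Replace known mis-recognitions with the correct terms."""
    for wrong, right in DICTIONARY.items():
        text = re.sub(rf"\b{re.escape(wrong)}\b", right, text,
                      flags=re.IGNORECASE)
    return text

print(apply_dictionary("captions flow over the eye cap network"))
# -> "captions flow over the iCap network"
```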
Regina Vilenskaya: Can the Falcon do more than just RTMP streams?
Mohammed Sabbar: Yes. We have other workflows for various streaming protocols: SDI, IP, and others. It just depends on the type of streaming format you need, and it's something we can definitely explore further.
Regina Vilenskaya: Is Falcon outputting CEA-608 or 708, or both? I've gotten duplicate caption services on Brightcove, seeming to indicate the presence of both 608 and 708 services.
Mohammed Sabbar: I'll take this one also. 608/708 is the caption track we usually send. I do know from previous events we've covered that go to Brightcove that, for some reason, it shows CC as the caption track and then another option beneath it as a dropdown for you to select the same caption track. Brightcove are aware of this issue, and it's something they're exploring.
Regina Vilenskaya: Serena had asked, what are the different accuracy rates for the three automatic captioning products, or, I'm sorry, live captioning products?
Jared Janssen: Yeah, it's a good question. Our premium human-delivered captions traditionally fall in that 98.5 to 99.5 percent range. We developed Smart Lexi as the mid-tier option between Lexi, our ASR, and premium human-delivered. The biggest factor for all events, no matter what you're using, is good quality audio. When I touched on the pivot to work from home, a lot of times we saw a decrease in audio quality: unstable internet, poor or no microphones for events, or even onsite, where people wearing masks can make audio difficult to pick up. It all goes back to audio; good quality audio is the most important piece. So going back to Smart Lexi, we've gotten very good results, a lot of times coming in at 98 to 98.5 percent on a lot of NER scores, and we're really pleased with that. Lexi is going to be a few ticks down from Smart Lexi. So again, it's three tiers: Lexi, Smart Lexi, and then premium human-delivered.
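The NER scores Jared mentions come from the NER model for live captioning quality, where the score is computed from the caption word count N, weighted edition errors E, and weighted recognition errors R. A minimal version, with invented numbers for illustration:

```python
# NER model: score = (N - E - R) / N * 100, where N is the number of
# words in the captions, E the weighted edition errors, and R the
# weighted recognition errors.

def ner_score(n_words: int, edition_errors: float,
              recognition_errors: float) -> float:
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# e.g. 1,000 captioned words with 8 weighted edition and 7 weighted
# recognition errors scores 98.5, in the range quoted for Smart Lexi.
print(f"{ner_score(1000, 8, 7):.1f}%")  # 98.5%
```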
Regina Vilenskaya: Thank you, Jared. Matthew asks, "For multi-language captioning, how does latency come into play? In other words, displaying the translated text in sync with source text. Various languages use different quantities of words to say the same thing."
Eline Henriksen: It's a really good question. In terms of latency: if you're sending a stream and then pushing it out, the captions will have a longer latency if you're adding translation, and it really depends on which workflow you've opted for. A semi-automated versus a fully automated versus a fully human solution will all have different latencies. So what we do is this: the stream is sent to us, we hold the feed and the captions, and then send it out so the output is synced up before it's pushed out, so the audience has the best viewing experience possible. I do realize the question noted that different languages use different quantities of words for the same thing, and that's a really good point. This also depends on whether you're using closed captions or open captions. If you're using closed captions with multiple caption tracks to the same destination, the sync will be the same throughout, whereas if you're using open captions with multiple stream destinations, you can hold each stream by whatever latency is needed to sync up the captions. In terms of pace of speech, there will be a slight difference, but it will be synced as much as we can manage.
Regina Vilenskaya: Thank you. What has been the evolution of multi-service output in Falcon? I see new options for additional CC services for outputs. Can I output two additional CC services from one Falcon instance, or do I need more than one license? How does that multi-service tie into my iCap account and Access Code setup?
Jared Janssen: It's a good question. When you look at multiple CC tracks, a lot of times whatever platform you're pushing to will have limitations: maybe it only allows for one caption track, maybe two, maybe four, so it does vary. Falcon traditionally will allow for four, but we're seeing that it's the end platform's side that really dictates how many CC tracks are available and visible to the end-users. We do have clients that use multiple Falcon channels every day, and that's what's nice about Falcon: it allows you to scale. If you have a lot of events, you can scale it. You can have multiple Falcon licenses, and each license comes with the ability to add more caption tracks. Any further questions on that, I'm happy to answer.
Regina Vilenskaya: Thanks, Jared. With audio interpretation and ASL/BSL overlay, what is your workflow and signal flow, and how does that integrate into my existing stream workflow?
Eline Henriksen: The beauty of this is that it doesn't necessarily impact the existing workflow that production companies, media companies, or corporates already work with. How it works is: you send us the stream destination details, the stream comes to us, we embed the audio interpretation, ASL, BSL, ISL, whatever sign language and language pairs are required, and then it's pushed back to the stream destination of your choice.
Regina Vilenskaya: Crystal asks, "What technology solutions are there to capture clear audio in on-campus classes where it involves mainly group discussion? We have tried various conference microphones, but they are not able to pick up enough for the remote captioner to provide captions. We're keen to use Ai-Media more, but this is a stumbling point for us. I'd be keen to hear how other institutions have approached this."
Jared Janssen: Yeah, it's a great question. When you're looking at classrooms in the education space, they can vary: anything from large lecture halls where you may have a couple of hundred students, to small classrooms with 15 or 20. And it's interesting that this particular question is about group discussion. So I'm imagining a large conference room or lecture hall with a lot of overlapping speakers. It goes back to clear audio being optimal for a great outcome on the captioning end; when you have background noise and overlapping speakers, it can really make things difficult, and even in small classrooms you have the same thing. Often the instructors will wear lapel microphones or use an external microphone to at least get good audio quality from the instructor presenting for most of the class, but group discussions can be very difficult. So I'd be happy to chat further on that one to see if we can't come up with a good solution to offer captions in those large group-discussion situations.
Regina Vilenskaya: Can you show captioning in multiple languages on the screen at the same time?
Mohammed Sabbar: Yeah, I can take this. I assume the question means having two different languages on the same screen at the same time, and yes, we can do that. We have done it before for certain events; I can recall doing French and EU Spanish on the same screen at the same time. The only caveat is it would have to be open captions, which are embedded onto the screen, rather than closed captions.
Regina Vilenskaya: Thank you, Mo. Whereabouts are EEG and Ai-Media on speaker recognition? Can you do the double chevrons or preloaded speaker names?
Eline Henriksen: Yeah, absolutely. This also ties back to which solutions are automated, semi-automated, or fully human-generated. For the Ai-Media human-generated captioning solution, we utilize speaker names: we prepare those ahead of the session so that whenever there's a new speaker, there are also speaker tags. For the automated solutions, at the moment it depends on whether the solution is able to recognize that there's a new speaker or not.
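As a small illustration of the preloaded speaker names El describes: the double chevron (>>) is the conventional caption cue for a speaker change, optionally followed by a prepared name. The names and IDs here are examples only.

```python
# Speaker names prepared ahead of the session, keyed by example IDs.
SPEAKERS = {"EH": "Eline", "MS": "Mo", "JJ": "Jared"}

def tag_line(speaker_id: str, text: str) -> str:
    """Prefix a caption line with the >> speaker-change cue and name."""
    name = SPEAKERS.get(speaker_id)
    return f">> {name}: {text}" if name else f">> {text}"

print(tag_line("EH", "Thanks for joining us."))
# -> ">> Eline: Thanks for joining us."
```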
Regina Vilenskaya: Thank you. I'm just looking through the list of questions, and I think that's about it. I'd really like to thank everybody for joining us today; we had a great crowd for this event. Thank you to El, Mo, and Jared for the time you've taken to share your knowledge about live stream captioning, and a special thank you to the captioning team behind the scenes who made sure this event is accessible. If you have any other questions about EEG, Ai-Media, or any of the topics we discussed today, you're welcome to reach out to us. Thank you all again, and we hope to see you at our next webinar.
Eline Henriksen: Thank you, everyone.
Mohammed Sabbar: Thank you.
Jared Janssen: Thank you.