On February 3, 2022, we hosted Live Broadcast Subtitling | EMEA Market. This webinar focused on the EEG and Ai-Media acquisition, live broadcast subtitling trends, the architecture of iCap, and considerations for insourcing vs outsourcing subtitles.
Featured speakers Keith Lucas, Senior Sales Manager at Ai-Media; James Ward, Chief Sales Officer at Ai-Media; and Bill McLaughlin, Chief Product Officer at EEG and Ai-Media shared the ways broadcasters can utilize subtitling solutions to power content accessibility.
Live Broadcast Subtitling | EMEA Market • February 3, 2022
Look to EEG and Ai-Media for the latest closed captioning news, tips, and advanced techniques. To find out about upcoming webinars, visit here!
Transcript
Regina Vilenskaya: Hi everybody and thank you so much for joining us today for Live Broadcast Subtitling | EMEA Market. My name is Regina Vilenskaya and I'm the Marketing Lead at EEG. Joining me today are Keith Lucas, James Ward and Bill McLaughlin. Keith is the Senior Sales Manager at Ai-Media, James is the Chief Sales Officer at Ai-Media, and Bill is the Chief Product Officer at EEG and Ai-Media. Today you'll hear about the reasons behind and outcome of EEG and Ai-Media coming together, live broadcast subtitling trends, the architecture of iCap, a network trusted by many broadcasters, and insourcing versus outsourcing. At the end of the webinar, we will have a live Q&A session where you can get your subtitling questions answered. Now, I would like to welcome Keith Lucas, James Ward and Bill McLaughlin to kick off Live Broadcast Subtitling | EMEA Market. Welcome, everyone.
James Ward: Thanks, Regina. And yes, welcome, everyone, to today's webinar. We're really excited to have you all here, and thanks to so many of you for signing up to talk about what is a really exciting time in Ai-Media and EEG's journey, but also in the live subtitling industry. We're here today to talk to you about why Ai-Media and EEG have come together. For those of you who don't know, Ai-Media has been providing high-quality captioning services to the broadcast industry for almost 20 years now, and in the EMEA region we've been here for almost 10 years. We've seen a rapid uptake in the need for live subtitling. As we've seen in the industry, there is a huge amount of live content being distributed, on multiple platforms, multiple devices, and through different mediums for consuming content. And what we've all realized in the last two years, especially, is how important it is to subtitle that content. Not just for access for the deaf and hard of hearing, but also access to more audiences, more regions, more languages, and the different ways we've needed to communicate with a broader audience as the world has become more remote.
And the exciting part of bringing Ai-Media and EEG together is that we now have more solutions, more technology, to actually help you do that. So today, we're going to talk through some of those pieces of technology and those products for all of these new ways that people are consuming content. This is really an opportunity for us to simplify, and make more available and scalable, the technology that has been servicing this industry for a number of years. I'm delighted to be here, there's more to come, and I will introduce my colleague, Keith Lucas, who will tell you a bit more about why he joined Ai-Media and what he's seen in the industry.
Keith Lucas: Lovely. Thanks, James. Welcome, everybody. First of all, thanks very much to the wonderful number of people who have joined this. There are over 300 people signed up to join us, so that's a great start. My name is Keith Lucas. For those of you who don't know me, I've been with Ai-Media since December, and as James said, I'm going to explain why I joined and the reasons behind that. To give you a little bit of backstory on me, I joined the broadcast industry back in 2000, 22 years ago now, and spent almost 21 of those years with the same company, basically selling subtitling. So I'm very familiar with the broadcast market and the subtitling segment of it. Back then, when I joined in 2000, it was a lot simpler. Very hardware orientated, pretty much a single box to do everything, and those boxes were in the broadcast chain. Microsoft products weren't always in the broadcast chain back then, so it was a very different time.
And obviously, things have changed enormously along the way. European broadcasting is slightly different to the US market, and I learned this when I was selling subtitling transmission systems to the US market, to the international broadcasters who were broadcasting their content out into Europe. Subtitling was very key: if you want to sell that premium content across Europe, you have to subtitle it in different languages. And that's when I first became aware of EEG and all the wonderful things they were doing. Now, back then, the workflow was fairly rigid and simple for the particular file-based systems I was selling at that time. But of course, the broadcast market has been changing dramatically with the impact of IT infrastructure. With IP, obviously, we saw the transition from SD to HD. So we've seen these trends coming into the broadcast market before and the impact they have. Let's not even talk about 3D and that little fad for a short period of time.
So there's been change, but the change has accelerated, and the whole landscape of broadcasting has changed a great deal as the IT influence has become more apparent. When I was selling domestically in the US, I met EEG, and the guys there, Bill, Phil and Eric mostly, and we'd meet up at trade shows. And if I was in a broadcaster, I'd always see EEG equipment; it didn't matter where in the States I was, I always saw EEG. They had this market sewn up. The company I was working for at that time never really bothered to try and compete; there was no point. They were the de facto, the industry standard. And as I got to know the guys, and we talked over dinners all over the world at various trade shows, NAB, IBC, we'd talk about technology, about the evolution of the market and what have you. I always liked the way they looked at things, the way they were excited and passionate about the market they were in and about their company. It was clear and evident to me that they very much had their finger on the pulse, very much customer relationship-driven, and that's what I enjoyed most about selling in the broadcast market.
So that admiration had always been there. And in the back of my mind, I thought, one day I wouldn't mind working for these people, but they'd need to expand and play on a bigger platform, into Europe, Asia and beyond, to play on a global platform. Then last year came the acquisition of EEG by Ai-Media; they'd been working together for a while, and that synergy had come together. So that got me thinking and got me excited about the possibilities. And in December, that possibility came true, so here I am. Now, obviously, one of the other key influences in the broadcast market is us embracing the cloud, and all that it can bring. It's a maturing area, it's got some way to go, it's got some limitations. Customers and suppliers have made mistakes in the way we've perhaps used it, but that's now settling, and I think a lot of people see the benefits of it. So my conclusion is that in the domestic US market, EEG were very much the kings, they were the go-to people.
You can't just take that American domestic market, expect it to blanket Europe and go, well, that's going to work. So we're still on a bit of a learning process and we're still understanding, but we've got some great products and services that I think are going to be of benefit. Now, when we're talking about ASR, I've been around the ASR technology for a little while. We know it's coming. It's maturing, it's getting better. The main concerns about automatic speech recognition are security, latency and accuracy, and I think that's very much where EEG is focused: on getting that right, on getting that technology right. But one of the main reasons I'm so excited about where I am now, and why I want to get out and talk to everybody about what we're doing, is that I think services companies get to know their customers better than product people. Product people, and I know I'm guilty of this, go in with the features and benefits of a product and say, this is my product, this is what it does, this is what you put in, this is what you get out.
And you don't really take that much notice of the customer's workflow, because you know that that box can do what the customer wants, and you sort of go, yeah, but that's the box. Well, we could think about that, maybe we could tweak it and develop something, but this is the box I've got to sell today. Now, with EEG, I don't see so much of that approach. I see: these are the products we've got, these are the systems, the tools and technology; how are you going to use it? How would you want to use it? And those little nuances are being embraced, which is fantastic. But from the experience of being a service-based company, I think services get much closer to their customers. And I'm seeing this now, since December, in the way that the rest of the Ai-Media team I'm in talk to the customers and understand what they're trying to achieve. What are they doing? How are they doing it? Where's this going now? Have you thought about this, have you thought about that?
There's a little bit of educating the customer sometimes as well; with these live events and what have you, there are things they just haven't thought of. So services companies tend to have a slightly more open-minded approach, where a product-based approach revolves around the product. Well, you put those two together and you've got a fairly powerful combination. And that's why I'm excited, not just because I love talking to people. Unfortunately, we haven't had much opportunity to do that over the last couple of years, with trade shows not being put on. Being banged up in an office at home for two years is not my natural environment.
And in fact, this is the first webinar I've ever done. So if I'm overexcited and waving my hands about, I make no apologies, I don't get out that much. I'm excited about what we can deliver. I'm very excited to get out there in the market as we come out of the awful pandemic that has affected all of us. But let's look at the positive bit: it has changed the market again.
It's accelerated the technologies we're looking at. Everybody's using these mediums and getting more used to them. And captioning is so important, especially for live events. So I want to get out there into the marketplace and bring that enthusiasm and energy. And I'm sure that between me, the tools that I have, and the confidence that we have, we've got the right product at the right time. It's an exciting place in the market. I'm going to try and stop talking, James. Have I missed anything?
James Ward: No, Keith. That's exactly right. And, you know, I think we spoke about this for a long time leading up to you joining Ai-Media: that synergy and coming together of two businesses that looked at broadcast in slightly different ways, services and technology, but always with the customer at the forefront of the conversation. People often ask, when an acquisition is made, what's the reason, why now, why not earlier? And I think timing is a great thing. As you said, cloud has been around for a while, and it is maturing now. We are seeing advancements in all of the areas you touched on, and the key one, amongst all of those that are equally important, is accuracy. Can this actually deliver what we need in order to satisfy our customers? And what's exciting right now is that we have the experience of captioning delivery from the Ai-Media team.
That experience, built up over many years, of how to curate words and dictionaries, and merging that with market-leading technology, is really why you and I sat down and are both equally excited about the opportunity. When we're speaking to our customers now, it's that new, innovative approach which is really going to help scale as more content gets pushed out to the market. So yeah, I absolutely share your enthusiasm and I think this is a really exciting moment for us.
Keith Lucas: Great. Well, I think that's a good time to introduce Bill. Bill, do your stuff. Tell us all about it.
Bill McLaughlin: Thank you. So, yeah, I'm Bill McLaughlin, I am the Chief Product Officer at EEG and Ai-Media. You know, I'm really happy to be the one helping Keith realize that products can be for a customer and about a customer, and can actually take the needs of the actual market you're selling into account. When you build a product, it's sure better than finding some technology at the junkyard and just praying for success. So, I'm glad to be part of the change. So what kind of customer trends are we actually trying to respond to in the broadcasting market? I think what we see broadly is that the business needs motivating our customers' entire business surround a transition from traditional linear broadcast, where maybe there's one or two channels going out over the air, cable, and satellite distribution, to a world of content that's a lot wider and more distributed. So we're going to have more live content, and we're going to be doing the live content through OTT channels.
And because we're producing more content, and there's really the same number of human eyeballs out there in the world to compete for, one of the pressures we're going to feel is: how can we produce more and more content with a similar or only slightly reduced budget for production? Anything that helps us produce more live content, to get more fresh views without greatly expanding the budget, and without standing up a brand new, heavyweight, capex-intensive facility every time we want to produce another channel of content or another sporting event. Something like an Olympics or a tennis tournament, where you have a short-lived opportunity to create dozens of streams of unique, engaging content during the course of the event. How do we do that? And the answer across the broadcasting world is really: we're going to focus on software, virtualization, IP, and cloud.
And this is going to give us the flexibility to spin up and down and to change our capex costs. This is going to mean that we don't have to rent as much physical facility space, and we can do more of our monitoring and control through software rather than scaling up all of our operations and control teams. So it's a different kind of broadcasting, and this is the overall world that our subtitling solutions need to live in. We want to have subtitling on all of this content. In some cases, that's going to be required. In some cases, it's not required, but it's still an important nice-to-have: it's important for accessibility, and it's important for expanding our audiences. Things like automatic translation into other languages that were not really possible or widely done in broadcasting are very easy and accessible in OTT. So these are the kinds of opportunities we're helping our customers explore.
The main vehicle for that is a system called iCap. And iCap is a concept that originally grew out of problems in the live captioning industry with access to the video program from a remote captioning team. In the early days of live captioning in the US, this was often done by a dial-up phone modem. The dial-up phone modems stopped working over time because the telephone systems went digital and no one had these POTS lines anymore. It became a worse and worse solution and a more and more outdated one, and it was not at all secured or encrypted. iCap really grew up around that problem statement: can we make an IT-centric, encrypted, software-secured way for remote live captioners to make their contribution into the broadcast plant and add their live captions? That's been very successful and it's spawned a lot of new opportunities, because it's a natural fit for a cloud or virtualized workflow. Because of course, when you're spinning up workflows in the cloud, you don't want to be dependent on having a specific static IP address that captioners need to access, right?
Because it's dynamic, you're spinning up more channels, you're changing the IP addresses, right? So you can't have a master inbound IP address for captioning, and you obviously can't have a dial-up modem. iCap fulfills that need very naturally within a virtualized, flexible system. It's also going to be the same system as you evolve your video standards. So it abstracts away from your captioning suppliers or captioning staff questions like: is this SDI video? Is it a compressed IP stream like a transport stream? Is it an uncompressed IP stream like SMPTE 2110? iCap offers a consistent interface where you put the terminal device, like one of our 492 hardware encoders or the Alta software encoder, into the plant, and essentially it presents the same interface to captioners working through it. You can mix and match; essentially, the captioner doesn't need to know and doesn't need to care. You can use the same approaches whether that's human or AI, and it's agnostic to video standard.
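To make that abstraction concrete, here is a minimal sketch of a transport-agnostic encoder interface. The class and method names are hypothetical illustrations, not EEG's actual SDK; the point is simply that SDI and SMPTE 2110 variants present the same captioner-facing surface.

```python
from abc import ABC, abstractmethod

class CaptionEncoder(ABC):
    """One captioner-facing interface, whatever the video transport.
    Hypothetical names for illustration only, not the iCap SDK."""

    @abstractmethod
    def audio_proxy(self) -> bytes:
        """Return the low-bitrate proxy sent out to the remote captioner."""

    @abstractmethod
    def insert_caption(self, text: str) -> None:
        """Insert returned caption text into the outgoing signal."""

class SdiEncoder(CaptionEncoder):        # e.g. a 492-style hardware encoder
    def audio_proxy(self) -> bytes: ...
    def insert_caption(self, text: str) -> None: ...

class Smpte2110Encoder(CaptionEncoder):  # e.g. an Alta-style software encoder
    def audio_proxy(self) -> bytes: ...
    def insert_caption(self, text: str) -> None: ...
```

A human captioner, Lexi, or a third-party service would only ever see those two operations, so swapping video standards or caption suppliers never changes the integration.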
And importantly, as we'll go into in more detail, it's also agnostic to who performs the captioning and how. I think that can be a big advantage as we move into content production that isn't one-size-fits-all, that has bigger audiences and smaller audiences, bigger budgets and smaller budgets. iCap works with about 100 global human services partners that provide captioning in a variety of languages, styles, and areas of specialized knowledge. Obviously, Ai-Media is one of the major partners and, now more than ever, is able to deliver a really high-quality human service over iCap. iCap also supports an AI service in Lexi and a hybrid human-AI service in Smart Lexi, and these are all usable interchangeably by the broadcaster. And I think that's a really important point: you don't need more or different infrastructure to use different vendors or different quality styles of captioning. The basic setup workflow of iCap is also really lightweight.
And this is something that I think hasn't really sunk in and transformed the broadcast markets in the UK and Europe as deeply as it has, say, in the United States and Canada yet. Essentially, the caption inserter device provides everything you need to send a proxy video, encrypted and low latency, out to a remote captioner. Whether that's a person at another location in the building, working at home, or working with a vendor close to home or internationally, or whether it's an automatic system like Lexi, the proxy feed is built into the caption encoder device, and the captions will come back in a single managed transaction. The transaction is outbound to the relay cloud that helps you connect to people on iCap, and it's designed to work on a conventional, reliable internet connection. It's fairly low bandwidth; it's going to use less than a megabit per second per channel. The connection needs to be reliable, it needs to work when you need captioning, but it doesn't need to be super fast, it doesn't need to be super low latency, and it doesn't need to be dedicated dark fiber to a caption facility.
It's just your standard solid connection to the internet. And you can whitelist that to only go outbound to the iCap relay servers you're trying to use in your region; there's no inbound connection. You don't need a public IP address at all, and you don't need to distribute changes in that to the vendors. All of the information about the identity of your encoder that you need to share with the caption vendors is in your relay codes: you'll have a secure relay key, and you can regenerate that as you need to. Essentially, permission to use the relay code is signaled by both the captioner having that code and by you having granted permission, through the iCap encoder system, to their company name. So it's an authentication-controlled system; that's what lets a captioner use your encoder for captioning. Essentially, it's a one-device system.
And it's intended otherwise to work with your off-the-shelf components: it sits in the video flow and in the network flow.
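As a rough sketch of that outbound-only connection model: the endpoint, port, and message format below are hypothetical placeholders, not the real iCap protocol, but they show why the plant needs only a single egress rule and no inbound access.

```python
import json
import socket
import ssl

# Hypothetical placeholders -- not real iCap hosts, ports, or codes.
RELAY_HOST = "relay.icap.example.com"
RELAY_PORT = 443
RELAY_CODE = "XXXX-XXXX"  # the secure relay key shared with a caption vendor

def dial_out() -> ssl.SSLSocket:
    """Everything is outbound from the plant to the relay cloud: no inbound
    listener, no public IP, so the firewall can whitelist one destination."""
    ctx = ssl.create_default_context()
    raw = socket.create_connection((RELAY_HOST, RELAY_PORT), timeout=10)
    conn = ctx.wrap_socket(raw, server_hostname=RELAY_HOST)
    # Authentication is two-sided: the captioner presents the relay code,
    # and the encoder owner must separately have granted that company access.
    conn.sendall(json.dumps({"relay_code": RELAY_CODE}).encode() + b"\n")
    return conn
```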
So what are some of the advantages of that, understanding that it might be a somewhat different IT model than has been used in some of these installations in the past? The AV monitoring is integrated, rather than having something like a separate rack of proxy-generating cards, or a separate way to transport them, and needing to upgrade and maintain that. And is that the broadcaster's responsibility? Is that the caption service company's responsibility when it's a vendor? This has been a pain point; we've seen that at Ai-Media. One of the real things about EEG that I think any caption service company will see is: wow, that's easy. When a customer says we have EEG, we're able to just start service. We don't really need a project manager and a six-month implementation plan; we can just start service.
So that's going to save a lot of costs on both sides when you really get into that mindset, and it makes it easy to try different services and use what's going to work right for you. The IP routing, again, is integrated. We run through InfoSec and security audits with customers all the time, and we're not going to have a problem getting this approved with your IT team. Scrutiny is definitely encouraged. But you don't need to supply and inspect a separate VPN system for captioning, and you don't need to allow inbound access to anybody. It's an outbound-based system, and the encryption and security model is built in. We've talked about vendor flexibility, and what does all this lead to? Ultimately, a decrease in cost. You have a virtualizable system, it's one component per channel, and it doesn't require extra operational costs outside the caption service you're using. And it's going to provide you with the ability to have the vendor that's right for you, at the cost that's right for you, actually supply real-time caption services.
So how does this all fit together? I think what we're seeing with customers is that it's not all about captioning, right? For us, that can be the fundamental customer insight when you talk about a product: it's not all about captioning. It's actually about the business need to produce more content through these IP and cloud systems in an affordable way, and what needs to happen is that you can supply high-quality, broadcast-standards captioning while you fulfill those needs. You don't want a bunch of converters down to SDI, and you don't want to drag around a bunch of old-fashioned kit racked somewhere and not be able to have it in AWS. So we're there for you on that, and I think that's the part of the solution that is absolutely needed to move forward with the broad business goals of the broadcaster. At that point, what you actually have is something that allows you to move at your own pace as you examine caption services.
And I think we all expect to see tremendous change in the adoption and quality of automatic speech recognition, AI captioning, and modeling over, say, the next five to ten years. For some broadcasters, the time to put that on air is now, and in a lot of countries, for a lot of types of content, that really is working. Other broadcasters are not convinced yet, maybe, but they're looking to use it on something like an OTT channel or late-night content that's less heavily regulated in many countries. This is all going to be supported. Essentially, iCap is giving you a way, in that type of market, to experiment with automatic captioning, and to do that without really rocking the boat on infrastructure that also supports human captioning, in-house or from vendors. Essentially, you can use the right captioning style for each type of programming that you're responsible for having captioned.
So, Ai-Media increasingly has been moving towards what we look at as a three-tier system for live broadcast captioning. In some programs, the difference between the three tiers may be very small or non-existent. When you have newsreaders speaking about very well-known topics, speaking clearly, and in an accent that's well understood by automatic speech recognition programs, this is an easy application, and it's going to work really well, even using basic Lexi ASR. We have a well-tuned engine, and we have modeling for general news stories that are being covered across the world and across target regions. Now, for some programs, you're going to have, for example, a big guest roster and you want to make sure you get all the guests' names right, or you're going to have sports with a roster full of athletes from around the world. That's where you start to move into saying, let's use our Smart Lexi solution.
The Ai-Media Smart Lexi solution is geared towards offering a middle cost tier that uses ASR technology to reduce the cost of live caption service a lot, but also uses a human service approach: working with the customer on vocabulary and scheduling, providing monitoring, providing an audit report, and giving the customer that full-service experience at a reduced cost compared to the premium human service. Finally, for a lot of styles of captioning, when it's a flagship program, a difficult sport, or when you might be throwing different speakers, different accents, and different subjects at it nonstop, the flexibility of one of the skilled premium human operators is often ultimately what you need. That's the thing that, above all, we as people do better than computers, I think: that flexibility, that adaptability. The human captioners are experts at adapting to the circumstances, paraphrasing when necessary; they know how to get the meaning out to the customer even when verbatim transcription is nearly impossible.
And that's really the skill, the premium skill, that's enabled. So we have this ladder of different quality and pricing styles, and you can try this on different content. And again, with iCap, this is all actually going to be really simple. You can pay for Lexi in relatively small increments, which means you can test this and run it on certain programs. It's an operational decision that no longer has to be tied to a technology and infrastructure change, and I think that's the real value there. So, James, you've been involved in more of the sales and even the operational aspects of the service; I'm going to give you an opportunity to add on Smart Lexi there.
James Ward: Yeah, I think you captured it well, Bill. The first point, as Bill just touched on, is that this is the trend of where the industry is moving, right? ASR technology in the last few years has really accelerated. And the iCap infrastructure is the foundation that this sits on. However fast that technology improves towards being able to caption any kind of content, you already have the infrastructure there to do it. I think that's the real key message here: we have a platform now that, as the technology gets better, we can use to distribute whatever type of delivery you like. But what we're really finding here, in this middle column on the image, in the Smart Lexi curation, is the consistency of the captioning. With the right content, there's a real consistency in the delivery and the curation of the dictionaries.
Now, that doesn't mean just sticking a load of words into a topic model and hoping the output will come out correctly. That can actually be negative in terms of the results. So what we've really focused on with the customers already using this service is curating that dictionary over time. And the great thing about the Smart Lexi product is that it gets better over time: the more we curate the dictionaries, the more we learn about the content, the more noticeable the improvement of the service is. So for anyone looking at ASR as an option, or even just looking to dip their toes into the ASR field, this is, I guess, a safer way of doing it, without feeling too overwhelmed by questions like: is ASR actually going to deliver what I need? Is it really going to be able to capture the content? This levels that up a bit and really gives you that security around ensuring that your brand name, speaker names, and presenter names come out correctly.
And we're seeing some fantastic results. Really fantastic results. One thing Ai-Media has done for many years is always independently audit our captioning, and we're doing exactly the same on our Smart Lexi captioning. We're looking at the NER model, which is a globally recognized quality methodology for live captioning, and we're seeing that Smart Lexi is meeting the threshold, the 98% that is required in many regulated markets. So we're interested in partnering and working with organizations to really help them become comfortable and help them see the benefits of the technology. And at the same time, with the iCap platform, if you do have a captioning vendor, or if indeed you need a captioning vendor, the options are there to keep your premium content on premium captioning. So, yeah, this is a real game-changer for the market and one that we're excited to, hopefully, talk to more of you about in detail.
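For context on that 98% figure, the NER model scores live captions as NER = (N - E - R) / N x 100, where N is the number of words and E and R are severity-weighted edition and recognition errors. A minimal sketch of the arithmetic (the sample numbers are invented for illustration):

```python
def ner_score(n_words: int, edition_errors: float, recognition_errors: float) -> float:
    """NER live-captioning quality score: (N - E - R) / N * 100.

    N is the caption word count; E and R are edition and recognition
    errors, typically weighted 0.25 / 0.5 / 1.0 by severity. A score of
    98 or above is the usual benchmark for acceptable live captions.
    """
    if n_words <= 0:
        raise ValueError("word count must be positive")
    return (n_words - edition_errors - recognition_errors) / n_words * 100

# Example: 1500 words with 10 weighted edition errors and 14 weighted
# recognition errors scores (1500 - 10 - 14) / 1500 * 100 = 98.4, a pass.
print(ner_score(1500, 10, 14))
```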
Bill McLaughlin: So, I mean, OK, Regina, you're going to introduce our Q&A. Cool.
Regina Vilenskaya: That's right. We are now at the live Q&A portion of this webinar. If you have any questions and have not done so yet, please enter them into the Q&A tool at the bottom of your Zoom window. We have received a lot of questions that I'm very excited for these three speakers to answer. The first one is: can you speak to on-premises versus cloud solutions for live broadcast subtitling?
Bill McLaughlin: Yeah, so we've been mostly talking about the Lexi cloud system here. I think that's the thing that most meets the themes of a lot of broadcasters wanting to do a lot of scaling and looking into working even with video production in the cloud. We do also offer a local Lexi product for live, which is an appliance that's internal to your network and basically operates in a similar fashion, but restricted to the local network and without dialing out to a cloud service. We've mostly seen that model be popular in more of the enterprise or government type of market, where the privacy around the conversations and the data that might be in captioned meetings or training videos is the paramount concern, as opposed to cost or flexibility. That being said, it's a good solution and it can be used for broadcasting; it's something you would host inside your own network.
The updates for things like the vocabulary are on a quarterly basis when you have the local Lexi, so the customer may have to take a bit more responsibility for actually modifying the system for any names, phrases, or other things that they see systematically come up. Because Ai-Media wouldn't be in your system, and wouldn't be able to affect your system, if it was the local system, we don't offer a service like Smart Lexi there; it's really just a completely automatic solution when you're using Lexi Local. So these can both be good alternatives. I think the cloud system is going to be lower cost and more flexible in most cases, and allow you more dynamic improvement in the system. But when security issues or privacy issues are paramount, then Lexi Local is definitely a good thing to look at.
Regina Vilenskaya: Lauren says: we are creating a YouTube video that is 25 minutes long; it will be in English, but we also need to have Latin Spanish captions. We have a real tight turnaround to get this completed. How quickly can this be done? What would the process look like?
James Ward: Sure, I can probably take that one. It depends on whether you're broadcasting live, or if you're recording first and then uploading to YouTube. In the live space, with the multiple language options that we have, we could definitely caption that in Latin American Spanish. On the recorded side, a 25-minute video, with preparation and knowing when we're going to receive the video, could be turned around within the day, as long as we have the resources lined up to caption it. So yeah, it wouldn't be a problem.
Regina Vilenskaya: Someone asks, with recent outages to server farms such as the AWS network taking down many services for several hours at a time, what redundancies do you have in place for failover to maintain uptime on live events?
Bill McLaughlin: Yeah, I mean, it's certainly a challenge, because you have a product that honestly is only as reliable as AWS. I think it's easy for the tech team to assume that's completely reliable; you probably shouldn't rely on that. We have iCap servers across, I believe, four AWS regions right now, and an ability to interoperate those as needed. So even if your preferred capacity is in one availability zone associated with one geographic region, the system does work even if you're transmitting data from, say, the UK to Australia or to the West Coast of the United States. We have pretty good coverage against basic outages like that, so customers can use relays that, even if they're not in their closest preferred geography, will at least provide temporary service when needed. It's definitely the number one issue that anyone providing a service in the cloud needs to face.
I think our historical reliability on this is above the four-nines point, so I think we've done really well for our customers in the past, and it's something we keep trying to improve. One way to also think about the issue is in comparison to what you might think of as competitive services, in a sense. My view is that when you switch to automatic captioning from human captioning, if the automatic system is well architected, you are actually likely to see higher uptime, because I personally do not show up to work with four nines of uptime, right? And within human departments or agencies that provide services, obviously we're always doing the best we can to provide a completely reliable service to our customers, but managing the sick days and the traffic jams and things like that is, in its own way, just as challenging as managing AWS.
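A minimal sketch of the failover behaviour described above: dial the preferred relay region first, then fall back to farther geographies. The hostnames and port are illustrative assumptions, not actual iCap relay addresses.

```python
import socket

# Illustrative placeholder endpoints, ordered by preference.
RELAY_REGIONS = [
    "relay.eu-west.example.com",
    "relay.us-east.example.com",
    "relay.ap-southeast.example.com",
]
PORT = 443  # assumption: TLS on a standard outbound port

def connect_with_failover(regions: list[str], timeout: float = 5.0) -> socket.socket:
    """Return a connection to the first reachable relay region, modelling
    'use a farther relay during a regional outage' as described above."""
    last_error = None
    for host in regions:
        try:
            return socket.create_connection((host, PORT), timeout=timeout)
        except OSError as err:
            last_error = err  # region unreachable; try the next one
    raise ConnectionError("all relay regions unreachable") from last_error
```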
Regina Vilenskaya: I'm in the federal government with a lot of jargon, acronyms, etc. How is the success rate there?
Bill McLaughlin: I think things like acronyms and people's names that are relatively predictable in their occurrence, even when there's a lot of them, form a kind of finite set in your field. That's the kind of thing that, with Smart Lexi or with a dedicated program on your own Lexi Local, you should be able to mostly overcome, you know, with proper names. When you say jargon, honestly, I think sometimes that can be more challenging, because sometimes that can be a whole style of speaking, and it could be hard to list out all of the strange expressions that one of your colleagues might possibly use. So I think it comes down to repeatability. It's easy not to make the same mistake twice if you're dedicated to monitoring and improving the service. The kinds of problems that are hard are things like: many of my colleagues mumble, they have strong accents, they are difficult for other humans to understand.
That's the kind of problem where, if a human from outside of your team would have a tremendous amount of trouble transcribing the conversation with any accuracy, then that probably roughly corresponds to the type of trouble you would have with an AI system.
Regina Vilenskaya: Do your subtitling solutions work on the jumbotron at NHL games, for example, for fans who might be deaf or hard of hearing?
Bill McLaughlin: Yeah, we've certainly done applications like that. That requires an interconnection to the video system, which can be done in a couple of different ways. You can burn captions onto the video screen and just have them accessible on the main picture. We've also been involved in systems where there's a separate ribbon board in the stadium, a text-only ribbon board that, for example, transcribes all the announcements done over the public address. Daktronics, for example, is a major vendor in that space that we've done a number of integrations with. So it's definitely possible, and yeah, a lot of stadiums do that.
Regina Vilenskaya: Someone asks, how can I send a video to iCap? Can I use standard RTMP or RTP encoders, or is it mandatory to use your encoders?
Bill McLaughlin: So, you need to use a product that speaks the iCap protocol, which includes managing the encryption and transactions. EEG supplies an SDK for that, which some other vendors have also incorporated. For example, Imagine Communications has one, Pebble Beach Systems has one, and there are some other vendors in the space that integrate it. We also provide our Falcon product, which we didn't really talk about today because it's less commonly used in broadcast; it's sort of a connector that you can send an RTMP feed to and get it captioned through Lexi or through premium human services. So there are ways to get into iCap without it being only an EEG product. But because it's a full two-way transaction, it's not as simple as just originating a unidirectional audio stream using something like RTP.
Regina Vilenskaya: It seems that your solutions support many languages. What about local languages like Asturian and Catalan in Spain?
Bill McLaughlin: I don't believe we currently offer Lexi in either of those languages. I don't know if James could comment on whether we have providers that work in those languages?
James Ward: Yeah, that's right, we do. We have some subtitling staff who are fluent in those languages. So, yeah, that would be a capability we could provide.
Regina Vilenskaya: For a live TV broadcast, would you need to put a slight delay on your video to sync with the captioning?
Bill McLaughlin: It's a good question. If you wanted the captioning to appear perfectly synchronized with the audio program, then yes, and EEG encoder products actually have a video delay feature built in that you can use that way. I'll say that in most regions, most broadcasters do not do that. What they do is allow the live subtitling to come in as soon as possible, which means it lags the audio program slightly; a typical number is something like three to five seconds. And your choice of workflows on that can matter a lot. If you set up a monitoring feed that's itself delayed before it reaches the captioner, those delays can balloon, and without the right technology solution it really will become a problem. But with a good iCap-centered solution, you should be landing at something like three to five seconds, and usually that's just put to air that way. Although, yes, a matching delay is possible.
James Ward: Yeah, we've seen that delay feature used more in a live streaming environment, so a website or a platform like YouTube and Facebook. But yeah, options are available for that.
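The trade-off both answers describe reduces to simple arithmetic: the net lag viewers see is the caption latency minus any matching video delay. A small sketch with illustrative numbers:

```python
def on_air_caption_lag(caption_latency_s: float, video_delay_s: float) -> float:
    """Net lag between speech and captions as seen on air.

    Live captions typically land 3-5 seconds after the audio; delaying the
    video by a matching amount (the encoder's built-in delay feature)
    cancels that out. Values here are illustrative assumptions.
    """
    return max(0.0, caption_latency_s - video_delay_s)

print(on_air_caption_lag(4.0, 0.0))  # no video delay: captions trail by 4 s
print(on_air_caption_lag(4.0, 4.0))  # matched delay: captions appear in sync
```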
Regina Vilenskaya: What is the recommended tool to use to have multiple video streams going out with a caption track for each stream?
Bill McLaughlin: What is the... I'm sorry, I lost the end of that question a little bit. Could you repeat?
Regina Vilenskaya: Yeah, what is the recommended tool to use to have multiple video streams going out with a caption track for each stream?
Bill McLaughlin: Yeah, I'm trying to consider exactly what the question is trying to do. For example, the streams could have audio tracks in different languages, in which case you would really just caption them separately. But imagine the streams had video and audio that were the same, but we were going to support captions in the primary language of the program on one stream, and then have additional streams that had a translated language as the caption track instead. You might have to do that if you were sending to a receiver that only supported one language of captioning per stream.
So we've worked through some of those types of cases with customers. Typically, you may need to use some kind of restreaming technology, which our services team is familiar with using and guiding the customer through, to actually generate the different streams with different language tracks from a single origin.
But yeah, things like that are possible to do, and I think it's just a case of breaking down which of the tracks are being uniquely subtitled based on their audio, using Lexi or a human captioner, and which of them are translation tracks. Is that being done by a human-based translation approach or by Lexi's automatic translation feature? And what video formats are the streams ultimately in? So there are a lot of workflow choices, but that kind of thing is certainly possible.
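As a sketch of how such a fan-out might be broken down, here is a hypothetical configuration table; the field names and stream identifiers are invented for illustration and are not an EEG or iCap format.

```python
# One origin feed fanned out to several streams, each with one caption track.
origin = "rtmp://origin.example.com/live/program1"

outputs = [
    {"stream": "program1_en", "captions": {"lang": "en", "source": "live"}},
    {"stream": "program1_es", "captions": {"lang": "es", "source": "translation"}},
    {"stream": "program1_fr", "captions": {"lang": "fr", "source": "translation"}},
]

for out in outputs:
    cap = out["captions"]
    # "live" tracks are subtitled from the audio (Lexi or a human captioner);
    # "translation" tracks are derived from the primary-language captions.
    print(f"{origin} -> {out['stream']}: {cap['lang']} captions via {cap['source']}")
```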
Regina Vilenskaya: Someone asks where the subtitlers are sourced from, and whether they can use their own captioners.
Keith Lucas: Yeah, you can. Ai-Media has a wide variety of options in subtitlers and languages, but as Bill mentioned during the webinar, if you have preferred vendors that you're already working with, the solution equally enables you to work with them.
Regina Vilenskaya: Great. So that brings us to the end of the webinar. We received a lot of questions, so thank you to everybody who sent those in. If we did not get to your questions, we will be in touch following this event. I'd like to give a huge thank you to the three featured speakers. Thanks to Keith, James and Bill for sharing your insight and knowledge about live broadcast subtitling and a big thank you to the subtitlers behind the scenes helping make this event accessible. Thank you everybody again, and have a great day.
Bill McLaughlin: Thank you.
James Ward: Thanks, everybody.
Keith Lucas: Thanks.