October 14, 2020

Webinar Replay: Fall 2020 New Products Showcase

EEG

On October 6, 2020, EEG Video hosted a webinar for broadcasters, educators, municipalities, and other content creators. The online event focused on the newest solutions from EEG for making content, meetings, and events more accessible with closed captioning.

Fall 2020 New Products Showcase • October 6, 2020

In this 45-minute webinar, Bill McLaughlin, VP of Product Development at EEG, walked attendees through:

  • An overview of EEG’s closed captioning and subtitling solutions
  • How you can use our products for your captioning needs and workflows
  • The latest closed captioning advancements at EEG
  • A live Q&A session

Featured EEG solutions included:

  • Lexi automatic captioning service
  • Falcon RTMP live streaming caption encoder
  • AV610 CaptionPort caption display generator
  • Alta virtualized IP video caption encoder

To find out about upcoming EEG webinars, as well as all other previously streamed installments, visit the webinars page at eegent.com!


Transcript

Let's get started. Thank you to everyone who's joined. This is part of a series that we've been doing, trying to replace some of the things that unfortunately we haven't been able to do with trade shows like NAB and IBC and all the other events we usually attend in the fall, especially to try and bring some news to all our customers and partners about things that we've been working on at EEG: new product features and all the different ways that we can help you promote accessibility and improve your workflows and your experiences with EEG products.

I am Bill McLaughlin. I'm the VP of Product Development at EEG, and I'm glad to be doing this. This webinar, probably more than any of our others, really covers what I've been doing and what I've been in the thick of for the past six months or so since we last did a webinar. This has been a really, really busy time for us. We've been seeing a lot of customers doing more and more remote events and streaming media, and in broadcasting there's been so much growth in the need for remote production and IP virtualization.

There's certainly been a lot of cost pressure on customers as well: to do things in the most efficient way possible, but also to produce more and more content and to make that content more accessible and more global. A couple of the developments we'll be talking about involve trying, both for accessibility and just for the reach of the programming, to reach people all around the world in as many different countries as possible.

So I'm going to go through a bunch of different products. We're not really going to have live demos as part of this, and, unfortunately, live demos and those types of personalized interactions are something I'll really miss from the trade shows. But please do get in touch with me or anybody on the EEG staff about a more personalized demo, whether that's seeing something working in your own environment or just a more personal version of a webinar demo like this. We'd like to hear from you!

Let me make sure - great, the slides are rolling now? OK. So today we're going to talk about Lexi, our automatic captioning service. We'll talk a bit about new things going on in Falcon, the RTMP live streaming caption encoder. We'll go through the AV610, which is a caption display generator for live events, and Alta, the next-generation virtualized closed caption encoder for IP video: SDI replacement and media production.

And these themes, again, reflect most of what we've seen from our customers over the past two or three quarters: an emphasis on improving workflows and the cost of accessibility, an increase in remote events, the acceleration of AI captioning trends, and a strong focus on what can be moved to the cloud and controlled with an API, aiming as much as possible at a software-focused, virtualized, pretty much automatic media factory experience, and enabling all of that, of course, with video over IP, SMPTE 2110, and all the new standards associated with that.

Lexi

So first we're going to talk about Lexi, and one of our biggest releases here, something we've been working on a lot since around the NAB show timeframe at the demand of customers, involves scheduling of Lexi. What we've done with Lexi previously has been primarily scheduling that gangs off of broadcast automation systems, using GPI contact signaling or else APIs coming out of the broadcast automation. Those are more "now"-based triggers for captioning on programming; they say, we're going into a live show now, please trigger the captioning to start immediately. And with Lexi, that captioning does start within about five seconds of sending a trigger like that.
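
As a rough illustration of what a "now"-based trigger looks like from an automation system's point of view, here is a minimal Python sketch. The endpoint path, field names, and authentication scheme are hypothetical and stand in for EEG's actual Lexi API; consult the real API documentation for the correct calls.

```python
import requests

# Hypothetical endpoint and fields, for illustration only.
EEG_CLOUD = "https://eegcloud.tv/api"   # assumed base URL
API_KEY = "your-api-key"                # assumed auth scheme

def start_captioning_now(access_code: str, topic_model: str) -> str:
    """Ask the ASR service to start captioning immediately,
    the way a broadcast automation system would at air time."""
    resp = requests.post(
        f"{EEG_CLOUD}/jobs",
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "icap_access_code": access_code,  # where captions are delivered
            "topic_model": topic_model,       # custom vocabulary to load
            "start": "now",                   # trigger immediately
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["job_id"]

job = start_captioning_now("STATION-AIR-1", "local-news")
print("Captioning job started:", job)
```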

So it's pretty fast, but we found one place where that doesn't really cover all customer needs: when you're not running something like a broadcast automation plan with an existing schedule, but instead running things that are more like live events or weekly meetings, whether that's a corporate board meeting or municipal and town sessions. These tend not to run off the same type of large scheduling system or off something like GPIs, and we've found that a lot of customers really need to have their tech remember to set up captioning basically right before it's going to happen, and that's definitely a pain point.

And so we've been looking to address that in Lexi. This applies both to the cloud-based Lexi service, where you get the captions delivered over iCap from the cloud-based system and pay as you go on a monthly basis, and to Lexi Local, which is our on-prem system: basically a server that connects to caption encoders and provides ASR captions from there. On Lexi Local this is already available with new demos or with a firmware upgrade, and these features are going to be rolling out to the main cloud site at eegcloud.tv next week.

So what you can do now that the system didn't have before is create what we're calling a Lexi Instance. That's a reusable set of all of the parameters that you use to run Lexi: your iCap Access Code, the caption encoders or Falcons or the video feed that you're using, the custom vocabulary and Topic Model data that you want associated with your Lexi, and all of the caption appearance and style features that you use.

And once you've set up this Instance, which represents a set of parameters for a recurring group of jobs, you can schedule it to run at a future time. That can be a one-off event anywhere from one minute to 90 days in the future. It can also be a recurring event: we have a pretty sophisticated recurrence engine built in to handle meetings, whether that's a weekly meeting, first Tuesdays of the month, more than once a week, or every day at a certain time; some of those patterns are sketched just below.
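
For intuition, here is how recurrence patterns like those are commonly written in iCalendar RRULE notation (RFC 5545). This is illustrative shorthand only; nothing in the webinar says Lexi uses this format internally.

```
FREQ=WEEKLY;BYDAY=TU         -> every Tuesday
FREQ=MONTHLY;BYDAY=1TU       -> first Tuesday of each month
FREQ=WEEKLY;BYDAY=MO,WE,FR   -> more than once a week
FREQ=DAILY                   -> every day at the event's start time
```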

Any of these common recurrence relationships can be put into the schedule, and at that point you'll have the caption job running every time on that schedule. You'll get an email notification a few minutes before it starts, and you'll get another one after the job ends that contains your transcript data, either as a straight transcript of what was said or as a time-based transcript in a format like SCC or VTT that gives you an as-run caption file (a small sample follows below). That can be associated with the video later on in a VOD platform, or you can send it on.
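
For reference, an as-run caption file in WebVTT format looks roughly like this; the cue text here is invented for illustration:

```
WEBVTT

00:00:01.000 --> 00:00:04.500
Good evening, and welcome to the
town council's weekly meeting.

00:00:04.500 --> 00:00:07.200
First on tonight's agenda is
the budget review.
```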

You can do editing, or work with a caption agency partner who does editing, to produce a completely cleaned-up 100% transcript for future use. The basic interface is going to look really familiar to almost everybody, I think, because we've tried to use what you already know and not make this too confusing, not to require advanced knowledge of any particular system; some things, like broadcast scheduling, can be very, very complex.

But the initial implementation of the Lexi calendaring is based on something more familiar, like Google Calendar or the calendaring on a basic consumer system. It gives you the ability to set these recurring events and to view them on a bunch of different bases, and you click into the events when you need more details about exactly what's running in each one, including the more complicated things: the recurrence pattern, the time zone, or specific settings associated with your Topic Models and your email settings, things like that.

With each job, because a lot of times things don't go exactly the way they're scheduled, you'll also have the ability to extend using the web tools as you're going. A common thing would be to realize we're running a bit late and we're going to need to run the captioning for another 5 minutes, 15 minutes, 30 minutes; that can be any amount of time. This works alongside Lexi's existing timeout system, which can prevent unwanted billing when a meeting ends and someone doesn't turn everything off. With the calendaring, you can extend by a fixed amount of time and know that all the billing is going to be limited to, say, within the next 30 minutes or within the next hour, and that takes effect pretty much instantly.

If the captioning event were to end and you realized it shouldn't have ended yet, you could go back to the Instance; thankfully you have the stored settings, so you can start a new job based on the same settings and be back up very quickly. But ideally you would extend the event beforehand if there was going to be any overrun.

Even when you're not using the scheduling, I think customers who are starting jobs from the website are going to have a better experience with the Instance feature, because it makes things a little more like starting jobs from the encoder with broadcast automation, where all of your settings become set-it-and-forget-it in a configuration profile. The way the site has been for starting your own Lexi on eegcloud.tv, you had to set each of the parameters every time you used it, and, understandably, that can be a bit of a headache, so I think that's going to be a lot easier now.

These features are also compatible with the Lexi Leash Windows software, for anybody who's using or exploring that. That's a Windows desktop app that interacts with our Lexi APIs and can help you with features like profiling, built-in tracking, and schedule management. Sometimes the Windows tool can, I think, give a more comfortable interface for some things than the browser tool, but the browser tool is, of course, flexible and multi-platform, so each has its advantages. And those HTTP APIs are something you can really integrate against yourself.

We would encourage anybody running their own internal workflow systems, or anyone who makes a third-party system, to build on those APIs; we'd love to see more integrations, because automatic captioning really works best when it's well built into the whole workflow system you're using, and isn't just an island of "oh, I need to remember to go set this stuff up at eegcloud.tv, I need to remember my login for that again," etc. The more integration you have, the easier it's going to be to make it work.
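
In the same hypothetical vein as the trigger sketch above, an internal workflow system might create and schedule a recurring Instance something like this. Again, the endpoints and field names are assumptions for illustration, not the documented API:

```python
import requests

EEG_CLOUD = "https://eegcloud.tv/api"   # assumed base URL
API_KEY = "your-api-key"                # assumed auth scheme

# Hypothetical payload: one reusable Instance plus a weekly schedule.
instance = {
    "name": "town-council",
    "icap_access_code": "COUNCIL-ROOM-1",
    "topic_model": "municipal-core",
    "caption_style": {"rows": 3, "placement": "bottom"},
}
schedule = {
    "recurrence": "FREQ=WEEKLY;BYDAY=TU",  # every Tuesday
    "start_time": "19:00",
    "duration_minutes": 120,
    "notify_email": "av-team@example.gov",
}

session = requests.Session()
session.headers["Authorization"] = f"Bearer {API_KEY}"

inst = session.post(f"{EEG_CLOUD}/instances", json=instance, timeout=10)
inst.raise_for_status()

sched = session.post(
    f"{EEG_CLOUD}/instances/{inst.json()['id']}/schedules",
    json=schedule,
    timeout=10,
)
sched.raise_for_status()
print("Scheduled:", sched.json())
```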

So continuing on Lexi, we have two more items here. The key with Lexi, of course, is making sure you can get the most accurate captions possible. Automatic captioning technologies have been around for a long time, but only in the last couple of years has that become something that really provides good results on a decent block of programming, and there's still a ways to go.

I think every month, every quarter, certainly every year, we're seeing an increase in the types of applications where people can really see high-quality results with automatic captioning. One of the things that helps with that is vocabulary building. But what's difficult about vocabulary building is that it can take a lot of staff time, and we know that for some of our customers that's been a pain point: having no staff trained in how to build up the vocabularies, needing to figure out the system, how much is the right amount, how to deal with errors we see or things we want to correct. Is this something we can set up once when we get the product and then not have to worry about? Or is this something we're going to be managing on a recurring basis, and, if so, how much time do we have to devote to it? What do we have to do?

So Lexi Core Models is a new program designed to help customers with that by providing a pre-curated set of custom dictionary models that are specific to customer applications. We found that you need that because when you have different use cases for automatic captioning and they all run out of the same global dictionary, it's just not specific enough for all cases.

It's also true that the large dictionaries that run a system like this aren't taken from sources that are updated on a very frequent basis, like every day or even every week. A lot of the large platforms are only updated every quarter or twice a year, and that's really not enough for things like news and sports. So, with our support and engineering staff, we've been building up a program that we've had some success with, especially in the news field.

And we're planning on continuing into a lot of our other target application areas: news, sports, and legislative and municipal sessions, which have a lot of customized vocabulary around rules of order and voting procedures, where we've been able to see a lot of improvement. There's also Christian broadcasting, which is a fairly large genre of broadcasting and of events, and which can definitely have a very distinctive vocabulary: a set of references to the scriptures that are specialized to that field and aren't very likely to show up on the regular evening news.

These are the kinds of applications where it really helps to have smart curation, and we're working on that with the thought that we're uniquely suited to do it compared to individual customers, who may not have experience operating the system.

There's also a basic redundancy problem: we really don't need every news station in the country individually realizing that they need to train the system to support a big story. Take "COVID-19," a phrase that was not on anybody's radar a year ago today but, of course, became such a giant story. That wasn't being recognized correctly by most automatic engines, so does it make sense for hundreds of news stations to individually be trying to train systems to handle it?

We think it doesn't, so we're working on making it so that the customization effort of individual customers can be geared toward what's really unique to them, their local area and their on-air talent, and not toward general words or phrases in the news that are going to be the same for pretty much everyone across the country. That's available in Lexi right now: when you're creating a new Topic Model to use with your Lexi features, all you have to do is select the button that says Choose One of Our Topic Models, and you can select from the designated Core Models.

What happens then is that the model, as it's created, is automatically populated from the beginning with the pre-curated information. Everything custom that you put onto it as an individual user stays private to you and trains only your individual model, and as we add and remove things from the base model, those changes also get merged with your information live. So you're able to do your own customization and also get the advantage of the EEG customization, and it should make it a lot easier for customers to keep an updated model and worry about fewer things that aren't exactly targeted to their application.

We've seen generally improved Lexi accuracy as well, and I want to make sure everyone who's an existing Lexi customer knows how to take advantage of the latest features there, because we've had a pretty long-running beta program over the summer with the new Lexi software engine. We didn't want to just slip this in overnight on everyone who was using Lexi until we were able to get quantitative confidence that there was improvement, and that it held over a broad range of programs and accents.

But I do think we've reached that point. With the Lexi Beta we've seen, pretty much across the board, a reduction in word errors of probably 20-30% even in the worst cases, and up to 50% in a lot of common use cases.

And I think this has brought the basic out-of-the-box target that a lot of news stations are going to see in a practical test from something more like 90% to about 95% word accuracy; cutting word errors in half takes you from a 10% word error rate to a 5% one. That is a very big jump, and subjectively it tends to be the difference between feeling like the captioning is pretty good (it follows the speakers, you get the main ideas, but errors are easily noticeable) and, by the time you're at 95 and above, starting to feel that errors are really pretty few and far between, especially when a lot of them are connecting words and things that a framework like NER doesn't even score as heavily meaning-altering.

So it's a lot of improvement, and we've seen customers be a lot happier with it. This is still labeled on the EEG Cloud site as "Beta" in the Engine field, and I would encourage anybody who's using Lexi to at least try it as opposed to what's labeled Standard. We're going to be changing the labeling over the next couple of weeks to make sure that new customers, especially, are funneled into the model that we think is going to give them the best accuracy moving forward. I think there's going to be further progress on accuracy, too, but certainly this is a big step, kind of a generational improvement, moving into the new model.

And, again, that's currently available. It's labeled Beta, but it's an open feature available to all Lexi customers, with no changes in the price or the service plans it's available under. That's a gain we're going to be doing a lot more to promote over the next couple of weeks, in some email blasts and on the website, to make sure everybody's getting the most they can out of the system and not sitting still with what could be done two or three years ago, but running on the newest system.

Falcon

So to switch gears a little bit now, we're going to talk about some of the actual video processing technologies. Everything here, in Falcon, Alta, and the AV610, is usable with any form of captioning that a customer uses from a service point of view.

These products are all compatible with Lexi. They're also all compatible with human steno caption writers, with human voice writers, with teleprompter systems, and with third-party automatic captioning; any system that generates captions with a standard video encoder is going to be compatible with any of these. They all run over iCap for systems that are remote and over the cloud. Most also have options for things like a telnet connection, and where it's a physical product, like the AV610, you have the option of a serial port or a dial-up modem connection for legacy equipment compatibility.

So we'll really be talking now about features that enable you to do new things with the video, for example, with Falcon here, encoding in different languages. This has been a big issue since we introduced Falcon. Embedding captions in RTMP streams has really been the best way to get a live stream through a video workflow, from an encoder (something like an Elemental encoder, or a software encoder like Wirecast or OBS) through to an end platform, whether that's a CDN or social media; RTMP-embedded captions are compatible with pretty much everything.

But it works based on the US broadcast caption standards, which really show their age on issues like world language capability. You can caption anything in English or Western European languages, but support is very limited for alphabets other than the Roman, Western European-based alphabets. We get a lot of requests from customers who, understandably, want to have accessible events in languages beyond that, so we're working to roll out a new system in Falcon: if a language can be encoded in Unicode as UTF-8 and your player has a font for it, you should be able to run captioning in it.

So we have a new system that converts the RTMP feed that you uplink into an HLS stream (HTTP Live Streaming), which is already a popular technology for delivering streams to consumers. By converting to HLS earlier, in our product, before you move to the rest of the production system, we can create a caption track that moves out of the older standards into new ones like VTT and TTML timed text and is able to show all kinds of world languages.
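
To make the HLS output concrete, a multivariant playlist with a separate subtitle rendition looks something like the sketch below, using standard HLS tags per RFC 8216. The file names and the exact layout Falcon produces are assumptions; the subtitle playlists would then reference segmented WebVTT files, which is how Unicode text rides alongside the video instead of inside 608/708 packets.

```
#EXTM3U
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="日本語",LANGUAGE="ja",AUTOSELECT=YES,URI="subs_ja.m3u8"
#EXT-X-MEDIA:TYPE=SUBTITLES,GROUP-ID="subs",NAME="Русский",LANGUAGE="ru",URI="subs_ru.m3u8"
#EXT-X-STREAM-INF:BANDWIDTH=3000000,RESOLUTION=1280x720,SUBTITLES="subs"
video_720p.m3u8
```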

Here's a block diagram of how this works: an RTMP encoder uplinks into Falcon, and what you get out is an HLS stream, represented by a playlist file that a player or a video platform can then ingest off of Falcon. That supports doing things like you're seeing here; this is a live JW Player demo, and it lets you use fonts like the ones in our Japanese demo and our Russian demo. These are things you wouldn't be able to pass through any of the systems shown using embedded captions based on the 608 and 708 broadcast standards.

So it's really going to open that up. We're currently doing limited beta tests on this with a handful of customers, and if you're interested, I'd definitely encourage you to contact me or EEG Support about getting involved.

Given that we're having some success with it, we expect to put this into general release with Falcon by the end of 2020. You'll be able to choose between the RTMP output mode, which for English still might provide the best compatibility with a lot of systems, and the HLS output mode, which allows you to send directly to players and platforms that support those VTT caption tracks. That enables the kinds of things we hear people want to do with events all the time: put something into 10 different languages, with a mix of human captioning and AI translation, things that really reflect how global a lot of these videos are.

AV610 CaptionPort

So another approach for live events that I want to discuss is the AV610. This product has been out for about two years, and it comes out of our broadcast captioning decoder roots: it centers on live events that really focus on in-person caption display on screens, or on streaming where the text feed is essentially a separate video, as opposed to closed captions.

Traditionally, one of the major reasons for that has been to support more or different languages, which hopefully the Falcon features we were just talking about make a little bit more of a use case of the past. But one of the things that's really nice about character-generating your own captions onto the video is that it gives the event's technical producer full control over the captions.

You can support any languages you need in any font that you want to upload. You can control the color, the appearance, the pacing. With a lot of web players, unfortunately, those things can all be an issue. For example, we're captioning this Zoom webinar now, and the formatting is definitely not what you would wish for if you're watching: it shows up as one line that disappears, and then another line. You can do better than that, but sometimes it takes a specialized product.

So the 610 now supports a new mode. In addition to overlaying the captions the way you want them to look over an existing video, or even scaling the video to make extra space for the captions, you can now use the 610 in a less expensive, easier-to-configure way that doesn't require any input SDI video.

The 610 will generate its own video output based on something you upload, like a company or event logo, and we can place or tile that picture on the screen so that what you see is the accessibility text and the background picture. The output is an SDI video signal, so with an HDMI converter and a consumer TV you can get it up at a live event in person pretty much anywhere, or you can bring it into a capture card and something like OBS or Wirecast to put it on the web really easily. You'll be able to do something like what's shown here: more lines of large text covering something like 50% of the screen, with your logos on the rest, since the 16x9 format is usually a little bit strange for captions alone.

A lot of times that's really the best approach. If you picture, for example, a relatively small in-person event where everyone is fairly close to the stage and you don't specifically need TVs for the audience, you can still set up a TV for accessibility by the side of the stage, or at staggered points through the audience depending on the size and spacing, so that everyone has access to see the text. That could be original-language text for hearing accessibility.

It could also be translated text when there's a multilingual audience and a desire for language accessibility. And that's a separate mode from the original AV610 behavior. The SDI generation mode we're talking about now is a firmware update that can work on any of the previous 610s in the field, and it will come pre-loaded on all new demos and sales. It works side by side with the pre-existing feature, which was really centered around the use case of a text-heavy PowerPoint presentation, with the captions appearing under or above it in their own separate space on the video, so that the captioning isn't blocking important data from the slides, the graphs, the presentations.

Alta

Last on today's agenda, we're going to talk a little bit about some new developments in the Alta project. If you haven't heard of it before, Alta is our virtualized IP video broadcast caption encoding product. It basically works like one of the classic SDI caption encoders, like an HD492 or any of the older models in that series, but for IP video, either compressed in the form of MPEG transport stream video or uncompressed using SMPTE 2110.

With Alta, a lot of what we've been focusing on as the IP video space matures is making sure we have strong deployment options and strong integration options. In general in the industry, the essence-level IP video technology has been mature for several years, but there's still a lot of progress being made on issues like interoperability, integration, control layers, and security.

Now that it's a proven fact that you can do very low-latency, very high-bandwidth, uncompressed IP video, and that it will work at a trade show or in a lab, it's really been about productizing those features in a way that makes them easier to deploy and easier to use.

So at this point we have three main deployment options for Alta. There's a full cloud option, where we provide a pre-built AMI for Amazon Web Services: we share it into a customer account, and the customer can get off and running in their own AWS account. It's pretty amazing how simple that is once you have an environment, assuming the customer has some experience with AWS; the IP addresses are right there, you can use pre-existing AWS security groups, and it really just runs, so that's pretty cool.
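
Mechanically, launching a shared AMI in your own account can be as simple as one AWS CLI call. Everything below (the IDs, the instance sizing) is a placeholder sketch, not EEG's recommended configuration:

```
# Launch the shared Alta AMI in your own AWS account
# (all IDs are placeholders; sizing is an assumption):
aws ec2 run-instances \
  --image-id ami-0abc12345example \
  --instance-type c5.2xlarge \
  --key-name my-keypair \
  --security-group-ids sg-0abc12345example \
  --subnet-id subnet-0abc12345example
```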

For on-prem installations, we can do virtual machines that run on conventional IT hardware, from VMware ESXi and vSphere to other virtualization systems like Red Hat Virtualization or Microsoft's; we can run on any of those and provide the VMs to customers.

We also have a new program providing a customized EEG turnkey server, where the Alta software runs in probably its highest-performance mode on a bare-metal server. That server has a custom faceplate with a little LCD control, similar to one of our SDI encoders, which lets you set up the basic networking and security features of the box that you need to get started, without having to go into the server and log into operating system consoles. It provides a smoother interface for the initial physical setup.

Simultaneously, we've been trying to make this easier to use by automating as much of the management as we can around individual channel setup in Alta systems, because each of these systems, whether it's a VM or a physical server, can support five or ten or, depending on your sizing in AWS, many more individual channels of captioning.

Each of these channels of captioning is equivalent to something like a 492 encoder: one video in, one video out, connected to one captioner or, at most, a couple of captioners in different languages on the same video stream. You can support a lot of those channels in very little hardware with IP video, and to make it easier to manage them, we have our own HTTP REST API on the boxes.
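
As a sketch of what channel management over a REST API like that could look like, here's a short Python example. The address, paths, and JSON fields are invented for illustration; Alta's actual API will differ, so treat this as a shape, not a reference.

```python
import requests

ALTA = "http://alta.example.local:8080/api"  # assumed on-prem address

# Hypothetical: create one captioning channel, the IP equivalent
# of racking a single 492 encoder.
channel = {
    "name": "news-air-1",
    "input": "udp://239.1.1.10:5000",   # MPEG-TS multicast in
    "output": "udp://239.1.1.20:5000",  # captioned MPEG-TS out
    "icap_access_code": "NEWS-AIR-1",   # captioner connection
}
resp = requests.post(f"{ALTA}/channels", json=channel, timeout=10)
resp.raise_for_status()

# List what's running across the whole server.
for ch in requests.get(f"{ALTA}/channels", timeout=10).json():
    print(ch["name"], ch.get("status", "unknown"))
```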

We've also been continuing to improve the interoperability of our AMWA NMOS implementation, and we have third-party partners like DataMiner building drivers for the system that make it easy to use in the context of a multi-vendor, best-of-breed IP video setup. We've certainly been honored to find that for captioning we're often the preferred solution, even when there are a lot of other vendors in the picture, because EEG captioning really helps people with the connection to iCap: the ability to work with human captioners they have long-standing relationships with over iCap, and also to run Lexi when needed.

For Alta, we've also been working on SMPTE 2022-7 redundancy, which allows you to send the media streams over two different IP links and recombine them. As of the March and April JT-NM event, we completed the self-certification process that replaced the in-person testing event for 2022-7 redundancy for the first time, and we've had that deployed at a couple of customers. In the transport stream domain, we've also been working on the HEVC codec, which matters for anybody looking to do compressed video at higher resolutions like 4K, in sports broadcasting for example; we've seen a couple of installations with HEVC already and are probably going to see that continue to increase.

So that's our announcements and our guide to what's been new at EEG for now. Thank you to everyone who came and listened; I hope this was interesting and rewarding for you. We want to help you with your accessibility mission: to modernize the approach, to save costs, to get a really high-quality product out there, and to make that rewarding for your business and your audience, so let's keep doing it.

I'm happy to take questions about these products and these announcements, or anything else EEG-related. Regina, thank you very much; Regina is our Marketing Director and has helped us a lot with the presentations in this series, and she's going to help filter through some of the questions. So if it looks like I'm typing, I'm not just checking my email; I'm figuring out what to talk to you all about, so please, questions.

Q&A

So we have a question about Lexi Local: how does that work, does it require web access, or is it over iCap? The answer is that it's a server designed to be used on-prem, and I think one of the biggest applications for Lexi Local is security-focused applications where there isn't necessarily a desire to connect to a new cloud service for ASR.

With Lexi Local, an iCap cluster is made in your own network. You can actually use it with human captioners or other sources of captioning, like a prompter, as well, but it's a cluster owned by you, on your own network. Any access between different facilities on the network is handled through your internal IT routing and processes, so it doesn't require any cloud services.

The licensing for that is fixed around a year-long subscription with unlimited use within that year, so it's not a pay-as-you-go service and it's not a cloud service. It's basically an appliance that you bring in, and it does all the captioning you need.

I also see we have a question about switching from Standard to Beta on Lexi. Exactly how to do that depends on how you're triggering Lexi. If you use the HTTP API, there's a new parameter in the documentation called Engine, with a couple of different strings you can choose, so that's just a question of modifying your programs.
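
Schematically, that's a one-field change in whatever job request your program sends. The payload below is hypothetical except for the idea of an Engine parameter:

```python
# Hypothetical job payload; only the "engine" field is the point here.
job = {
    "icap_access_code": "STATION-AIR-1",
    "engine": "beta",  # was "standard"; selects the new Lexi engine
}
```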

If you use Lexi Leash, as long as you have the newest version it's compatible and the choice is available to you; you just need to find the correct setting. If you go to eegcloud.tv and manage your account through the website, you'll see the Engine setting at the time you create a new Lexi job; it will say Standard or Beta or any other installed options, and you just want to choose the Beta option.

And if you have an encoder, an HD492 or a 537, and you're triggering through GPIs and broadcast automation, you may need to upgrade to the latest version of the Lexi Module feature. You can contact Support and request the update to the newest firmware, and you'll see a new menu, if you don't currently have it, that gives you a selection for your software engine and which one you want to use. At that point, that engine will be the one triggered when you fire your GPIs, press Start, have no captions upstream, or whatever set of logical conditions the encoder is set to trigger Lexi on.

OK. Any more questions? Last chance. OK, great. We got a question about Falcon: when using Falcon, is a signal routed via stream from a device such as Wirecast and then rerouted to a location such as Vimeo?

And that's 100% correct, yeah. If you previously had a direct connection between a streaming video encoder and any type of ingest platform, something like Vimeo, Akamai, or a similar CDN-type ingest, or just a social media URL like Facebook or YouTube, what happens is that when you sign up for your Falcon account, you get a stream key into Falcon, which you can generate and regenerate as needed, similar to a social media uplink key.

Then you set up the streaming encoder to target Falcon with the Falcon key, and you tell Falcon to target the endpoint that was previously in the streaming encoder: your social media account or your other ingest platform. So the video passes through Falcon, Falcon injects the captioning that comes from your source of transcription, and it sends the video on. With typical settings in your streaming encoder, pushing it through Falcon is going to add a delay of about one second onto the stream.
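
As a concrete sketch of that re-pointing, here's what it might look like with ffmpeg standing in for Wirecast. The Falcon ingest hostname and both keys are placeholders, not real EEG URLs:

```
# 1. Point the encoder at Falcon instead of the platform
#    (hypothetical Falcon ingest URL and stream key):
ffmpeg -re -i program_feed.mp4 -c:v libx264 -c:a aac -f flv \
  "rtmp://your-falcon-instance.example.com/live/YOUR-FALCON-KEY"

# 2. In the Falcon web UI, set the output target to the URL the
#    encoder used to point at, e.g. your platform's RTMP ingest:
#    rtmp://ingest.example-platform.com/live/YOUR-PLATFORM-KEY
```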

Alright, I think we're going to close the webinar. Please do email us with any private questions or requests for more information, and thank you very much for coming. We have some additional webinars scheduled; I believe there's one that's specifically a how-to walkthrough on setting up some of the Lexi Cloud material on October 20th, two weeks from now. You can go to our website at eegent.com, or follow the links in the chat of this webinar, to learn more and get signed up for those. So thank you very much, have a great afternoon, and I hope to see everyone soon.