>> I guess we can start. I know people will still come and go, because it was a little tricky to find this place. I purposely left the keynote early just to make sure that I'm here, I'm with you, and I'm on time. But don't do like me and leave the keynote; that's the most important part. So let's start.

Welcome, everyone, to the Xamarin Developers Summit. We're going to talk about Cognitive Services in Xamarin applications, and we'll all see that advanced intelligence can be accessible for anyone, and it's really easy. My name is Veronika Kolesnikova, and that's my Twitter handle, so follow me on Twitter. Huge thanks to our sponsors; you will see that slide a lot today, but those companies are super great. We're all here thanks to those people and those companies, so definitely check out their booths. You can get the agenda in the app, and you can check it out online too, so if you haven't downloaded the app, definitely do.

Before we jump into the session, let me introduce myself. Again, my name is Veronika Kolesnikova. I don't have my badge right now, so you'll have to remember my name. I'm a developer at Rightpoint in Boston. I just recently got my Microsoft MVP in AI, and I have a Master's degree in Information Technology. I started my career as a quality assurance engineer and spent several years doing that, but at some point I wanted to change a little bit, so I became a developer, and now I have around seven years of development experience. I look much younger than I actually am, and I guess that's good. Mostly I'm working with Microsoft technologies, obviously: C#, .NET, SQL, Xamarin. For my day job, I'm actually more focused on .NET-based CMSes like Sitecore and Episerver, and I'm a certified Sitecore developer. At some point in my life I was working with PHP, MySQL, Drupal, and RoR. I don't do that stuff anymore, but you never know, maybe I'll go back. My hobbies are dancing, traveling, and aerial yoga, so if you want to talk about my hobbies, we definitely can after the session. But if you have questions along the way during the session, definitely feel free to ask. If something is not clear, if you can't hear me, any questions, just let me know. I'm here to help you and guide you through Cognitive Services.

Here's the agenda for the next, probably, around 40 minutes. I have lots of information here, so I'll try to tell you everything, almost everything. We'll start with artificial intelligence and machine learning, then we'll move to the basics of Microsoft Cognitive Services, then I'll tell you a little bit about the main groups of Cognitive Services and the individual services available there. Unfortunately, I won't have time to tell you about each and every service available, so I'll just be picking and choosing something here and there. Then I'll give you some general information about the Microsoft AI platform, then we'll move to the specifics of integrating Cognitive Services into Xamarin applications, and then I'll show you the demo. So let's pray to the demo gods that it will work, and just send positive vibes.

So let's start. We all know about artificial intelligence and machine learning. Lots of companies invest tons of money and hire the best people to improve and work on artificial intelligence. We all know that's the future, and it's already here. So maybe some of you have a PhD in machine learning. Anyone? One guy. Awesome, good job. I really admire you, because I don't have a PhD in machine learning.

So I really appreciate the work of the people who actually build the models, work on them, make them better, and make our lives better. When I'm talking about artificial intelligence and machine learning, we all roughly understand what's going on there, but sometimes it might be confusing how artificial intelligence relates to machine learning. Are they the same? Are they working together? Are they not even talking to each other? So let me clarify that part.

We'll start with artificial intelligence. Artificial intelligence has more than 50 different definitions, but in general, it's something that we as humans are good at, but machines are not. So we developers try to use if-then rules and other statements to mimic human logic and the human thought process. Next we have machine learning, and machine learning is a subset of artificial intelligence; in general, it's a combination of algorithms and statistics. Using those algorithms, our models can get better at tasks with experience. Then the next group here is deep learning, and deep learning is a subset of machine learning. When we have lots of data, we try to use it in the most effective way: we create data nodes, and then we organize those data nodes into multilayer neural networks. Based on those multilayer neural networks, our machines can not only improve at tasks over time, but they can also perform more complex tasks like image recognition, speech recognition, and other cool stuff. Microsoft Cognitive Services are based on deep learning techniques.

So I've already mentioned Microsoft Cognitive Services like a thousand times, and you're like, "What is that?" Microsoft Cognitive Services are basically a set of APIs and SDKs available for all of us. It doesn't even matter if you're a Xamarin developer, or you're all in on native development, a hardcore iOS guy or girl, or you can't live without Android. It doesn't matter. Maybe you prefer Python for all your work. You can still use Microsoft Cognitive Services. You just need to get a key from Azure, write a couple of lines of code using the REST API, and that's it: you get awesome machine learning tools available to you, so your applications can be more interactive for users. They can learn with experience and get better without you doing anything extra beyond connecting them into your applications. There's also really good community support around them, so check out GitHub and Stack Overflow; there's lots of great documentation. And all those services were built by people who got their Master's degree or PhD in machine learning, so they know what they're doing; they have lots of experience in that. That's why we don't have to do it ourselves: we just use their work, and we can show our bosses that yes, we know machine learning. So that's also a nice selling point.

Let's talk about the groups of Cognitive Services. The first group is Vision, and the Vision group helps us identify and analyze content within images, videos, and digital ink. The first one is the Computer Vision API. This is a really popular service in this group. It can tell you what it sees, what objects are in an image. You basically provide an image, and it will tell you what kind of objects are there and where they're located on that picture, with coordinates.
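To make that "couple of lines of code" concrete, here is a minimal sketch (not from the talk) of calling the Computer Vision analyze endpoint over REST; the key and region are placeholders for the values from your own Azure resource:

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class ComputerVisionSample
{
    // Placeholders: use the key and regional endpoint from your Azure resource.
    const string Key = "<your-subscription-key>";
    const string Endpoint = "https://westus.api.cognitive.microsoft.com/vision/v2.0/analyze";

    static async Task<string> AnalyzeImageAsync(string imagePath)
    {
        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(File.ReadAllBytes(imagePath)))
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Key);
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            // Ask for detected objects plus a description of the image.
            var response = await client.PostAsync(
                Endpoint + "?visualFeatures=Objects,Description", content);
            response.EnsureSuccessStatusCode();
            return await response.Content.ReadAsStringAsync(); // JSON: objects, coordinates, confidence
        }
    }
}
```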

Also, it provides information about the confidence, like how confident the model is that in this place we have this object. I really like to use it. It can also do optical character recognition, so if you have some written text, you can use the Computer Vision API to translate it into typed text. Computer Vision is not recognizing the intention that was in the text; it just recognizes the text itself, and then you can pass that text to another cognitive service that will actually recognize the intention.

Then another cool one here is Ink Recognizer. It has a little star there; that's on purpose, because it's in preview. Ink Recognizer recognizes written text, so if your users have some written notes, maybe with some special characters written down, Ink Recognizer will recognize them and translate them into typed text form. Maybe you pass that on to other models, other cognitive services, or just process the text in any way. So it's really cool. It's in preview, so I wouldn't recommend using it in production, but definitely go ahead, test it, and provide your feedback to Microsoft. They'll be happy to hear your opinion about those services, what is working and what is not. Definitely check it out.

Another one, and here we have something different: two stars. Form Recognizer is also a brand-new cognitive service that was introduced at Build just a couple of months ago. It basically recognizes data from your forms: you upload the forms, and it will break them into labels and values for you. So it's really cool. It has two stars because it's really cool, obviously, but it's not in open preview: you need to submit a special form to get access. But it's available to anyone, and it's really easy. They have an application form online; you submit it right there, then you get access to Form Recognizer, and you can try it and provide your feedback. I mean, it's already in preview; you just have that extra step to access it, and they will open it up to everyone online soon. So definitely check it out and provide your feedback. It's going to be great, I'm sure.

Then if you've checked all those Vision services and you're like, "No, I want something different, but there is nothing that exactly matches what I want," you can always use the Custom Vision service. It's really cool; you can feel like you're a scientist, training your model and getting the results. With Custom Vision, you can train the service on specific images to recognize specific objects in those images, so it can be highly customized for your business. Also, a huge benefit there is that you can export those models. Once you train them in the portal, you can export the models and then use them offline. That is really cool, and I'll show it in my demo: I'm going to use the Custom Vision service, and then I'll be using a TensorFlow model that I actually exported from the service. So stay tuned; it's going to be fun.
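As a rough idea of what calling a trained Custom Vision model looks like over REST (a sketch, not the talk's demo code; the region, project ID, iteration name, and key are placeholders you'd copy from the portal):

```csharp
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Threading.Tasks;

class CustomVisionSample
{
    static async Task<string> ClassifyAsync(string imagePath)
    {
        using (var client = new HttpClient())
        using (var content = new ByteArrayContent(File.ReadAllBytes(imagePath)))
        {
            client.DefaultRequestHeaders.Add("Prediction-Key", "<your-prediction-key>");
            content.Headers.ContentType = new MediaTypeHeaderValue("application/octet-stream");

            var url = "https://southcentralus.api.cognitive.microsoft.com/customvision/v3.0/"
                    + "Prediction/<project-id>/classify/iterations/<iteration-name>/image";
            var response = await client.PostAsync(url, content);
            return await response.Content.ReadAsStringAsync(); // JSON: tag names with probabilities
        }
    }
}
```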

The next group is Speech, and that's obviously all about spoken words and speech recognition. They have two services here. One is the speech service; it's called Speech Services, and it actually combines several services. They were separate at some point, but then they were combined into Speech Services, because they realized that usually, if people use one speech service, they're going to use another one too, and it was a little tricky because for each service you needed to get a key. Sometimes it was confusing which key goes where and how they all interact, so they decided to combine them all together and make it super easy for you. It combines text-to-speech, speech-to-text, and speech translation. You can use them all together or use just some part of it; that's totally fine. You just need one key and API access. They also have an SDK for Speech Services, which is really cool. So you can use them in different ways, and both approaches have pluses and minuses; it depends what exactly you want from Speech Services.

Then the other one is in preview: Speaker Recognition. Potentially, I'm thinking, and that's just my opinion, they will merge Speaker Recognition into Speech Services so it won't be hanging there alone. Right now it's in preview. It can recognize who is currently speaking. Teams is maybe a bad example, because there it's highlighted when someone's speaking, but you don't have to be in different parts of the screen, or different parts of the world, or whatever: if you're seated in one room, it can actually recognize who is currently talking. That's really useful if you're automatically writing notes from your meeting. You probably saw some sessions from Build where they created a meeting, talked to each other, and the application took notes from the meeting and recognized who was saying what. Really cool.

Another group here is Language. Language services ensure that apps and services can understand the meaning of unstructured text or recognize the intent behind it. So Language and Speech are different: Language services work mostly with text information. One really cool service here that I promote everywhere and personally absolutely like is the Language Understanding Intelligent Service, or LUIS. It's widely used in bot design and development and in all kinds of virtual assistants like Alexa, Google Assistant, and Cortana. Basically, it's similar to Custom Vision in the way that you can actually train your model on your own information. They have some pre-built stuff for you, which is a good start, but you can go and customize it. What does LUIS do? It takes an utterance, recognizes intents, and also gives you entities from that utterance. So for example, if a user says, "Find me the next flight from Boston to Houston for tomorrow," then we understand that the user probably wants us to find tickets for a flight, and there are several entities, like the origin, the destination, and the time frame. You can get all that information from LUIS. The only thing is that if your user is speaking to your bot or your voice assistant, then you obviously need to translate speech to text first and then pass the text to LUIS.
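For the flight example above, a published LUIS app can be queried with a single GET; here is a minimal sketch (the app ID, key, and region are placeholders, and the intent and entity names depend on how you modeled them):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class LuisSample
{
    static async Task<string> GetIntentAsync(string utterance)
    {
        var endpoint = "https://westus.api.cognitive.microsoft.com/luis/v2.0/apps/<app-id>"
                     + "?subscription-key=<your-key>&q=" + Uri.EscapeDataString(utterance);
        using (var client = new HttpClient())
        {
            // Returns JSON with a topScoringIntent (e.g. "BookFlight") and
            // entities such as the origin, the destination, and the date.
            return await client.GetStringAsync(endpoint);
        }
    }
}
```

So `GetIntentAsync("Find me the next flight from Boston to Houston for tomorrow")` would come back with the intent plus the Boston, Houston, and tomorrow entities for your code to act on.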

But LUIS is really easy to use, and, it's probably a weird example, but we're currently using LUIS in one of our own projects. Our project manager on that project actually doesn't have any coding experience, but I spent pretty much 10 minutes showing him LUIS, and he created super cool models there. He added some utterances, he trained the model, he knows how to test it. It was super easy, and when we showed our colleagues from another project, they were really impressed, because they thought someone from the development team had done it. So it's really easy to start.

The next one there is QnA Maker. This one is also widely used in bot design and development, and also in virtual assistant skill creation. Say you don't want to train a model and do all that crazy stuff. Your users pretty much come to your website or use your bot just to ask the same questions over and over, and they want to get standard answers. You probably have an FAQ page where it's all typed up, but it's hard to find, or maybe they're too lazy to go and find it; that's why they're using the bot. With QnA Maker, you can just provide a link to that FAQ page, or if you have those questions and answers in a PDF, you just upload the PDF file; I think they support other formats there too. You upload the file to QnA Maker, you train the model so it recognizes questions and answers, and then you just plug it into your bot or virtual assistant skill. Then every time your user asks that specific question, he or she will get that specific answer. Really simple: no heavy lifting or typing or anything, it's just there automatically.
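Once a knowledge base is published, asking it a question is one POST; a sketch (the host, knowledge base ID, and endpoint key are placeholders from your own published KB):

```csharp
using System.Net.Http;
using System.Text;
using System.Threading.Tasks;

class QnaSample
{
    static async Task<string> AskAsync(string question)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Authorization", "EndpointKey <your-endpoint-key>");
            var body = new StringContent(
                "{\"question\": \"" + question + "\"}", Encoding.UTF8, "application/json");

            var response = await client.PostAsync(
                "https://<your-resource>.azurewebsites.net/qnamaker/knowledgebases/<kb-id>/generateAnswer",
                body);
            return await response.Content.ReadAsStringAsync(); // JSON: matched answers with scores
        }
    }
}
```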

Then another one here, in preview, is the Immersive Reader. This is a really cool service that will help users who are learning the language, maybe kids reading some books, or people who have a disability. You can use the Immersive Reader to read the text aloud in different voices and also highlight parts of the text: you have verbs and nouns there, and it just highlights them in the text. They have a really cool video about it, talking about all the features and benefits of the service, so definitely go check the video and try it; it's in preview. You can use it in your applications and be mindful of people, kids or adults, who are learning the language or have some disabilities. It's really useful.

The next group is Decision. This group is pretty new; they just created it, I think, several months ago. It wasn't there before, but it enables informed and efficient decision-making, so you can use those cognitive services to help you or your users come up with a decision about something. The first one here is Content Moderator. It checks the content that's being provided, maybe through a form, and you can set up some filters to make sure that the comments or chats are clean and people don't use bad words, or to get the mood of the conversation. You can use Content Moderator for that. So pretty cool.

Then the next one here is Anomaly Detector. It's in preview, and it's really cool. You can set up rules so it recognizes what is an anomaly and what is not, and I can give you an example. Say you post a blog post; you're blogging every month, and you pretty much know that on average 20 people leave comments every month. But one day, you just go to the Xamarin summit and blog about it, and you have 1,000 comments. If you know before you post that blog that it's a hot topic and people will probably come to your blog, check the post, and comment a lot, you can just set up a rule that that's not an anomaly, that's normal. But in general, if you didn't set up the rule, it will notify you that something unusual happened. That's really good in all kinds of industrial work: for example, if you're monitoring a system, you don't have to go and personally check the system every time. You can just connect Anomaly Detector, and it will recognize that something went wrong and the time when it happened. So definitely check it out and try it; it's in preview.

Another one in preview is Personalizer, and they released it right around Build; they were talking about it a lot during Build. I know that some CMSes have personalization tools built in. For most systems, you need to think about your users: what kind of users want to see specific information, and different groups of people might behave differently, have a different path in your app, or want to see different information. So you can just connect Personalizer, set up some rules inside it, and it will learn from your users' experience, from their behavior, and you can serve different content to different groups of people. For example, if you have an event and a discount for some group of people, you know those people might go to your event, but they just need a little push. So you set it up in Personalizer, and only those people will see the discount or special announcement from you. Really cool; check it out.

Another group here is Search; that's the biggest group. I tried to review all the services available there, but I wasn't able to, because there are a lot. Actually, Cognitive Services kind of started with Search at some point: they were doing Bing search, they started building those models, and then Microsoft recognized that it was actually working great, so they expanded it and opened up Cognitive Services. So there are all kinds of searches: Web Search, Visual Search, Custom Search for your specific application or website or business, which is available for everyone, plus Video, News, Autosuggest. For all of them, you just get the key, connect to the API, and it's going to work. Also, they just recently included Spell Check inside Search, and I've noticed that Microsoft is shuffling those services around the groups all the time. So don't pay too much attention to the groups; pay attention to the services themselves, because they might end up in a different group at some point. Spell Check, I think you'll need it. I definitely need it, because I misspell stuff all the time. I wish every application had a spell check for me. So go and check it out.
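Spell Check is one of those get-a-key-and-call-REST services; a minimal sketch (the key is a placeholder):

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;

class SpellCheckSample
{
    static async Task<string> CheckAsync(string text)
    {
        using (var client = new HttpClient())
        {
            client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", "<your-key>");
            var url = "https://api.cognitive.microsoft.com/bing/v7.0/spellcheck?mode=proof&text="
                    + Uri.EscapeDataString(text);
            return await client.GetStringAsync(url); // JSON: flagged tokens with suggested corrections
        }
    }
}
```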

Then the last group here is not officially a group, but I personally think it's a valid group, because with the services from this group you can see where the future is. All those services are experimental. Some researchers from Microsoft have their ideas, and they're not sure if they're going to work or if people will like them, so they created Labs and named them Project Something. They used to name them after city names, and they chose probably the longest city names. When I was giving this talk back then, I was like, "I can't pronounce the names of those." So now I'm super happy that they actually changed the names of the projects. But definitely don't use these in production; they're experimental, and they might change a lot. Based on your usage, your opinion, and the feedback you provide to Microsoft, they might promote those Labs into Cognitive Services. Then they'll actually be part of the Cognitive Services brand, and you'll be able to use them in production or in whatever environments you have. For now, just test them and provide feedback about what is working and what is not; that is really important. That way you can actually participate in the future of those services and make it happen.

Let me tell you about some labs that I personally like. For example, Project Gesture: when I saw it for the first time, I was so impressed. You can actually interact with objects on your screen without using the keyboard or mouse, or even the touch screen; you're just interacting through the camera, and that is so cool. I played with it for a couple of hours when I tried it for the first time. They have good samples online, so you don't have to download anything, install anything, or get a key; you can just try it online and see if it works for you. Actually, that statement is valid for all Cognitive Services: if you go to the Cognitive Services portal, they have samples right on the page. You can see what kind of data you pass in and what kind of data you get back from the service, and decide, okay, that is working for me and that is not, and you don't need to install anything or get a key to try. And if it works for you in the portal, you can try it with your app and see how it works for your project.

Then Project Event Tracking, that's also a cool project here. It tracks events and deals: based on your location, it will provide you information about the most popular events. Project URL Preview is more about security, and it just helps your users: if they're not sure whether they can open some kind of link, they can use Project URL Preview, and it will show them a preview of the page they're about to open. There are 12 labs available right now on the Cognitive Services Labs website, and they also have 17 AI labs at ailab.microsoft.com, so go there and check them out. Those labs are not part of Cognitive Services; that's just something those researchers are working on and excited about, and they definitely want to see what you think and hear your opinion.

So we've learned a lot about Cognitive Services; I've given you a huge amount of information, and I can see you're like, "What's going on?" But I hope you're ready to go and try them. You got excited about all those capabilities, all those services, and you're asking, "Okay, where can I start?" You can definitely start from their website, the Cognitive Services website: just try some services online and see how they all work. They're all available in Azure, so if you go to the Azure portal, you just type the name of the cognitive service, or they have a special group called AI and Machine Learning, I think. So you'll find the group, or you can just type the name.

They recently created a special resource called just Cognitive Services, so you can get one key, and with that one key you can access all the services available there. That's based on the same problem I mentioned before: some people use several Cognitive Services, and because of that they had several endpoints and several keys, and it could be a little tricky, because you need to manage the keys and remember which goes where and where each endpoint is. It's not hard, just a little tedious. So they created one Cognitive Services resource, and you can have one endpoint and one key if you use several Cognitive Services.

They also have a free tier: 5,000 transactions per month. That's for the Computer Vision API; they have different numbers for different services. That's all available in the Cognitive Services portal: you can see what the limits are and what the free tier covers. If you don't have an Azure subscription, just go get one. But if you don't want to do that for some reason, or you're not ready, that's fine: you can get a key from the Cognitive Services portal. That key will be valid for seven days, so you can test it with your app, see what is working and what is not, and then decide if you want to go forward and actually use it in your applications. And I already mentioned it a little bit: the Custom Vision service has model export, so you can build your model, train it with your images and your tags, and then export it and use it offline. There's CoreML, TensorFlow, ONNX, and you can export a Docker model. All kinds of opportunities there.

Then just a quick overview of the AI platform; I know we don't have a lot of time here, unfortunately. We can see that pre-built AI is Cognitive Services, and it's just a small part of the Azure AI services available there, and just a tiny part of the Microsoft AI platform available for all of us. We also have access to bot services and conversational AI. If you're ready to jump in and actually start building your own models from scratch, and you're like, "Yeah, I'm all about machine learning, I want to build my models, I want to train them, I want to see what models are available there," you can use Azure Machine Learning; those are the custom AI tools, and then the whole Azure infrastructure is available for you. All the databases like SQL DB and data lakes, all kinds of AI compute power like Batch AI and Spark, IoT Edge, obviously, that's the big thing, CPUs, GPUs, they're all available on Azure. They have pre-built VMs for you, so you don't need to install anything; they're ready for your machine learning, just spin one up and go wild there.
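Going back to that single multi-service resource for a moment: a small sketch of why it's convenient is that one key and one regional endpoint authenticate calls to different services (the paths below are illustrative examples, not a complete list):

```csharp
using System.Net.Http;

class MultiServiceSample
{
    // One multi-service Cognitive Services key works across many APIs
    // hosted in the same region (key and region are placeholders).
    const string Key = "<your-multi-service-key>";
    const string BaseUrl = "https://westus.api.cognitive.microsoft.com";

    static HttpClient MakeClient()
    {
        // The same authenticated client can then call, for example:
        //   BaseUrl + "/vision/v2.0/analyze"            (Computer Vision)
        //   BaseUrl + "/text/analytics/v2.1/sentiment"  (Text Analytics)
        var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Ocp-Apim-Subscription-Key", Key);
        return client;
    }
}
```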

Then you have tools that are not directly related to Azure, like Visual Studio Tools for AI. They're available for Visual Studio 2017, but for Visual Studio 2019 they're not available yet, so I'm not sure what's going on there; I don't want to lie. Either they'll create a new version, or maybe they'll include it in Visual Studio itself. Then there's Azure ML Studio and the Workbench, and then you can connect all the third-party tools like CNTK, TensorFlow, Caffe, all kinds of fun stuff there.

Okay, now on to native functionality. That's more about Xamarin here, finally; I know you were excited about that. If you want to use Cognitive Services, you probably need to access native functionality. You can access native functionality with the Device class in the frontend. This is a little side example here that's not directly related to Cognitive Services; it's just the way you can set up your layout from your shared project. So here we just leave a top margin for iOS devices, and then it's zero margin for all other devices.
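A minimal sketch of that side example, done with the Xamarin.Forms Device class in shared code (the page and margin values are illustrative):

```csharp
using Xamarin.Forms;

public class DemoPage : ContentPage  // hypothetical page, for illustration only
{
    public DemoPage()
    {
        Content = new StackLayout
        {
            // Leave a top margin on iOS (room for the status bar), zero elsewhere.
            Margin = Device.RuntimePlatform == Device.iOS
                ? new Thickness(0, 20, 0, 0)
                : new Thickness(0)
        };
    }
}
```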

Then in the backend, you can still use dependency services. That's the old screen for dependency services; now you know that you can get them from NuGet. How many of you work with dependency services? Okay, lots of people? Yeah, so I don't need to tell you about that one.

So where to start? You can create your Xamarin project in Visual Studio. It doesn't matter if it's Xamarin native or Xamarin.Forms; you can use Cognitive Services in all projects. If you want to use the APIs, that's simple: you're just using the REST API and getting the key; we all know about that. If you want to use SDKs, some of them are available in the Microsoft.ProjectOxford group, so you type in Microsoft.ProjectOxford and you'll see the Cognitive Services that are available there. Project Oxford is the old name of Cognitive Services. The ones that are newer, or that were updated completely recently, are available under Microsoft.CognitiveServices. And here I'm showing that I used Microsoft.ProjectOxford with Vision and Custom Vision.

Then, in order to access some native functionality, you obviously can use the dependency services I mentioned before, but I think the common APIs are easier. You can access the microphone, the camera. Usually when you use Cognitive Services, you want your users, the people using your app, to interact with it using the microphone or the camera, maybe the location. So all kinds of services there. I personally use the Xamarin Media Plugin created by our favorite James Montemagno. How many of you use the same plugin? Yeah.
>> [inaudible]
>> Okay. So I used it, and it worked perfectly with Computer Vision and Custom Vision. Then you can obviously use Xamarin.Essentials, which David was mentioning just several minutes ago during the keynote. I took that picture at Build in order to show you the latest Xamarin.Essentials, but what David showed today was different. They're just hard at work on Xamarin.Essentials: they're adding new essentials pretty much every day and improving lots of stuff. So even though that slide is, what, two months old, it's already outdated. So definitely go check them out; it's really easy to use Xamarin.Essentials, easy to access the native functionality you need to get data from your users into Cognitive Services.

Now let's move to the demo. Do you guys have any questions while I'm setting up? Yeah.
>> Could you just go ahead [inaudible]
>> Yeah, so you're basically providing URL data, and I can show you how I did it with Custom Vision. So Custom Vision has its own portal. Oh, no, I'm not showing anything; let me stop presenting. Okay. Perfect. Yeah, look at that, but I'm not showing anything. So, that's the Custom Vision portal where you can train your Custom Vision models. It's customvision.ai, and it's absolutely free to access. You can create a new project here. I already have two projects available, and I'm going to showcase this one.

So here you're uploading your images: you click "Add images", open the folder with your images, and select a couple of pictures, this one for example. Then you're adding a tag. It already has three tags available. You can also add a negative tag, something that is definitely not the tag, and then you're uploading. It uploads for you, it appears here, and you can see that I have eight images with that tag and nine images for this tag. So now my model knows that these cats are probably Ragamuffins, and those other cats are Angoras, but you can train on all kinds of images and all kinds of tags there. So that's training the model here, and that's the general principle: you upload some data, you say, okay, that is this, and then based on that, when your model sees something similar, it will understand: okay, that is that.

Okay, so I trained the model. You can see the performance here right in the portal, the precision, and you can set up the probability threshold. So for me, if it's at least 50 percent confident about the object, then it should consider it a valid object, a valid answer to "What is in this picture?" Then I can export it; that's the fun part I mentioned. You can train the model here, where you can see everything that's going on, you can tag, you can upload your images there, and then you export it: CoreML for iOS, and for Android you're using TensorFlow, plus ONNX, a Dockerfile, and then they have the new Vision AI Dev Kit that was just recently released. So you can export it, and then technically you don't need that portal anymore.
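That 50 percent probability threshold from the portal can also be applied client-side when you read predictions back; an illustrative sketch (the Prediction type here is hypothetical, standing in for whatever your JSON parsing produces):

```csharp
using System.Collections.Generic;
using System.Linq;

class Prediction
{
    public string TagName { get; set; }
    public double Probability { get; set; }
}

static class ThresholdFilter
{
    // Keep only predictions at or above the 50% threshold, then take the most
    // confident one; returns null if nothing qualifies.
    public static Prediction Best(IEnumerable<Prediction> predictions) =>
        predictions
            .Where(p => p.Probability >= 0.5)
            .OrderByDescending(p => p.Probability)
            .FirstOrDefault();
}
```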

In general, that's an ideal world; you still need the portal, because you always need to think about the future and how your users interact with your model. If they upload a picture and the model didn't recognize something there, or it was completely wrong, then you can add the picture back here, tag it, and retrain the model. So next time it sees that picture, it will provide the right answer. And it's not only with images and visual tools; that's for everything, LUIS especially: you're getting that information about what worked and what didn't work. And yeah, we have five minutes, perfect. We can do it, but I can talk about this forever, so if you have any questions, definitely let me know; we'll talk a lot about that. So, I mentioned LUIS: there you see some analytics, what worked and what didn't. For example, your user asked a question and it didn't understand it, or it returned a completely wrong answer. So you can add it and then retrain the model. And then, if you exported the model, you need to export it again; it won't just update automatically, because once you export it, it's completely separate. It's not connected to anything. But it's really easy.

So when you export it, you're getting two files. Let me show it in my Visual Studio. Here I have the Android project with the assets. When I exported the TensorFlow model from there, I got two files: the labels with the tags, and the model. That model is a TensorFlow model; it has that .pb extension, so you can just use it as you'd use TensorFlow models from other sources. Then you're just uploading them to the assets, and that's it; you can use them offline.

I don't have the best code example here, so you can definitely take a look, but don't judge me; that was just for demo purposes. It's definitely not the best architecture and coding you've ever seen in your life. It's just there to make sure it works. So here, in OnCreate, we're creating the button, then the image, then we have the tags, or the results from the model. When we're taking the picture here, we're getting the image. For getting the image, I'm using that Media Plugin I mentioned before. We definitely need to get permission from the user that he or she is okay with us using the camera to take pictures and then process them. You probably can also access the images on the device and use those too.

Then here's the magic; all the magic with TensorFlow is going on here. You're loading the model, that's model.pb, the TensorFlow file, and the labels, and then you're basically processing that. Don't pay too much attention to those numbers: when you get the exported model, usually the input isn't completely normalized, so you need to process it a little bit in your code, but that's really straightforward. Usually you have some loss there; your model is definitely never perfect, so you'll need to accommodate the threshold and some data loss. Then you process the model, and you're recognizing based on the labels and the image, which was probably taken with the camera or pulled from a device folder, and then you're providing the result. Here I just have some text output: "I think the cat on this picture is" plus the tag, and then the percentage of confidence, which you can get from the model there.
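A rough sketch of that "magic" step using the Xamarin TensorFlow binding (the Xam.Android.Tensorflow NuGet). This is not the talk's exact demo code: the tensor names and the 227x227 input size follow the usual Custom Vision export convention, so check the metadata of your own exported model. It's meant to live inside an Activity, with floatValues already holding the resized, normalized camera image:

```csharp
using Org.TensorFlow.Contrib.Android;

// Load the exported model from the Android assets folder.
var inference = new TensorFlowInferenceInterface(Assets, "model.pb");

// Feed the image as a 1 x 227 x 227 x 3 float tensor
// (floatValues: prepared from the captured photo, assumed to exist).
inference.Feed("Placeholder", floatValues, 1, 227, 227, 3);
inference.Run(new[] { "loss" });

// One probability per line of labels.txt (labelCount: assumed to exist).
var outputs = new float[labelCount];
inference.Fetch("loss", outputs);
```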

Let me launch it really quick, because I know we're completely running out of time. Have you brought your cats? No? Why not? Okay, I brought my cats. Well, those are not really my cats, I wish; I took some pictures online. Yeah. Perfect. So, not a really sophisticated design here; we're just taking a picture of the cat.
>> I think the cat on this picture is Ragdoll. I'm 100.0 percent confident. I think the cat on this picture is Ragdoll. I'm 100 percent confident.
>> It says it two times. The first time, that was Xamarin.Essentials text-to-speech, and you heard it was really good, but it wasn't as natural as possible. The second one was actually from Speech Services: they have neural voices, which are really close to how we actually talk to each other, and you heard it was much better. But it depends on what your project is focused on and what's more important for you. That's it.
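For reference, the first of those two voices is one line with Xamarin.Essentials text-to-speech; a sketch (the message formatting is illustrative):

```csharp
using System.Threading.Tasks;
using Xamarin.Essentials;

static class Announcer
{
    public static Task AnnounceAsync(string tag, double confidence) =>
        TextToSpeech.SpeakAsync(
            $"I think the cat on this picture is {tag}. " +
            $"I'm {confidence * 100:0.0} percent confident.");
}
```

The second, more natural voice came from the Speech Services neural voices, which go through the service rather than the device's built-in speech engine.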