[Applause] No no no no no, please maintain social distance! No no no, please maintain social distance! Today we're going to build our own real-time social distancing monitoring alarm using a Raspberry Pi and the OpenCV AI Kit with depth capability (OAK-D). This Raspberry Pi social distancing camera will be able to detect people, using either a MobileNet-SSD or a YOLOv3-tiny model at 30 frames per second, and sound an alert to politely remind people to maintain their social distance. The problem we're trying to solve here is that while doing social distancing detection on a powerful CUDA-supported GPU or a cloud computer is very nice, we're looking to do this on an embedded, deployable platform like the Raspberry Pi and OpenCV AI Kit, but with real-time performance. This allows the system to be portable and low-powered. But you must be asking: Ritz, it's almost the end of 2020 and the world is on the verge of a vaccine, why do we need this technology? Well, let's say the vaccine is established today; it will only reach third-world countries like South Africa towards the end of 2021, according to enc. That's right, some of us at the bottom of the queue will still have to practice social distancing and wear our masks in public. Now, we cover the details of our social distancing app in our YOLOv4 series; you can check out the links up here. In this video we'll use the DepthAI repo for social distancing and modify it to output an audible alert to violating bystanders. This application is not only great for social distancing; you can also use it for package detection, as well as real-time sign language detection, using the OAK-D DepthAI camera. There is a full step-by-step tutorial for training and deploying custom models to the OAK-D at the links down below, and you can also sign up for free. Stay till the end of the video, as we'll touch on this application as well as analyzing the bias in our datasets using the Roboflow platform. So let's get straight into it. So if
you have completed App 1, this app will be quite similar in terms of its overall build. Assuming two people are in close proximity and they cross the acceptable social distance threshold, our OAK-D AI camera will respond by sending the detected people to the Raspberry Pi, where our code resides. The code will watch for the moment the social distance threshold is violated and immediately sound an alert through a loudspeaker, either via analog cable or via HDMI to an external speaker. Like in App 1, you have the ability to view the output over a remote desktop, or use Flask to host a web server from which to view the output; comment down below if you'd like to see a tutorial of this. Requirements: our requirements are more or less the same as App 1. You'll need a Raspberry Pi with all of the accessories and the OpenCV AI Kit; I'm using the OAK-D because of its depth capability, and there will be a link down below where you can get your own OAK devices. You'll also need a speaker, as well as a 3.5-millimeter auxiliary cable, to project the audio alerts. What's optional is a 3D printer for printing out the enclosure for the OAK-D, which may be connected to a tripod. As mentioned, the links to all of these components will be in the description down below. Schematic: the schematic is very simple. You just need to connect your portable speaker via a 3.5-millimeter auxiliary cable to the Raspberry Pi; otherwise, if you have a monitor with speakers, you can just output the sound via HDMI. So how the social distancing app works is that the camera captures the raw image at stage 0. At stage 1, the image is passed on to detect people in the image. Upon detection, the depth info is added to the detections; DepthAI provides the depth info for all three dimensions: x, the horizontal coordinate,

y, the vertical coordinate, and z, the distance coordinate. This allows us to create a 3D vector in space, pointing from the front of the camera to the detected objects. 2D mapping: for explanation purposes, it's easier to imagine the position of people in 2D space, taking only the x and z coordinates into account. We can then map our space into a bird's-eye view, or bird view for short, showing the position of the people in 2D space. Projection: this perspective makes it possible to use simple 2D geometry for distance calculations, and it can be useful for understanding how the camera is determining our position in space. However, in the app itself the 3D distance is utilized, as there is no downside to this, and it can provide much more accurate results when handling edge cases, such as the scenario where someone is on a ladder. Knowing each person's position, we can now calculate the distances between them and check that they do not fall below the defined threshold. The formula used in the app is a regular 3D Euclidean distance, with the formula shown. Finally, if any distance falls below the threshold, we flag both people as dangerously close to each other and display a warning, as well as sound an alert. Cool, so just like we did for the OAK-1, you can download the CAD files from the Luxonis depthai-hardware repo on GitHub (the mechanical designs page) and scroll down all the way to this design over here. Now, as you can see, there are multiple parts: we have the front part, the rear part, as well as the mounting part, which is this blobby thing over here. Now, because I don't have the inserts that go in here, I decided to mod it for my GoPro mounts; it's not the best of designs, but it will do the job. As mentioned before, we want to ensure that we are able to mount our OAK-D on a stand or tripod, and all of the links to the GoPro tripod mounts, tripod, and 3D files will be in the description down below if you require them. So you can decide if you want to
have it as a direct screw mount, or you can have it as a GoPro mount; the choice is yours. Now, once you've done that, you can download the files and open them up in your favorite slicer; I'm using Ultimaker Cura. Okay, so now you can open up your 3D files in Ultimaker Cura. We're going to set our resolution to 0.2; you can decide on 0.2 or 0.1. Now, because of problems we had with the OAK-1 (the tripod mounts weren't that strong), we're going to set this to one hundred percent, up from the original eighty percent that we had before. We're also going to include supports, and click Slice. If you click on Preview, you can see how this will look, and then we can save this to a file and print it. Cool, so you're going to head over to the GitHub repo, github.com/augmentedstartups (the OpenCV AI Kit apps repo), and we're going to focus mainly on App 2, Social Distancing with Depth. Now, in this repo you'll be able to clone all the necessary files that you will be using in this tutorial series, that's apps one to six. Some of the app tutorials and source code will only be available in the membership area on YouTube, so you can click the Join button to become a member and support the channel. For now, we'll focus on the contents of App 2, Social Distancing with Depth. So go over here, copy this, and we're going to essentially open up a new terminal

and git clone the repo. So there will be two files, social_distance_base.py and social_distance_final.py. If you want to skip the coding part, then you can skip to the implementation and testing chapter of this video using the final .py; otherwise, if you want to get your hands dirty and code the social distancing alert part, then follow me down the rabbit hole, and I'll show you how deep the rabbit hole goes. Cool, so if you're following along with the coding, ensure that you have the following dependencies. If you did not install them in App 1, then we can go over here to the terminal: python3 -m pip install imutils==0.5.3, and one more dependency (just press up and edit): python3 -m pip install concurrent-log-handler==0.9.16. Okay, cool, now we can get into the coding. Go to App 2, Social Distancing, and open up the base file. The first thing we need to do is import our playsound library, so we're going to say from playsound import playsound. Next we're going to add in our audio file path. You can see task_play_sound = False; essentially, we don't want our soundtrack to play for no reason. Then we're going to set our audio path: audio_file_path = os.path.abspath() with the name of our file, in this case no.mp3, or you can even type in socialdistance.mp3, depending on which one you have. Next we go over here to our definition and type in playsound(audio_file_path). Next we want to schedule the task to play the sound, so under our definition of schedule_task, type in global task_play_sound. Over here I'm using a service called Kite; Kite AI essentially uses AI to help you autocomplete your code. You can think of it like IntelliSense, but much, much smarter than that; I'll leave a link down below where you can try it out for free. Okay, so moving on to task_play_sound: I'm going to put that in, and on the next line, task_play_sound = True. Okay, so essentially we're going to be adding a timer over here, and this is to stop our audio from playing back too soon after we've detected someone. So imagine our detector saying no no no no, oh no, all the time; we want it to just say no, wait some time for the audience to respond, and then say no again after that. So we're going to set this timer to 10 seconds, with schedule_task as the callback, and call .start(); over here you can see where our function will be called. Now we can go down a little bit, over here, and this is where we're going to put in the code for playing the sound. So we already have our global over here, which is our task_play_sound; I'm going to delete that, and we're going to say: if task_play_sound,

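Pulling together the 3D Euclidean distance formula from earlier and the alert flag and timer being coded here, the logic might look something like the following minimal sketch. This is my own reconstruction under assumptions, not the repo's exact code: the threshold value is made up, the file name no.mp3 and the 10-second cooldown come from the narration, and playsound is stubbed out if the library isn't installed.

```python
import math
import os
import threading

try:
    from playsound import playsound  # third-party: python3 -m pip install playsound
except ImportError:
    def playsound(path):  # stub so the sketch still runs without audio support
        print(f"[stub] would play {path}")

SOCIAL_DISTANCE_THRESHOLD = 2.0  # meters; an assumed value, not the repo's default
task_play_sound = False          # armed when the cooldown timer fires
audio_file_path = os.path.abspath("no.mp3")

def euclidean_3d(p1, p2):
    """Regular 3D Euclidean distance between two (x, y, z) positions in meters."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(p1, p2)))

def schedule_task():
    """Re-arm the alert after the cooldown so the alarm doesn't fire non-stop."""
    global task_play_sound
    task_play_sound = True

def check_and_alert(people):
    """Flag any pair closer than the threshold and, if armed, play the alert."""
    global task_play_sound
    violations = [
        (i, j)
        for i in range(len(people))
        for j in range(i + 1, len(people))
        if euclidean_3d(people[i], people[j]) < SOCIAL_DISTANCE_THRESHOLD
    ]
    if violations and task_play_sound:
        task_play_sound = False  # disarm until the timer re-arms it
        timer = threading.Timer(10, schedule_task)  # 10-second cooldown
        timer.daemon = True
        timer.start()
        threading.Thread(target=playsound, args=(audio_file_path,)).start()
    return violations

schedule_task()  # arm the alert initially
print(check_and_alert([(0.5, 0.0, 2.0), (1.5, 0.0, 2.5)]))  # [(0, 1)]
```

The pairwise check here is my own framing of the video's two-person example; the real app runs this inside the detection loop with the (x, y, z) positions reported by the camera.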
then we’re going to say task play sound equals false so if there’s any audio playing before this it must switch that off and then we’re going to put trading dot thread target equals play sound i’m going to put in our argument over here which is audio file path dot start cool and that’s essentially what we have to do right here short and simple so you can go through the code and see how all the calculations are done as well as how we’ve initiated our sound a lot make sure you save let’s open up our terminal again and we can test our app the way we do this is python three social distance underscore is the final of base i’m just going to put final and oops i seem to have forgotten to connect my old device let’s try that again cool it should be working let’s try this out on some test images okay so here we go over here we have our two people standing together and if they too close like it is in this case it’ll sound an alarm let’s try this out on another image now because we are using this on a computer monitor it’s not going to detect depth that well so let’s do this in a real environment cool so now again we can run the app and we can test our code with two people as one person approaches the other the people are detected and if they cross the preset threshold an alert will sound and politely advise people to maintain their social distance i would like for you to comment down below if you think this will work in a practical setting if not i would like for you to comment on how you could improve this so that it will be practical for real world applications okay cool so as promised i’m going to show you how you can deploy a custom object detection model for the specific application of sign language detection using the od so i want you to go and check out blog.roboflow.com like sonos of the custom model and what’s really nice about this article is that it shows you step by step on how to collect label organize process train your dataset convert your custom model and 
then deploy and display it. So I highly recommend that you check out this tutorial. Now, over here it says that you need to gather and label images. Right, now what happens in the case where one class has more data than another? Well, this would create some sort of data bias, so the model will recognize one class more than the other. So to ensure that you don't have any bias in your model, you'll need to ensure that you have an equal amount of data for each class. Now, how do you check this? It's really hard to see manually. Okay, so let's go to public.roboflow.com and look for the American Sign Language dataset. Let's click over here and fork the dataset; yes, let's fork it, and this will fork it to my account. Cool, so over here we can go to Dataset Health Check, and it starts over here with images: it says that we have 720 images, with zero missing annotations; that's great. Looking at our annotations, it says 720, so this means there's one annotation per image on average, and this is across all 26 classes. Now, why 26? Well, it's all the letters of the

alphabet. The average image size is around 12 megapixels, and we can also check the median image ratio over here. Now, what is really helpful is that we can see our class balance over here: for j we have eight more annotations than f, and we can see that for u we only have 25. So this means we need to gather more data for the remaining letters so that we have a balanced dataset. Moving down, we have Dimension Insights, which tells you the size distribution of your images. Looking here, we can see that we have jumbo-size images, greater than 1024x1024, and the images we're using are mostly tall images rather than wide images. This makes sense, because we are detecting our hands, and our hands are mostly portrait. Another really cool feature over here is the annotation heat maps: for each letter we can check the heat map of that class, and what this is quite helpful for is that we can see we are not utilizing this part of the image. This means that in the pre-processing and data augmentation step, we can crop this part of the image out, so that we save on processing time. So, in my opinion, this is a really great feature. And once you've gathered some insights about your dataset, you can always go and augment your data using the steps I've shown in the previous video. And like I've mentioned, all this functionality you can get for free on roboflow.com; if you're interested, you can check out the links down below. For those of you who are interested in the object detection, object tracking, and pose estimation courses, buying your own OAK kits, or accessing the GitHub code, all the links will be down below. We also have a membership area on YouTube where you can access all of our premium content, tutorials, and source code, and get early access; click the Join button to become an elite member and supporter of this channel. Also, click up here, where we'll be creating an app that will open a boom gate, for security reasons, when a person has a
mask on, and will keep the gate closed if the person does not have their mask on. Lastly, if you want to see more tutorials like this, please hit like, share, and subscribe; it will really help me out. Thank you for watching, and we'll see you in the next video.
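As a footnote to the class-balance discussion in the sign-language section: given per-class annotation counts like the ones Roboflow's Dataset Health Check reports, a few lines of Python can flag the under-represented classes automatically. The counts and the ten-percent cutoff below are made up for illustration; they are not the real ASL dataset numbers.

```python
from collections import Counter

# Hypothetical annotation counts per class (letters of the ASL alphabet).
annotation_counts = Counter({"a": 30, "f": 28, "j": 36, "u": 25})

mean = sum(annotation_counts.values()) / len(annotation_counts)

# Flag classes more than 10% below the mean as candidates for more data collection.
needs_more_data = sorted(
    cls for cls, n in annotation_counts.items() if n < 0.9 * mean
)
print(f"mean annotations/class: {mean:.1f}")
print("gather more data for:", needs_more_data)
```

With these made-up counts, only u falls more than ten percent below the mean, which matches the video's point that u (25 annotations) is the letter needing more data.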