0:30
Hello and welcome to AI42. Hello, Gosia and Goran. Hello in 2022. How are you?
0:42
All good. We're excited here because this is our first session for 2022, and it feels
0:49
extra special to be able to say hello again to one of our previous speakers here, to Goran
0:56
you were part of the real life machine learning conference that we had a little bit earlier
1:04
last year, in September I think, something like that. I'm also not sure
1:10
whether it was September or October, but those were really nice sessions over there.
1:16
It was really good fun. Can you say a little bit about yourself once again,
1:24
because you're working at Pandora, you're an AI MVP, but can you say a bit more about yourself?
1:31
Yeah, sure. My story: I'm originally from Croatia, I moved to Denmark some seven years ago,
1:38
then two and a half years ago I moved to Malmö, Sweden, where I live now.
1:44
And just recently I joined Pandora as engineering manager to lead the data infrastructure team
1:54
And, yeah, I'm a Microsoft AI MVP. That recognition usually comes from being active in the community,
2:04
doing a lot of sessions, workshops, and so on. So you can, yeah, often see me at meetups and conferences.
2:14
Nowadays more online than in person, but we've started to see some in-person conferences again.
2:21
So hopefully after the winter, the trend continues and goes a bit more back to normal
2:28
At least I would like to go to some conferences and meet people in person, to shake some hands.
2:36
Oh, yeah. It's much different to meet people face to face than everything online
2:42
Yeah, that's true. Yeah. And I follow you on LinkedIn, and from time to time you post a lot of things about IoT, drones and other stuff,
2:51
and it's really exciting. Thanks. Well, yeah, I'm a tech guy and I like to explore what technology can do
3:01
And yeah, I often share this knowledge. So yeah, that's me
3:11
Yeah, so if you would like to know more, just follow Goran on LinkedIn. He shares really cool stuff.
3:17
And we can also share the link to the previous session that Goran had with us in the chat afterwards here
3:26
But I think before we leave the stage to Goran, we're just going to have a quick introduction here into AI42
3:35
Yeah. So, first I'm going to describe a little bit of the background of why we started AI42.
3:57
So, the thing is that we often hear that there's really no good starting point that can take you sort of from A to Z and teach you about machine learning and AI from the basics.
4:10
So, what we've done is we've created AI42. We're a strong team consisting of three Microsoft AI MVPs:
4:16
me, Yves, and Gosia. And what we want to do is provide you with a valuable series of lectures
4:22
that will help you to jumpstart your career in data science and artificial intelligence
4:27
So we will provide you with the necessary knowledge so that you can land your own dream job
4:34
as long as it's related to the fields of data science and machine learning
4:39
And the concept is quite simple: we've invited professionals from all around the globe who will explain to you the underlying mathematics, statistics, probability theory, and also data science and machine learning.
4:55
And we will guide you through this all the way. So all you have to do is to follow our channel and enjoy this content that will stream to you every second week
5:02
And don't worry, you know, we've all started from scratch, and we're very happy to be able to
5:09
help you build up your knowledge. You can always stop and rewind the videos or ask for
5:15
clarifications in the comment section. So we hope to be able to assist you on this journey, and
5:21
we would also like to have you as a guest on our show one day. We've also created cross-
5:28
collaborations with other organizations, so we think that we will be able to give you the best
5:32
opportunities so that you can broaden your own network in the AI and data science community
5:38
And with this combination of the services we offer, we would also like to support
5:42
groups that are not so well recognized yet. So with that said, we'll switch over a little bit here to this.
5:54
Yeah, so our organization is sponsored by Microsoft and we are humbled by all this support we get from our contributors as well
6:06
Thank you for all the beautiful graphics content, and to Mary for the cool intro music before the event.
6:16
We are in close collaboration with C# Corner and the Global AI Community,
6:20
so our lectures are going to be available also on their YouTube channel
6:25
in addition to our own media. Nicola helped create and review all the text content we use on our website,
6:33
in our advertisements and during our sessions. You can follow us on Facebook, Instagram and Twitter
6:41
to become a part of a growing community where we share knowledge and fun
6:45
You will find all the information that will bring you to an advanced level in the field of AI and data science.
6:54
You can also watch our recorded sessions on our YouTube channel and find our upcoming sessions on our Meetup page.
7:02
Yes, and we also have a code of conduct. So the code of conduct will outline the expectations that we have for participation in our community
7:11
as well as the steps you need to take for reporting unacceptable behavior
7:16
And we are committed to providing a welcoming and inspiring community for everyone
7:21
So be friendly, be patient, be welcoming, and also be respectful of each other. You can find our code of conduct here on this link. I think with that said, we can get back to Goran.
7:37
Hello again, Goran. So tell us a little bit more about this session, because we are really excited
7:56
to have it. Yes, sure. So the title of the session is "From 3D Model to AI on the Edge". I will walk you
8:06
through, first of all, some AI basics, then show you some things that you're probably familiar
8:15
with, some services in Azure, and then we will go into some examples
8:23
and prepare and build one model. I will show you how a model is built
8:31
and tested on the edge. And through this story, the line that connects all of this
8:39
is synthetic data for AI training. So hopefully you hear some new stuff and learn some new things through the session
8:50
Thank you. So then we will leave the stage here to you, Goran
8:55
Okay, great. So I introduced myself; just quickly, I'm a tech guy with over 15 years of experience,
9:19
did a lot of projects. My Twitter messages and LinkedIn messages are always open, so if you
9:27
work on some interesting projects and want to connect, just feel free to reach out and let me
9:35
know what you work on. As I said, this session starts with some basics, so we will
9:45
talk a bit about computer vision. By its definition, it is a scientific field that deals
9:53
with how computers can gain high-level understanding from digital images or videos, right? So teaching our
10:01
computers to be able to see and to interpret these images. The most common tasks that we
10:10
have in this field are image classification, object detection, optical character recognition
10:18
facial recognition, pose estimation, and so on. Those are some of the most common ones that you will see.
10:27
Maybe you have already used some of them. And as I said, we will slowly build up through some of these examples.
10:35
What this session is about is object detection. If you have a picture like this
10:47
and you run it through some object detection model, you would probably get a result showing,
10:55
okay, these are the classes over here. Some standard models out there
11:02
would also show you, okay, this is a dog, that is a computer behind it; even though it's blurred out,
11:09
it would probably recognize it, and so on. So with object detection, the idea is that we
11:16
recognize and detect a specific object in the picture and get information about where exactly in the picture it's placed.
11:27
So we get some coordinates and the width and height of this box.
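To make that concrete: a single detection from a service like Azure Custom Vision typically comes back as a tag name, a confidence score, and a normalized bounding box. A rough sketch of that shape, with made-up values:

```js
// Rough shape of one detection result (illustrative values only).
// Custom Vision, for example, returns coordinates normalized between 0 and 1.
const detection = {
  tagName: "dog",          // the class that was recognized
  probability: 0.93,       // confidence score
  boundingBox: {
    left: 0.42,            // x of the top-left corner, relative to image width
    top: 0.31,             // y of the top-left corner, relative to image height
    width: 0.18,           // box width, relative to image width
    height: 0.27           // box height, relative to image height
  }
};
console.log(detection.tagName, detection.boundingBox);
```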
11:32
So we're able to tell, okay, here it is. And if you want to achieve this, you have probably already seen or most likely tried out Azure Custom Vision.
11:47
It's a nice little service. You can access it via customvision.ai. And over there, you can easily create models
12:00
and I will quickly walk you through one simple example. So let's imagine that we want to find this Lego minifigure Chewbacca here on the left
12:15
Where is it placed on the board? We cannot use the standard models for that
12:22
because this Lego minifigure was probably not seen by the standard model so far
12:29
The standard model was not trained with images of Chewbacca, so we want to create
12:36
our own model that is able to recognize it and find it among all the other Lego minifigures on
12:42
this board, similar to that game Where's Waldo, right? So if you go to Custom Vision, over there you can
12:49
create a new project, give it a name, write a description, define the resource and
12:55
resource group, and choose the project type. We have classification and object detection.
13:02
We said we are focusing on object detection in this session: figuring out where the object
13:08
is exactly in the picture. Classification would tell you information like, for example, is it a
13:17
cat or a dog in this picture; the most represented object in the picture is identified, and
13:26
you get an answer back like, okay, it is this type of object or that type.
13:34
And for this project, how Custom Vision works, what we need to do
13:43
is first upload images that will be used for model training.
13:49
We need to tag them. That means we will select where on those images
13:54
exactly the object is placed. We will train the model and then we will test it out.
14:01
So if we go with this Chewbacca, we take some pictures. Over here, you can see 17 images
14:10
will be added to the project. This minifigure is placed on a Lego catalog,
14:18
just to have some variations in the background, right? And you upload those files
14:26
and then you tag where exactly in the picture the Lego minifigure is,
14:34
and you assign a tag to that part of the picture.
14:42
So Chewbacca is placed here; you click around it, go through each picture, and set this up.
14:49
The tool also has a smart recognizer for where the objects are, so it will help you out while you're doing that. You can try it out. When you have tagged all the pictures, you train the model, and then, within the interface, you can do a
15:10
quick test here, or you can publish this model and test it via the API, which means you will send a
15:17
picture and get the results back.
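For the API route, here is a minimal Node.js sketch of such a call, assuming a published object detection iteration. The endpoint, project ID, iteration name and key below are placeholders; the exact prediction URL is shown in the portal when you publish an iteration. Save it as an .mjs file (or set "type": "module") so top-level await works.

```js
// Send one image to a published Custom Vision object detection iteration
// and print the detections. Placeholder values throughout.
import { readFile } from "node:fs/promises";

const ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const PROJECT_ID = "<project-guid>";
const ITERATION = "Iteration3";            // the published iteration name
const PREDICTION_KEY = "<prediction-key>";

const image = await readFile("test-image.jpg");

const res = await fetch(
  `${ENDPOINT}/customvision/v3.0/Prediction/${PROJECT_ID}/detect/iterations/${ITERATION}/image`,
  {
    method: "POST",
    headers: {
      "Prediction-Key": PREDICTION_KEY,
      "Content-Type": "application/octet-stream",
    },
    body: image,
  }
);

const { predictions } = await res.json();
for (const p of predictions) {
  console.log(p.tagName, `${(p.probability * 100).toFixed(1)}%`, p.boundingBox);
}
```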
15:24
The minimum number of pictures required for object detection model training is 15, and that is really, really low. If you want to achieve
15:31
some kind of accuracy, you need a lot of pictures; 15 is super low. So the first time this model was
15:40
trained, I made a test. You can see over here that some detections happen with
15:49
really low probability, like 20%, but those are not Chewbaccas, right? That's not Chewie. We are
15:56
getting some results, but obviously this is not good enough. We need many more pictures for the AI
16:04
to be able to understand, okay, that object is placed over here
16:12
So in order to do that, we can go back and upload more images
16:22
take those additional images, train the model again, and then test it again, repeating this whole process to see how it works.
16:31
After adding a few batches of images, over here you can see that I have iteration number three.
16:42
So I added images two more times to be able to detect
16:46
okay, Chewbacca is over here, and I got an accuracy of 40.8%. That is still pretty low, but you get the idea:
16:56
by adding more and more pictures, you'll be able to train it better
17:02
and to identify this object correctly. And as I figured out, this is complicated stuff for the AI.
17:14
Like it really needs a lot of images. Those minifigures, even though they look different
17:21
they're really similar, and it's hard to recognize, okay, this is what we are looking for
17:30
and this is something else. So, repeating this process, there is one thing
17:37
that will probably come to your mind: most of the time
17:41
you spend over here is on uploading and tagging the images
17:48
in order to train the model, which itself runs automatically. You get the model trained in a few minutes,
17:55
and of course testing is just one click away for quick tests, similar to publishing
18:05
the model and trying it out. Usually you will take the images with a camera; then you need to
18:13
upload them to your computer, upload them to the cloud, to Custom Vision, tag each one of them, and
18:23
then run this process over and over again. So this part is, yeah, really time-consuming.
18:32
And maybe some things in this session will give you an idea of how this can be improved
18:39
if you are using Custom Vision. What I have in mind, as I mentioned at the start, the thread that connects
18:50
all of this, is actually synthetic data. Synthetic data can be generated by different computer simulations or algorithms,
19:04
and it is an alternative to real-world data that can be used to train AI models.
19:12
So why would we use generated data to train AI models?
19:18
Well, it shortens the time for data collection and tagging; we can make it in a way that shortens this time.
19:27
It minimizes the cost of data collection. Imagine that you work for a company that needs to recognize some product that you have,
19:39
and now your boss gives you the task of recognizing it with 98% accuracy,
19:46
and then you spend weeks and weeks taking pictures and tagging them, right
19:51
You can also reduce the bias in your training data. Yeah, and also you could simulate different environments and such
20:03
Let's imagine that your company is producing cars and you want to recognize that specific car model
20:12
You can take a lot of pictures in your warehouse, but maybe you want to recognize it later on out there in a forest.
20:23
So how do you give the AI some pictures from there to get better accuracy?
20:32
And yeah, it's all about getting more accurate AI detections. And synthetic data is not something I came up with.
20:41
This is a really hot topic out there. And if you try to search a bit, you will find a lot of mentions of it
20:51
This is something Gartner Research published some time ago, I think two years ago or so.
20:58
Over here you can see that over time, by 2030, the use of synthetic data will be much, much greater than that of real data.
21:10
With real data we also have a lot of problems like GDPR, this and that,
21:19
and with synthetic data we are able to generate a lot of data that can easily be used to train models.
21:31
Real data will not disappear; we will always need it. But as an addition to real data, synthetic data is really a great thing.
21:44
You can even train models just on synthetic data. So what solutions are there to generate synthetic data?
22:00
One of them is Unity Perception. Unity is mostly focused, as far as I have figured out,
22:08
on a B2B model, working with different companies and such, but they do offer a package that you can download and install in Unity
22:19
and try out. So you can, for example, import different types of products that you have 3D models of. And nowadays it's really common, in any kind of industry we talk about, that there is first a 3D model of the thing
22:37
that is being produced, right? So it's easy to get that. And combining these 3D models,
22:47
maybe you recognize this scene if you use Unity; otherwise you'll see it soon in a quick demo.
22:55
It works as one 3D environment where you have your camera, you place some objects, you can tag each one of them,
23:03
and then you generate different backgrounds behind them in order to get pictures as a result, for training data.
23:16
So when you move the camera, you get pictures from different angles,
23:21
store those pictures, and use them for the model training. That's what Unity Perception gives you as an option.
23:32
We have some viewers from C# Corner; hello to them. Unity is one of the things you should definitely try out
23:44
if you know C#, because it's a great tool. It's free
23:51
to try out; there are paid models if you want to get rid of logos
23:57
when publishing, and so on. You can read more about that, but it's really
24:03
interesting, because in Unity you have this 3D world where you can place different
24:08
objects, and with programming you can focus on the game interactions and such,
24:16
not doing everything from scratch. So over here on the screen
24:21
you see I have one virtual space and I can create some 3D objects
24:27
Okay, here's a cube, right? I just made the cube.
24:33
I can add physics to that cube, so it has gravity, will fall down,
24:39
will hit something, detect collisions, and so on. A really interesting thing to try out.
24:45
There is also an asset store where you can find a lot of 3D models and use those models in your game.
24:52
Some are free, some are paid. You can check that out. But what I want to show you over here in Unity is one scene that is realistic, some kind of realistic industrial environment.
25:10
And over here I have a 3D model of a forklift.
25:21
And what I would like to do, the idea is, I want to detect this.
25:28
Why? Because the standard models will probably not detect that. And maybe my boss wants to know how many forklifts are driving around the warehouse
25:38
Maybe he wants to know the different types of forklifts and such. And since I have the model, I can easily place it in a scene.
25:49
This is really drag and drop. And when I click play, I go into this view.
25:54
I have my camera placed over here, and I click play. Then I see the game view, where I actually see what the 3D model looks like.
26:04
So this is some kind of warehouse, this is a forklift, and it looks pretty realistic.
26:09
We could use this for AI model training, right? If we put more effort in 3D design and such
26:16
we could make really nice realistic environments. There is one great demo of Unity Office
26:23
that you can, for example, check out. So this could be one picture for the training, right
26:30
And this loader over here, at the moment, we could even rotate.
26:38
Now I rotate it 90 degrees, so you can see over here how it looks.
26:50
The idea is not to manually rotate everything and take a lot of pictures;
26:55
that's what the Unity Perception package is for. But since we have C# people watching, I wanted to quickly show how you actually write
27:08
the scripts. At the moment, I have two scripts over here. So I'll switch to Visual Studio Code
27:16
And this is how scripts look in Unity. You have one class, and inside that class you have the Start function and Update.
27:29
Start is called before the first frame update, so when your scene is loaded, Start is called,
27:36
and Update is called once per frame. So in this Update, I define one new vector with one degree on the y-axis.
27:45
This is my new vector, and I apply that vector to the current object I attach the script to.
27:54
That object will rotate based on this vector, so it will rotate one degree per frame.
28:02
If I go back into the Unity environment and attach this forklift script to the fork loader
28:12
object, you can see it is attached. I press play, and now our forklift is spinning, right?
28:25
And by taking a lot of pictures, we can now detect it from all possible angles.
28:30
We could also change the backgrounds or we could, for example, use this scene from different angles
28:39
I also have one camera script, and in that camera script I define a target,
28:47
a game object that is the target, and define one point. That point is the position of the target,
28:55
and the camera will look at that point. In Update, I rotate around that point
29:04
the same way I rotate the forklift. So I rotate by one degree.
29:11
If I go back here, attach the camera script to the camera, and in the script I say that the target is...
29:24
...and it's... ...the fork loader, and if I click play...
29:34
...everything is spinning. The camera is slowly spinning around the forklift, and the forklift is spinning around itself.
29:45
This way, with a few lines of code and a few realistic 3D models,
29:52
we could generate a lot of pictures, right? And that's just using one scene.
29:57
Now imagine all the combinations: changing different scenes, changing backgrounds, changing lighting, and so on.
30:04
All this is possible. And basically, so that you don't have to do this manually and take the pictures yourself,
30:11
that's what Unity Perception is about. It helps you out with that, to generate this
30:17
and to train your models. So it generates the synthetic data for you.
30:24
Unity is not the only one. NVIDIA is also working on such solutions,
30:28
with the Omniverse Replicator, which is a simulation framework that produces really physically accurate data.
30:42
They made it in two versions: one is focused on vehicles,
30:48
and the other one is focused on training robots. Recently, I think some three weeks ago,
30:58
I saw a demo from their side, a more developer-oriented demo published out there, where you can check it out and try how it works.
31:10
Yeah, so synthetic data can be generated with anything, and just
31:20
for this presentation I decided to use the most used programming language in the world.
31:31
You can probably guess: it is JavaScript. It is a programming language, just to make a slide with a
31:37
bold statement out there. And my take on it is: if we can do it with JavaScript, then we
31:45
can basically do it with any other language. It's that simple, because if we can do it on the web, then it
31:52
should be simple to do with any other language. So how do we do it with JavaScript? There is one nice
32:01
JavaScript library called Three.js. It allows you, based on WebGL,
32:14
to show your 3D models in the browser. And that is really cool and nice.
32:21
I'll switch over here to show you some examples. This is threejs.org.
32:28
Over here you have documentation on how to install it locally, run npm, and so on.
32:36
You can see now we are in the browser, and over here you can see I'm spinning the 3D model, and that 3D model is also animated.
32:46
So things are moving over here. And there are a lot of different examples that they are showing here, like how it works with different effects, transformations, lighting, and things like that.
33:04
All of this is possible to do in your browser, which is, I think, pretty amazing, and it's a really nice library to have fun with.
33:16
I will try to find, just a second, some examples like this one,
33:31
where you can see a helmet. There are a lot of light reflections on it,
33:38
and it really looks real, right? So if we can have this realistic representation, a 3D object that looks like a real one,
33:51
then we can generate images out of it. Okay, so how do we do that with Three.js?
34:02
Just a second, I'll show you an example. So over here, I will run it locally on my computer.
34:20
And what I have over here is a model of R2-D2, a Lego R2-D2.
34:26
That's what I have at home. When I got the idea for this session, I was like, okay, what model could I use,
34:39
something that I can show easily in the presentation? So I found this little guy and I was like,
34:46
yeah, that would be fun. Maybe I can find a 3D model on the internet.
34:51
And there are so many 3D model marketplaces; you can find a lot of stuff.
34:55
So this is a 3D model of R2-D2, and with the help of Three.js we are displaying it in the browser.
35:08
So let's take a quick look at the code. It is an HTML page.
35:18
We have a head and a body, we import some stuff that is needed,
35:24
define some variables over here, and, just a second, let me stretch this a bit.
35:32
In this init function, we define some standard stuff. First we need to define the container.
35:39
We need to define the camera and our scene, so that the camera is looking at this scene.
35:46
We define the lights in the scene. This is standard code that you get
35:53
when you open the example of how to display a 3D model, so I'm not going into that much.
36:00
We have a ground and we have a ground grid. If we take a look at the example here, right,
36:06
I can even move it around. We have a ground, this white thing,
36:12
and there is a grid on top of the ground. And on top of that our model is placed.
36:18
As I said, the model is placed there. It's an FBX model. And that model, at the moment,
36:29
is just white, because we haven't applied the textures to it.
36:35
That model is added to the scene, we render the scene, and we add the controls.
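For reference, a condensed sketch of that kind of init code could look roughly like this. This is not the exact demo code: file names, sizes and positions are placeholders, and it assumes three.js is installed from npm (or wired up through an import map).

```js
import * as THREE from "three";
import { FBXLoader } from "three/examples/jsm/loaders/FBXLoader.js";
import { OrbitControls } from "three/examples/jsm/controls/OrbitControls.js";

// Container, camera and scene
const container = document.getElementById("container");
const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 1, 2000);
camera.position.set(100, 200, 300);
const scene = new THREE.Scene();

// Lights
scene.add(new THREE.HemisphereLight(0xffffff, 0x444444, 1.0));
const dirLight = new THREE.DirectionalLight(0xffffff, 1.0);
dirLight.position.set(0, 200, 100);
scene.add(dirLight);

// Ground plane and a grid on top of it
const ground = new THREE.Mesh(
  new THREE.PlaneGeometry(2000, 2000),
  new THREE.MeshPhongMaterial({ color: 0xffffff })
);
ground.rotation.x = -Math.PI / 2;
scene.add(ground);
scene.add(new THREE.GridHelper(2000, 20));

// Load the FBX model (untextured for now) and add it to the scene
new FBXLoader().load("models/r2d2.fbx", (model) => scene.add(model));

// Renderer, mouse controls and render loop
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.setSize(window.innerWidth, window.innerHeight);
container.appendChild(renderer.domElement);
const controls = new OrbitControls(camera, renderer.domElement);

function animate() {
  requestAnimationFrame(animate);
  controls.update();
  renderer.render(scene, camera);
}
animate();
```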
36:44
Okay, so let's do some cool stuff. First, let's add textures to the model
36:52
I'll uncomment these lines. Those are just definitions; the textures are PNG files, right?
37:04
The model consists of two parts: one is the head and the other is the body.
37:09
For the head and the body, we need these textures. Now I save, refresh over here,
37:18
and yeah, we have our R2-D2 that looks pretty realistic compared to this one that I have over here. Okay, so I would like to do something else: I want to create a custom background,
37:36
so we don't train it just on the white background. So I can, for example, enable this setting,
37:45
setting the background picture. But before that, I need to disable the ground and grid
37:51
that were added by default. So I'll just comment these things out.
37:57
No ground, please. No grid. And for the background, I am using Star Wars backgrounds.
38:08
You can Google that and find a lot of those official wallpapers.
38:16
So I will use one of those wallpapers as the background. Here I save,
38:21
I refresh, and yeah, we have our model in a fairly realistic environment. Right now I'm using the mouse to
38:30
turn it around. What else can we do? We have one function for what happens on
38:40
window resize, and we have the animate function; those are the standard parts. But what is
38:46
custom over here is that we can add code to animate the camera, right? And there is some calculation
38:52
for how to move the camera around. If I save that and refresh over here,
39:00
now I'm not moving anything with the mouse; the camera is automatically moving around.
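Continuing the sketch from before, the pieces added in these steps, the textures, the background image and the self-moving camera, might look roughly like this. Mesh and file names are placeholders, which texture goes on which part depends on how the FBX file is structured, and this load callback and animate loop replace the plain ones from the earlier sketch.

```js
// Apply the PNG textures to the model's parts (mesh/file names are placeholders)
const texLoader = new THREE.TextureLoader();
const headTexture = texLoader.load("textures/r2d2-head.png");
const bodyTexture = texLoader.load("textures/r2d2-body.png");

new FBXLoader().load("models/r2d2.fbx", (model) => {
  model.traverse((child) => {
    if (!child.isMesh) return;
    const map = child.name.toLowerCase().includes("head") ? headTexture : bodyTexture;
    child.material = new THREE.MeshStandardMaterial({ map });
  });
  scene.add(model);
});

// Drop the ground and grid, and use a wallpaper as the scene background.
// Swapping this texture between captures gives more varied training data.
const backgrounds = ["backgrounds/wallpaper-01.jpg", "backgrounds/wallpaper-02.jpg"];
scene.background = texLoader.load(backgrounds[Math.floor(Math.random() * backgrounds.length)]);

// Orbit the camera around the model instead of moving it with the mouse
let angle = 0;
function animate() {
  requestAnimationFrame(animate);
  angle += 0.005;                 // radians per frame
  camera.position.set(Math.sin(angle) * 400, 200, Math.cos(angle) * 400);
  camera.lookAt(0, 100, 0);
  renderer.render(scene, camera);
}
animate();
```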
39:08
So based on this, we could generate a lot of pictures and train our model. Not just rotating the camera around:
39:18
we can change to different backgrounds, and each time we take a screenshot,
39:25
we take it with a different background. And it's cool because we can also rotate this object
39:33
in many different directions and detect it also from the bottom and the top and so on,
39:41
which is sometimes not easy with real 3D objects. Okay, so we have our R2-D2 flying around, and we want to take pictures of it.
39:59
So how can we do that? The solution is Playwright, a small framework made by Microsoft.
40:07
It is used for testing; it works cross-browser, cross-platform, and cross-language, and allows you to test mobile web and so on.
40:22
A really nice thing if you want to test your web pages; there are a lot of options it supports.
40:30
A really great thing, I leave it to you to explore. But for our case over here, we have one small script where we will run the test.
40:46
That test will load our current page and save a screenshot of it.
40:54
Then it will wait for three seconds and take another screenshot. So basically we are taking screenshots of this R2-D2
41:03
as it is spinning around. Of course, we could also add options to change the backgrounds,
41:11
to change the lighting, and so on. How does this work, how do we run it?
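A minimal version of such a test might look roughly like the following; the URL, file names and delays are placeholders, and the small pause before the first screenshot just gives the model time to load (which also helps with the hiccup mentioned a bit further down). Run it with npx playwright test.

```js
// tests/capture.spec.js: sketch of a Playwright test that collects screenshots
// of the spinning model. Placeholder URL and paths.
const { test } = require("@playwright/test");

test("capture synthetic training images", async ({ page }) => {
  await page.goto("http://localhost:8080/r2d2.html"); // the three.js page above
  await page.waitForTimeout(1000);                    // give the FBX model time to load
  await page.screenshot({ path: "shots/shot-1.png" });
  await page.waitForTimeout(3000);                    // the camera keeps orbiting meanwhile
  await page.screenshot({ path: "shots/shot-2.png" });
});
```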
41:18
Basically, from the command prompt we run this Playwright test,
41:25
and it will execute: take a screenshot now, run for three more seconds, take another screenshot, and
41:35
store them locally. So if I take a look at the folder I have over here, you can see there is a screenshot
41:45
generated. Something went wrong here, but okay, this one is fine. Maybe if I ran it again
41:55
and put a bit of a pause at the beginning, it would work better.
42:02
But that, yeah, can happen. Okay, so this way we can generate a lot of screenshots
42:11
and use them for AI model training. If I go back to my presentation slides, this is what I did with R2-D2: changing different backgrounds and spinning the model around, right?
42:30
So I generated 100 images. All of this is synthetic data, so the model never saw the real object,
42:40
and I trained that model to be able to recognize it. And how can you test this model? Like I said,
42:50
with a quick test over here, or you could publish it and use it over the API,
42:57
but you could also test it on the edge. What does that mean? In many scenarios nowadays we need this intelligence at the edge. Just
43:09
imagine a self-driving car; it has so many different sensors in it. There are different
43:16
sources on the internet, I tried to find out, but somewhere around 20 terabytes of data is what a car
43:23
produces in one hour, or something like that. You'll get different numbers, but it's a huge amount of data.
43:31
Now just imagine the pipeline you would need to upload all this data to the cloud.
43:36
And that's not the only example. That's why AI is moving more and more to the edge, onto different devices capable of executing
43:47
the model, recognizing something visually, for example, and only uploading data points
43:53
to the cloud. So only the recognized points go to the cloud. And yeah, last year Microsoft released Azure Percept.
44:06
That was in March, if I remember correctly. It is, as they call it, a family of hardware, software, and services designed to accelerate business transformation using IoT and AI at the edge.
44:24
You can find more info on the website. Their development kit looks like this
44:30
It consists of one smart board, in this nice case that acts as a cooler;
44:40
you connect the smart camera to it, or this audio module. There is a linear microphone array over here;
44:50
there are four microphones, so you can also do some voice recognition on the device. In the cloud, you can log into your portal
45:06
and search there for Percept Studio. You can see how it looks;
45:12
it's pretty simple, I'll show you soon. So the idea with Percept Studio is that it's one service
45:20
where you are able to select the models that you want to publish to your device,
45:26
execute them, and get the results: while the device is detecting,
45:31
it's sending those results to IoT Hub. From IoT Hub, for example,
45:37
you can use Stream Analytics and connect it elsewhere, store the data, go directly into Power BI, and so on.
45:46
So what you can do over there out of the box is you can use some predefined vision models
45:54
like general object detection, people detection, detecting products on shelves,
46:02
or vehicle detection. That is for vision. And there are some voice assistant templates
46:08
for, as we said, using these devices as a voice assistant; you can try out templates for hospitality,
46:18
healthcare, inventory, automotive, to see how they could work. I believe all of you have used some kind of voice assistant so far,
46:29
so you get the idea. But we want to test our model, so we go into the Percept Studio
46:38
The interface is pretty simple over there. It will guide you nicely: are you new to AI models,
46:45
do you want to create a prototype, try out simple applications, or do you want to do some advanced stuff?
46:53
Over here on the left, you have an option for devices. You can click that, select your device, and deploy a model to the device.
47:01
And that's what I did with the model that was trained on the R2-D2 data.
47:05
So I published it on the Percept and tried it out. And with only 100 pictures, synthetic data pictures, it was able to detect R2-D2; as you can see, it's bouncing between 69 and 70%.
47:23
And this is also not perfect lighting that I have over here in the video.
47:28
I think this is really cool because, yeah, you don't need to spend a lot of time taking pictures from different angles,
47:39
uploading those pictures, and so on; all of this can be generated automatically.
47:46
So that was our test, and something I've been working on since the end of last year, since around the start of December, in my free time.
48:00
It's called Synthetic AI Data, and it's my free-time project that I applied to Microsoft for Startups with.
48:09
It got accepted over there, and the idea with it is exactly this:
48:14
to generate synthetic data, because I think we developers need one easy-to-use, web-based tool
48:27
where you can go in, upload a model, and generate this data. As for Microsoft for Startups,
48:37
just to go back to this slide: Microsoft for Startups offers support
48:43
with Azure credits. So if you have some idea or already work on some project,
48:49
feel free to submit it over there and see whether they can help you out.
48:55
It's not only Azure credits; there are different licenses and things. You can find a lot of info on their website.
49:03
This is a short preview of how it will look: a web-based dashboard where you can see those models,
49:11
upload a model, and so on. And basically how it works is exactly that:
49:17
you upload your model, you configure some options: I want this type of background,
49:22
I want different lighting, I want this and that, right? There are several options you can configure,
49:32
and you push the generate button, and you get a package of images.
49:38
In the configuration options you choose whether you want 100, 500, 1,000, however many images,
49:43
and you download them, and you can use them for model training.
49:49
It should also integrate with Custom Vision, because Custom Vision has an API through which images can be uploaded.
49:56
I will hopefully sort that out so they can be uploaded there directly.
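As a rough illustration of that integration idea (this is not the tool's actual code): the Custom Vision Training REST API has a batch endpoint for creating images together with their regions, which is exactly where synthetic data helps, because the bounding box is already known at render time, so no manual tagging step is needed. The endpoint version, IDs and coordinates below are placeholders and assumptions; check the Training API reference for the exact details. As before, save it as an .mjs file so top-level await works.

```js
// Upload one generated image to a Custom Vision project together with its
// already-known bounding box. Placeholder values; sketch only.
import { readFile } from "node:fs/promises";

const ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com";
const PROJECT_ID = "<project-guid>";
const TRAINING_KEY = "<training-key>";
const TAG_ID = "<tag-guid>";               // e.g. the "r2d2" tag

const contents = (await readFile("shots/shot-1.png")).toString("base64");

const body = {
  images: [
    {
      name: "shot-1.png",
      contents,
      regions: [
        // Normalized coordinates of the rendered model, known at generation time
        { tagId: TAG_ID, left: 0.35, top: 0.25, width: 0.3, height: 0.5 },
      ],
    },
  ],
};

const res = await fetch(
  `${ENDPOINT}/customvision/v3.3/training/projects/${PROJECT_ID}/images/files`,
  {
    method: "POST",
    headers: { "Training-Key": TRAINING_KEY, "Content-Type": "application/json" },
    body: JSON.stringify(body),
  }
);
console.log(res.status, await res.json());
```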
50:01
So basically, if you remember those four steps from the beginning, it should be much, much faster to do.
50:08
If you are interested in the project, please follow it on LinkedIn; I will be happy about that.
50:13
I promise I don't spam; there are announcements approximately every two weeks
50:22
about what I have managed to do and where it stands. I will also publish a demo video.
50:29
As a last slide over here, first of all, thank you for listening to the session.
50:40
I hope you learned something. And from the sharing list, I got this Azure Heroes Learner badge to share with you.
50:52
If you want to claim your badge, you can scan this QR code and add it to your wallet.
51:02
More info about the Azure Heroes program can be found at this link over here.
51:09
But I believe that you are already familiar with it and have seen those digital badges all over.
51:17
So once again, thanks for listening. I hope you learned something new and that it was
51:24
interesting. Basically, the idea of the whole session is to show how easy it is
51:29
to generate this synthetic data, and I believe we'll be hearing
51:36
more and more about synthetic data, especially for vision training. Thank you.
51:49
Well, this was really great
52:02
I really enjoyed the whole session you showed. It was really amazing.
52:09
Thank you. This is really a game changer because I think this is one of the things
52:14
that takes a lot of time if you're going to do object detection: actually taking these photos with different backgrounds
52:20
and from different angles. To be able to automate this is a true game changer. Yeah, it is. I just remember my last project when I used Custom Vision, when I took a lot of photos and needed to go
52:35
around from different angles. And now with this, what you showed, it could be so quick,
52:40
and you don't need this effort of taking photos and uploading them.
52:45
Yeah, with the use of those realistic 3D models, it's pretty simple. And like I said, this little guy was recognized
52:54
just by using 100 pictures, which for visual detection is really not much
53:03
for being able to teach a computer to see, okay, this is our R2-D2, right?
53:08
So that's a nice thing. Yeah. And so we have some questions here from the audience
53:14
So we can start with this question from CloudTalk with Jonny Chips, who says:
53:20
could you use LiDAR or camera technology to model the real world,
53:24
so that you can take imagery from the real world into Unity to use in this manner to automate AI model training?
53:31
And have you come across any examples? Yeah, I think this would be a lot more complicated.
53:42
I'm not sure whether you can really generate the 3D model that way, but we have 3D scanners;
53:48
I think that's much easier to use, to create the 3D object first and then put it into Unity.
53:55
With LiDAR technology, I'm not sure whether you can generate a 3D object out of...
54:03
Yeah, I know that you get a point cloud. So maybe representing this cloud, if it's high enough resolution, but this is really...
54:13
I'm not sure you would be able to do it. But maybe it's better to use some 3D scanner; yeah, that's something I would go after.
54:26
But, like I mentioned earlier, working in many different industries, there are 3D models already, you know.
54:36
And when I got the idea for this session, I was like, okay, what am I going to try out? I had those Legos around, as I usually use them for my sessions and such.
54:51
And I was like, yeah, maybe I can find an R2-D2 minifigure model.
54:56
And I was able to find it in five minutes. There are so many different marketplaces for 3D models out there.
55:03
So I hope I answered that, yeah. And he also says that this is awesome to see, by the way.
55:12
Thank you. Thank you. And then, Gosia, do you want to take the next question
55:18
It was an excellent presentation so far. If anyone got curious about the Three.js example,
55:24
I would recommend also having a look at Babylon.js. That library is also open source, backed by Microsoft.
55:30
Yes, I've heard about it and I think I've seen it, but I haven't tried it out, honestly.
55:41
So yeah, this is, of course, not the only thing out there.
55:47
The point of my presentation was: let's grab something, you know, super simple, put it in a browser,
55:53
because if it's in the browser, it probably works everywhere, and let's get some data out of it.
56:05
Okay. And then we have a question here from Gabriel who says
56:12
is this considered part of the new data-centric approach for AI development
56:17
How do you see this trend growing in 2022? Well, synthetic data, if you take a look at that
56:24
Gartner research that I shared, is growing, and by 2030 it's expected to be used
56:35
three or four times more than real data, as an addition to it, to improve model accuracy,
56:42
to fix bias, and to add additional scenarios
56:51
to where it can be seen. It was also part of Gartner's Hype Cycle
57:02
published at the end of last year; you can also find synthetic data there.
57:07
And if you take a look at what NVIDIA is sharing about it,
57:13
you'll find it mentioned a lot. Also from Unity, there are some nice talks about it.
57:23
And Danny Lange has some nice sessions; you can check them out on YouTube to learn more about it.
57:33
Unity is also interesting because they have their own, I think they call it ML-Agents,
57:41
with which you can train small bots in a game that are able to understand something and such.
57:46
So they also do a lot of work in this field. But the idea is, yeah, do we need big, complicated tools?
57:57
We can do it really simply. So pick whatever is easiest for you if you need to generate such data.
58:07
Yes. Yeah, that's fantastic there. So I think that's it for the questions
58:16
Thank you. Yeah, people really enjoyed your session, and it was really, really great, all of this about
58:23
synthetic data and everything around it. And it's good that you showed two languages as well, C# and JavaScript.
58:32
Well, yeah, we had viewers from C# Corner, so I think it was important. Right?
58:41
Yeah. So with that, I think we thank you. Thank you, Goran, for a fantastic session
58:49
But please stay on StreamYard while we're finishing up the session. So we have some more information here for our audience
58:57
We'll be back in a short while. Yeah, so, some information here about our next sessions. Two weeks from now, on January 26th,
59:19
we will have a presentation by Terry McCann, and he's also been here with us before,
59:27
so a warm welcome back to him. He's going to be talking about graph-based processing
59:32
in Apache Spark. And then two weeks after that, on February 9th,
59:38
we have Henk Boelman, who will talk about how we can build an Azure ML pipeline.
59:46
But the thing is, Gosia, we are also looking for speakers. Yes, we're looking for speakers for the next sessions as well.
59:55
So if you would like to speak and meet us online, you can submit your session at sessionize.com/ai42 and just come and speak
1:00:07
with us and show something really interesting about AI and data science
1:00:14
Yes. So with that said, we would like to thank all of you, both from me, from Gosia, and also from Yves,
1:00:25
who couldn't be here with us tonight. So with that said, I think we
1:00:33
can finish this session here, and we hope to see you again in two weeks.
1:00:39
Yes, see you in two weeks!