Demystifying Serverless - MVP Show ft. Zeeshan Ep. 2
11K views
Oct 30, 2023
Join us 📺 - Learn why leading tech giants are providing not only a #serverless hosting platform but also are providing their SaaS and PaaS as serverless offerings too. 📅 28 May 2020 ⏰ 7:30 pm IST | 4:00 pm CEST | 10:00 am EST In this live show we'll talk about : • What is Serverless? • Why Serverless? • Serverless offerings on cloud • Alibaba #Cloud Function Compute • Azure Functions • AWS Lambda • GCP Cloud Functions • Demo Serverless #Architecture • Best practices 🌎 C# Corner - Community of Software and Data Developers 🔗 https://www.c-sharpcorner.com
0:00
Hi everyone, my name is Simon, and welcome back to the C Sharp Corner live show.
0:22
In this episode of the C Sharp Corner MVP show, we have Zeeshan Ahmad joining us, all the way from
0:29
Turkey. He's actually from Pakistan, and he's been a C Sharp Corner MVP for the last four to five years.
0:35
But you can see my background has changed and I'm not in my room now. Just the way C Sharp Corner is
0:40
committed to ensuring that we provide quality community content, as we have for the past two decades, so we are
0:46
committed to the live shows. It doesn't matter if we don't have a power backup or if we are not in the
0:50
right place, we'll make sure that our live content reaches you at the right time. Having said that
0:56
let me bring on today's guest, and that is Zeeshan. Hi Zeeshan, how are you?
1:00
I'm doing good. Thank you for having me. Great, Zeeshan. So, Zeeshan, it's our second
1:07
episode of C Sharp Corner MVP show and we are doing this for the first time live and
1:13
go ahead and tell us who you are, what you do, and how you actually started your MVP journey.
1:20
Yeah, I mean, I remember it's been like almost a decade now that I've been in this industry
1:26
writing articles, providing solutions to online members, especially for C Sharp Corner
1:33
I've been with C Sharp Corner for about five years. I joined back in 2015, and I've been an MVP since.
1:40
So, I mean, it's been a great journey working with the experts on C Sharp Corner community
1:47
on GitHub, open source projects. So I look forward to sharing my learning for serverless today with the audience
1:54
That's good. I think serverless is one of the very hot topics these days
1:59
It helps both early-stage startups and enterprises manage all these services on the cloud.
2:06
Why don't you go ahead, Zeeshan, and let's get started with the show. You can share your screen and then you're good to go.
2:13
Cool. And before I start, it would be a request to our audience to always ask questions whenever you have them
2:20
And we'll get back to you. That's perfect. So if you are watching us on Facebook, Twitter, YouTube, and Twitch, you can just write your
2:29
comments below, and Zeeshan, our C Sharp Corner MVP, will answer all your questions.
2:35
So your screen is there and it's all your show now, Zeeshan. Thank you.
2:40
Thank you, Simon. Okay. Although Simon has already given you a one-liner about what C Sharp Corner is, there's always
2:50
more to the story. So we always encourage our readers. I always encourage my audience in the events that I have to visit C Sharp Corner, join it
3:01
Maybe you'll be the next MVP. Who knows? My introduction is that I'm a senior software engineer
3:07
I've been an MVP with C Sharp Corner since 2015. I have been doing a lot of offline workshops
3:15
I have contributed to several open source projects online. Most of my contributions are on GitHub, GitLab
3:23
So the topic that I'm going to be discussing today is serverless
3:28
Serverless is definitely an interesting topic, like Simon has mentioned. Most of the people think about serverless and they think that serverless is all about forgetting about the operational side of your applications
3:43
So that is what I will be talking about in this short talk
3:48
and I will be showing you how serverless is more than just the no-ops for your applications
3:56
What people think, and this is definitely a common practice when building cloud-native
4:02
applications, is that you break down most of your applications into microservices,
4:07
you develop them separately, you publish them separately, and you monitor or provision them
4:12
separately. That is what serverless does as well. So how it differs from no-ops or no-operations
4:18
is that I believe the no-ops part for the serverless is more of a marketing term when it comes to your solutions
4:26
So if you host your solutions on, say, Microsoft Azure, they tell you that you don't have to provision your serverless
4:35
That is not true because at the end, you still are going to be publishing your serverless
4:40
You're still going to be looking at the logs, and you will have to, in some way, provision when something happens.
4:49
The difference between no-ops and serverless is that you can scale as much as you need.
4:55
So there is no limitation on the infrastructure that you get. And that is what I will show you in the demonstration
5:01
and a bunch of different areas that serverless is covering with the cloud infrastructure
5:09
such as for Kubernetes, for database systems, et cetera. Another major benefit that one gets with serverless is that serverless offers cloud native integrations
5:19
Most of these integrations are first class products on the cloud platform
5:24
For example, if you are using Microsoft Azure, you will get cloud native integration for App Service, Azure Cosmos DB
5:32
Same applies to Alibaba Cloud, AWS, and Google Cloud Platform. So these are the benefits that you get
5:38
But at the end, it all boils down to one single concept, which covers the entire serverless domain for your applications
5:49
And that is that serverless provides you with on-demand resources. So what that means is that mostly people think about the cloud as a pay-as-you-go solution.
6:01
So you host your service, and as much as you need it or as much as you consume it, you pay for it
6:07
But with serverless, what happens is that if you are not using the application, you're not paying for that
6:13
Think of it as an Azure App Service. When your app service is running, you are being charged for that
6:19
There are bills for that. But when you stop the service, most of the resources are still there
6:24
For example, the source code is there. And a bunch of other services, such as the Azure App Service plan, it is always on the cloud platform
6:32
So what you do is you either remove that and redeploy the service when you require it
6:39
or you keep on paying for that when you are not using it
6:45
So what Serverless does is that it automatically removes the services, such as the resources required for your application to run
6:53
and automatically provisions and deploys those resources when they are required by your applications as well
7:00
So this on-demand capability of serverless gives it a really cloud native feeling and a cloud native experience to the developers as well as, let's say, to the stakeholders
7:14
So they know that if they are using the cloud platform for, say, five minutes per day, they are only paying for those five minutes
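To make that "pay only for the five minutes you use" point concrete, here is a toy Python comparison of an always-on service versus an on-demand one. The hourly rate is a made-up number for illustration, not any provider's real pricing:

```python
# Hypothetical rate for illustration only -- not any provider's real pricing.
RATE_PER_HOUR = 0.05

def monthly_cost_always_on(days: int = 30) -> float:
    """An always-on service is billed for every hour of the month."""
    return days * 24 * RATE_PER_HOUR

def monthly_cost_on_demand(minutes_per_day: float, days: int = 30) -> float:
    """A serverless service is billed only for the minutes it actually runs."""
    hours_used = days * minutes_per_day / 60
    return hours_used * RATE_PER_HOUR

always_on = monthly_cost_always_on()       # 30 days x 24 h x rate
on_demand = monthly_cost_on_demand(5)      # only 5 minutes per day
print(f"always-on: ${always_on:.2f}, on-demand (5 min/day): ${on_demand:.3f}")
```

The exact numbers are invented; the point is the ratio between paying for a whole month of uptime and paying for a few minutes of actual execution.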
7:23
And now comes the no-ops part, because you are not required to deploy the applications again; that is actually part of the platform.
7:35
Most of the popular frameworks that most of the audience will know are, say, Alibaba Cloud Function Compute, Azure Functions, AWS Lambda, or Google Cloud Functions.
7:48
Now, Alibaba Cloud is basically the new kid on the block, so to say, but it provides a truly global-scale serverless platform.
8:00
And you can use this platform to build any sort of application. Most of the frameworks that it supports are Node.js, Java, Python, and so
8:08
on. So you can build your own applications on top of it. Or you can break down your existing
8:14
microservices applications and deploy them as services on Alibaba Cloud and then monitor
8:20
them as you go. Same applies to Azure Functions. With Azure Functions, what happens is that
8:26
Azure Functions provides first class integration support and connectors for most of the Azure
8:32
marketplace solutions such as Azure Cosmos DB, App Service, EventGrid, Event Hub
8:38
and so on and so forth. AWS Lambda. Now when people talk about
8:42
the serverless technology, AWS Lambda is the first name that comes to their mind. Primarily the reason being that AWS was the first one to offer such a concept of doing serverless,
8:55
where you only write one function, one specific task that needs to be performed
8:59
whenever something happens. So that concept grew up and became the serverless
9:05
And today we have a lot of serverless offerings that originated from that concept
9:10
of having one specific operation performed when something happens inside the infrastructure
9:16
or outside the infrastructure, depending upon how you connect it. Same goes to Google Cloud Functions
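The "one function, one specific task, run when something happens" idea can be sketched in a few lines. This is a generic handler in the AWS Lambda Python style, simulated locally rather than deployed; the `event` fields here are hypothetical:

```python
import json

def handler(event, context=None):
    # One specific task, executed only when an event arrives.
    # In a real deployment the platform supplies `event` and `context`;
    # here we fake the event locally to show the shape of the idea.
    name = event.get("name", "world")
    return {"statusCode": 200, "body": json.dumps({"message": f"hello {name}"})}

# Simulate the platform delivering an event to the function:
response = handler({"name": "serverless"})
print(response["body"])  # {"message": "hello serverless"}
```

The same shape, read an input and return an output, carries over to Azure Functions, Google Cloud Functions, and Alibaba Cloud Function Compute; only the wiring differs.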
9:22
Now, many people don't use Google Cloud Functions. The reason being the Google Cloud Platform
9:28
is more of a consumer-facing cloud. It's not much of an enterprise targeting
9:34
or enterprise-facing. But Google Cloud Functions provides the support for most of the first-class mobile solutions
9:42
such as Firebase. Firebase uses all the power provided by Google Cloud Functions
9:47
And then Google also provides support for Knative, which is a serverless platform for Kubernetes
9:52
which I will discuss in the last part of this presentation. Now, before we dive into a quick demonstration
10:03
let us take a quick overview of what serverless runtime actually looks like
10:07
We have discussed that serverless provides on-demand support for resources that can be compute resources or they can be other services
10:16
So what happens when a serverless application runs is that when there is an event inside your cloud or outside the cloud
10:25
in a hybrid environment, your serverless application gets triggered. And as soon as it is triggered, all the dependencies
10:33
all the resources such as the compute, RAM, or memory resources are allocated for that particular instance
10:40
You can configure all of these settings: you can configure how much RAM your process will require,
10:46
you can configure how much CPU you are willing to pay for per execution of the process,
10:52
or how much it will require to process efficiently. And all of that is done for every event that happens inside your infrastructure.
11:03
Once all of the resources have been allocated, then your process starts
11:08
The process can be an HTTP handler, the process can be a handler for let's say log or the process can be let's say for a database
11:20
Whenever a new record is inserted into the database you can check if the data is valid or say if the data should be inserted or not
11:31
Once past the start point, your process is running, and it executes for as long as it requires.
11:40
Normally, a serverless runtime takes like one second to say five seconds or 20 seconds
11:46
And then it can raise notifications in other areas of your cloud
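That trigger, allocate, execute, notify lifecycle can be mimicked with a toy driver. Everything here, the log messages and the default memory figure, is invented for illustration; a real platform does this inside its own runtime:

```python
def run_serverless(handler, event, memory_mb=512):
    """Toy simulation of the serverless lifecycle described above."""
    # 1. an event triggers the function
    log = ["invoke start"]
    # 2. the platform provisions the configured compute/RAM for this instance
    log.append(f"allocated {memory_mb} MB")
    # 3. your process runs for as long as it needs
    result = handler(event)
    # 4. the platform can now notify other services and reclaim the resources
    log.append("invoke end")
    return result, log

result, log = run_serverless(lambda e: e["x"] * 2, {"x": 21})
print(result, log)  # 42 ['invoke start', 'allocated 512 MB', 'invoke end']
```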
11:52
So this is the benefit that one gets when working with serverless in that each serverless can be triggered by other serverless
12:00
And so when you are finishing up with your process, you can end up triggering another serverless
12:07
So for example, if there is a transaction that is being made on your card or checkout phase of your e-commerce website
12:16
you can trigger your Lambda and that Lambda can end up triggering another Lambda
12:23
which might be responsible for the payment or other parts. So this creates a very asynchronous microservice that you have, where most of your components are isolated from other components, and your services can run independent of each other
12:41
But they can execute and start running as soon as they have some events, and most of this is provisioned by your cloud runtime
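The chaining described above, a checkout function ending by triggering a payment function, can be sketched like this. The `invoke` callable stands in for a cloud SDK's invoke API, and all the function and field names are hypothetical:

```python
def checkout_handler(event, invoke):
    """Handles the checkout event, then triggers the payment function.
    `invoke` stands in for the cloud SDK's function-invocation call."""
    order = {"order_id": event["order_id"], "amount": event["amount"]}
    return invoke("payment", order)

def payment_handler(event):
    # A second, independent function, triggered by the first one's output.
    return {"order_id": event["order_id"], "status": "paid"}

# Local stand-in for the platform routing one function's output to the next:
functions = {"payment": payment_handler}
invoke = lambda name, payload: functions[name](payload)

print(checkout_handler({"order_id": 7, "amount": 30}, invoke))
# {'order_id': 7, 'status': 'paid'}
```

Because each handler only sees its own event, the two stay isolated: either one can fail, scale, or be redeployed without touching the other.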
12:48
While this architecture has some benefits in terms of, let's say, you don't have to provision anything, and if, for example, you have a sale going on on Christmas Eve or Black Friday, you can see that your customers might be sending requests in billions because millions is only a funny number during Christmas Eve
13:09
So what you do is you create a serverless environment and you allow your cloud platform to provision all the resources that are needed for that runtime
13:19
And each of the requests is isolated. So if one request fails, your server or the entire process does not crash
13:26
It's that one specific request. There is one downside, however, that I personally feel everybody should know is that if you see on the left side, all the way from the triggered part to the start part, that is now a dependency
13:43
That is basically a debt that you have to pay for each of the resources, for each of the events that happens inside your system
13:49
So let's say you are using .NET Core and you are building up your Lambda environment
13:56
But your Lambda requires a couple of dependencies, such as you're using MongoDB
14:02
So you require MongoDB driver. You are using JSON converters. So you're using JSON.NET and a couple of other helpful libraries inside your Lambda
14:12
So what happens is that Lambda runtime would need to load up all of those dependencies
14:17
and create an isolated instance for that request to be handled. Now, this happens for all of the requests
14:26
So if you have like a million requests, this dependency load will happen a million times
14:32
So most of the time, this takes up a lot of cost
14:38
and adds a huge bill to your invoice. So this is a primary problem that I see with the serverless runtime.
14:46
And most of the clouds also require a minimum payment. So, for example, in your serverless function, if you do not have those dependencies,
14:53
so you plan to remove all these dependencies and you say, okay, I just have a function,
14:57
I will make sure that the function executes as soon as the event is raised
15:02
and as soon as the resources have been provisioned. And so you make sure that everything is sent down to the metal of the process
15:13
You can do that. But then cloud platform comes in and they say, okay, this is the minimum execution time that you can have
15:22
So for example, if you write a function that takes 20 milliseconds or 30 milliseconds
15:27
but the cloud platform will be charging you for say 100 milliseconds. And same for the memory requirements and the CPU requirements and all
15:36
So in a way, this takes us away from the pay as you go model, where as soon as you spin up the resources, you are paying from that point
15:44
onward all the way until you release those resources. So that is one model
15:51
The serverless model says you don't have to provision the resources, but whenever we provision your resources
15:57
we will be charging you for at least a specific amount of time. And after that, if you continue executing
16:02
we will charge you per second. Otherwise, we can simply remove your resources
16:07
So that is one problem that I personally see with the serverless runtime. And so we need to provision that
16:14
I'm working with a client and we are basically digitizing their entire infrastructure
16:19
So what we do is in a couple of Lambdas, because I have designed and developed their entire infrastructure as a serverless model.
16:29
So we're using AWS for that. What happens is that most of the time their serverless instance takes, say, one second, which is okay.
16:41
So now we don't need to provision anything. We don't need to have an EC2 instance or an Elastic Beanstalk running all the time, because we only need to process it once in a while.
16:52
So what we do is we run those instances. But sometimes what happens is that an instance only requires 15 milliseconds to execute
17:01
The complete runtime for a process is 15 milliseconds. But the client is being charged for over 100 milliseconds at least
17:09
And in most cases, it takes like, let's say 30 seconds to process a complete document that they have in their own format
17:17
So these are a few of the constraints that one needs to look into before actually diving into the serverless just because it's a buzzword
17:26
So now I'll go ahead and I will discuss one quick sample that I want to show
17:34
And I will talk about all the areas that I have discussed on this slide, and then later on I will talk about how cloud giants are using serverless in their environments,
17:47
and I will discuss why serverless is beyond just a simple HTTP sample, or how serverless is
17:53
beyond just, let's say, an event manager for your Event Grid or, let's say, your IoT devices.
18:02
So I will be using Alibaba Cloud. The overall method to create a serverless application,
18:15
or to host a serverless application or to execute a serverless application is similar
18:20
The only thing that you need to know is that a serverless runs whenever there is an event
18:25
Now an event can be anything, it can be a change inside your, say the state of the application
18:32
a change inside your database, or it can be an external event such as an HTTP request
18:39
So as you see over here, there are a couple of templates provided. We have a video auditing or processing service,
18:45
which will definitely work with media encoders on Alibaba Cloud. If you're not aware of,
18:50
or if you're not familiar with, Alibaba Cloud, one thing that you can definitely do is go
18:55
and create a free account with Alibaba Cloud. If not, you can use the same concepts on your favorite cloud
19:01
such as Microsoft Azure, AWS, Google Cloud Platform, or other online services such as Firebase
19:10
They all work the same. The code is same. You just have to read the input and provide an output
19:17
So if I go over here, Alibaba Cloud basically provides us with the concept of a service.
19:23
And then we create different functions. Now the concept of a service is similar to what
19:28
we have as a microservice. We create a single microservice, which can be, say, a product catalog,
19:34
or a cart, or a payment gateway. And inside each of the services,
19:40
we create different functions. Now these functions, each control or manage, yeah, manage would be a better word
19:51
So they all manage one aspect of that microservice. So for example, if we are building an HTTP based service
20:00
we can have a service, and inside that service we will have some managers for
20:06
GET, some for POST, some for PUT. Now, I have had some arguments in the past
20:12
with a couple of my colleagues and they said that there is no point in creating
20:17
different, or managing different HTTP methods in different functions. But I think most of the time your services do not have a balanced flow of
20:29
requests. So, for example, if you have, say, a shopping cart, you will only be sending a PUT or POST request,
20:37
like to add an item to that cart, say, every five minutes. But you might be sending a GET request to
20:44
check how many items you have in the cart every 10 or 20 seconds, right? So now the load is
20:50
no longer balanced between those different endpoints, because the GET and POST endpoints
20:55
are two different entities. So you need to see, okay, which of the endpoints is causing
21:00
some problems, causing the delays, or taking more resources. Or, say, when you
21:07
are about to scale your resources, you need to see which of these endpoints
21:13
require some additional scaling or additional compute resources to function properly. That may
21:21
be a business concern, because now you want to discuss: okay, these are the SLAs that
21:25
I'm providing to my customers; how can I make sure that my SLAs on the GET requests are not being eaten
21:32
up by the SLAs on the POST requests, and so on. So now you can separate out these different components
21:39
into different functions. So that is what I always recommend to everybody that talks to me about
21:46
serverless: if you are going serverless, just think about writing everything separately.
21:52
If you have to apply a single if condition, write it as a separate function. So, for example,
21:58
if you are processing a new file upload, you can easily check: okay, if the file is a PNG,
22:04
process it with this Lambda; okay, if the file is a PDF, process it with this
22:09
Lambda, or this serverless function, and so on. So that gives you a very brief overview of how your customers
22:17
are using your serverless functions: which of them is causing problems, or, say, which of them
22:24
are crashing. So then you can have your own team work on those functions and improve the overall
22:29
environment, right? Whereas if you are using a simple if condition and you are ingesting all the files
22:36
inside that single serverless function, now you have to think: okay, so where exactly is the code breaking?
22:42
I hope you understand the point that I'm trying to make over here. So that is what I recommend
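In a serverless deployment each branch above would be its own function; locally, the same split can be sketched with a dispatcher that routes each file type to a dedicated handler. The handler names and return values here are made up purely to illustrate the structure:

```python
def process_png(name):
    # Stand-in for the Lambda that handles PNG uploads.
    return f"png:{name}"

def process_pdf(name):
    # Stand-in for the Lambda that handles PDF uploads.
    return f"pdf:{name}"

# One small function per case, instead of one big function full of ifs.
# The dispatcher is the only place that branches; each handler can then be
# deployed, scaled, and monitored on its own.
HANDLERS = {".png": process_png, ".pdf": process_pdf}

def dispatch(filename):
    for ext, handler in HANDLERS.items():
        if filename.endswith(ext):
            return handler(filename)
    raise ValueError(f"no handler for {filename}")

print(dispatch("invoice.pdf"))  # pdf:invoice.pdf
```

When one file type starts failing or needs more resources, the problem is confined to one handler instead of being buried inside a single function that ingests everything.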
22:47
to everybody. So if I go over here, this is just a single function. What it does is
22:54
it simply just says, it's just a simple hello world function. The point, the interesting point
22:59
is not whether it says hello world or not, because you can find that in all of the videos online
23:06
The point that I will be making over here is something more than that
23:12
If we go over here and invoke this function, this is basically a simple function
23:18
So now you see. If you see over here, I'm not sure if you can see properly
23:27
Let me just magnify this a bit. I hope you can see it now
23:32
So if you see, what happened is that when your serverless was executed, so when I invoked a request, what happened was it logged that FC invoke start
23:44
So this was the request that started. Then it loaded the code from the code that I had provided it
23:51
It had stored it encrypted at rest for privacy and compliance.
23:57
And then it went ahead and processed, it executed my code. So there are like four steps. First, the serverless gets a trigger. Then the serverless loads the resources. It provisions the compute resources on the cloud as well. Then it executes my code. Then finally it ends
24:17
Now the interesting part is you can see that for this Hello World, our code only executed
24:22
for 51 milliseconds. But the billed duration was 100 milliseconds. And the memory size we used was only 16 MB.
24:32
But since we had already configured it to be 512 MB, it charged us for 512 MB.
24:38
We can configure the memory size, but we are unable to configure the billed duration, because
24:43
it's a simple condition: it's always more than what your actual duration is. So
24:51
if you are using, let's say, 102 milliseconds, maybe they will charge you for 110
24:58
milliseconds, or more than that. So that is what I wanted to show with this approach,
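The billing behavior in the demo, 51 ms of real execution billed as 100 ms and 16 MB of actual use billed at the configured 512 MB, can be modeled roughly like this. The rounding rule and the price constant are illustrative, not any provider's real scheme:

```python
def billed_charge(exec_ms, min_billed_ms=100, configured_mb=512,
                  price_per_gb_second=0.0000167):
    # Execution time is rounded up to the platform's minimum billed duration,
    # and memory is billed at the configured size, not the size actually used.
    billed_ms = max(exec_ms, min_billed_ms)
    gb_seconds = (configured_mb / 1024) * (billed_ms / 1000)
    return billed_ms, gb_seconds * price_per_gb_second

billed_ms, cost = billed_charge(exec_ms=51)
print(billed_ms)  # 100 -- 51 ms of real work, billed as 100 ms at 512 MB
```

Multiplied across millions of invocations, the gap between actual and billed duration is exactly the cost problem described above.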
25:06
like, okay, so this is how serverless works. Now, as I mentioned, serverless is no longer about
25:13
a simple hello world function, or your microservices going separately, or, how can I put it, for example,
25:23
it's no longer that you're just breaking down your microservices further into atomic levels which become functions.
25:30
Serverless is now being used to create entire infrastructures. So let's say you are using Kubernetes for your application.
25:39
Okay, so I'll give it just a minute. I will just give a nice touch so that the readers know that I'm not talking about the code anymore
25:55
So what happens is that when we are using Kubernetes, for example
26:01
so what happens with Kubernetes is that you create your containers and you say
26:06
okay, so my container will require 200 MB of RAM, each container
26:10
So you orchestrate it. And as the need grows, your Kubernetes orchestrator will automatically
26:17
create new pods, create new instances inside the nodes, and automatically load balance all the traffic between different endpoints. So what happens is that now your application itself
26:32
is scalable, it's elastic in nature, and it can take all the traffic that is incoming
26:38
But what happens is that your Kubernetes infrastructure itself is not at all scalable.
26:45
So for example, if you have three nodes, so let's say your Kubernetes architect came up
26:51
And he suggested that, okay, so we can have one master node and a couple of worker nodes, say five of them
26:58
Each one having this amount of RAM, this amount of CPUs and so on
27:03
So that works just fine. But what happens when you hit that limit?
27:07
Now you cannot change it because provisioning new nodes, adding new nodes, removing nodes when they are no longer needed is also not feasible
27:16
So what happens is that most of these cloud platform providers offer serverless offerings
27:22
for infrastructure. What happens is that now you only define: okay, at the end of the day,
27:28
I want this infrastructure to be hosted on the platform, and I don't care how you host it, how
27:34
many resources you require, and how you do that. So what the orchestrator of the infrastructure
27:40
does is that it provisions all the resources, creates nodes for you, adds them as they are
27:45
needed, and removes them. So the entire infrastructure becomes scalable. Recently I was also studying a
27:52
couple of weeks ago that Cosmos DB, for example, has gone serverless. What happened
27:58
previously was that you could provision: okay, these are the minimum resources
28:05
that are required for my database engine, and these are the maximum resources that are required
28:10
for my database engine. What happened was that you cannot go below the minimum.
28:15
The minimum was, I think, 10% of the maximum. So if you have, say, 10 RU/s as the minimum,
28:22
you will have to have, say, 100 RU/s as the maximum. But at times when the application is not being used,
28:28
especially in development or testing environments, you are still being charged for those 10 RU/s.
28:34
Now, what happens with the serverless deployment option is that you're not charged for any of that
28:39
So as soon as the request comes in, Cosmos DB will actually go ahead, process your request
28:47
provide you with a response based on its single-digit millisecond reads and the same SLAs that it provides.
28:55
And when your instance is no longer being used, it will remove it
28:59
So that is where serverless is going nowadays. And I encourage the audience and the viewers to go ahead and try out these serverless offerings specifically for Kubernetes, for Azure Cosmos DB, and build new stuff
29:17
Recently, I was working on one of the projects which required a serverless platform for machine learning
29:22
So we have a Kubeflow for that and a bunch of other offerings which different domain experts can use and build their own solutions on top of
29:33
Yeah, Simon, so I think this will be over to you now
29:45
That's great, Zeeshan. All right.
29:51
I think, Zeeshan, that was a really good session that you gave. I mean, you started with what serverless is, when it is important, and what the limitations are.
30:01
What people would really love about this session is that you're bringing your years
30:04
of experience and the projects that you're working on with clients; you're taking that
30:09
feedback and you have shared it in this session. One question for anyone who is trying to move into this serverless technology, taking feedback
30:18
from you: what was the first service that you used? I mean, you did talk about it.
30:23
Which was the first cloud service that you chose, and what were the challenges that you
30:33
had to face when working with it for the first time? My first serverless framework was Azure Functions,
30:38
to be honest. I personally love Azure because of its simplicity. It is actually a lot simpler to
30:47
build solutions on Microsoft Azure than it is to build a solution on any other cloud, I believe, and
30:52
I can prove that. I have been proving that to my colleagues, to my audience, and I think I can
30:57
still prove it. So that is why I used Azure Functions. The challenges that I had were actually
31:02
thinking about things in a serverless fashion, because it's difficult. So when we were
31:09
previously building monolith applications with ASP.NET, when microservices came, it was a big buzz for everybody, like, okay, so how does this work? It's a lot simpler. And the same with the concept
31:20
when serverless came out; nobody knew how it worked. Like I
31:26
mentioned, most people still think that serverless is just about building a Node.js application and provisioning different functions. That
31:34
is not all serverless is about; serverless is more than that. It is more about the on-demand concept. So when you move ahead from pay-as-you-
31:42
go all the way to on-demand, you can have one of your pay-as-you-go
31:47
subscriptions, that is fine, you can host your applications on the cloud, but then there comes
31:52
a time when you require more than just the server, more than just the pay-as-you-go model. At
31:58
that time you can incorporate serverless. So serverless is more like your Ctrl+Alt+Delete:
32:02
they are the goons that come to solve your hardcore cloud-native problems. So they're there to
32:08
help you out. That's great, Zeeshan. I think, when you talk about the different challenges
32:16
that you faced, so what were the different resources that you referred to
32:20
when you were facing these challenges? I mean, talking with the team leads and manager
32:24
and the leadership role is fine, but as a developer, what were some of the resources
32:28
that you chose to learn serverless and then overcome those challenges?
32:35
Yeah, when it comes to Azure, I always refer to the Microsoft documentation
32:41
Documentation is open source. If you are an expert and you feel like
32:45
the documentation is missing something, you can please contribute. Other than that, yeah, I've always referred to
32:51
and used the Microsoft documentation only. They are the best resources out there
32:55
especially for the Azure. And you can also learn different cloud design practices
33:00
and design patterns on Microsoft documentation. And that applies to other cloud platforms as well
33:07
so AWS and others. I've learned a lot of concepts from that documentation,
33:11
and I've applied that over to Alibaba Cloud, AWS and Google Cloud Platform
33:16
And it's always worked. Yeah, exactly. And also for people who may know
33:23
that Zeeshan is a C Sharp Corner MVP. So he keeps on writing different articles,
33:28
blogs, and video content, not just on C Sharp Corner, but across this entire community ecosystem.
33:36
All right. Okay. So, Zeeshan, having said that,
33:44
I think this is one of the best sessions that we have had, a solid 30 minutes or so with a very good audience. A
33:49
couple of final thoughts and a quick tip that you would like to give to the
33:53
audience who have watched for the last 30-35 minutes: a quick pro tip for whenever
33:57
someone is moving into serverless and deploying it, a pro tip from your
34:01
end, and then we'll wrap the session. Yeah, when it comes to serverless, I would
34:07
recommend that instead of reading articles you write your own article so
34:12
I think that'll be a one-liner from my side. Build something, share it on GitHub. Remember,
34:19
that way you can be a true community leader. And you can definitely hit me up on Twitter
34:25
when you write your own serverless article. Yeah, exactly. Zeeshan is very active on this
34:30
social media platform: Facebook, Twitter, LinkedIn. You can visit c-sharpcorner.com anytime; just type Zeeshan over there. He's one of our top contributors.
34:41
He has been an MVP for the last four to five years. And people always look up to him.
34:46
Zeeshan, thank you so much for your time. You have been a valuable asset, not just
34:51
to the C Sharp Corner community, but to the entire developer community across the globe.
34:58
So we have you in our community. And just thank you for your time
35:03
Thank you for having me. So see you soon. Take care. And thank you, everyone who has watched
35:08
We'll see you in the next episode of the C Sharp Corner MVP Show.
#Programming
#Windows & .NET