Azure Kubernetes Service - Unleash The Power! by Allen O’Neill || Azure Virtual Conference
16K views
Nov 16, 2023
Conference Website: https://www.2020twenty.net/azure C# Corner - Global Community for Software and Data Developers https://www.c-sharpcorner.com
0:00
Share the power
0:04
Okay, okay. So, what I'm going to do here today, folks, is I'm going to talk to you about Kubernetes
0:15
Azure Kubernetes service in particular. And what I'm going to basically talk about is the background to it, where it came from
0:26
why it is, and why you guys should be using it. So before we get stuck into that, I'm going to give you some brief background on myself
0:37
So I'm the very handsome Santa Claus chap beaming out of your screen there
0:43
I'm an engineer, which means that I'm obsessed with understanding how things work and with making things
0:49
and you know when we're talking about Kubernetes and Docker and serverless and all this type of
0:59
stuff it's very much a case of if you understand how it works you can use the power of it much
1:04
better. You don't need to go in there and to develop it but you need to understand where it
1:10
came from, why it's there, why it exists and what the Azure Kubernetes service adds to
1:16
Kubernetes and Docker itself. So I'm a CTO of a company called the DataWorks, and we mostly play
1:26
with big data and machine learning. And this is one of the reasons that I became deeply involved
1:30
in Kubernetes, because it offers such immense opportunity and goodness for folk who work with
1:39
big data, who work with applications that have to scale quickly and easily
1:46
and people who want to put managed services into their pipeline. My most prized award is the one that I got from C Sharp Corner for being a most valued
1:58
professional member of the community. And C Sharp Corner, as you know, it's not simply online
2:03
it's also offline, apart from our COVID days right now. And in a serious way, these folk
2:09
organize local meetups. They promote technology. They promote diversity and inclusion. And, you
2:15
know, there's millions of members across the world. So my message from this slide here is that I'm
2:20
speaking to you. I'm giving you some of my time. I'm giving you some of my expertise. I'd like you
2:26
who are watching today, if you don't already, to strongly consider giving back to your tech
2:30
community. It's not difficult. Just start by answering some questions online. Write an article
2:36
Consider sharing some of your knowledge with fellow techies, either virtually or
2:40
when we get back into whatever the new normal becomes with physical meetups and physical people
2:48
And the biggest message that I want to give everyone at the start of this slide here
2:53
is that knowledge must be shared for the good of everyone. So do your best to share your knowledge and to help others
3:01
So let's talk about Azure Kubernetes Service, or AKS as it's also called
3:09
The agenda that we're going to look at today is we are going to talk about the history and the background of Kubernetes
3:18
We are going to talk about the Kubernetes and containers themselves, what they are, what they do
3:27
We're going to cover some basic Kubernetes fundamentals. We'll then talk about AKS and what it brings to the party
3:34
and then we'll go through any Q&A that you might have at the end. The very first website that I was involved with was back in the late 1990s
3:46
And at the time, the project undertaking was absolutely huge. And I mean, really, really big
3:52
You wouldn't think it, but if you look at some of the websites that were around from the big boys
3:58
like Apple and Microsoft at the time, that I have there on the screen. You'd look at those and you'd go, goodness, well, isn't that the type of thing that a kid puts together in code camp, like in an afternoon or a weekend, you know
4:11
They look so basic. But the thing is, even to get to that stage there took a huge amount of effort and a huge amount of money
4:20
And what we're going to do now is we're going to talk about the evolution of this stuff over time
4:26
So when I did my first website, having decided what the service we were providing was going to do and what it was going to be, we put together a project plan and a budget
4:38
And at the time, our project costing estimates were, well, it was going to cost us $250,000, a quarter million dollars to purchase the servers
4:48
It was going to cost us to host those servers and bring the internet pipe into them, into our office, because there were no data centers, really
4:58
It was going to cost us $75,000 for a fast ISDN line, which ran, I think, at 32K
5:06
and the engineers and other people-related costs. It was going to take us about 30 to 40 people to
5:12
bring this all together, and that was going to cost us about $350,000. And to ensure that we gave our
5:21
project success, we moved to a new office near a digital exchange at the time, so we could
5:28
get a guaranteed 128K-speed internet connection. So as you can see, it was a very big project. Well
5:35
it really was back in the 1990s right um now don't get me wrong i mean this wasn't something that was
5:43
completely out of the dark ages we didn't have punch cards and um the like but it was a very big
5:49
project for the time and if you converted it back to today's projects well it was just a basic
5:55
website but back then it was such a big deal it was like something you would make your career on
6:01
And you would say, I did this big project and it was, you know, three quarters of a million dollars and et cetera, et cetera, et cetera
6:08
But in retrospect, for what was a very, very pretty basic website, it was at the time extraordinarily complex to do
6:17
And it was big. I mean, really big. And it used very big resources
6:21
And in the background picture that you can see there on the slide, when you see those big physical racks and the power units and everything else, that's exactly what we had
6:31
Now you could fit all that into an iPhone, but back then that's what we had, and this was not cheap stuff. It was extremely expensive
6:39
So last week, I had an idea for a new SaaS website
6:46
And after spending, you know, the bones of 45 to 60 minutes slapping together a React.js front end, putting Entity Framework into place, and spinning up my CRUD
6:59
And what I did was I did like, you know, in Visual Studio, file a new website
7:05
And my website was up. Like within four or five minutes, I had a website that was up, deployed and done
7:13
And it cost me almost nothing. I mean, you know, you get your free $25 a month from Microsoft under the different programs
7:23
And I just did that. So I put up the equivalent of what cost us over half a million dollars and took months and months and months to push through to completion back when I started in IT
7:38
And I published one in a few minutes. And it just overall cost me less than a dollar a month to run
7:43
And that's absolutely incredible. Those were the days, and a lot of you remember this, when we had to publish into a particular folder and then FTP the code or the DLLs and all of the supporting files up to our service host, and beg them to please allow us to include some kind of non-standard plugin or DLL that we really, really needed for this particular website so that we could get paid
8:10
and then hoped that everything connected together and worked and that the website would deploy
8:17
Only the gods themselves could help you if something went wrong
8:21
because there was very little support for logging and debugging on the machines that you uploaded to things at that stage
8:29
And even now, some hosts are still in the dark ages when stuff like that goes wrong
8:35
So how did we get to this stage? How did we get to the stage where we can do this amazing web stuff at the speed of light
8:43
And what exactly are containers and Kubernetes and serverless and cloud functions
8:48
What's all that kind of stuff and how is it related? So what we're going to do now is we're going to just step through the progression of these things over the few years and see how things emerged
9:04
So 15 years ago or so, maybe even 10 or earlier, it was very commonplace to have an actual physical
9:15
server in your office or in your home and this was directly connected full-time to the web
9:22
and by this I mean that you had a physical server with the fixed IP address that physically sat in
9:31
your office and was exposed to the web. And some places, you know, they still have this kind of
9:37
setup, but it's becoming increasingly rare for obvious reasons. And when we wanted to publish a
9:44
website or expose a service or some kind of thing on the net, it was really easy because we had
9:53
control. We had direct access to the physical machine and we could do what we wanted on that
10:00
machine. We could install this DLL. We could bring up that library. We could have an exotic
10:05
ActiveX plugin. Do people remember ActiveX plugins, right? The things that your website
10:11
had to have or they just simply would not work without. But the problem with this setup is that
10:18
it's very wasteful. It's expensive. And it left you and the folk in your office, your team
10:25
to be the managers of the entire infrastructure, not just the website that you built
10:33
And this is a huge deal because not only did you have to be someone
10:37
who developed the code and the backend and the HTML and the CSS
10:42
because, of course, we didn't have great frameworks like Angular and React
10:48
and Vue and all these things back then. You also had to be the guy who actually went and took care of the box
10:54
and you had to be the DBA guy and all these different things. So it was very, very difficult
11:00
So somebody then came up with the idea of renting out rack space
11:06
in their data centers to people who had their own physical machines
11:10
and wanted to offload some of the management work of, well, if a disk goes wrong, going over and plugging it in
11:16
and plugging it out and that kind of stuff. You know, offloading your management work in a shared server, in a co-hosted server, meant that you still owned the physical machine, but you could send it to this dedicated space where people knew how to, as it were, keep the lights on for internet-facing servers
11:41
and take care of some of the things like backup, fast database connectivity
11:47
making sure that the servers were patched, that the security was up to date
11:51
stuff that, to be honest, website developers really shouldn't have to worry about, right
11:56
This should be like just part of the plumbing at this stage. And the thing about co-hosting your own server, though, is that it's still your own server, right
12:08
It's still your own hardware. You still have to buy that thing, and it's still going to cost you a lot of money
12:14
and while somebody else is managing the machine and all its related overhead issues it's still
12:21
not really efficient. The machine is not being used to its optimum, and perhaps 80 percent,
10:28
80-plus indeed, of its actual compute life cycle is just spent there lying idle. It's money being
12:35
wasted that would be put to better use. It could employ a new developer. It could get somebody
12:41
else to take the pressure off your team, etc. So the next step up in this evolution of computers
12:49
and servers on the internet was shared servers. And this is where a rack-space company installed
12:58
some you know cool fancy pants server but on their own operating and hardware systems that allowed
13:06
them to isolate certain folders and processes and to share resources over a number of different
13:14
website customers this had the immediate benefit of using the server for a lot more of the time
13:21
but at the cost of flexibility to the customer. The customer got access to, and you've seen these things before, a control panel
13:31
And we use this to interface with the services that we could install and upload
13:34
and it was quite restrictive. And one of the major problems with shared servers is that, as the name says, things were shared
13:43
The operating system was shared. The machine resource was shared. The CPUs were shared
13:51
The RAM was shared, the operating system and everything on it, all of the different components were shared
13:58
And the problem with this is that if one process goes rogue, if something goes wrong, well, then it can maybe hog 100% of the CPUs on the machine for quite some time
14:11
And then any other websites or processes that were sharing the space on that machine will suffer as a consequence
14:19
And that's not a great situation. So, along came another clever person, and they came up with this cool idea of virtual machines
14:30
And this is where we have a system that can very specifically isolate part of a server's hardware and present it to the operating system, which sees these isolated parts of the server as another entire server
14:47
and when I learned about this, first of all, it just blew my mind, right
14:52
Like, think about it. We take a snapshot of an entire machine, an entire server. We break it into chunks. We hand this to a full operating system and we say, this is all you get. And that's kind of cool, because we could now take a very, very powerful machine
15:08
let's say with one motherboard, a gigabyte of hard drive space, 32 gigs of RAM
15:16
and virtually split that into, for example, maybe eight virtual machines with two gigs of RAM
15:21
and 100 megs of hard drive space each or whatever it happens to be
15:29
one of which would be absolutely tons for you to run maybe a reasonable instance of SQL server
15:37
and still have loads of space carved out to run the underlying virtual machine operating system itself
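The carve-up described here can be sketched with simple arithmetic. The host and VM figures are the ones from the talk; the hypervisor reserve is an assumed example, and the talk's eight VMs simply leave headroom below these ceilings:

```python
# Sketch of partitioning one physical host into fixed-size virtual machines.
# Host figures are from the talk; the hypervisor reserve is an assumed example.
HOST_RAM_GB = 32
HOST_DISK_MB = 1024            # "a gigabyte of hard drive space"
VM_RAM_GB = 2
VM_DISK_MB = 100
HYPERVISOR_RESERVE_RAM_GB = 4  # assumed overhead for the host OS/hypervisor

usable_ram = HOST_RAM_GB - HYPERVISOR_RESERVE_RAM_GB
max_vms_by_ram = usable_ram // VM_RAM_GB        # RAM is one ceiling
max_vms_by_disk = HOST_DISK_MB // VM_DISK_MB    # disk is the other
max_vms = min(max_vms_by_ram, max_vms_by_disk)  # the tighter one wins

print(f"RAM allows {max_vms_by_ram} VMs, disk allows {max_vms_by_disk}; host fits {max_vms}")
```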
15:45
So virtual machines and servers worked really well, and they still do
15:51
But they have a limitation. And the limitation is that they're a full-blown operating system environment
15:59
And if you need to create another virtual machine, you need to spin up another full operating system environment for each and every service that's needed
16:12
So despite the fact that we have all of these virtual machines sitting on top of one physical box, it's still a wasteful process, and it's still not optimal on resources
16:23
But it's still come a very long way since we had the physical machines in the office, right
16:29
So the next step up and the next major step in the evolution was the move towards containerization
16:36
And what container technology allows us to do is to create an isolated block of resources within a machine, virtual or otherwise, and to share the underlying machine resources without bleeding into the other services that are using the same resources
16:56
And that's really important to note, that you can isolate things on the same machine, on the same operating system, without one application or one container affecting the other
17:11
So in effect, this is like a shared server, but without the side effect of one installed system having unintended control over another one due to bad resource management
17:24
The other major benefit of a container is that you can specify particular versions of system level dependencies
17:32
So, you know this thing where you'd go off and you'd write a piece of software on your machine and then you transfer it to your colleague's machine and they'd say, no, it doesn't work
17:40
And you say, well, it works on my machine. No, it doesn't work on my machine. It complains about X, Y, or Z
17:46
Library is not installed or it's out of date or it's not up to date or it's the wrong version
17:51
It's incompatible. All those problems go away when you start dealing in containers, right
17:58
As I said, you can specify particular versions of system-level dependencies. So let's say, for example, that you need to have a particular version of a DLL, a library
18:10
another installable. But the problem is that maybe it's custom, or even that it's older or out of date
18:16
relative to another system or another application that also needs to share that system
18:23
on the virtual machine. So that causes a problem. Somebody has to give way
18:28
But with containers, what you can do is you can isolate dependencies like this
18:33
and you can keep them effectively firewalled from each other. And the added benefit of this
18:39
is that if something works on your machine, well, then you can take a snapshot of this configuration
18:45
and you can transport it to an online host or another developer's machine
18:50
and it will still simply work. No questions asked. If it works within a container on your machine
18:57
it will work within a container on any other machine that you put it on. And this is an incredibly powerful feature of containers
19:04
and it's worth checking out for this particular feature, if nothing else whatsoever
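As a rough illustration of why a pinned dependency manifest ends the "works on my machine" argument, here is a minimal Python sketch; the package names and versions are invented for the example:

```python
# A container image carries its dependency manifest with it, so every host
# resolves exactly the same versions. Package names/versions are made up.
pinned = {"openssl": "1.1.1", "libxml2": "2.9.10", "custom-codec": "0.4.2"}

def check_host(host_packages: dict) -> list:
    """Return mismatches between a host's libraries and the pinned manifest."""
    problems = []
    for pkg, wanted in pinned.items():
        have = host_packages.get(pkg)
        if have != wanted:
            problems.append(f"{pkg}: want {wanted}, have {have}")
    return problems

# A bare host with drifted libraries: the classic colleague's-machine failure.
bare_host = {"openssl": "3.0.0", "libxml2": "2.9.10"}
print(check_host(bare_host))   # two mismatches reported

# Inside a container built from the manifest, the environment always matches.
container = dict(pinned)
print(check_host(container))   # []
```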
19:11
So now we're going to take another little jump upwards before we come back down again. The container paradigm moved us into an area
19:18
where we can have, let's call them, surgically sliced up parts of the operating system just for our simple system
19:26
We can use containers to host simple single services like, for example, an SQL server
19:32
or even more complex arrangements of different services in combination. But sometimes we need something leaner again
19:41
Sometimes we need something tighter. We need a little simple thing, where you think, actually, this one particular part of my system
19:49
or my architecture would actually be better just being in a shared instance, doing its thing when
19:54
I need it, but without the overhead of a virtual machine or even the overhead of having to manage
20:00
a container. And now you can have your cake and eat it. This is what they call function as a service
20:07
or serverless computing. On Amazon, they call it Lambda. On Azure, we call it Functions
20:14
And you now can go in and write a simple function with no supporting website, no container
20:20
and just simply say, run this code when some event happens. So functions as a service allow us to write
20:31
simple functions or methods that do something on the web, let's say, and to deploy them to run
20:36
without having to worry about the underlying infrastructure, without having to worry about setting up a container or a virtual machine
20:44
and without having to worry about all of the usual things that we need to do
20:48
even to get to a starting point. It's defined as server-less computing because that's actually what it is
20:55
It's the ability to write a function or a set of functions in reality
20:59
and deploy to what seems to us like an environment that has no servers
21:05
no containers, nothing. The cloud provider, in this case Azure, worries about the deployment, the isolation, and critically, the auto-scaling horizontally and vertically when it's necessary
21:18
Now, unlike virtual machines and Azure platform-as-a-service type offerings, functions aren't charged by the hour, but rather they're charged by the execution of the function on the microsecond
21:30
Like, that's just incredible, by the microsecond. And that raises a really interesting question
21:36
In the past, and now really still, we look at our applications and we say, where's the bottleneck
21:44
Where are things actually slowing down? Where can we improve our speed, our process for the customer
21:52
What's costing us time and what's costing us money? But now, now with the introduction of serverless computing and the fact that we're being charged by the function running
22:06
we can now look at things from the point of view of saying what function is costing the business
22:12
the most money, what function. And that's really, really, really interesting. And I suspect that
22:19
will bring a lot of focus onto design decisions in the future, when we talk about the architecture of these types of big applications. But that aside, that's for a different day's session. So let's say our customer came to us and said
22:37
hey, can we implement a simple thing that when an image is uploaded
22:41
we check to see if it's in our specified dimensions, and maybe if not, we edit and resize it dynamically
22:48
to make it fit our requirements. Now, before functions, we would have had to either add this new functionality to our existing web offering in code someplace, integrate it, upload the changes
23:01
But with functions, we can simply go into an online editor, define an endpoint, in this case, something to monitor incoming image files, and write some code that will be processed when the image lands
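A minimal sketch of the decision such a function might make when an image lands. This is plain Python for illustration, with assumed target dimensions; a real Azure Function would wrap logic like this in a blob-trigger binding:

```python
# Sketch of the resize decision for the uploaded-image scenario.
# MAX_W/MAX_H are assumed example dimensions, not anything from the talk.
MAX_W, MAX_H = 1024, 768

def resize_plan(width: int, height: int):
    """Return (new_width, new_height) to scale to, or None if within spec."""
    if width <= MAX_W and height <= MAX_H:
        return None  # already fits our specified dimensions, nothing to do
    scale = min(MAX_W / width, MAX_H / height)  # preserve aspect ratio
    return (int(width * scale), int(height * scale))

print(resize_plan(800, 600))    # None: within spec
print(resize_plan(2048, 1536))  # (1024, 768): scaled down to fit
```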
23:16
And that's it. There's no hosting setup. There's not even a File, New Project, for goodness' sake. I mean, you can do that, but it's not really
23:23
needed. So that's incredibly powerful. So we've gone the full loop from physical servers
23:31
in our own office through to how they evolved up to functions and serverless
23:40
And now we're going to step back a little bit and we're going to look at things from an orchestration and a
23:47
management point of view, because actually under the hood, these functions as a service
23:52
they actually run as specialized containers, if you like, and they have to be managed
23:57
So let's go and focus on containers and Kubernetes again. So the job of containers, as we've seen, is to isolate an application and to package up all of
24:12
the dependencies that the application needs into an isolated bundle, and then to run this in a self-contained, isolated part of the system
24:21
So if Docker manages the applications themselves and the communication between the applications inside the containers
24:28
what Kubernetes does is it manages the orchestration of those containers. Kubernetes is the thing, it's that glue in the middle
24:37
that manages the containers on the virtual machines on which they run
24:43
It ensures that they have enough CPU and RAM. It ensures that the hard drive space is adequate or will alert if it's not
24:53
It will ensure that the containers that need to be close together for quick communications are placed onto the same machine
25:00
They're deployed or scheduled onto that same machine or to close networks
25:05
And if a container dies, it'll start up a new replica of that container
25:11
So it monitors the health of the system and a heck of a lot more
25:15
And that's the magic of Kubernetes. It manages the containers so that you don't have to
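The "if a container dies, start a new replica" behaviour can be sketched as a tiny reconciliation loop. Real Kubernetes controllers do this through the API server, so treat this as an illustration of the idea only:

```python
# Sketch of reconciliation: compare the desired replica count with what is
# actually healthy, and top the set back up with fresh replacements.
def reconcile(desired: int, running: list) -> list:
    """Return the running set topped back up to the desired replica count."""
    alive = [c for c in running if c["healthy"]]
    next_id = 0
    while len(alive) < desired:
        alive.append({"name": f"replacement-{next_id}", "healthy": True})
        next_id += 1
    return alive

running = [{"name": "replica-0", "healthy": True},
           {"name": "replica-1", "healthy": False}]  # this container died
state = reconcile(desired=3, running=running)
print([c["name"] for c in state])  # replica-0 plus two fresh replacements
```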
25:21
So if we look back and we compare this to the start of the story
25:25
when we talked about having a machine in your office, where you had to physically go and physically do all the checks
25:32
and put in the floppy disk and the CD and everything else, right? You have to do all those physical things around the machine
25:36
it's almost the same thing, in that Kubernetes is that person who did all those things.
25:42
Right? All you as a developer have to worry about is that you deploy your code in the container,
25:48
shoot the container up to, let's say, Azure Container Services, and then let Kubernetes pull
25:55
it down and manage it within that. So this is Kelsey Hightower's GitHub repo, and it has a
26:06
a whole set of instructions, a step-by-step book almost, if you like, online. And it's called
26:15
Kubernetes the Hard Way. And what it does is it talks you through setting up Kubernetes
26:22
absolutely from scratch. Because the bottom line, my friends, is that Kubernetes setup
26:31
and management is not for the faint-hearted. It can be quite complex
26:38
And one of the things that Microsoft does well is they are very good at automation
26:45
They are very good at removing pain. They are very good at removing the things that get in our way
26:56
that stop us being productive as developers. Because as an engineer, when I look at something
27:00
I want to code it, okay? I don't want to get in there and start messing around with all sorts of settings and pipelines and everything else
27:08
I want it to just work. I want to do my creative work and keep driving that forward, right
27:13
But Kubernetes is an absolutely fantastic and awesome system, and it runs some of the biggest services on the planet today
27:24
and will continue to do. It's only growing bigger. But it is not an easy system to work with
27:31
Okay. It is not an easy system to work with. And Kubernetes setup and management is hard
27:37
And this is one of the reasons that Kelsey's website there says Kubernetes the hard way
27:42
because it's difficult. It's a bit like hardware. You know, we have hardware and we've got software
27:48
Like there's a little hint there in the name, hard, hardware, because hardware is hard, right
27:52
Kubernetes setting up is also hard. So if you look here at this diagram
28:00
what we're looking at is we're looking at three machines. We call them nodes
28:07
One is running CentOS, the other is running CentOS, the other is running CentOS
28:11
So three machines running CentOS. That's the CentOS operating system, version 7
28:17
And each of those has a particular role, and each one of them runs containers
28:25
If we didn't have Kubernetes, we would have to dial in and manage those machines ourselves
28:31
as we deploy the containers onto them. Kubernetes does that management for us
28:39
Before I continue, I'm just going to check on the website here
28:44
and to make sure that we're actually doing okay on time. So can you just give me a ping there, Stephen
28:50
in the chat window to make sure that I'm okay, because I don't want to run too much over or
28:54
under time. Yes, I think we're doing good. That's fine. So we're doing good. Okay, so let's go back
29:00
here now. And in this here, we can see that we have three virtual machines, and we have Kubernetes
29:08
managing. And it seems reasonably clean there, okay? It seems like there's not a lot to do
29:13
Now I'm going to show you another screen. And this screen here shows a little bit more of what's
29:18
actually going on inside in the Kubernetes network when we have it running and what is it actually
29:24
doing. So this looks a little bit more complex, right? So in Kubernetes, we have two main types
29:33
of machines or nodes, as they're called. We have a master machine, a master virtual machine, and then
29:39
we have node machines. Now, these guys used to be called something else; they weren't called nodes
29:46
They were actually previously called minions, as in the Minions cartoon
29:57
I thought it was really cool. I preferred them being called minions, but they've changed the name. And that's the way it is
30:01
In any case, what you can see here is that in every node, we have a stack, which is, first of all, the operating system
30:09
On top of that, we have the container runtime. And this is the thing that runs the actual Docker containers that have our applications
30:18
On top of that, we have a thing called a kubelet. And a kubelet is a small application that runs like a daemon
30:26
It runs as a service on the nodes, and it continuously communicates back and forth to the master Kubernetes node, reporting on the health of the system, reporting on the containers that are running, what type of CPU they're taking, how much space they're taking, all these different things
30:47
Above that, again, we have the networking layer, and we look at networking a little bit later in a live system I'm going to demonstrate to you
30:55
And if you look at the different color of the arrows, right, of the information going up and forward, you'll see that it uses a number of different types of network message communication protocol
31:10
It uses JSON message exchange over REST APIs. It uses gRPC. It uses protobufs at the core
31:20
So an awful lot going on there. By the way, if you are doing any kind of high-speed messaging
31:26
I would recommend that you look at gRPC and protobufs. They're absolutely awesome from a speed point of view
31:32
That aside, let's go back to the picture here. So on the top, then, the master machine
31:37
it has a number of different services that it runs and it manages
31:41
First of all, it has an API server. And this is to allow the different nodes via the kubelets to communicate back to it
31:50
so that it knows what's going on all the time. Then it has a scheduler and a control manager
31:57
which decides exactly what containers should be running where, how many there should be, et cetera, et cetera
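The scheduler's placement decision can be sketched as a first-fit pass over node capacity. Real schedulers score nodes on far more than CPU and RAM (affinity, taints, spread), so this is only an illustration:

```python
# Sketch of a first-fit scheduler: place each pod on the first node with
# enough free CPU and RAM, deducting capacity as placements are made.
def schedule(pods, nodes):
    placements = {}
    for pod in pods:
        for node in nodes:
            if node["cpu"] >= pod["cpu"] and node["ram"] >= pod["ram"]:
                node["cpu"] -= pod["cpu"]
                node["ram"] -= pod["ram"]
                placements[pod["name"]] = node["name"]
                break
        else:
            placements[pod["name"]] = None  # nowhere fits: unschedulable
    return placements

nodes = [{"name": "node-1", "cpu": 2, "ram": 4},
         {"name": "node-2", "cpu": 4, "ram": 8}]
pods = [{"name": "web", "cpu": 2, "ram": 2},
        {"name": "db",  "cpu": 2, "ram": 6}]
result = schedule(pods, nodes)
print(result)  # web fits on node-1; db needs more RAM and lands on node-2
```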
32:04
And it keeps those all ticking along. But the bottom line here to note is that the container part
32:13
the container runtime, is an extremely small part of this entire process
32:17
And what you can see there in this high-level diagram is actually quite a complex environment
32:24
There's a huge amount that's actually going on there. Kubernetes is hard
32:29
So what Azure Kubernetes service does is it comes along and it manages the hard stuff for us
32:37
Okay. It takes away the pain. And there's a lot of pain if you try and do it yourself, trust me
32:42
It takes away the pain and it does all of these things for you. It ensures that the virtual machines stay up
32:49
If one goes down, it will deploy another one. It will ensure that it gets filled out with the right containers
32:56
It will auto-scale them. It'll keep your machine healthy. It'll keep your security up to date
33:01
Every single thing that you would normally have to do as a person tweaking Kubernetes manually
33:06
Azure Kubernetes Service takes care of for you. And that's incredibly important
33:11
So some of the key benefits of using Azure Kubernetes Service, at least to me, like most of Azure, is that we have incredibly tight integration with all of the other Azure services that we know and love
33:29
So we have, first of all, elastic provisioning, and we can use a tool that Microsoft has built and given out as open source called KEDA
33:39
And KEDA is really interesting, because what it is, is an event preprocessor
33:46
It's a system that can look at a queue, a queue of work that's coming down into your system, into your app, and it can make a decision based on the amount of work in that queue
34:00
So the amount of work coming down into your app that's spread across many containers
34:04
It can say, hmm, I think that this particular system is about to explode
34:10
I think this particular system is going to need to grow really, really fast, really quickly
34:15
because maybe you've just put an advert on TV. Maybe it's been on Super Bowl Monday or something
34:22
Maybe it goes out as an advert on the most popular reality TV show
34:28
and suddenly everybody says, oh, we want to go and we want to use this service. So as all these people hit your end point and those messages come into your queue
34:38
KEDA can look at those coming into the queue and can say, ah, we need to auto-scale the system up
34:43
So it can grow your system. You don't have to do this
34:47
You don't have to go in and manually trigger up Kubernetes. AKS will take care of this for you
34:53
You just need to configure it, give it the boundaries, because we don't want our budget to be blown either, right
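A KEDA-style scaling decision, queue depth in and a bounded replica count out, can be sketched like this; the numbers are assumed examples, not KEDA defaults:

```python
# Sketch of queue-driven autoscaling: derive a replica count from queue depth,
# clamped to configured boundaries so the budget is not blown.
import math

def desired_replicas(queue_length: int, msgs_per_replica: int,
                     min_replicas: int, max_replicas: int) -> int:
    """Replicas needed to drain the queue, clamped to [min, max]."""
    wanted = math.ceil(queue_length / msgs_per_replica)
    return max(min_replicas, min(max_replicas, wanted))

print(desired_replicas(15, 10, min_replicas=1, max_replicas=20))    # 2
print(desired_replicas(5000, 10, min_replicas=1, max_replicas=20))  # 20, capped
```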
34:58
And it'll take care of that for us. The other thing that it does is it takes care of very tight integration
35:07
with our Azure DevOps. And this is one thing that I use very heavily myself
35:13
So all of the code that my company uses, we push into Azure DevOps
35:21
And we have our ticketing system in there in Azure Boards. We use Azure Container Services
35:28
We've everything tightly integrated. And this means that when I go in and I write a ticket for one of the engineers on my team to say
35:36
go and do this particular thing here, they go off and do it. When they do a push on Git, they put a flag in the comment to say that this particular push they're doing for a PR is tagged as linked to a particular job
35:52
So it's linked to a particular task that I issued to them on the board. That link gets picked up
35:57
The other members of the team, they'll check the PR, and that's accepted
36:00
and then it gets pushed up and it goes through and it's integrated into the overall system
36:06
and gets pushed up into Azure Container Registry and then gets automatically deployed down into Azure Kubernetes Service itself
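The work-item-linking step in that flow can be sketched locally. In Azure Repos, mentioning `#<id>` in the commit message links the commit to that Azure Boards work item (when pushing from GitHub, the `AB#<id>` form is used instead). The work item ID and file name below are made up:

```shell
# Sketch: a commit whose message links it to a (made-up) Azure Boards
# work item. The "#42" mention is what Azure Repos picks up and ties
# back to the board, so the PR, the push, and the task stay connected.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email dev@example.com
git config user.name Dev
echo "fix" > handler.txt
git add handler.txt
git commit -q -m "Harden queue handler, fixes #42"
git log --oneline -1
```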
36:17
So for me, one of the huge wins here is the end-to-end tight integration that we have
36:24
The other thing it does is it integrates very tightly with Azure Active Directory
36:29
which means that we have our security layer built in and very, very tightly integrated
36:34
And the other benefit it has is that it's available in more regions
36:38
than any other cloud provider's managed Kubernetes offering at the moment. So earlier on, I talked about having to have numerous people involved
36:47
in the process to get our application up and running. We talked about having IT pros and the DBA and the security guy
36:54
all of these different folk involved. But the beauty of using an integrated service is that we can now automate all of this and put it into a pipeline
37:04
And this is what my company does. All of our work is in one smooth pipeline
37:09
Everything gets taken care of from end to end. So now we can easily define, deploy, debug, and upgrade incredibly complex systems and automatically containerize everything
37:22
We can ensure that all of our routine tasks are automated and that we have a very, very deep level of traceability
37:32
across everything in our deployments. And this is incredibly important. You can see there from the diagram on screen
37:39
where we start out and send everything in through the repos. It can then automatically do continuous integration and deployment
37:50
It can automatically push into and pull down from the Azure Container Registry
37:57
It can kick that then into the Azure Kubernetes production cluster. Everything is automatic from end to end
38:03
And this is one of the, for me, the huge draws of AKS integrated with everything else
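The last hop of that pipeline, the registry-to-cluster deployment, can be sketched as a Kubernetes manifest pointing at an image in a container registry. All names here (registry, image, deployment) are placeholders, and this assumes cluster credentials are already configured (e.g. via `az aks get-credentials`):

```shell
# Sketch of the final pipeline hop: applying a Deployment whose image
# lives in an Azure Container Registry. Registry, image, and workload
# names are placeholders for illustration only.
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-worker
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-worker
  template:
    metadata:
      labels:
        app: demo-worker
    spec:
      containers:
      - name: demo-worker
        image: mydemoregistry.azurecr.io/demo-worker:v1
EOF
```

In a real pipeline, the CI stage would stamp the image tag instead of the hard-coded `v1` here.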
38:11
Deploying your code into Azure Kubernetes Service is extremely easy. Like, it just could not be easier, both in full Visual Studio and in the lightweight VS Code itself
38:25
So this screen, for example, shows how I can make one request and it will deploy my code directly up to a container
38:32
running my code up on the cloud and push it through all those hoops
38:37
in my predefined pipeline. And I don't have to worry about this, right
38:42
So this is the beauty of the managed service. The other thing is that if you recall
38:48
we talked about how Azure Kubernetes Service removes the pain of managing all of these virtual machines
38:55
that sit under Kubernetes. Well, this is how easy it is. We can define the type of machine that we want
39:01
And then we can either choose to manually update these ourselves, or we can let Azure auto-scale this with certain criteria, potentially hooked into the likes of KEDA to push that along for us, which is pretty awesome
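That "define the machine type and let it scale within limits" step looks roughly like the following with the Azure CLI. The resource-group, cluster, and pool names are placeholders, and running it requires an Azure subscription:

```shell
# Sketch: add a node pool to an AKS cluster with the cluster autoscaler
# enabled, so Azure grows and shrinks the underlying VMs between the
# bounds we set instead of us managing them by hand. Names are made up.
az aks nodepool add \
  --resource-group my-rg \
  --cluster-name my-aks \
  --name workers \
  --node-vm-size Standard_DS2_v2 \
  --enable-cluster-autoscaler \
  --min-count 1 \
  --max-count 5
```

The `--min-count`/`--max-count` bounds play the same budget-protecting role at the VM level that KEDA's replica bounds play at the container level.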
39:15
So what I'm going to do now is I'm going to show you, give you a little peek inside one of my own systems, at what's going on inside a Kubernetes cluster
39:25
In this case, this is a Kubernetes cluster that's managed by AKS
39:31
So I'm going to investigate it through a thing called WeaveScope, which is a tool for visualizing containers and their network, and show you some of the different things that are going on
39:39
And I think by looking at this here, you will see the complexity of Kubernetes
39:46
You will see the huge amount of things that it has to manage
39:50
And you'll understand why you do not want to do this yourself, right? You'll understand why you want to use Azure Kubernetes Service or another similar managed service
40:00
But importantly, you'll want to get somebody else to manage this for you because you don't want to spend your time here unless that's what you want to do
40:08
And if that's what your thing is, that's awesome. Go for it. So what I'm going to do now is I'm going to skip over hopefully to my live screen here and see if I can pick it up
40:23
let me see if I go here and I click out of this, I think I'm now in WeaveScope. So I'm just going
40:34
to double check with the guys here to confirm that they still have that. So Simon, Stephen
40:42
you have the correct desktop there at the moment. I did this last week and it turned out I was showing
40:48
the wrong desktop. So you should be seeing a thing called WeaveScope and you should see my mouse
40:53
moving around now. WeaveScope. Awesome. Okay. So what we're doing now is we're looking inside
41:00
a live Kubernetes instance. And this particular thing here has a number of
41:07
services inside. It has a thing called Grafana, which is a dashboard system
41:19
It's got JupyterLab, which is a system for machine learning and working with AI
41:26
It's then got Certificate Manager. It's got Superset, which is a database visualizer type tool. We have a Cassandra database over here. We have Redis over here. We've
41:42
got Mongo over here. So there's a huge amount going on here. Okay, we can see an awful lot going on
41:46
here in this particular system. And what I'm looking at here now is I'm looking at the containers
41:50
live as they're working across my system, which is why you're seeing things move in and move out
41:57
What I'm now going to do is I'm going to show you the underlying machinery underneath this, and I'm going to do that by clicking on the hosts button up here at the top
42:07
So if I click on hosts, as it goes and it updates here, it sees that I have one, two, three, four hosts
42:18
Okay, so there's four hosts that I have, and each one of these is running Kubernetes, and they're all managed by AKS
42:28
Now, if I go to one particular host, this first one here, I'm going to click into it, and it brings up the properties on the right-hand side here
42:38
And inside this, I can get information on what's happening on this particular host
42:43
So you can see here that it's using a small amount of memory
42:49
The overall load is less than half of a percent. CPU is less than 4%
42:56
If I look down here and see what containers are on the machine, it tells me exactly what containers are deployed onto this machine at the moment
43:08
And I can see also what processes are running on the machine
43:13
Now what I'm going to do is I'm going to click on a different machine. So we can see here that there is Weaveworks, TunnelProbe
43:22
and four pods on this machine here. So now I click to this machine over here
43:33
And we can see here that it's running different containers. So these are the different containers
43:40
that this machine is running for us. If I had to go and do that myself by hand
43:44
that would take a serious amount of work managing all this stuff, right? But again, by using this here, it does it for us automatically
43:52
So what I want to show you now is if I go into containers
43:57
Stephen tells me I have to wrap up soon, so I'll do that
44:01
Now what I'm going to do is I'm going to look at some of the containers that I have, and I'm
44:05
going to look at the work that they're doing. So this one here is one of our main things, which is a worker
44:12
And if I go into this worker here, I can see all of the information about that particular system
44:21
I can also go in and I can execute a shell onto that system there
44:27
And it will allow me to interact with it directly. So all of this here I can do from inside Azure Kubernetes Service
44:35
And the reason I can do it from inside it is because all of the hard work is taken care of for me by Azure, by the Azure Kubernetes service
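The same inspection the WeaveScope demo walks through, listing hosts, seeing their load, finding which containers run where, and opening a shell in one, can be done with plain kubectl against an AKS cluster. The workload name is a placeholder, and this assumes credentials were fetched with `az aks get-credentials` first:

```shell
# Sketch of the demo's inspection steps via kubectl (placeholder names):
kubectl get nodes            # list the hosts (the four VMs in the demo)
kubectl top node             # CPU/memory per host (needs metrics-server)
kubectl get pods -o wide     # which containers are running on which host
kubectl exec -it deploy/demo-worker -- /bin/sh   # shell into a container
```

WeaveScope gives you the live picture; these commands are the equivalent point-in-time queries when you just need an answer quickly.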
44:46
If I didn't have it managing all this for me, either A, I would have even less hair than I have right now left, or B, it just wouldn't be up there because it is difficult to do, guys
45:01
So if you're getting into deployments, I strongly recommend that you consider using this for it
45:11
So let me just now finally wrap up and ask if anybody has any questions