Keynote by Vishwas Lele || Azure Virtual Conference
Nov 16, 2023
Vishwas Lele is the CTO at Applied Information Sciences. Vishwas also serves as the Microsoft Regional Director for the Washington D.C. area and is currently an Azure MVP. Conference Website: https://www.2020twenty.net/azure C# Corner - Global Community for Software and Data Developers https://www.c-sharpcorner.com #azure #csharpcorner #virtualconference #keynote
0:00
Well, you know, I was looking at the agenda for today's conference and there are great speakers
0:09
They'll be going into a lot of detailed topics. And my objective here is to motivate a discussion
0:17
about the sessions that you're about to see later today. You'll drill into a lot of information
0:23
Mahesh and Magnus together already talked about the sessions that are on the
0:28
docket, I just wanted to take a few minutes to motivate a couple of things. So here's my first
0:36
slide, and this is my agenda slide. Quite unusual for an agenda slide to see a diagram, but these
0:41
are the three things that I really want to focus on. And I have limited time, so I won't get into
0:48
as much code as I would like to, but hopefully I can give you a sense for why this is important
0:54
So the three things that I want to focus on today, first is cloud native constructs
1:00
So what do I mean by that? And what does cloud native mean? And I'm sure you've heard of many terms
1:06
There is a whole foundation called the Cloud Native Computing Foundation, CNCF
1:11
There are a whole host of definitions about cloud native. I'll just give you my simple definition, which is, are you trying to build applications in the cloud that take advantage of the latest innovations and capabilities in the cloud, whether it is serverless, whether it is a managed Kubernetes environment, whether it is a consumption-based SQL server or a global database like Cosmos DB or something like that
1:42
The idea is that you're taking advantage of the latest and greatest innovation
1:48
Many of these services that we talked about, they've been designed with cloud in mind
1:53
whether it is multiple regions, failure, failover capability, elasticity, and whatnot. So if you're taking advantage of those things, I like to call them cloud native constructs
2:07
you know, whatever your definition may be. So keep that in mind and we'll drill into that
2:12
in a moment as well. So that's one. The second thing which is becoming increasingly important
2:19
is consumption-based pricing. So what do I mean by that? As large companies are moving to the cloud
2:28
as they are moving their existing applications to the cloud, it is becoming very clear
2:33
that now the next step of the journey is how do we optimize our footprint in the cloud? How can
2:40
we optimize the consumption in the cloud, right? This is going to be really important. And there's
2:47
a fundamental shift that is happening here, right? It is fine for you to move to the cloud
2:52
in a pre-provisioned capacity model. What do I mean by that? Well, I'm moving to the cloud
2:59
and I need these many VMs. I need this much storage. I need this much compute capability
3:06
or network or whatnot, I'm pre-provisioning that capacity. And of course, there's auto-scale
3:12
don't get me wrong, I'll provision it to a certain number, and then I can auto-scale up and down
3:18
Many applications can't take advantage of auto-scaling because, you know, their business is
3:22
consistent across the 24-by-7 time frame. It may not be peaky, or they may have some peaks
3:28
but it's consistently used. There are many applications that see episodic use
3:36
a few users in the morning, then nobody during the day, maybe some in the evening. But you have to keep
3:42
those systems up. And if you think of the cost of those systems in a pre-provisioned model, you are having
3:48
to pay for all the time that you have reserved the capacity. And what we are finding now in 2020
3:53
is that the option of only paying for what you use is becoming really important
4:01
And that's what I mean by consumption-based pricing. Only use resources and only pay for resources
4:09
that you are using. Let me take a couple of examples to point this out. This has been around for a long time
4:14
but this is becoming more and more important. Let me give you a couple of examples
4:18
Everybody knows Azure Functions has a consumption-based pricing model. So you don't pay for pre-provisioned capacity
4:25
You're paying for how many times you're invoking that piece of code. That's essentially what a consumption-based pricing is
4:31
But if you look at it today, Cosmos DB team has announced a serverless model
4:37
SQL Server in Azure: Azure SQL Database has a consumption tier
4:43
where you run a query and based on the kind of query that you're running
4:48
they can scale up the number of cycles they make available for your query to run efficiently
4:55
Take API management for example, they have a consumption tier. So previously, we used to have
5:00
hey, provision an API management instance, and it gives you 2 million calls
5:06
or whatever million calls it gives you, and customers are finding that, hey, I'm using 200,000 calls
5:12
I'm not quite using the quota, should I be paying for that amount
5:15
Wouldn't it be nice if I could pay for the API gateway that works in a consumption-based model
5:21
where I only pay for the number of API calls I'm servicing
5:25
So that's consumption-based pricing. Very important concept. And I know you cannot move to consumption-based pricing in all cases
5:35
And in some cases, provisioned capacity would make it more cost-effective. If you have a SQL Server that is running at a consistent capacity for the majority of the month
5:48
then it might be advantageous for you to go with pre-provisioned capacity. In fact
5:53
it might be advantageous for you to go with something called the reserved instances where
5:57
you say Microsoft or Amazon or whoever, I want this capacity for a period of time
6:04
Please give me a discount for a long-term usage. It might make sense, but I want you to focus on
6:09
consumption-based pricing. The third and final thing I want to talk about here really is developer
6:15
productivity. So what do I mean by that? Well, so we can have cloud native constructs
6:22
we can have consumption-based pricing, but it doesn't really help us if we can't build these
6:27
applications quickly. What are the sort of the developer productivity benefits that are coming
6:33
out from the various teams and how can we leverage that to build these applications quickly
6:38
So think about these three things as you are embarking on your journey in the cloud or you're already in the cloud as you're optimizing your presence in the cloud
6:50
Think of these three things as we progress further. Okay, so that said, let me quickly put a question to myself. Vishwas
7:02
Now, you've talked about these three motivations, consumption, cloud native, and productivity
7:09
But before you go there, let me ask a question. Who's using the cloud?
7:13
Why is it important for me to learn this? Can you tell me about it? And here's a slide
7:18
I took this from Scott Guthrie's keynote back in June at Build
7:21
More than 95% of Fortune 500 companies are using the Azure public cloud in some form. So I often still get questions from people saying, I'm interested in the cloud, but I really
7:36
didn't have an opportunity to work on it because my employer will not let me go to the cloud
7:41
or my customer is very concerned about the cloud. We want to demystify that: if 95% or more of Fortune 500 companies are using this, there's
7:51
a very good chance that the use case that you're working on is probably acceptable in the cloud
7:59
And of course, during my introduction, I talked about this earlier. I live in the Washington
8:04
D.C. area. We do a fair amount of work on the Azure government space. And if some of the agencies
8:12
which do classified work are going to Azure Government, and to the public cloud in general
8:18
There are very few use cases out there where you can say, I can't run in the cloud anymore
8:24
So I want to take this back. Here's another slide very quickly
8:28
This slide, again, answers the question: who's using the cloud?
8:33
Well, if Microsoft was not seeing this kind of adoption, they wouldn't be creating these
8:39
new regions, including the one in New Zealand. You can see at the bottom right, brand new region, 61 regions and growing
8:48
And the interesting number is that it's a worldwide presence. This is more than AWS and Google combined
8:55
Pretty interesting statistic. Okay, so we've talked about who's using the cloud
9:01
and why you should be, and what kinds of customers there are
9:06
The next thing we want to talk about is: so, Vishwas, you said cloud-native applications
9:11
and you talked about consumption-based pricing and you talked about Azure Functions
9:15
that sounds like an interesting pattern, but who is building real-world, enterprise-grade
9:24
applications in that realm? Who's using these kinds of constructs, like serverless and
9:32
consumption-based pricing, to build enterprise-grade applications? Well, I want to show you a
9:38
quick example. This is a chart that you may have seen. Just look at the hockey-stick growth
9:44
Back in March, Teams usage was around 32 million active users. In April, it reached about 75 million active users
9:54
So you're looking at this chart and saying, yeah, that sounds great. But, you know, every collaboration remote meeting tool is growing at this pace
10:01
So why are you talking about this? Well, I'm talking about this because of this slide
10:06
Teams, whether some of you may be using it, some of you may be using some other tool
10:10
So Teams uses the kinds of services that I was talking about earlier, which is all the serverless kinds of things
10:18
It uses Kubernetes. It uses Cosmos DB. It is using Redis cache or service fabric and things like that
10:27
And look at the chart on the left hand side of the screen where because it was built on these cloud native constructs
10:35
they were able to quickly scale this application up. Going from 35 million to 75 million within a span of 30 days
10:42
is quite an incredible story. So the point I'm trying to make through this slide is
10:48
that use of serverless technologies together is leading up to really significant enterprise-grade applications
10:57
And having 75 million concurrent or active users is a pretty significant number by any measure
11:09
Okay, so moving on, the next thing I want to talk about the developer productivity
11:14
So it's great to have serverless, it's great to have consumption pricing. What about developer
11:20
productivity? What are the kinds of things that we should be thinking about? So first and foremost
11:26
when we talk about Azure or when we talk about the Microsoft cloud, we are not just talking about
11:31
Azure. And as we go through the presentation, you'll realize this, that it is really the confluence
11:38
of three clouds that are giving us this hyper-productivity kind of experience. It's the
11:46
Azure platform, which powers it all. And then on top of that, you have the power platform
11:51
which is the low-code, no-code platform. And I'll give you a little hint. So there's a lot
11:56
of talk about low code, no code, and there's a lot of talk about citizen developers building
12:02
applications and not having a need for professional developers necessarily to write all these
12:09
applications. And that's certainly true. Power Platform has enabled citizen developers to dip
12:16
their toes into the world of building applications. But I'll give you a little tip. I see many
12:24
professional developers dipping into the power platform because they're able to crank out
12:31
applications at a much more rapid clip than they would if they were using the traditional tools
12:38
So I want you to take this important lesson away. When we talk about productivity, we're talking about low code in the context of professional developers as much as we are
12:46
talking about the citizen developers. Then there's Dynamics 365: say you're building an
12:53
application for a customer. Maybe you want a single view of the customer. Dynamics could be great. You
12:59
don't want to create this from scratch. And then, of course, we have Microsoft 365, or M365
13:05
where you have the whole collaboration platform. You have the Teams platform, where you can embed
13:10
Power Platform applications. So, you can do all kinds of things to very quickly build these kinds
13:17
of applications. So moving on, here is a typical application. We have PowerApps running inside
13:24
Teams. So you can just build a low-code, no-code application and snap it into a Teams tab
13:30
which in turn calls API Management. Teams can call an Azure service; it'll call an API Management instance
13:37
And then API management can in turn call a microservice hosted in Azure Kubernetes service
13:42
and that Kubernetes service can then in turn talk to Cosmos DB
13:48
And, you know, there are all kinds of interesting things there related to the Kubernetes community doing things like the service broker
13:56
which automatically brokers a connection to Cosmos DB. So, you know, you are within the cloud native constructs, essentially
14:05
The next thing I want to talk about here really quickly is the data platform, right
14:10
You just cannot have this application generate a lot of data and not have good tools to analyze this data
14:20
What do I mean by that? So on the left-hand side of the screen, you have data coming in from your on-premises application
14:27
Maybe you have some SaaS services. Maybe you have some IoT story
14:32
What have you? You have all of this data coming in, and then you now want to analyze this data
14:39
And once again, the concept of serverless and consumption-based pricing applies here as well
14:46
and this is sort of the important thing to take away from this slide. So you bring in all of this data
14:51
you store it in some sort of a data lake, and then once you have the data in the data lake, you can now process this data using a relational data warehouse model, which is the SQL pool capability, previously known as the Azure Data Warehouse
15:08
Azure SQL DW is now called the SQL pool capability. So you have this data coming into some sort of data lake storage
15:16
And now you want to run some sort of an analytic query
15:21
in a relational model where, well, you're not paying for any SQL compute capability
15:28
if you don't want to. You can use something called the on-demand pool
15:33
and just run your query. Your analytics query runs, and you're only paying for the time
15:39
that the analytics query is running. That's one part of Azure Synapse, right?
15:44
You may have data which does not lend itself to a relational warehouse technology
15:50
and you really need to process this using some sort of a Spark script
15:54
Well, Azure Synapse combines the two models of analysis. So on one hand, you have the relational model
16:01
and on the other hand, you have a Spark pool available. So all the goodness of Spark is available to you
16:09
So both these models can be brought up, can be elastic, can be on demand, and can use consumption-based pricing
16:17
So I want to make sure that our theme of building applications
16:21
in a serverless consumption model continues all the way down to data analysis
16:27
The last two things I want to talk about in this slide is, of course, Azure Machine Learning
16:33
Azure Machine Learning is, again, it can be connected to the same data lake
16:40
You can run a training algorithm. And once again, in the same model, in this case
16:45
you might be running a deep learning algorithm. And that deep learning algorithm may require some GPU compute versus CPU compute
16:54
You can now train your model using a GPU compute, pay for the time you're training it
17:01
And once you're done training, you can get rid of the GPU compute. You can take your trained algorithm, containerize it, and run it inside another consumption-based service called ACI, or Azure Container Instances
17:15
So you see where I'm going with this, that if we think about this whole model, we can have the richness of the platform, the productivity of the platform, at the same time, have a lower total cost of ownership
17:30
Okay, so that's the Azure analytics part. So with that said, I wanted to quickly show you a very simple application and I'm not going to get into as much detail because there are lots of sessions to go into the details
17:45
My goal here is really to motivate the things that I talked about
17:50
As a developer, I always like to have some code working to sort of bring home the point
17:55
But we will very quickly just go through this model. And then let's see, maybe leave some time for questions if there are any
18:07
So I have a simple reference application, and two of my colleagues helped put it together: Gaurav Mantri, some of you may know him, and Vansli
18:15
This application was really put together in the last 48 hours because I thought it would be good to have an application
18:21
So I wanted to thank them, call them out. Here is what the application does
18:27
So we have a trip tracker application, nothing fancy, a simple application. So
18:33
I'll walk you through this application right here, starting here. So the first thing we wanted to do is
18:38
we wanted to have a mobile application that works across all mobile platforms, and this is where
18:43
Power Apps shines. So we built a canvas app, which can then be deployed to iOS, Android, what have you
18:50
So we built a Power App very quickly, within hours, literally
18:56
That power app calls an Azure function and we'll take a look at the code
19:02
And what in this case is happening is when, let's say, I'm an Uber driver or I'm a taxi driver, I start my trip
19:11
The application comes up, and my mobile app starts sending GPS coordinates, right here, to an Azure function
19:19
This function is HTTP-triggered, so I'm only paying for the times that a request comes in
19:24
The Power App calls the Azure function. The Azure function then in turn writes it to Cosmos DB
19:32
and then right here goes into the Cosmos DB. There's another Azure function here
19:37
which is triggered by the Cosmos DB change feed. So you may be familiar with the change feed
19:43
a very powerful model. All the things that are happening in a Cosmos DB instance
19:48
are available to you in the form of a change feed where you can do some processing on that data
19:54
So we have another Azure function. So this first function is triggered by HTTP
19:58
The second function is triggered essentially by the change feed. So once we write to a Cosmos DB instance
20:09
this Azure function is going to get triggered. This Azure function is then going to use SignalR
20:15
to essentially send an update back to an administrative app. So here's an administrative app
20:22
which can see where my taxi drivers are, and then they can essentially look at that data
20:30
So pretty simple model, but at the same time, not only is it simple, it has two other characteristics
20:37
Let's just talk about this for a moment. So it's a pretty simple model
20:41
but I'm of course paying consumption pricing, but at the same time I have... so,
20:48
what happened here? I'm not sure what just happened. Okay, I'll keep going. Please tell me if you can see
21:01
this. The screen suddenly changed on me a little bit. Hopefully you can see it
21:07
Okay, I'll assume. Yeah, Vishwas, we can see it clearly. Okay, very good
21:12
Thank you, Stephen. Just something happened, and I just wanted to make sure you're able to hear me okay
21:17
So very quickly, the point I was trying to make here was, you know, all these pieces of code are, as I mentioned, they are serverless model
21:26
But at the same time, we have sort of outsourced the underlying complexity and the resilience
21:34
So what do I mean by that? Let me just explain this point a little bit. So because we are writing to Cosmos DB
21:40
Cosmos DB is a geo-replicated database. If we have lots and lots of new drivers show up
21:45
all we have to do is, you know, it's really up to Cosmos DB to scale up and down
21:51
It's not our responsibility. And let's say we have to spin up this application
21:56
in more than one region. We can have a geo-replication turned on
22:00
So we really have outsourced the underlying complexity of scalability and resilience
22:06
Same thing with the Azure functions. If more load comes in, we are not in the business of writing auto-scale logic
22:14
It's somebody else's job. We just give them a range. They scale us up and down
22:19
And similarly with the websites here, since this is a single-page application
22:24
this can be hosted inside the static websites capability that Microsoft just announced. And the idea is you're really not paying for a web server, because the entire code can be
22:36
sitting inside a CDN, if you will, and all of the dynamic aspects of the code are really some
22:42
JavaScript and the signals that are coming in here as part of that
22:47
So this may seem like a toy application, which it is. It was built in a few hours, but at the same time it exhibits the characteristics of an enterprise application in terms of scalability, resilience, and things like that
23:00
So with that said, let's go over and just take a quick walkthrough of this
23:07
Let me start out with this. What I'll do is, after the session is over, I'll tweet the link to this code so you can spend more time looking at it if you'd like to
23:16
So the code is not very interesting, I have to say. Let's just see a couple of things here
23:25
So this is the trip tracker function. This is where our PowerApp is going to be
23:30
When the HTTP trigger comes in, our Power App will come in here. We capture some data and we simply write it to the Cosmos DB service
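As a rough sketch of what that function might look like, assuming hypothetical names (the actual code is only briefly on screen, so the TripsDb database, Locations collection, CosmosDbConnection setting, and the payload fields are placeholders):

    using System;
    using System.IO;
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Http;
    using Microsoft.AspNetCore.Mvc;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Newtonsoft.Json;

    public static class TripTrackerFunction
    {
        [FunctionName("TripTracker")]
        public static async Task<IActionResult> Run(
            // The Power App posts GPS coordinates to this HTTP endpoint
            [HttpTrigger(AuthorizationLevel.Function, "post")] HttpRequest req,
            // Output binding: every item added to this collector is written to Cosmos DB
            [CosmosDB(databaseName: "TripsDb", collectionName: "Locations",
                ConnectionStringSetting = "CosmosDbConnection")] IAsyncCollector<object> locations)
        {
            dynamic data = JsonConvert.DeserializeObject(
                await new StreamReader(req.Body).ReadToEndAsync());

            // Hypothetical payload shape: a trip id plus a lat/lon pair
            await locations.AddAsync(new
            {
                id = Guid.NewGuid().ToString(),
                tripId = (string)data.tripId,
                lat = (double)data.lat,
                lon = (double)data.lon,
                timestamp = DateTime.UtcNow
            });

            return new OkResult();
        }
    }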
23:38
Nothing fancy going on here. Let's look at the other Azure function right here
23:44
which is going to be triggered when the change feed triggers, okay, right here on line 15
23:50
And when that triggers, all we are going to do is an AddAsync for SignalR right here
23:57
which will then communicate with our administration website and update the location of that taxi driver
24:04
or an Uber driver, whatever. Okay. So those are the two important pieces of code right there
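A minimal sketch of that second, change-feed-driven function, again with placeholder database, collection, and hub names, and assuming the in-process Cosmos DB and SignalR Service bindings:

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Microsoft.Azure.Documents;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.SignalRService;

    public static class BroadcastLocationFunction
    {
        [FunctionName("BroadcastLocation")]
        public static async Task Run(
            // Fires whenever new documents show up on the Cosmos DB change feed
            [CosmosDBTrigger(databaseName: "TripsDb", collectionName: "Locations",
                ConnectionStringSetting = "CosmosDbConnection",
                LeaseCollectionName = "leases",
                CreateLeaseCollectionIfNotExists = true)] IReadOnlyList<Document> changes,
            // Output binding to the Azure SignalR Service
            [SignalR(HubName = "locations")] IAsyncCollector<SignalRMessage> messages)
        {
            foreach (var doc in changes)
            {
                // Push each new GPS point to every connected admin dashboard;
                // "locationUpdated" is a hypothetical client-side handler name
                await messages.AddAsync(new SignalRMessage
                {
                    Target = "locationUpdated",
                    Arguments = new object[]
                    {
                        doc.GetPropertyValue<double>("lat"),
                        doc.GetPropertyValue<double>("lon")
                    }
                });
            }
        }
    }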
24:11
And then here's some code which basically sets up SignalR right here
24:16
If you go up here, this is where we are setting up SignalR. This is the startup.cs code
24:21
And this is, of course, in the function app startup code. This will enable SignalR
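One piece that usually goes along with this pattern (not walked through in the talk) is a small negotiate endpoint that the single-page admin site calls to obtain its SignalR Service connection info; a sketch, assuming the same placeholder hub name:

    using Microsoft.AspNetCore.Http;
    using Microsoft.Azure.WebJobs;
    using Microsoft.Azure.WebJobs.Extensions.Http;
    using Microsoft.Azure.WebJobs.Extensions.SignalRService;

    public static class NegotiateFunction
    {
        [FunctionName("negotiate")]
        public static SignalRConnectionInfo Negotiate(
            [HttpTrigger(AuthorizationLevel.Anonymous, "post")] HttpRequest req,
            // Input binding that returns the SignalR Service URL and an access token
            [SignalRConnectionInfo(HubName = "locations")] SignalRConnectionInfo connectionInfo)
        {
            // The static admin website calls this before opening its connection
            return connectionInfo;
        }
    }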
24:27
So pretty simple code base. I'm just trying to look at anything else I want to show here
24:34
There's some simulator. Of course, let's just look at the admin website here very quickly
24:38
So you can see the admin website. These are all the pieces of code
24:45
Not very, very interesting right here. You can see it's essentially a single page application
24:52
that can be hosted inside the static websites capability. Okay, so since I'm going to run out of time very quickly
25:00
let me just also show you this application in action. And so what I want to do here is, I'll come to my resource group in a minute
25:17
All of the resources for that entire application are inside this one resource group
25:24
So we have the app service. We have the consumption plan where our Azure functions are hosted
25:30
We have a bunch of app insights. We have Cosmos DB, and we will go into this a little bit
25:36
We have a Cosmos DB instance, and there's a capability within Cosmos DB called the Synapse Link
25:44
And remember when I was talking about Synapse as a way to analyze this data?
25:48
Well, inside Cosmos DB, we have something called Synapse Link. We can go into Cosmos DB and establish a connection so that that data is automatically transferred into an analytical store
26:02
So all of the hard work and error-prone logic that we had to write in ETL code is sort of greatly simplified by having that Synapse link
26:12
And I'll show that in a moment. And then I also have the Key Vault, of course
26:17
I have the machine learning service. We will go into that. Once you have collected the data, once we have done some analytics, we can also do some machine learning on that
26:27
Then I have the SQL pool. This is what I was talking about, where I can run my analytics
26:35
I have a bunch of storage accounts, of course, that you would expect. And here are the two Synapse workspaces that I have, essentially
26:42
So not a trivial resource group in terms of all of the capabilities that are there
26:48
Okay, so let's just go over here. Let's start our application right here
26:55
And this is, of course, a very, very simple application here. And I can do one thing here which might be interesting for you all to look at
27:04
Of course, this app can run inside iOS and Android or whatever. I just turned on the developer tools part just to point out one interesting thing here
27:15
Let's just go over here for a second now. So I can come here, and right here you can see, let me escape out of this, and you can see here
27:32
what this application will render on a Pixel, what it will render on an iPad, and so on and so
27:40
forth, right? So that's the beauty of the Power Platform: you're writing essentially code
27:46
which is very similar to an Excel macro, and you are dragging and dropping these widgets
27:53
Under the covers, they are taking care of making sure your application can be
27:58
hosted across all of these platforms. I just wanted to quickly show you that
28:03
That's our application right here. Let's start our trip. If I start my trip here, let's see
28:13
we'll give it a moment here. So it'll start sending data to the Azure functions
28:17
which in turn will trigger our second Azure function, which in turn will trigger the SignalR code
28:23
So if I go into my web app here, web administration application right here
28:27
you can see that my chart is moving here. This is the static websites code that I talked about
28:35
So it is receiving signals from the SignalR service, updating the coordinates on the map
28:40
and as our trip is progressing here. So just imagine the productivity difference
28:46
If you had to build an application, this was literally a couple of hours of work
28:51
I almost toyed with the idea of building this app during the keynote
28:55
but then I said, maybe that'll be too much, but literally two buttons and a timer control
29:01
and really a data grid here. And then about 10 or 20 lines of formula-like code
29:10
which in turn calls the HTTP trigger for my Azure function. That's literally what this code is all about
29:20
And this code right here is pretty simple. A single Google map with SignalR coordinates coming in
29:27
and we are updating it here. And then see the taxi stopped moving
29:30
because I stopped the trip here. Okay, so that's one part that I wanted to talk about
29:37
Let's just now go back to our resource group here. And I just wanted to call out a couple of things
29:44
We can come in here. So this is the app service plan
29:56
This is the app service right here. And this is where our app service
29:59
admin interface is hosted, right here. So that's pretty easy to understand. I did
30:07
promise you that I'll show you the Cosmos DB part and the Link part, so let's just go over to
30:13
Cosmos DB. This is a brand new capability, and really exciting here, this was part of the keynote
30:19
actually, if you watch the Build keynote. Let's just go over to the Document Explorer
30:27
open the Document Explorer, and I have two databases, essentially. Let's just go in here and look at the properties
30:39
of this guy here. Let's look at the properties, and if I go further down here, right here you
30:48
can see analytical storage is turned on. That's literally all you need to do to send your OLTP
30:59
information. So drivers are sending this information in real time back to Cosmos DB
31:04
and this information is getting automatically transferred over the Synapse Link, converted into a columnar format, which is conducive for analysis. So now, if you wanted
31:21
to do some analysis on this data in real time, not last month's data or last week's data
31:28
but if you wanted to do real-time analysis inside Synapse, you can just spin up a Spark pool or a
31:36
SQL pool and start analyzing this data: how many trips are happening, and what is the average price?
31:42
Do you need additional drivers? You can take these decisions really, really quickly
31:48
This was easy. Having worked on many projects related to ETL processing, I can tell you
31:55
that compared with that error-prone process, this way of taking your data and converting it into a columnar
32:01
format conducive for analytics is a big step forward. So I just want to show you this right here
32:08
So that's one part. So that's the Cosmos DB part. I'm trying to see
32:14
Let me also show you here. Let's see if I can bring up the Synapse
32:21
one of the Synapse workspaces here. So, it looks like it's waiting for this refresh to happen, but what I was going to essentially
32:43
show you here was essentially a couple of examples of queries that I was performing against the relational database, and it looks like it's not showing up here because of an RBAC issue. Okay,
32:59
I will send you a screenshot, I will tweet a screenshot. I was not going to run this query, but
33:06
really the idea here was this: the New York taxi service publishes a lot of
33:12
historical data, and what I did was essentially take 50 gigs of that data, because, you know, my
33:19
app is not generating that much data, of course. So I took a 50-gigabyte data set from that New
33:27
York taxi service, brought it inside Synapse Studio, and then ran a SQL query against that, and
33:34
it's pretty powerful to see that those queries can be run within seconds without really having built
33:41
a canonical OLAP model or something like that. So that's a really powerful capability
33:46
Okay. So the last thing I want to show you here is of course, the machine learning piece
33:51
So now that we have the data all available to us, we've performed some analytic queries
33:59
We have given our line managers some information that they can use
34:03
in a decision-making process. I can now come in here and, within the same data
34:10
So I'm not moving the data anywhere. It is still in the data lake
34:15
or in this case specifically, Azure Data Lake Gen 2. Now I want to do some analysis of
34:22
what are the factors that lead up to higher trips on any given day
34:27
Is it the number of drivers? Is it the placement of the drivers
34:31
Is it the timing of the drivers? If I want to figure it out, if I want to do some predictive analytics around that
34:37
this I can do. So now I'm inside the machine learning workspace right here
34:44
And I know I'm going through a lot of services, so I'm trying to be a bit slow about it and not overwhelm you with too many tabs
34:53
But I think the key thing to take away from my presentation here really is that these are the capabilities available
35:03
and then on your own time, you can then go into each capability and explore it for your own
35:08
scenario. That's my hope here. So in this case, I have a Python workbook right here
35:14
which loads the data from our Data Lake Gen 2 architecture. It then does some processing on that data
35:23
As you well know, in order to run machine learning algorithms, you need to cleanse that data
35:28
Oftentimes you need to vectorize the data so that machine learning models can pick it up
35:34
So all this workbook is doing is some cleaning and some transformation, so converting lat/long into some real street names and whatnot, so we can get a better sense here. And then, once the data processing has been done, I can show you the other workbook right here, which is going to do some regression analysis on this data to figure out which features in this data set were most impactful in determining how we can maximize the number of trips
36:06
So we are taking predictive analytics into account. And once again, I can run through this code
36:12
and certainly I see an error here because when I started running this
36:18
just a minute or two before the keynote started, my compute instance was not running at that time
36:23
It is running. So once again, going back to our theme, I need the compute instance only while I'm inside this
36:30
while I'm running this algorithm. And once I've trained this algorithm, I can containerize it, put it on Azure Container Instances
36:37
and I really don't need that compute at any point after that
36:41
Unless I have to, of course, retrain my algorithm or change some hyperparameters or something like that
36:47
I really don't need that compute available back to me. So let's go back to the slides here really quickly
36:56
So we have come essentially a full circle, right? We have shown a power app
37:03
which we really built using drag and drop and using controls that work across the platform
37:09
And I showed you very quickly that, you know, you can, without having all the devices available to you
37:15
I gave you a little trick of how you can test your PowerApp against all of the different form factors
37:21
quickly build that application. That app is then connected to Azure Functions
37:27
which is nothing but an HTTP-triggered function in our case, which in turn stores data inside Cosmos DB
37:33
and inside Cosmos DB, we had Azure Synapse Link, which was sending the OLTP data into an analytics store
37:43
So we can do some analytics on top of that. And the real scale of this architecture comes by decoupling
37:50
So one function is writing to Cosmos. Cosmos is sending data to Synapse
37:55
Another function completely decoupled, and that's really the hallmark of scalable architecture
38:00
that you have things decoupled. The other function is waiting for a Cosmos DB change feed, gets a change feed
38:07
And then we are not making our administrative interface load and load again and do refresh or anything like that
38:14
We are pushing that data out over a WebSocket or something similar to our open browser connection and making updates
38:22
Imagine you have hundreds of cars running. You don't want to be expensive in terms of refresh and things like that
38:28
And then, once we had the data in the analytics store, we could run a Spark query or we could run a SQL query. And then we could also run some sort of regression analysis on that data to do predictive analytics. So,
38:44
in a sense, we have tried to achieve the three objectives that I had laid out. And I usually
38:52
like to do this. I like to go back to my agenda slide at the end of my session, always, because
38:58
did I hit on the points that I had really promised I would hit on? So, cloud native constructs:
39:05
serverless programming model, not having to worry about patching and underlying OS upgrades
39:13
or having to worry about multi-tenancy and things like that, or writing complex
39:19
auto-scaling algorithms. We achieved that through a combination of services that we talked about
39:24
Did we achieve consumption-based pricing? Well, we did. We were using Azure Functions
39:32
We were using Cosmos DB. Now, Cosmos DB, we're using the free tier of Cosmos DB
39:36
and the Cosmos DB team has announced consumption-based pricing coming later this year, so watch for that
39:41
But other than that, every other piece, all the way up to machine learning
39:46
was on an as-needed basis only. And then, did we achieve developer productivity?
39:51
Well, we built a multi-platform, multi-form-factor application using Power Apps' low-code, no-code ability. We used Synapse Link
40:02
to reduce our ETL time, and then we can use some of the prepackaged services. I didn't get a chance to go
40:09
into that. I was showing you an example of the regression analysis that I was conducting
40:15
but you may have heard of this. In cognitive services, there is a new capability called the
40:21
personalizer service. Some of you may have heard of that. It's essentially, and in fact, they
40:26
announced something called the apprentice mode for the personalizer service. So you really didn't
40:31
even need to write that custom algorithm. You could essentially feed your data to the personalizer
40:40
service, and it is based on a reinforcement learning model. So it can get that
40:47
data, and it can start giving you decision choices that you can make based on that data
40:52
without having to create your own ML workspace and things like that. So we are also taking advantage of the developer productivity in the process
41:01
Okay, so that brings us to 10:58, which is about a minute left in my keynote
41:08
I was told to go until 11. So, Stephen, Mahesh, Magnus, back to you guys