0:00
So thank you again for having me here, to all the conference organizers, and
0:05
today I'll be speaking about asynchronous web apps, and how we can approach web apps from a messaging-architecture point
0:12
of view. You know, as developers, one of our greatest dreams
0:18
is to get our hands dirty with code. What I mean by that is that as developers, nothing excites us more
0:25
than getting to work on a project with tons of integration patterns, right? Creating databases and entities, defining
0:32
workflows — and if there are a ton of external APIs, you know what, let us just
0:38
get to it; you just can't wait to work on such a project. Sounds perfect, right? I think that's a very
0:44
relatable situation for all of us who are developers, and it's a dream come true as well. So we work on such projects
0:52
and then we put our projects live. Fantastic — it's a moment of pride and joy
0:57
when your project hits production and starts making money for the business, and there's no other joy like showcasing
1:04
your own work. Again, something relatable. And then comes that first campaign that
1:09
goes out as a mass email — to how many people? Just about a few hundred thousand
1:14
people that the business has gained as its customer base. And then starts your rainy day. How? Because you notice
1:22
sluggish front ends — or you get phone calls saying you have sluggish front ends. You notice that the backend
1:28
services are not scaling quite well, or they are struggling to keep up with the
1:33
load. You might be taking payments on the site, and that doesn't seem to work, or your payment-processing gateway is
1:40
hitting rate-limiting territory. And at some point, you know what, the site just gives up, saying, oh, I can't deal
1:47
with this, I cannot deal with these spikes, just do whatever you can. Now, this is
1:53
something that we have all been through in the past — I have been there and done that myself. But there are other ways to
2:00
face this reality more confidently up front, and messaging is one such
2:07
technology. And — sorry, my slides are frozen; what happened there? Okay. Now, I
2:14
am Poornima Nayar. I'm a software engineer and solutions architect at Particular Software, where we build NServiceBus
2:20
and the entire Service Platform around NServiceBus. I'm speaking to you from Berkshire in the United Kingdom, and if you
2:28
have any questions, reach out to me — DM me. I'm Poornima Nayar on
2:34
LinkedIn, X and Bluesky — so any questions, feel free to DM me, and I'll try to answer you with the
2:41
best answers that I can. So, in the next 35 minutes we'll pick up some basics of asynchronous messaging and how we can
2:48
use messaging as an architecture to build really high-throughput,
2:53
high-performance, mission-critical systems — and we'll see some code as well. When it
2:59
comes to mission-critical or high-performance systems, the first thing I like to talk about is the order
3:04
processing system. Why? Because it's a very relatable workflow for everyone. It
3:09
is the holy grail when it comes to building distributed systems and talking about distributed systems. And what is a
3:15
typical workflow? We go to the website, put a few products into the basket, click
3:20
check out, put our details in, fill in the credit-card details, and then say place order — boom — and that order goes
3:28
into the backend system of the online shop, and then it gets processed. That's the happy day. That's not the rainy day;
3:34
that's the sunny day, just like the day I have out here in the UK. That's the perfect-case scenario. Now go
3:41
back and rewind to that campaign scenario. Where were you at that point, when you had that rainy day? So, we had a
3:50
sluggish day. We started off with sluggish front ends; sales are reported to be very low. Fine — you know
3:57
what, we are on Azure, let's just scale the front end. Autoscaling takes what, 10
4:05
minutes max? Fine, we do that. Have we solved the problem? Not yet, because our backend services are struggling to
4:13
cope. It's still slow. Fine — we are going to autoscale our backend services. Now, there's a catch there: legacy backends.
4:18
Who has dealt with a backend system or API server hosted on an old server
4:24
somewhere in the basement of a building? Been there, done that. There's an API which just couldn't be scaled — that's a
4:30
catch in itself; let's just not talk about it. But let us say that we have started scaling our backend services as
4:36
well, and then at one point we hit the wall: the database. So we have been moving
4:42
the bottleneck from the front end, to the backend services, to the database — and then it just goes boom.
4:48
You know what, I cannot cope with the amount of traffic that is hitting me, says the database. There's only so many
4:54
resources that you can throw at it. What really happens is cascading failures: the database fails; it then goes
5:02
and fails the backend services that we have scaled out; and that goes and fails the sluggish front end, which has also
5:08
been scaled out — horizontally, vertically, all ways — and it then just blows up in the
5:14
face of the user. We are not making money as a business, and that is the rainy day that I've been talking about. So,
5:20
I've been there and done that myself. Years ago I was working on a system, and when the mass email campaign went out to
5:28
their very loyal customer base, we had tons of people hitting the website, and we had a legacy API. There was a
5:35
point where I live-monitored the system with my colleague over a phone call with a customer, and that went on for a couple
5:42
of rounds. And that is where we start thinking: did I use the right tool
5:47
for the project? You know, could I have done this better? So, what did I miss?
5:53
Did I miss a memo? And the memo that I missed happens to be the fallacies of distributed computing. So, what we have
6:00
been speaking about so far is a distributed system, where two or more computers speak to each other. As
6:07
developers, when we develop distributed systems, we take many, many things for granted, and those false assumptions have
6:15
been coined together as the fallacies of distributed computing — the things we failed to see up front. This
6:22
is a list of fallacies that was put together originally by Peter Deutsch. I will be leaving a link to all
6:29
of this in my resources at the end, so keep on watching. But let us talk about these fallacies for a minute. We assume
6:36
that the network is reliable. We assume that the network will always be up, and
6:41
that everything we send will be received. But in reality the network can fail, information can get lost, and the
6:49
network can become congested, right? We assume that latency is zero — we assume
6:56
that requests will complete instantly — but in reality there is always a delay:
7:01
milliseconds, microseconds, to even more in some cases. We assume that bandwidth
7:07
is infinite, but bandwidth is finite. There's a limit; you can have slowdowns and bottlenecks; we just
7:14
cannot send as much data as we want at any point. We assume that the network is
7:20
always secure — we assume that, you know, my network is secure, there's no one
7:25
coming to attack it — but we need to understand that for an in-flight request there's always a chance that someone
7:32
could intercept it. We assume that topology doesn't change — that servers, nodes,
7:40
routes are all static and constant and they don't change — but in reality
7:45
machines crash, they restart, they get migrated, they scale up and down. So we
7:51
cannot assume that there is no change; in fact, change is constant, and as developers we need to embrace it. We
7:58
assume that there's just a single administrator, but in reality there is no such thing as centralized control
8:04
these days — it's more about distributed teams. You could have multiple teams working on the same distributed system,
8:10
and when that happens, if you don't have enough inter-team communication, inconsistencies and failures happen. And
8:17
is there such a thing as enough inter-team communication? I don't think
8:22
so. We assume that transport cost is zero. Even if we had infinite bandwidth,
8:28
the transport cost is not zero. You know that little bit extra that you pay at the end of every month when
8:34
your Azure bill comes around? That is what is telling you that transport
8:40
cost is not zero — there's always a cost associated, even with CPU performance and
8:45
usage. And we assume that the network is homogeneous. But is the network homogeneous? Is the behavior always
8:52
uniform? Never, because no two systems and no two servers are alike. Let us go back
8:58
to our order-processing workflow and see what the workflow might be. So, we have
9:04
our website, which pushes the orders: when you hit that place-order button, it
9:09
goes into a big service called the order processing service. The order processing service then might reach out to the
9:15
sales service and say, hey, create this order for me, and sales comes back
9:21
with a response. Then order processing goes and talks to payments — sorry, payments is hidden
9:27
here — and says, hey, bill this for me, and then somehow, magically, it gives a response. Then
9:35
it goes and talks to the customer status service to update the loyalty points, and
9:41
that might then give the response back to order processing, and then the response goes back all the
9:48
way to the client. That is when the client gets to know, yeah, the order has gone through. Now,
9:57
compare this with the fallacies that we just spoke about. What if one of the services is down? What if the network is
10:04
congested and the data doesn't arrive on time? There's that chance of an HTTP timeout exception, right? What if one of
10:11
the services has been updated by a team and the other services just
10:16
cannot cope? There's the risk of that too. So, what we have is a scenario of very tight coupling. If
10:24
you look at the order processing service, it needs to know that it needs to call sales, it
10:29
needs to call payments, and it needs to call customer status. Why
10:35
can't order processing just hand off and then move on to do other things? That is what you want right here; there are too many
10:41
responsibilities on any one service. And you might think at this
10:46
point that you can bring in Polly to keep doing retries when failures happen, but that's not enough, because
10:54
what about the data, and the idempotency around it? Do you want to take the payment twice? No. The underlying problem
11:01
here is what's known as temporal coupling. This is the tight coupling that
11:07
I've been talking about: the idea that for two services to communicate with each other, they should be up and running at the same time. So if there's a sales
11:14
service and an order processing service, both need to be up and running; A needs to get the response back after talking to B
11:20
directly, and only then can it move on. So if B crashes, there is no request and there is no
11:28
response, right? And at that point A is completely blocked; A cannot do anything,
11:33
and there's failure in the face of the user. And while that request has been
11:38
sent to B, A is still waiting, doing nothing. That is money lost, probably, during
11:45
that time, in compute power. There's also the request-response pattern that we see
11:50
here. I am not saying that this is a problem, but when it comes to high-throughput, high-performance scenarios, we
11:57
cannot rely on request-response, because it follows the synchronous processing technique — HTTP by nature is
12:05
synchronous; we cannot have async processing with request-response. So this is also something that cannot really fit
12:12
into this high-throughput, high-performance, mission-critical scenario. So what we have built here is a
12:18
house of cards — and not just a house of cards, but one that is sitting on top of
12:24
ocean waves. Now, this is not about bashing HTTP APIs, or gRPC, GraphQL,
12:30
whatever, because they are all battle-tested; they are there for a purpose. What I'm talking about is mission-critical, high-
12:36
throughput, high-performance systems, where time is money, where you cannot
12:42
have failures — and there, an HTTP-based API is probably not something that you
12:47
should be using. So how can we change this? How can we have better foundations? By introducing what is known as a
12:53
message queue. Now, in reality it is not as simple as this — there's a lot of thinking
12:59
about the problem and the solution before you go and implement it. But to have an introduction to messaging
13:06
today, let us just assume that we have introduced a message queue in between. What is a message queue? Consider it like a
13:13
database, but instead of rows you have little messages. And what are messages?
13:18
They are basically the information you would have in your request. So, with the message queue in place, A will actually send
13:26
a message into the message queue, and at that point the message gets stored in the
13:31
message queue. A is not blocked at all; A can continue doing whatever else it needs to do. And B is on the other side of the
13:39
message queue. B might have crashed at that point, B might be undergoing an update, or
13:45
B might be online — but whenever B is ready to process things further, it takes
13:50
the message from the message queue, processes it, and deletes it. So in a
13:55
single shot we have gotten rid of the temporal coupling. And note that there is no request-response. Why? Because message
14:02
queues and message brokers rely on protocols like AMQP, which by nature are
14:08
asynchronous.
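To make that decoupling concrete, here is a minimal sketch in plain C# — not NServiceBus or a real broker, just an in-memory ConcurrentQueue standing in for the message queue, with an illustrative message:

```csharp
using System;
using System.Collections.Concurrent;

class QueueSketch
{
    static void Main()
    {
        // A stand-in "queue" between sender A and receiver B.
        var queue = new ConcurrentQueue<string>();

        // A sends its message and immediately moves on — it is not blocked
        // waiting for B, and it does not care whether B is running right now.
        queue.Enqueue("PlaceOrder: 42");
        Console.WriteLine("A: message sent, carrying on with other work");

        // ... time passes; B may have been down, restarting, or being updated ...

        // Whenever B is ready, it takes the message off the queue,
        // processes it, and (by dequeuing) deletes it.
        while (queue.TryDequeue(out var message))
        {
            Console.WriteLine($"B: processing {message}");
        }
    }
}
```

The point of the sketch is only the shape of the interaction: the sender and receiver never talk to each other directly, so neither needs the other to be up.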
14:14
So, where do we start with this? I'm going to focus on the technical aspects, but if you head over to particular.net/blog, which I'll be
14:19
sharing a link to today, there's a lot of information about what goes into actually planning such a project, and how you can
14:26
migrate, or how you can even start greenfield projects. But today, in these 35 minutes, I just want to focus on the
14:33
technical aspects. So, the first main
14:38
thing to talk about is the message queue. There are a variety of options out there — it can be queues or brokers. Brokers are
14:45
much more advanced in terms of functionality, with routing, message filtering, message transformation —
14:52
a talk for another day — but understand that there are message brokers and queues. There's RabbitMQ, the open-source one;
14:59
if you want something in the cloud which is more of a pure queue, you have Amazon SQS; then you have Azure Service Bus, which is
15:06
in the Azure cloud; and then you have ActiveMQ, ZeroMQ, IBM MQ, through to Google Pub/Sub. Now,
15:14
all of this is infrastructure — this is extra infrastructure being brought into place — and you also
15:21
might be moving from an HTTP to a messaging environment, so you might want to play around, have a look, gain some
15:28
confidence about messaging in general. In such instances we have seen databases
15:34
used as queues — using SQL Server as a queue. Why? Because organizations that don't
15:40
have enough confidence with messaging from the word go find it easier to take the SQL Server which they
15:47
might already be using in the application and use that as a queue. It's okay in that kind of scenario to use
15:54
it. But when you think about really high-throughput scenarios, using a database as a
15:59
queue is probably not a good idea, because it's also adding load to your own database server. Now, we have our
16:05
messaging queue in place. Say, for today, I'm using the database as the queue,
16:11
because that is the easiest to set up. But whether you are using RabbitMQ or SQS
16:16
or Azure Service Bus, or even the database as the queue, how would you go about talking to it? Would you use the native client
16:23
library? Not really, because there are so many things that you need to ask yourself when you go down this path. What
16:31
patterns do you need to support — send-receive or publish-subscribe, and which one do you need? Because with
16:37
publish-subscribe you're going down the event-driven architecture route. What is your average message size? Do you need
16:43
large-message support? What about the complex routing that you might need in place — do you want to write that
16:49
yourself? No. How about message delivery order, and the delivery guarantees: are you after at-least-once,
16:55
at-most-once, or exactly-once processing? What about data transactionality, and the idempotency of the data? Surely you don't
17:02
want to take a payment twice. What about the throughput? Do you want a managed service or a self-hosted service? Are you
17:09
after cloud-native solutions? What local dev support do you need? All of these are
17:14
things that you need to ask yourself, and on top of that you also need to be
17:20
thinking about a variety of patterns that I'm just going to mention: message routing, workflows, transactions,
17:27
the claim check pattern, serialization, retries, monitoring and alerting — because
17:32
it's a distributed system. All in all, you might end up writing all the patterns
17:37
that you see on the screen. So, would you want to do that yourself, or would you want to do it a different way? That is
17:44
where messaging middleware — messaging libraries — comes into play. There are many options, which I'll be
17:50
talking about, but I'm going to use the analogy of a car here. Suppose you want
17:55
to have a new car, with all the latest features out there. Would you go buy a
18:01
car for yourself and use the amenities it provides you with, or decide, hmm,
18:07
this sounds fun, let me actually build a car? You know, trust me, there are people who build a car. But then you have to
18:14
think about the roadworthiness of it; you have to consider the paperwork that goes behind it. Do you really want to be doing
18:20
that? Of course there is the learning, but one of the key learnings at the end of it would be: I don't want to do this
18:26
ever again. I don't want to be servicing this car; it is going to sit, unused, on my
18:33
driveway. I built that car; that is it; I don't want to be servicing it. So, my point being: do not reinvent the wheel —
18:40
use one of these libraries. NServiceBus, MassTransit, Brighter, Rebus, Wolverine — all of these are messaging
18:46
libraries, and they all come with the patterns that I just spoke about built in, so you can make use of those
18:53
amenities yourself. This helps you focus on the business code. In today's example I'm going to be using NServiceBus,
19:00
because that is what I am used to, but regardless of that, feel free to use
19:05
any of these, because in all the code that I show you, it is just the APIs that keep changing; the patterns are the same
19:12
regardless of the library you use. So, just putting it out there: do not reinvent the wheel, because what you
19:18
might end up writing is a cheaper version of one of these libraries, and that is going to cost you a fortune in the
19:26
long run to service and maintain. Save yourself the headache, because a distributed system is no mean feat. So,
19:33
time for some code now. What we will discuss are these main points: endpoints; commands and point-to-point communication;
19:39
events, publish and subscribe; we will see if we can extend our system, if time permits; and we'll also talk about
19:46
recoverability and retries. So, let's go to Rider.
19:53
What I have here is a solution with four projects. I have unloaded one of them; let
19:59
us bring that into the picture when the time is right. The client UI is basically
20:04
the website — the web front end, an ASP.NET Core MVC application, nothing too
20:09
fancy about it — and I've got two services, which are the sales endpoint and the billing endpoint. In reality I might
20:17
be hosting these — the billing and the sales endpoints — as Windows services, or even as something like an
20:23
Azure WebJob. The client UI, that is a website; it could be hosted anywhere, even on Azure. As I said, I'm using
20:31
NServiceBus because that is what I'm familiar with, but feel free to use MassTransit or Wolverine or Rebus, because
20:37
the patterns just keep repeating; it's just the semantics and the APIs — the names of the APIs and the methods — that
20:43
keep changing. So, in my controllers I have the home controller, and there's a place
20:49
order button. So, if I just run this demo and show you what the web front end
20:54
looks like — it is very simple and basic for this 35-minute session. So I
21:00
have this big button which places an order, and it tells me that an order has been placed. When that place-order
21:08
button is clicked, what happens is that it invokes this action method for me. In here I create a new order ID,
21:17
and then I instantiate something of the type PlaceOrder, and I am sending that
21:23
object that I've just created using something called the message session. So, what is actually happening? The message session
21:28
is an interface in NServiceBus that helps me do basic operations on messages, like sending a message, or
21:34
publishing, or receiving a message, and I am using that to send something. Now,
21:41
what is that something that I'm sending? If we look at the PlaceOrder class,
21:48
it's a plain old CLR object which
21:54
inherits from ICommand, and this is what I'm sending. In reality it would be your
22:00
entire request, containing all the details of the products and everything — that's one way of doing it. But
22:08
inheriting from ICommand is telling the system — telling NServiceBus — hey, this is a message, and this
22:13
needs to be treated like a command. Now, in come the two
22:18
different types of message. This is a command, and a command is an instruction, so
22:24
it is a very valuable message, and I, as the sender, expect someone to take action
22:31
upon it. And if I expect someone to take action upon it, and it is a high-value thing, I need to make sure that I at
22:38
least know who is going to deal with it — that can also be configured. But as a
22:45
sender I don't need to know whether that receiving party — whoever is going to deal with my instruction — is up and
22:51
running, or down, or whatever. I just need to know that there is someone I send the instruction to, who is going to do the work for me at some point. So, using the message session, I'm sending that command.
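The command and the send call just described look something like this — a sketch, since the demo code is only narrated: the OrderId property is an assumption, while ICommand and IMessageSession.Send are the standard NServiceBus APIs.

```csharp
using NServiceBus;

// A command is a plain old CLR object; the ICommand marker interface
// tells NServiceBus this message is a command and someone must act on it.
public class PlaceOrder : ICommand
{
    public string OrderId { get; set; }
}

// In the controller's action method, with an IMessageSession injected,
// the website hands the command off and moves on — it does not wait
// for the sales service to be up:
//
//   await messageSession.Send(new PlaceOrder { OrderId = orderId });
```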
22:58
Sending is controlled, in a centralized manner, from Program.cs. Here
23:04
I have this API called RouteToEndpoint, where I say that for the PlaceOrder
23:10
class — for objects of type PlaceOrder — the destination is the sales endpoint, the Sales queue.
23:17
So, as you can see, there is a website, and it is talking directly to the queue — not the sales service; it's the Sales
23:24
queue that it is talking to — and this centralizes all the routing for me.
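In Program.cs, the routing being described can be sketched like this, using NServiceBus v7-style APIs; the endpoint names and connection string are assumptions matching the narration.

```csharp
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("ClientUI");

// The demo uses SQL Server as the queueing infrastructure.
var transport = endpointConfiguration.UseTransport<SqlServerTransport>();
transport.ConnectionString("Data Source=.;Initial Catalog=messaging;Integrated Security=True");

// Centralized routing: every PlaceOrder command is delivered to the Sales queue.
var routing = transport.Routing();
routing.RouteToEndpoint(typeof(PlaceOrder), "Sales");

var endpointInstance = await Endpoint.Start(endpointConfiguration);
```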
23:30
And this is now also an endpoint, because that is how we treat things in NServiceBus — an endpoint means a
23:36
service that can send or receive messages. And of course there is the
23:42
configuration for the SQL transport, which I'm using today, and a
23:48
little bit of NServiceBus endpoint configuration. So that is the client UI, which sends the message — sends the
23:54
command. Now, let us go and visit the sales service. The sales service is
23:59
where the PlaceOrder instruction is received, so it needs to process that command, and for that there is a class
24:06
which says, hey, I am here to catch all the commands that you send of the type PlaceOrder — let me deal with
24:13
them. So, in the PlaceOrderHandler we have a class that inherits
24:19
from IHandleMessages<T>. Marking a class as inheriting from
24:25
IHandleMessages<PlaceOrder> is shouting out loud to the system: I am here to handle all the
24:33
commands — all the messages — of the type PlaceOrder. And inside my handler
24:40
there's a Handle method, where I can get the message as well as the context. If
24:45
you think about HttpContext, the IMessageHandlerContext is something
24:50
like that: it gives me access to the context, because this is a service that has been invoked by a message. With HTTP
24:58
APIs you are invoking using an HTTP request; here it is invoked using a message. So, in
25:05
here I can then do some business logic — that is, processing the order, or, in this instance, saving it to the database.
25:12
Then we have some interesting things happening. If you see here, there is another object, called OrderPlaced, being
25:20
instantiated, and I'm using the Publish API in the context to
25:25
publish it. Here I'm not sending — I am publishing it. So what is the difference? This is an event. An event is like
25:33
letting the world know that something has happened, while a command is an instruction that says do this; an event
25:40
is like: something happened, I have done it. So the sales endpoint — the sales
25:46
service — is saying, yep, I have saved the order to the database, my job is done; I am just letting others know that I
25:53
have done my job, so that others can pick up on what is left to do. So, as you can see here, there's just one responsibility
26:00
residing on the sales service, which is creating that order in the database. Another thing to note here
26:07
is that the OrderPlaced event is again very lightweight — in reality it might be
26:12
different — and OrderPlaced inherits from IEvent, which lets the system know that this is an event. Now, if you look at
26:19
the semantics of a command and an event: the event's name is a noun plus a verb in the
26:26
past tense, and a command's is a verb in the present tense plus a noun. So
26:33
this way you can easily make out which is a command and which is an event
26:39
without even going inside the class — something like a best practice. So, now we have published the event.
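Putting the Sales endpoint's pieces together, a sketch along the lines of what is on screen — the class and property names are assumptions, while IHandleMessages<T>, IMessageHandlerContext and context.Publish are the standard NServiceBus APIs (PlaceOrder is the command POCO described earlier):

```csharp
using System.Threading.Tasks;
using NServiceBus;

// Event: noun + verb in past tense, marked with the IEvent interface.
public class OrderPlaced : IEvent
{
    public string OrderId { get; set; }
}

// "I am here to handle all messages of type PlaceOrder."
public class PlaceOrderHandler : IHandleMessages<PlaceOrder>
{
    public async Task Handle(PlaceOrder message, IMessageHandlerContext context)
    {
        // Business logic: save the order to the database (stand-in here).
        await SaveOrder(message.OrderId);

        // Let the world know — publish the event rather than sending a command.
        await context.Publish(new OrderPlaced { OrderId = message.OrderId });
    }

    static Task SaveOrder(string orderId) => Task.CompletedTask;
}
```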
26:45
If you look at Program.cs, I'm not talking to anyone, right? That is
26:50
because this is an event. I just need to let the world know; whether someone will listen to me, how many
26:57
listeners there will be — I don't care. I just need to let the world know. It is like me talking to you; it's like a
27:02
broadcast. There could be one or many listeners, and those listeners are called subscribers. This is the basics of
27:10
event-driven architecture. What we are doing, as a pattern, is publish-subscribe, and the beauty
27:17
of event-driven architecture is that we get the space to extend the system easily, because we can bring in
27:23
more subscribers without affecting the system as a whole. Now, how do we handle
27:28
this OrderPlaced event? That is getting done in the billing endpoint — the billing service. Here we have a handler
27:36
again, which inherits from IHandleMessages<T> where T is OrderPlaced. In the Handle method I can again do some
27:43
business logic, and at the end I might send another command or publish another event — that is also possible.
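A subscriber on the other side is just another handler — a sketch, with the handler body as an assumption; each hosting endpoint (Billing, Shipping, ...) decides what its business logic does:

```csharp
using System;
using System.Threading.Tasks;
using NServiceBus;

// Hosted in the Billing endpoint; Shipping can host its own handler for the
// same event — that is how the system is extended without touching Sales.
public class OrderPlacedHandler : IHandleMessages<OrderPlaced>
{
    public Task Handle(OrderPlaced message, IMessageHandlerContext context)
    {
        Console.WriteLine($"Billing order {message.OrderId}");
        // At the end we might send another command or publish another
        // event (e.g. a hypothetical OrderBilled) to continue the workflow.
        return Task.CompletedTask;
    }
}
```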
27:50
Now, let us see the demo in action.
27:57
So let me bring up the run windows — I have the client UI here.
28:05
I'm going to place a few orders; let us start with one to begin with — CFB4 is the start of its ID. If I look at
28:13
sales, it has received the PlaceOrder instruction. So, from the client UI, to begin with, I'm sending
28:20
the command for the order ID; sales receives it and then publishes the
28:26
event for the order ID; and in billing I have now received the OrderPlaced
28:32
and I can move on and do other things. Now, we spoke about extending the system, right? So I have another service
28:40
that I have just hidden here. I'm going to load the project, and then I am going to run it as well — right now it is
28:47
not running at all, so I'm just going to run it. Again, this is a very simple
28:53
handler, which handles the OrderPlaced event, and if I run this,
28:59
that should also run.
29:09
Give me a minute... I think this is up and running now. If I go and place a few other orders,
29:17
I should see that the shipping endpoint receives some of the messages as well. So we have extended
29:25
our system without impacting the system as a whole. Now, what if one of the endpoints is down, or has a bug? What
29:32
happens behind the scenes? That is when recoverability and retries come in. So let us go and introduce a bug into
29:39
our sales endpoint, by just throwing an exception. Let me just stop this...
29:52
So the sales endpoint is completely failing at the moment, because I have introduced a bug,
29:58
but that doesn't prevent the UI from working. I have placed a few orders now,
30:06
and the client UI must have sent a few messages, but when it comes to sales, it
30:11
keeps on failing. So what happens now? In sales I've got some recoverability in place. This is
30:18
something that you would add to every endpoint, and the recoverability says: hey, when you have an error,
30:25
retry it; and when the first retries fail, then back off exponentially — 2 seconds, 4
30:31
seconds, and so on — till the point where you cannot do it anymore. And then what you
30:36
do is, you don't just sit there doing nothing — you move that message into another
30:42
queue, called the error queue. And that is what is going on here: I think it
30:47
retries three times before it completely backs off and moves the message into the error queue.
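The retry policy being described maps onto NServiceBus's recoverability configuration — a sketch, with the retry counts and delays as assumptions matching the narration:

```csharp
using System;
using NServiceBus;

var endpointConfiguration = new EndpointConfiguration("Sales");
var recoverability = endpointConfiguration.Recoverability();

// First, retry immediately a few times...
recoverability.Immediate(immediate => immediate.NumberOfRetries(3));

// ...then back off with increasing delays (2s, 4s, ...) before the message
// is moved to the error queue, where it sits durably until someone fixes
// the bug and retries it.
recoverability.Delayed(delayed => delayed
    .NumberOfRetries(2)
    .TimeIncrease(TimeSpan.FromSeconds(2)));
```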
30:53
If you look here, there should be some messages in the monitoring area, which I'm just going to show you right now.
31:05
Let me bring that up here... I should see a few failed messages.
31:13
And if you look here, there is a message that has failed, with the ID D5
31:18
BF5C in the message body, and that is indeed one of the messages that was sent by our
31:26
client UI here. So at this point sales is failing, but the client UI is up and
31:31
running. Now let us simulate the scenario where we have gone and fixed the bug. So here we are in the
31:39
PlaceOrderHandler again; we have fixed the bug and we are running the app again.
31:45
And now, when things are up and running, I can go back to the monitoring area and start
31:53
retrying my messages — the failed messages. So I can go in here,
32:01
into the console, into ServicePulse, and I can see the failed messages. I can go into one and I say
32:08
retry this message — yes, I want to retry this message — so it is getting retried. Now, if I go into sales, I should see that
32:15
it is going to try and pick up the message that just failed. So let us see...
32:25
Come on... yeah — just
32:31
retry. I can export these, whatever they are, but we don't
32:36
get tickets for these; security is not bothered. There you go — so the message with the message
32:43
body PFICE has been retried, and the OrderPlaced
32:49
event has been published, and then billing can receive it and shipping can receive it. So the idea is
32:55
that you can extend the system, and even if there are failures, the request is safely and durably stored somewhere — be it
33:03
the error queue — so that you can bring things back up and retry it. You're not losing any message; you are building that
33:10
resilience in, so that nothing is lost. Similarly, we can select all and retry, and it should all go through
33:17
without overwhelming the system — or it should at least get processed one by
33:23
one so we should see that in action any minute so I'm just going to move on and
33:29
we can come back to it and visit the logs later so we discussed all of this endpoints commands and point to point of
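The recoverability flow we just saw can be sketched in a few lines of Python, with in-memory deques standing in for the broker's input and error queues. All the names here are illustrative stand-ins, not the actual ServicePulse or broker API:

```python
from collections import deque

# In-memory stand-ins for broker queues (illustrative only).
input_queue: deque = deque()
error_queue: deque = deque()

MAX_ATTEMPTS = 3

def consume(handler):
    """Process every message; messages that keep failing land in the error queue."""
    while input_queue:
        message = input_queue.popleft()
        for attempt in range(1, MAX_ATTEMPTS + 1):
            try:
                handler(message)
                break
            except Exception:
                if attempt == MAX_ATTEMPTS:
                    error_queue.append(message)  # durably parked, nothing is lost

def retry_all(handler):
    """Operator action: move failed messages back to the input queue and reprocess."""
    while error_queue:
        input_queue.append(error_queue.popleft())
    consume(handler)

# A buggy handler fails every time...
def buggy_handler(msg):
    raise RuntimeError("bug in the PlaceOrder handler")

input_queue.append({"id": "d5bf5c", "type": "PlaceOrder"})
consume(buggy_handler)
print(len(error_queue))  # 1 -- the message is preserved in the error queue

# ...then we "fix the bug" and retry the failed message with a working handler.
processed = []
retry_all(processed.append)
print(processed)  # [{'id': 'd5bf5c', 'type': 'PlaceOrder'}]
```

The point the demo makes is exactly this shape: the failure path parks the message rather than dropping it, so a later retry, after the fix, completes the work.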
33:35
So, to recap: we discussed endpoints, commands and point-to-point
33:41
communication, and events with publish and subscribe; we saw how you can extend your system without impacting the rest of it; and we saw recoverability and retries. So we have moved on from that very monolithic
33:48
architecture to something like this: you have the client UI, which sends (not publishes) a command
33:54
saying PlaceOrder to the sales endpoint. The sales endpoint then publishes an event called OrderPlaced, which is
34:01
consumed by both billing and shipping. And here it gets interesting, because
34:06
billing can then produce a further event to say "I have billed the order", at which point shipping can chip in and say:
34:14
the order is placed, the order is confirmed, the order is billed, but I
34:19
might also want to wait for, say, the stock service to come back and tell me that there is enough stock, so that I
34:25
can start shipping. That is a fairly complicated thing happening behind the scenes, and it is yet another
34:31
integration pattern, called the saga pattern, which you might want to investigate; it is used for
34:37
creating workflows.
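The shipping behaviour described here, waiting for several events that may arrive in any order, is the essence of a saga. A minimal sketch, assuming made-up event names (`OrderPlaced`, `OrderBilled`, `StockConfirmed`) rather than any real framework's API:

```python
from dataclasses import dataclass, field

@dataclass
class ShippingSaga:
    """Ship an order only once every prerequisite event has arrived."""
    required: frozenset = frozenset({"OrderPlaced", "OrderBilled", "StockConfirmed"})
    seen: dict = field(default_factory=dict)    # order_id -> set of event names seen
    shipped: list = field(default_factory=list)

    def handle(self, order_id: str, event: str) -> None:
        events = self.seen.setdefault(order_id, set())
        events.add(event)
        if events >= self.required:   # all prerequisites met, in whatever order
            self.shipped.append(order_id)

saga = ShippingSaga()
saga.handle("42", "OrderBilled")      # arrives first: nothing ships yet
saga.handle("42", "OrderPlaced")
print(saga.shipped)                   # []
saga.handle("42", "StockConfirmed")   # last prerequisite arrives
print(saga.shipped)                   # ['42']
```

Real saga implementations add durable state persistence and timeouts on top of this idea, but the core is the same: correlate events by order id and act only when the workflow's conditions are satisfied.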
34:44
Further, the same OrderBilled event that billing published might be used by the email service, to pick up the fact that a
34:50
confirmation email needs to go out to the customer. And finally, OrderBilled
34:55
could be picked up by the customer loyalty service, which says: yep, I have updated the customer
35:03
loyalty database with the details of the customer as well. So, all in all, you are moving closer to your
35:10
business domain with this kind of architecture at play.
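The "extend without touching the publisher" property comes straight from publish/subscribe: billing publishes OrderBilled once, and any number of subscribers, email, loyalty, whatever comes next, attach independently. A minimal in-process sketch (a real system would route through a broker; these names are illustrative):

```python
from collections import defaultdict

# Minimal publish/subscribe bus: event name -> list of handlers.
subscribers = defaultdict(list)

def subscribe(event: str, handler):
    subscribers[event].append(handler)

def publish(event: str, payload: dict):
    for handler in subscribers[event]:
        handler(payload)

log = []
subscribe("OrderBilled", lambda o: log.append(f"email: confirm order {o['id']}"))
subscribe("OrderBilled", lambda o: log.append(f"loyalty: credit customer {o['customer']}"))

# Billing publishes once and never needs to know who is listening;
# adding the loyalty subscriber required no change to billing at all.
publish("OrderBilled", {"id": "42", "customer": "ada"})
print(log)
# ['email: confirm order 42', 'loyalty: credit customer ada']
```

That decoupling is why a new consumer, like the loyalty service in the talk, can be bolted on later without redeploying sales or billing.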
35:16
Now, where would you see such message-driven
35:23
architectures in play? When you have communication between services, or when you want modules in a modular monolith communicating with each other, you can investigate message-driven architecture. In real life you would see
35:30
such architectures at play in industrial automation, healthcare systems,
35:36
banking and finance: anywhere you want high-throughput, high-performance
35:41
scenarios or mission-critical systems, messaging architecture can help widely. And if you are thinking about
35:48
messaging architecture, there are some scenarios where you shouldn't be using it: CRUD applications, when you
35:55
need request-response, or real-time applications. Now, revisiting the
36:02
fallacies of distributed computing, what is the verdict: how does messaging fare? "The network is reliable": we embrace
36:10
failure by design, by introducing durable storage, retries and dead-letter queues (DLQs). As for
36:16
latency, we have introduced async processing and we have decoupled time; there is no such thing as instant
36:22
processing, it is all eventually consistent data. We are tackling bandwidth by achieving
36:28
some load leveling, because the backend systems can process at a normal rate while the client UI alone is scaled,
36:35
so there's load leveling happening.
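Load leveling is easy to see in a tiny simulation: a spike of requests lands in a queue, and the backend drains it at its own steady rate instead of being overwhelmed. The rates and tick model here are made up purely for illustration:

```python
from collections import deque

queue: deque = deque()

# Spike: the front end accepts 1,000 orders almost at once.
for i in range(1000):
    queue.append(f"order-{i}")

BACKEND_RATE = 50  # messages the backend can handle per tick

ticks = 0
while queue:
    for _ in range(min(BACKEND_RATE, len(queue))):
        queue.popleft()  # process one message at the backend's own pace
    ticks += 1

print(ticks)  # 20 -- the spike is absorbed over 20 ticks, nothing is dropped
```

The queue absorbs the burst, so only the user-facing tier needs scaling while the backend keeps its normal pace, which is exactly the load-leveling claim in the talk.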
36:40
"The network is secure": the message brokers take care of TLS encryption of messages in flight,
36:46
and with fine-grained services the attack surface is also reduced. As for "the topology doesn't
36:53
change": we saw that we can evolve with minimal disruption; we can take entire consumers down for maintenance and
37:00
servicing, and that doesn't impact the whole system at all. We can have
37:06
different teams working on different parts of the distributed system, and they can all work independently and
37:13
have team autonomy. We can scale just the necessary services and deal with the spikes: for
37:19
example, you could scale the client UI alone to multiple instances for the user-facing front end, while the backend
37:27
services could be just one instance, processing data at a normal
37:32
pace, achieving load leveling. And we have an inherently polyglot-friendly architecture in place,
37:39
because the underlying protocols, like AMQP and MQTT, are language-agnostic
37:45
and enable communication across different platforms. Messaging doesn't solve all the problems; what it helps
37:52
you do is confront the fallacies early on. These fallacies being what they are, you are
37:58
forced to ask yourself those questions early on and mitigate the risks. So that is it from me today; that is the QR code
38:05
for my resources, with my resource link as well. And if you want to reach out to me,
38:11
feel free to reach out to me at Purima. Thank you, and thank you once again for having me as a speaker.