MassTransit and Open Source - with Chris Patterson

Duration: 83m25s

Release date: 16/02/2022

In this episode, I was joined by Chris Patterson to chat about the open-source distributed messaging framework, MassTransit. We also spoke about open source, event-driven architecture, Docker, k8s, testing, Rider, and more! For a full list of show notes, or to add comments, please see the website here

Hey, everyone, welcome to the Unhandled Exception podcast. I'm Dan Clark, and this is episode
number 31. And in this episode, I am very excited to be joined by the author of the
MassTransit framework, Chris Patterson. Hey, Chris, thanks for joining us and welcome
to the show.
Hey, thanks for having me on the show. Excited to be here. Looking forward to talking
about MassTransit.net, all the things we can come up with. So thanks for having me on
the show.
Oh, you're most welcome. Definitely, I look forward to talking about MassTransit, but
also seeing your notes that you did earlier on Google Docs and adding things like
Kubernetes and service messages has made me even more excited about this chat. So
that should be good. So now, obviously, you do a lot of work on Mass Transit, which
we'll be talking about today. But before we delve in, could you give our listeners
a quick intro into your background and what it is you do?
Ah, sure. Yeah. So I'm a software developer by trade. I've been building
software since I was a teenager in high school. Actually, even junior high. But
yeah, I don't even know that that translates globally. I've always built
software. I've been doing .NET since probably 2005. Before that, everything
was C++ assembler, everything, anything you can possibly think of in the
language I've probably used at some point, except JavaScript.
No, I've actually had to do some JavaScript at this point, although I
favor TypeScript in that regard. So I built software all the time. In most of
my experience, even my first major real gig was building connected systems
using NetBIOS. So I've always built distributed applications that connected
over network protocols. And so when I started to get into messaging back in
the early days of MSMQ, I mean, we're talking in the 90s here. So some
people probably weren't even born yet. I've always kind of built these
message based applications and built custom frameworks and libraries
around it. I've always been kind of a back end server type of developer
rather than a front end developer. So I've always had to build applications
that talk to each other and obviously did web services in the late 90s via
ASMX with the original classic ASP, which I think was C. I can't even
remember. I think it was C. Maybe it was VBScript. I don't even remember.
But we've always been building kind of connected applications. And so we
had started using a lot more MSMQ in our applications. So back in 2007,
I was in Austin for the first alt.net conference. And at that point,
I met Dru Sellers, and we were talking about some of the problems and
challenges we had around applications. And he was looking to use MSMQ
and didn't quite follow everything that needed to be done. I had been
using it and found that a lot of the patterns were reproducible. We looked
at what was in the landscape at the time and decided, let's just
write our own. How hard could it be? It's just a library on top of
a message framework. How hard could it be? So we started Mass Transit
back in 2007. I think the first initial releases landed in 2008. And
some of those early releases from what I found out about a month ago
are actually still in production, like pre .8 versions of the code
are still in production. So that was kind of crazy. But that's what
happens when you write code and it tends to run in a fairly static
organization. So anyway, Mass Transit's been around for, I guess,
14, 15 years now. It's obviously grown tremendously. We no longer
support MSMQ because MSMQ should just be gone from the world. So
we've turned more to like RabbitMQ, Azure Service Bus, Amazon SQS.
We still support ActiveMQ. It's definitely the least used transport
of the Mass Transit stack. And then recently, we've added things like Kafka
and Event Hub to support data distribution use cases for event streaming.
So yeah, it's been a lot of fun. So it is a hobby, which some people laugh and say,
how can a hobby be used by so many companies? Because it is used globally by
companies and decision support systems and emergency response systems.
And it's surprising how it's everywhere. One of the challenges is people say,
well, who's using Mass Transit? And I don't have that web page that has the logo
of every major company because Mass Transit is free and open source.
So it isn't like someone has to register and pay to get their name on a slide of all
the companies that are using the product. So it makes it kind of hard to answer that
question. But based on the download usage and the email addresses of people that I get
random questions from, it's used a lot.
Yeah, I was going to say, it's kind of, if this is just a hobby project on top of your main job
and so many companies are using it, you've presumably got to do a lot of support now as well.
That must be quite scary, I guess.
It can be. At the beginning of the pandemic, you know, when everybody was kind of locked down,
everybody was sitting at home. Like, people were craving content. Netflix was like,
we're limiting bandwidth because everybody's watching 720p.
Now we're going to give you 480p.
We have no bandwidth. Everybody's sitting at home watching Netflix.
And I thought that would be a great opportunity.
So I sat down, I set up my rig to be able to do live video streaming.
And I started streaming how to use Mass Transit on Twitch.
And it was a great way to kind of connect and, you know, in a nerdy sort of way,
communicate with other humans throughout this lockdown period.
But it really spawned a lot of interest and put a lot of content out there.
I ultimately replicated it out on YouTube so that people could go back and watch
and like learn along and kind of see how the different parts of it work.
And I ended up doing hours. I mean, there's probably,
there's over 50, probably near 100 hours worth of content out there.
And that really kind of helped answer a lot of the questions
because you're right, the support load was getting high.
And questions on Stack Overflow, direct emails,
a lot of that stuff would really build up.
And for one person, it's tough.
Now, the interesting thing is there are companies that do support Mass Transit.
And some of them might surprise you who they are,
because you would think, oh, well,
don't they have their own solution?
And it's like, well, yeah, they have commercial solutions.
But if a customer is using Mass Transit,
they're not going to say no to consulting dollars. Consultants are consultants.
We'll work for anybody if there's money involved.
But then I also have a relationship with Improving in Dallas.
So they'll do some support and they have a lot of customers
that use Mass Transit and I'll refer a lot of people to them.
But I do run a 24/7 Discord channel that if I'm awake,
which, you know, that's the operative word, if I'm awake
and I see a message, I'll usually respond to it.
I'm surprised how much traffic I get in there and just the weird hours.
But, you know, it just goes to show if I look at the analytics of the docs site,
I mean, everywhere from around the world,
you can see, based on the time zone, as it kind of rolls,
which part of the world is actively using the project and looking stuff up.
So, so yeah, it is a lot to do, but it's sustainable so far.
I mean, fortunately, there's enough content out there
and people can figure it out.
And if they don't, they've got Stack Overflow,
GitHub discussions, Discord, everything.
So they generally figure it out eventually.
Yeah, that's quite cool.
You mentioned about like Kafka and Event Hub,
which is like using like event streaming.
That's quite different from just a standard message broker.
Do you find it quite difficult to have the same abstraction library
working over both, like two different ways of doing things,
two different ways of doing eventing?
Yeah, that's actually a good comment.
You know, it was probably a couple of years ago,
even prior to that, the requests kept coming in for Kafka.
Why is there no Kafka transport for MassTransit?
And I kind of just came back and said,
well, because Kafka isn't a message broker.
And if you go to their web page,
they're like, Kafka is a message broker.
It's like, no, it isn't.
It's a log file with a really exotic API
on top of it to make it look like a message broker.
You know, it has headers and it has a body and, you know,
all of these things, but it is not a message broker.
And through my regular job,
I have monthly meetings where I check in with a lot of the
developer people within Confluent.
So I have a monthly call with Confluent,
the people who support Kafka.
And we talk about this a lot.
And we talk about the different patterns.
And, you know, the questions I would get is,
you know, from customers of mass transit,
trying to use it, it's like,
well, how do I do request response over Kafka?
And I'm just like, you don't.
And they're like, but that's our message broker.
And I'm like, no, it isn't.
So when we started to look at how to bring
a transport in for Kafka and Event Hub,
because they're very similar patterns,
we came up with the concept of riders.
So with mass transit, you have a bus.
And the bus is backed by a traditional message broker,
whether it's RabbitMQ, Azure Service Bus,
or even in memory.
You know, if you're not doing durable messaging,
you can use the in memory transport,
but I don't really recommend it for most use cases.
But what we came up with is the concept of a rider,
where we have these riders on the bus,
so that alongside an existing bus
that is a regular message-based broker,
you could bring on other sources of messages,
message being kind of a logical concept
of I have some type that I'm going to pull into a system
and dispatch those to consumers, sagas, whatever,
the same way you would a message
coming from a traditional message broker.
And so by doing this, we were able to make it
so people could create endpoints to consume from topics,
whether that's a topic in Kafka or an Event Hub.
And it turns out that it works really well
because you're able to use the same programming paradigms,
you know, writing consumers or sagas,
but use them with, you know,
a theoretically higher rate message transport like Kafka.
So it's worked out really well.
It gives a consistent experience.
The crazy things that come up are,
oh, well, how do I handle retry in Kafka?
Because with a regular message broker,
you know, you're removing messages from the queue.
But with Kafka, it's just an offset and a file.
So it's like you can't write it to the back of the queue,
or the topic,
because then you're going to have multiple copies,
and you can't schedule it,
because there is no scheduling
or delayed delivery mechanism in Kafka.
So if you go out and search,
you'll find that there's a number of different approaches
that people have done to try to do retry in Kafka.
I don't have a good answer for it.
I believe in blocking retry
that if you're consuming from a partition in Kafka
and you can't write to the destination, stop.
Because if the destination isn't available,
you're not going to get any better
by skipping the message that you're currently on.
So there's a number of different patterns
and we're trying to come up with some good usage stuff
for the next release.
But, you know, it's just...
Those are the differences that you see.
You know, you don't really publish and fan out.
You produce a message to a topic
and then that topic is consumed by multiple consumer groups.
So it's just a different experience,
but a lot of the patterns mapped over.
So we did it that way and it's worked really well.
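The rider concept described here might be configured like this (a sketch assuming MassTransit v8's Kafka rider API; the topic, consumer group, and event type names are hypothetical):

```csharp
services.AddMassTransit(x =>
{
    // The bus itself is still backed by a traditional broker (or in-memory)
    x.UsingInMemory((context, cfg) => cfg.ConfigureEndpoints(context));

    // The rider runs alongside the bus and pulls messages from Kafka
    x.AddRider(rider =>
    {
        rider.AddConsumer<OrderEventConsumer>();

        rider.UsingKafka((context, k) =>
        {
            k.Host("localhost:9092");

            // Consume the "order-events" topic as the "order-service" consumer
            // group, dispatching to the same kind of consumer a broker-backed
            // receive endpoint would use
            k.TopicEndpoint<OrderEvent>("order-events", "order-service", e =>
            {
                e.ConfigureConsumer<OrderEventConsumer>(context);
            });
        });
    });
});
```

The consumer itself is written exactly as it would be for RabbitMQ or Azure Service Bus, which is the consistent-experience point being made here.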
So does that happen automatically? For example,
for Event Hubs, where it stores the offsets
somewhere like blob storage,
do riders automatically handle that persistence?
Yeah, so riders automatically handle the check pointing,
whether it's Kafka or Event Hub.
So they'll keep track of that.
And you can configure the timings.
Some people like to monitor consumer lag in Kafka.
And so you can set the checkpointing to be very aggressive
and say, well, I want to checkpoint like every five seconds
because I want to know if my consumer lag is over 10 seconds,
which would be ridiculous with Kafka, but some people do it.
But yeah, that consumer lag metric is important to know
to see how far behind your consumers are
from the events that are being produced.
So yeah, the checkpointing is handled by both,
under the covers, again, Mass Transit is a framework,
it's sitting on top of the Confluent client
or the Azure Event Hubs processor or whatever.
So it's just letting that do the dispatching
and it can tune those checkpoint parameters
so that those are auto checkpointed,
either to blob storage or, I think Kafka does it through,
I think they do it through ZooKeeper,
but I think with the latest versions,
they also do it through an internal compacted topic.
But anyway, yeah.
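The checkpoint tuning described here is set on the rider's topic endpoint. A minimal sketch, assuming MassTransit's Kafka rider API; the topic, group, and event type names are hypothetical:

```csharp
rider.UsingKafka((context, k) =>
{
    k.Host("localhost:9092");

    k.TopicEndpoint<OrderEvent>("order-events", "order-service", e =>
    {
        // Checkpoint aggressively so a consumer-lag monitor stays current
        e.CheckpointInterval = TimeSpan.FromSeconds(5);
        e.CheckpointMessageCount = 1000; // or after this many messages, whichever comes first

        e.ConfigureConsumer<OrderEventConsumer>(context);
    });
});
```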
I'm kind of aware that we,
I'm kind of wondering, because maybe some of our listeners
might not be used to event-driven architecture
and the different ways of doing things,
I wonder if it's worth stepping back a bit
and maybe covering some of these different terminologies.
Like we've talked about event streaming
and then we've got messages,
but you've got things like messages, events.
Maybe if we skip out event sourcing,
because that's something completely different,
even though it's got event in the name,
is it worth us just demystifying
some of these differences before we dig a bit deeper?
Each of the listeners are gonna have different experiences
and some will understand what we're talking about
and maybe some won't.
Yeah, we can definitely kind of cover some of those terms.
So when we're talking about an event-based architecture
and event-based system,
and even beneath that,
when you're talking about a message-based system,
the first item to think about is what is a message?
So if anyone's done any type of programming,
they've had parameters that they passed to a method.
And after they get five or six parameters
that they passed to a method,
they might create a type that includes those parameters.
And so they pass a reference to an object
that has five or six different parameters in it.
Well, in mass transit, a message is just a type.
So you create a class or a record or an interface
that has a number of properties on it.
And that type is then considered a message.
And that message is serialized out
to whatever format it needs.
In most cases, everybody uses JSON these days
and is written to the message broker
as the message body.
So what mass transit does
is it takes those types and says,
I want to send or publish this message,
and the differences are subtle,
but they're sensible.
And it takes that message
and it writes it out to the message broker.
And then mass transit
is constantly pulling messages from those brokers
and dispatching those messages to consumers.
I like to think of mass transit as kind of like,
it's kind of like the JMS of .NET.
Java has JMS, which is an abstraction of messaging.
Mass transit kinda does that
because it's allowing you to use
different message transports with a consistent API.
And .NET doesn't have a JMS.
So anybody who's done Java is like, oh, well, I just use JMS.
Well, with .NET, there isn't that answer.
And the way it does it is very similar to ASP.NET.
When you look at the latest ASP.NET core,
you create a controller,
your controller has a number of dependencies
and your controller has action methods.
And each of those action methods
is really a type that that controller can consume.
And it just happens to be mapped to a URL
that comes in through the web server.
So when you think about that action method,
in mass transit, a consumer is just a class
that has an IConsumer interface on it
with the message type specified
that gets mapped to a Consume method on that class.
And what mass transit does
is it will create instances of your consumer
either through the container
or directly if you aren't using a container,
although pretty much everybody's using a container these days
because of the way that the generic host
and everything works with .NET now.
For each message that comes in off the broker,
it will create an instance of consumer
and then dispatch that message
to the consume method on your consumer.
So it's kind of, they call it the Hollywood principle,
don't call us, we'll call you.
Once you've registered your consumers and hit start,
as messages come in,
mass transit's gonna invoke your consumers
and let those run.
And it handles everything like concurrency
and acknowledging the message on the broker
and not acknowledging it if your process crashes.
And there's a whole bunch of configurations
you can put in place as far as retry policies
and outboxes and partitioning.
And I mean, there's so many different middleware
or filters that you can put in place on that.
So that's kind of the basics
of how messages are dispatched through mass transit
is you have some type, you either publish or send it.
With send, you can send it directly to a specific queue,
whereas in most cases, people just publish
because it's a publish-subscribe system.
Then your consumers say, I'm interested in this type,
and MassTransit will wire it up.
So if anybody publishes that type,
your consumer will get an instance of that message.
And if you have two separate consumers
that are both interested in the same type,
they get registered on separate queues.
And when they publish one message, it fans out
so that each queue gets a copy of it
and processes it on its own.
So it's a really nice loosely coupled way
to build a system because your consumers,
the consumers are isolated,
they don't have to read from the same endpoint.
And if one of them fails,
it doesn't impact the other consumer.
So it's that published model
where everything creates a copy
and it's handled independently.
So if one consumer can't talk to, say, a remote web service,
it doesn't affect this consumer
that's just writing stuff to a local database.
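The basics described here — a message is just a type, a consumer is just a class — can be sketched like this (assuming MassTransit v8; the message and consumer names are hypothetical):

```csharp
// A message is just a type: a class, record, or interface with properties
public record OrderSubmitted(Guid OrderId, decimal Total);

// A consumer is just a class with IConsumer<T> for the message type;
// MassTransit creates an instance per message and calls Consume
public class SendConfirmationEmail : IConsumer<OrderSubmitted>
{
    public Task Consume(ConsumeContext<OrderSubmitted> context)
    {
        Console.WriteLine($"Emailing confirmation for {context.Message.OrderId}");
        return Task.CompletedTask;
    }
}

// A second consumer of the same event gets its own queue; a published
// message fans out so each queue receives and processes its own copy
public class UpdateDashboard : IConsumer<OrderSubmitted> { /* ... */ }

// Publishing: every registered consumer of OrderSubmitted gets a copy
await publishEndpoint.Publish(new OrderSubmitted(orderId, total));
```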
Yeah, I really like that pattern.
It's almost like an inversion of control
where you've got rather than the publisher
knowing about the consumers and calling them directly,
the publisher is just publishing a business event,
something has happened,
and then the consumers just independently on their own
can just consume them
and do that single responsibility principle,
do that one thing,
which is about that event,
like maybe it's sending an email out or whatever it is.
And I really like that
because it's also mentioned
that's doing one thing, each consumer,
so it's single responsibility,
but you've also got,
what's the other one, open close principle
where you can add more consumers
and nothing else needs to change.
It's just another consumer responding to a business event
and doing its own thing.
And I really like that pattern.
It makes it nice and clean and maintainable, I think.
Oh, I absolutely agree.
And what's funny, as you mentioned,
it's kind of like dependency injection
in some of the early design notes,
which I have a flip binder of
that I think I still have.
I might have thrown it away at this point.
But when we were doing the initial designs for Mass Transit,
that was like,
how do we make dependency injection
or inversion of control work across a message broker?
I guess it's more of dependency inversion,
not dependency injection,
that I'm thinking because it's kind of...
Yeah, you're absolutely right.
And I meant to say dependency inversion
because that was kind of the thing,
is we were like, how do we make it so
when I just produce a message,
it just resolves the types
and makes it go across the network.
So that was what we were thinking about
when we originally did the design doc,
is how do we do dependency inversion over the network?
Speaking of dependency injection,
with, like you said before, the .NET now,
you'd have to fight to not use dependency injection.
So I really like that,
that's just out of the box and works.
And like you said, with the consumers,
it's kind of, I've been recently playing with Mass Transit.
To be honest, most of the time,
up until recently,
I've been using RabbitMQ
or Azure Service Bus, Native Clients.
And now I'm using Mass Transit,
which I'm really loving.
The thing I'm really loving is all the stuff
I don't have to do
because MassTransit does it for me,
like you mentioned.
Going back to,
like where I was going with the dependency injection
is because now there's just these Mass Transit consumers,
which is just a class.
And if I want to inject something,
then I can just use the standard way of doing things
by injecting it into the constructor.
And it just works.
And it just feels so clean.
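The constructor injection described here looks like ordinary .NET DI (a sketch; IInvoiceService is a hypothetical application service):

```csharp
public class CreateInvoiceConsumer : IConsumer<OrderSubmitted>
{
    readonly IInvoiceService _invoices;

    // MassTransit resolves the consumer, and its dependencies,
    // from the container for each message
    public CreateInvoiceConsumer(IInvoiceService invoices)
    {
        _invoices = invoices;
    }

    public Task Consume(ConsumeContext<OrderSubmitted> context)
        => _invoices.CreateInvoice(context.Message.OrderId);
}
```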
Well, that's good.
I like to hear that the patterns are applying.
What's funny is when we started Mass Transit,
there was like one,
well, I think there was like Spring.NET.
I think there was Castle Windsor.
And if you went to a room full of people
at a conference and said,
hey, who here is using dependency injection?
And there'd be like one hand go up in the back.
And it was Jeremy Miller who wrote StructureMap.
So it was like,
it was like, okay,
so nobody's really using dependency injection at this point.
And now, like you said,
it's ubiquitous and at least in the .NET Core,
.NET Standard era.
Yeah, what's interesting is,
I think if you go in between now
and what you just said then, time-wise,
if you ask the same question
and you said who's using dependency injection
and what container are you using,
you'd probably get like five different answers: Autofac,
Ninject, there's a whole bunch of them.
Well, it's interesting with .NET now.
I used to default to Autofac,
but now I just use out of the box because it just works.
And yeah, it's quite interesting the way things are going.
It's interesting you point that out
because I've seen the same thing
if I look at the download stats,
as I used to be a huge Autofac user.
I have systems in production
that are heavily built around subcontainers
and child containers and Autofac this, that, and the other,
and registries and tenant IDs,
and just crazy amounts of complexity
that you could build with Autofac.
Things that should have been separate programs, honestly.
But we built that with Autofac.
And now, if you look at the download numbers,
the fact that Microsoft.Extensions.DependencyInjection
comes out of the box with almost every dotnet new,
it's, yeah, pretty much everyone is using that.
And that's why.
So MassTransit has up to this date
supported five different containers.
We used to support more, we used to support Unity,
we support a ton of them.
Prior to version eight,
MassTransit supported out of the box
like five different containers.
StructureMap, Simple Injector,
Castle Windsor, Autofac, and MSDI.
And with version eight, we've just said,
you know what, Extensions DI is the norm.
Every container has been forced to adapt
to the Extensions DI patterns,
as far as IServiceCollection and IServiceProvider,
if they want to stay alive.
So we've just finally decided, you know what,
this stuff is dependent upon by everything.
If you use entity framework core,
you're using all of those things.
So let's just make those as kind of core dependencies
and then let everything work around it.
Because like you said, now it's,
if you look at the usage of the containers,
it's significantly in favor of extensions DI
over everything else.
Yeah, it's kind of sad really,
because, like yourself,
you obviously spend a lot of time
working on open source with MassTransit.
And with these different containers,
there's a lot of people that put
an awful lot of time into these containers.
And now it's just easier out of the box
to not use them.
Oh yeah, it's,
people who've worked and spent
a ton of time building containers.
And I think the phrase is "Sherlocked."
Once Microsoft takes it and puts it out of the box,
it kills all the open source versions of it.
And it's unfortunate cause I know what it's like
to pour a ton of love into a project.
And if Microsoft came out with their own abstraction
of messaging and basically copied MassTransit
and then had their own,
I mean, I guess there would be a sigh of relief.
Cause,
cause believe it or not,
it's actually a lot of work to maintain and support this.
And it's weird to think,
okay, if I make this change,
cause if it's a hobby project,
which you know, it is a hobby,
you know, it's a business,
but it's a hobby,
we can say that all we want.
But yeah, it's like,
if I make this change,
how is that gonna impact people downstream?
How are they gonna upgrade?
I mean, are they gonna have to upgrade everything?
Is it a forklift?
Or are they able to just add new services
built with newer versions
and they fully interoperate with the old ones?
So yeah, it's,
it's like,
you can't just go out and just, you know,
chop off a couple of tree limbs
and, you know,
build something else off the side of it.
You really have to think about the longevity
and how it impacts others.
Yeah,
that's it when you start an open source,
air quotes, hobby project.
And it goes viral,
which is kind of in one way a good thing.
In another way,
suddenly you have to support it
or you don't have to,
but, I mean,
it's kind of,
it's suddenly expected.
And especially if you're just doing that
in your own spare time.
Yeah.
What was I listening to?
I can't remember what podcast it was.
I was listening to one the other day
and they were talking about this kind of thing.
And it was interesting where,
I think it was one of the .NET rocks ones actually.
I'll include a link in the show notes
to whichever one it was.
And they were talking about how someone
that starts an open source project
in their own spare time.
And then businesses that use it
suddenly assign a developer full time
to help support it.
And suddenly that developer full time
is spending more time on it
than the person that created it.
And I don't know if it's hard to control
or I suppose it depends on the project really.
Yeah.
I mean, I could definitely see
where that would be a thing.
I have a pretty broad list of contributors
and admittedly most of the contributors
add small fixes or documentation updates
or just comment consistency updates,
which is great.
It's nice to see so many people
actually looking through the source code
and looking and making changes.
I have a few major contributors
who have added significant parts to the code base.
And then I ultimately have to own them.
As a maintainer,
that's one of the questions
that comes up all the time though,
is someone is like,
hey, I want to add support
for this database and this environment.
And I have to think,
okay, is this widely enough used?
Is it significant enough
to make it part of the core project?
And am I willing to maintain it?
Can I test it locally?
So on and so forth.
Because ultimately it comes down to that
is I'm going to have to own it
as maintainer and owner of the project.
So it requires a level of scrutiny
and sometimes you have to say no.
Yeah, it definitely makes sense.
That's an important skill to learn
to be able to say no.
So going back to the different transports,
we mentioned RabbitMQ and Azure Service Bus.
How do you handle the differences
between these transports?
So for example, Azure Service Bus
has queues, topics, subscriptions
where Rabbit has exchanges and queues.
Have you found it works quite well
being able to abstract
on top of those two different concepts?
I suppose it follows a similar kind of pattern, I guess.
And that's the thing which you just said.
It's a similar kind of pattern.
You know, you have,
in Azure Service Bus,
you have topics and you have queues.
And you can set up a subscription
in Azure Service Bus to forward to a queue.
Or you can actually forward to another topic.
And when you go to RabbitMQ,
you have exchanges and queues.
And in RabbitMQ,
queues are the only thing that can hold messages.
And exchanges form,
for lack of a better word,
an ephemeral message fabric in memory.
So when you send to an exchange,
it's immediately routed through those exchanges
and lands in one or more queues.
So it's very quick in that respect.
Whereas in Azure Service Bus,
if you write a message to a topic,
that topic then puts them into the subscriptions.
And if the subscriptions forward,
they kind of read and feed out to the queue.
Or in the case of Azure,
you can actually read directly from the subscription.
The subscription is a queue in and of itself.
So things that are additive,
like subscription endpoints, for instance,
MassTransit has a,
normally everything in MassTransit is a receive endpoint,
which maps ultimately down to a queue.
With Azure,
and the way MassTransit supports
the additional transport capabilities,
you can actually create subscription endpoints
in MassTransit
that map to a subscription on a topic
that only get messages from that topic.
So it's basically the ability to use
the capabilities of Azure Service Bus
on top of what the base capabilities of MassTransit are.
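A subscription endpoint, as described here, might be configured like this (a sketch assuming the Azure Service Bus transport's API; the subscription name and message/consumer types are hypothetical):

```csharp
x.UsingAzureServiceBus((context, cfg) =>
{
    cfg.Host(connectionString);

    // Reads directly from a subscription on the OrderSubmitted topic,
    // without forwarding through an intermediate queue
    cfg.SubscriptionEndpoint<OrderSubmitted>("order-auditing", e =>
    {
        e.ConfigureConsumer<AuditOrderConsumer>(context);
    });
});
```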
So it, and it's a huge distinction
because when we first wrote MassTransit,
we created kind of the lowest common denominator API.
And it made it so like,
well, how do I use this in RabbitMQ?
Or how do I use this in Azure Service Bus?
And it didn't translate.
And so it was like,
we can't lowest common denominator that
because then we have to throw a bunch
of not implemented exceptions
if it isn't implemented by that transport.
It's a very painful experience for the developer.
So what we did is,
we basically turned this whole thing upside down
and said, okay, well, we're gonna have a base set
of core MassTransit functionality.
And then each transport
is going to extend that base functionality.
So if you'll notice
when you're writing a MassTransit application
in the first code,
you're like, okay, well, I'm gonna add MassTransit
and I'm gonna register my consumers,
my sagas, so on and so forth.
And then I'm gonna say what transport I'm using.
I'm gonna say, I'm using Azure Service Bus.
And then within that configuration block,
then you can configure
the different broker specific things.
You can wire up specific subscription endpoints
or receive endpoints if you want to,
or you can just use the defaults
and say configure endpoints
and MassTransit's gonna configure
all the topics, subscriptions, queues,
bindings, everything for you.
And it's gonna do that
whether you're on ActiveMQ, Amazon SQS, SNS,
or RabbitMQ.
It doesn't care
because if you're using the out of the box functionality,
it maps one to one,
regardless of what broker you're using.
It's given us the ability to allow access
to all the low level transport features
which you can opt into through
like .NET C# pattern matching
and saying, well, if this configurator
is a RabbitMQ configurator,
then I wanna change the type to this
and do this and that and the other.
Otherwise, don't do anything
because I'm not using that feature.
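Opting into transport-specific features via pattern matching, as described, might look like this (a sketch; SetQuorumQueue is a RabbitMQ-specific option, and the endpoint and consumer names are hypothetical):

```csharp
cfg.ReceiveEndpoint("order-submitted", e =>
{
    // Only applies when the endpoint is actually running on RabbitMQ;
    // on any other transport this block is simply skipped
    if (e is IRabbitMqReceiveEndpointConfigurator rabbit)
    {
        rabbit.SetQuorumQueue();
    }

    e.ConfigureConsumer<OrderSubmittedConsumer>(context);
});
```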
I tend to use Azure Service Bus when deployed,
but playing with MassTransit this morning,
I then got it so that locally,
the same consumers
were running over RabbitMQ,
just spun up in Docker locally,
because I was basically
already using Docker
to spin up the database.
Now I'm spinning up RabbitMQ too,
so Docker Compose brings it all up.
But the only thing that had to change
was my Program.cs
and the startup,
like basically what you were describing before.
So none of the consumers changed or anything.
And I now just got a flag saying
about which transport I want to use.
And locally, in the local settings.json,
I've got it using RabbitMQ
and then when deployed,
it's using Azure Service Bus.
And as I say, it's just the startup code
which is slightly different.
And that's it really.
So it just works really well.
Yeah, that's awesome.
I mean, I know many people that do that.
They run RabbitMQ locally for testing
and then they deploy to Azure
and run on Azure Service Bus.
And you're right, it's an awesome use case.
You just, in your Startup.cs,
you just check an environment variable
and say what's the transport?
Or you use configuration
cause obviously IConfiguration is a big thing,
but you just say, hey,
which transport am I supposed to use?
And it's really the actual transport specific configuration
is a small block of the overall configuration.
So it's worked out nicely.
So I'm glad that worked for you.
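As a sketch, that transport switch in the startup code might look something like this. The "MessageTransport" configuration key, the connection string key, and the consumer name are assumptions for illustration, not from the show:

```csharp
// Illustrative sketch: pick the transport from configuration so only the
// startup code differs between local (RabbitMQ) and deployed (Azure Service Bus).
services.AddMassTransit(x =>
{
    x.AddConsumer<OrderConsumer>();

    if (configuration["MessageTransport"] == "RabbitMQ")
    {
        // Local development: RabbitMQ running in Docker.
        x.UsingRabbitMq((context, cfg) => cfg.ConfigureEndpoints(context));
    }
    else
    {
        // Deployed environments: Azure Service Bus.
        x.UsingAzureServiceBus((context, cfg) =>
        {
            cfg.Host(configuration["ServiceBus:ConnectionString"]);
            cfg.ConfigureEndpoints(context);
        });
    }
});
```

The consumers themselves never change; only this registration block does.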
Yeah, definitely.
And I think also,
I mentioned before about all the stuff
that MassTransit seems to do
that you would have to do manually yourself.
And just things like,
so when using Azure Service Bus
and using the client libraries,
creating those topics in Azure Service Bus,
creating those subscriptions against the topics.
And then the scenario I've just described
where I locally want to do it in RabbitMQ
and I would have to manually do those too,
where MassTransit just does both of those things for me
out of the box.
And yeah,
it's something that I'm gonna like start
using a lot moving forward
where as I say in the past,
I've just used the native client libraries.
But yeah, this is definitely
something I'm gonna be using a lot moving forward.
I'm just thinking that would make a good video topic
for YouTube.
With the whole Docker locally thing,
I really like that for integration tests as well.
So with the ASP.NET Core
WebApplicationFactory integration tests,
I'm actually kind of favoring those integration tests
over unit tests.
And I spin up the database
or whatever it is in Docker Compose,
and then run the integration tests over them.
So they're sort of testing the
business requirement,
the actual consumer-of-the-API
or whatever-it-is requirement,
rather than each individual class
and mocking out everything else,
because that just takes so much time.
But having this Docker Compose,
where you can do docker compose up,
so it can spin up the database,
RabbitMQ,
some of the stuff I'm doing is Azure Storage,
so spinning that up as well,
all within the same Docker Compose.
And then you can just develop locally,
completely locally,
all the tests can run against
all that stuff in Docker.
I just find that works so, so well.
I completely agree.
It's been a super enabling technology.
And most of our teams
at the company I work for
have started moving over to GitHub Actions.
And it works there too.
So we're like able to run automated tests.
In our GitHub Actions,
we can say, oh, we need these three additional containers spun up.
So before we run our tests
and we'll run integration tests in GitHub Actions,
we'll spin up Postgres, RabbitMQ,
couple other services,
and everything will run in isolation.
Because otherwise, it was like,
OK, well, we need to have a test
Azure service bus namespace.
We need to have a test this and a test that.
It's like, OK, well, when we run the unit tests,
we'll just run against RabbitMQ locally.
We'll verify that it works.
And you're right,
we'll talk against a real Postgres database.
We'll spin it up.
We'll apply the migrations
at startup for the test fixture setup.
And then we'll just run them
with a regular dotnet test,
but they aren't really unit tests,
they're like full end-to-end integration tests written in NUnit.
But they do the full end to end
using the real components
and there's no reason to mock anything out.
Yeah, it takes four or five seconds
instead of 200 milliseconds.
But honestly, it's a build
that takes four minutes total on GitHub Actions.
Because what I say about GitHub Actions
is in your job description,
you can say, hey, just do all four of these different things
at the same time.
And it spins up four instances
and starts running them.
And so that local experience
translates to the CI CD capabilities
you're getting with GitHub Actions.
And it's just awesome because it just works.
I mean, I realize Docker
is definitely charging money now
because they have to make money somehow.
But when you talk about enabling game changing technologies,
Docker is one of them.
Yeah, definitely.
And they're charging money for Docker desktop,
but you can run the Docker engine without that.
And I can't remember the exact figure,
but you've got to be a pretty big company
to have to pay that anyway.
Oh, yeah, I think you're right.
Yeah, for the individual developers and such,
there's no charge.
And it makes sense,
get the big companies who have the spare change
to write a check once a year
so they can stay in business.
I'm totally cool with that.
Yeah, definitely.
And if you think, going back to the whole
writing unit test, mocking out every single layer
of the implementation detail,
how much time that costs in developer salaries
versus paying the Docker desktop license.
That's a good point.
That's why every developer on your team
should have an awesome laptop
instead of some janky old machine.
We've been making that argument for 20 years.
I need a faster computer.
It's worth this much.
And you'd have to spend a day proving
that it would save you that much time.
This is the advantage to me being a consultant.
I just use my own gear.
So I've just got a really high powered desktop.
And I insist on not using clients' laptops
and stuff that they send to know,
I've got my own gear.
I'm using that.
And it makes such a difference.
Like stress levels as well.
If you've got a really slow computer,
that can be quite stressful.
It's actually surprising
how big a difference that makes.
No, you're absolutely right.
Good gear matters.
You've got to have good tools to do your job well.
That means you have to get them yourself.
So be it.
Looking back at the feature set of MassTransit,
you've got this thing called requests,
where you've got requests and responses,
which normally, I think of REST APIs,
where you make a POST request
and you get a response back.
I guess I wouldn't naturally think of requests
and responses in an event-driven architecture.
Looking at your library,
it seems to abstract a lot of that away from you.
So it feels like a request and response.
Can we talk a bit about how that works?
That's a good question.
I hadn't really thought about that much.
So yeah, you're absolutely right.
When people think of an event-driven architecture,
the last thing they think of is request response,
because you get the purists who are at the podium
or the lectern saying,
your producers should not know anything about your consumers
and you should just fire and forget.
And in most cases, that's true.
If you're accepting an order,
you just need to accept the order
and write it to a queue and be done.
You don't need to wait for a response
because you may not get it.
The back end system may be down.
Partial availability is a thing.
I may have fulfillment services on the back end
being upgraded and I won't get that immediate response
like I would expect with an HTTP request.
And that's one of the main reasons
people move to a message-based system,
is okay, if I can write the message to the broker,
I've accepted your request, have a good day,
you can check back later.
But it's that check back later that really comes into play
because eventually, even in an event-based system,
you're gonna have to be able to come back in
typically from an API endpoint.
I mean, we're in the API economy,
everybody exposes APIs for everything.
And they're gonna come in with that API request
that's gonna need a response.
And in the early days
of kind of event-driven architecture,
we talked about, oh, we'll just create a cache
at the edge that has all the data in it
and you'll just, you know,
that way you can read from the cache.
And they talked about building this whole
separate eventing model to build up, you know,
read stores that have view state
that you can then query, you know,
using a query at the web API
to check to see if it's there.
And yes, you can still do that,
but databases are really quick.
Traffic loads really haven't gone up for most companies.
Not everyone is web-scale, so to say that,
oh, you need this big massive distributed,
separated read-write architecture.
It sounds great on paper,
but from the complexity of the developer managing it
and this whole eventual consistency model.
Yes, there are still practical use cases
where that makes sense.
But there are also cases where it doesn't make sense,
especially for companies of a size
that just aren't doing Google-scale workloads,
which let's face it, it's 99.9% of us
that aren't doing Google-scale workloads,
even though we think we're cool.
So the request response support was really put in
because if you think about a request
and a response in a message-based system,
let's start at HTTP, request response.
You make a socket connection,
you transfer a piece of data to the server,
the server does something,
and then transfers a piece of data back to the client.
In a message-based system, you write a message to a queue,
another process reads that message from the queue,
does something, writes a message to another queue,
response queue, and then the client
reads from that response queue and accepts that response.
It's the same thing,
it's just you're using queues instead of a socket connection
to transfer data.
So it makes sense,
and when you think about it that way,
that's how requests work in mass transit,
is we create a request client,
each bus has its own response endpoint when you create it,
so if you actually use request response,
it starts up that response endpoint,
and then it will just send messages,
well, it actually publishes them by default,
it's just published, because you have a consumer out there
that is listening for that request
or it's subscribed to that request.
When the request lands on that consumer,
the consumer looks at the response address,
which is in the message headers,
and then sends the response back to that address,
which then gets picked up by the client,
and simulates request response over messaging.
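A minimal sketch of that flow using MassTransit's request client, say from an API controller. The contract and controller names are illustrative, and this assumes the request client has been registered at startup:

```csharp
// Illustrative contracts for a status query over messaging.
public record GetOrderStatus(Guid OrderId);
public record OrderStatus(Guid OrderId, string Status);

public class OrderController : ControllerBase
{
    readonly IRequestClient<GetOrderStatus> _client;

    // Requires x.AddRequestClient<GetOrderStatus>() in the bus registration.
    public OrderController(IRequestClient<GetOrderStatus> client)
        => _client = client;

    [HttpGet("{id}")]
    public async Task<IActionResult> Get(Guid id)
    {
        // Publishes the request with a response address in the headers,
        // then awaits the reply on the bus's own response endpoint.
        Response<OrderStatus> response =
            await _client.GetResponse<OrderStatus>(new GetOrderStatus(id));

        return Ok(response.Message);
    }
}
```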
Why would we wanna do this?
Well, people still do a lot of request response,
we have APIs, they pass in an identifier
and say get order status,
we have to go get the order status,
and we don't wanna keep and maintain a separate read store
that is eventually consistent from the source of truth.
And I see this a lot of cases
when people are using sagas in mass transit,
it's like a saga will kick off an orchestration
of some complex backend operation,
and then they wanna come in and check on it.
So in many cases, you'll have a saga
that subscribes to that request event
so that it correlates to that particular saga instance,
and allows that saga to make its own
deterministic business logic
similar to, like, an aggregate root
in domain driven design,
it'll say oh, you're asking me
what my order status is?
Okay, well I will look at my status and say okay,
I am currently waiting for fulfillment,
and then I will respond to that request
from the saga itself,
back to that request client,
which might be in say an API controller.
And the reason that makes the most sense
is because what's the alternative?
Okay, what is the controller gonna open up a DB context
and then do a query against the database
and ask for some column and make a determination
based on some business logic in the controller
to determine the status of the order?
That to me seems like a bad plan
because you've now coupled your controller
to some business logic happening in a backend system
versus just defining a message contract
of get order status and then order status
being the response type
and managing that through messaging.
I know you mentioned sagas then
and this is something I would like to come back to,
but if you weren't using a mass transit saga,
but you were using consumers to do the concept of a saga,
so as you say you've got an aggregate,
let's use the example of an order
going through different states through a system,
but you wanted to manage that with your own event handlers.
That very first create order command,
whatever you wanna call it,
that triggers that saga.
Is there a benefit to using the request response
for that initial and start the thing going
versus just using a REST call to an API
that puts the next message onto the queue?
That's a really good question
and the classic architect example, it depends.
In systems where, say for instance,
you have a legacy system
and that system doesn't provide
a unique identifier for the order.
Let's say they just submit a list of cart items
and expect to get an order ID back.
This is very common because so many people thought
identity columns were cool in databases
and they would just return your order ID
as the next identity key that came back.
Very bad idea, but there are systems out there that do it.
And those APIs around them were built around that thinking.
So in that case, you might use a request response
to say, okay, I'm gonna do a submit order
and I'm expecting a response of order submitted
or order submission accepted
because I may produce an event called order submitted.
So having a separate response type
for that request response conversation can be useful.
And you can really build it both ways.
The way I do the consumers in MassTransit now
is there's an extension method, IsResponseAccepted,
I think it's called.
And if you just do a fire,
if you just publish the event and say, okay, submit order
and you don't put a response address on it,
you aren't doing it with the request client,
the consumer can check and say, hey,
is someone waiting on a response for this?
And if they are, I'm gonna do X, Y and Z
and then respond to them
versus if they aren't waiting for a response,
maybe I'll do something different.
So I can make decisions based on the context
in which the caller produced the original message.
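As a sketch, that check inside a consumer might look like this. The contract names are made up; the extension method is the one described above:

```csharp
// Illustrative consumer that only responds when the caller used the
// request client, i.e. a response address is present and accepted.
public class SubmitOrderConsumer : IConsumer<SubmitOrder>
{
    public async Task Consume(ConsumeContext<SubmitOrder> context)
    {
        // ... perform the actual order submission work here ...

        if (context.IsResponseAccepted<OrderSubmissionAccepted>())
        {
            // Someone is waiting: send the response back to the
            // response address carried in the message headers.
            await context.RespondAsync(
                new OrderSubmissionAccepted(context.Message.OrderId));
        }
        // Otherwise it was fire-and-forget; no response is sent.
    }
}
```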
Does that make sense?
Yes, yes.
Yes, that sounds very similar.
As you say, it kind of, it feels like both
for that initial request to start off the saga,
either would work using like this request response,
then your initial business logic is within the consumer
or just being a web API endpoint
that posts the next message.
Hey, everyone.
So during editing, I realized that in the conversation
with Chris, we haven't mentioned quite an interesting point
regarding requests and responses
over a message broker versus a standard API call.
And I thought it was important enough to warrant
adding a quick additional comment to include this
before we move on from the topic of requests and responses.
Basically with an API call,
obviously that API needs to accept incoming network connections
and have a port open.
However, when making a request and receiving a response
that is orchestrated via a message broker,
that's not the case.
The caller and the callee
are both making their own outbound connections
to the message broker.
So doing it this way, you get extra security too.
So all of the logic is just a bunch of consumers
in console applications
without having to open any incoming connections
or worrying about ingresses and that kind of thing.
And it's not only more secure,
but it also feels way simpler too.
Anyway, sorry about the interruption
and back to the show.
You mentioned sagas then.
So if you then rather than manually implementing
like a good workflow,
if you use mass transit sagas,
like so I've not actually tried this,
but I read through the documentation
and it sounds like you've got a concept of persistence
where you've got to have something else to store the state.
Is that something else, just your database?
So mass transit is basically using something
like entity framework to write your entities to the database.
Yeah, so let me go into that for a little bit.
So sagas, MassTransit supports two types of sagas.
The one we talk about the most is the state machine sagas
where you can define states and events
and correlate them together
and use them to orchestrate workflows.
Sagas are interesting because they're stateful.
And as much as people say,
they want stateless web services
and all this other stuff, let's face it.
State has to be somewhere or nothing changes.
So sagas are stateful workflows
for lack of a better phrase
that respond to events,
are identified by a unique identifier
and are persisted in some database.
And yeah, it's usually the database
that all the rest of your data is in.
If all of your business data is in Postgres
and you're using entity framework core for that,
mass transit has out of the box support
for entity framework core.
You define a class map for your saga
to persist the properties of that saga
or the saga instance data,
that state portion of the saga
because state machine sagas are split into two parts.
You have behavior, which is the state machine
and then the state,
which is the actual data you're storing.
It's kind of like your bag of data
that you keep whenever an event
correlated to that saga instance is received.
So it will load that state
and then execute that behavior against that state.
So it's like a call stack in the database
so that you're not having a bunch of stuff
waiting around in memory, in call stacks.
But yeah, so that state is stored
in whatever database you're using.
If you're using Redis for your data,
you can store your sagas right there in Redis.
If you're using MongoDB,
if you're using Azure Cosmos,
I mean, there's a bunch of them.
And that's the whole point
is you're storing it in the same place
that you're storing your data.
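A sketch of those two parts, the instance (state, persisted like any other entity) and the state machine (behavior). All names here are illustrative, the property names are from memory, and the EF Core registration in the trailing comment assumes an existing DbContext:

```csharp
// Illustrative saga instance: the "bag of data" that gets persisted.
public class OrderState : SagaStateMachineInstance
{
    public Guid CorrelationId { get; set; }  // required unique identifier
    public string CurrentState { get; set; } // holds the machine's state
    public DateTime? SubmittedAt { get; set; }
}

// Illustrative behavior: the state machine executed against that state.
public class OrderStateMachine : MassTransitStateMachine<OrderState>
{
    public State Submitted { get; private set; }
    public Event<OrderSubmitted> OrderSubmitted { get; private set; }

    public OrderStateMachine()
    {
        InstanceState(x => x.CurrentState);

        // Correlate incoming events to a saga instance by order id.
        Event(() => OrderSubmitted,
            x => x.CorrelateById(m => m.Message.OrderId));

        Initially(
            When(OrderSubmitted)
                .Then(context => context.Saga.SubmittedAt = DateTime.UtcNow)
                .TransitionTo(Submitted));
    }
}

// Registration, storing state in an existing EF Core DbContext, roughly:
// x.AddSagaStateMachine<OrderStateMachine, OrderState>()
//     .EntityFrameworkRepository(r => r.ExistingDbContext<OrderDbContext>());
```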
Gotcha, gotcha.
So it's another case of mass transit
just doing stuff for you that you don't need to do.
I guess when I started reading about sagas,
just initially, I got a bit confused
because it was talking about storing state.
In my head, I was already storing stuff in the database.
So I'm thinking, what is this state I'm storing?
And I was thinking,
is this yet another thing I've got to store?
But then as it clicked to what it actually meant,
it's actually just doing my main database entity writes for me.
I thought, oh, actually, that's quite nice.
So I think for the current use case
I'm currently doing,
we're actually converting over
an older code base to use this.
So it's kind of harder to just let mass transit
take over that workload,
but maybe not,
it's something that's worth looking into.
So that's why I'm kind of almost doing a saga-like flow,
but just using consumers and doing it manually.
But it might turn out that,
because we'll be using entity framework,
what we're doing actually,
we can just let mass transit do it.
Yeah, I mean, that's an interesting migration pattern.
And yeah, admittedly,
most people are working with existing code bases.
But if you previously had work
that was done in like an API controller action method,
or a service method that was loading up a DB context,
loading some data, making some changes, saving some data out,
using consumers as a first step for that makes total sense.
I hadn't even really thought about that.
That's a strong approach.
And what you've come to realize, as you pointed out,
is that sagas are just doing that entity load and save for you.
And it's one root table that has some certain constraints.
It needs to have a certain ID
and has to be able to store the current state.
But it's a DB context.
You can have, I don't recommend going too crazy on this,
but you can have joins and you can bring in other tables.
And you can use the DB context as scoped.
So you can bring in other loads
and manage that unit of work
and do everything as part of that same transaction
that was started by mass transit
when it loaded the saga instance from the DB context.
So yeah, makes a lot of sense.
Cool.
Do you find, just speaking of entity framework
and other dependencies that mass transit has,
do you find that as Microsoft or whoever,
as they create new versions,
is it quite difficult to stay on top
with breaking changes and all this kind of thing?
Funny you should mention that.
I was going to say that the listeners can't see you laughing right now.
Funny you should mention Entity Framework Core 6
versus Entity Framework Core 5.
Yeah.
When Microsoft released .NET 6
and they released Entity Framework Core 6,
I mean, there's a ton of breaking changes
in .NET 6.
And they were very clear about it.
They're like, hey, these are the things we're fixing.
And for the right reasons,
I mean, Entity Framework Core got a lot faster in .NET 6
because they jettisoned some of the garbage.
But you have to deal with that.
And as a framework author, it was like,
okay, so how do we support that?
Because what we found in the bug reports,
if you run an open source project,
I guarantee you the day after Microsoft releases
a major version of .NET,
you will get five bug reports of people saying,
hey, this doesn't work with .NET 6.
It is so frustrating.
And then, of course, you'll get the people,
hey, I just installed, you know,
the first alpha preview of .NET 6.
You know, I'm super cool and awesome
and hey, mass transit doesn't work with it.
And I'm like, hey, good for you.
I was going to say,
with all the really early pre-releases now,
that must be even worse
because I think people are more likely
to just try out the cutting edge
before it's anywhere near ready.
And then, because they want to use it in their projects,
they somehow expect all library authors
to magically have got it to work.
Yeah, remember when I said earlier
that you have to learn when to say no?
Yes.
I didn't even install .NET 6, probably.
Actually, I just installed .NET 6
about a month ago on my main desktop.
I mean, I've been building to .NET 5 up until then.
And the only reason I had to install .NET 6
is because now with mass transit 8,
if you're using .NET 6, it requires EF Core 6.
If you're using .NET 5 or earlier,
it requires EF Core 3 or 5.
So basically, it's just, three and five are compatible.
Six is a complete break.
So as an author, you have to draw a hard line
and say, this is what we're supporting on these frameworks.
And there is no reason to support EF Core 6 on .NET 5
because, hey, guess what?
It doesn't work.
It requires .NET 6.
Do you find that you get people forgetting
that you're doing this in your spare time
and almost being rude about it?
Said any open source author anywhere.
Is there a toxic sense of entitlement
that people who are getting something for free
are owed something?
Exactly, yeah.
I think that's a general sense
in the open source community for sure.
If you pay attention,
you can certainly be aggrieved by the fact
that someone was rude and said,
hey, your product did this
and I need your help right now.
Yeah, again, I just ignore it.
I mean, I run a 24-7 discord channel
and sometimes people will post
similar type assertions that like,
hey, how do I get this and I'm doing this?
And it's like, sometimes they're doing things
that they just don't realize
the framework was not designed to do.
They're trying to fit a square peg
in a round hole and you just have to say,
that's really not a use case
that you know, MassTransit really addresses
and sometimes they'll be insistent
that it should and sometimes just say,
hey, good luck, I hope you get it to work.
I think people have figured out
that if I say good luck in discord,
don't expect like another response for a while.
It's not to be rude.
It's just like, hey, if it works for you, great,
but I don't have the brain cells to burn on that.
That's a hard problem,
but I don't know that I have the right answer.
Yeah, I think a lot of people
need to just remember that,
I was gonna say, especially when people are doing things
for free, but I would say anyone,
just have respect for the people.
And, but especially if someone's spending their own free time,
working on a project that you're getting for free
and a lot of companies will be making money from that,
right, you're not.
I think people just need to remember
to have respect for the people.
So going back to MassTransit,
one thing that is on my list,
maybe this afternoon, I don't know,
it depends on which time I've got left,
is there's a section on testing.
So I'm looking forward to having a look at this.
But how does, from a MassTransit point of view,
how does the testing work there?
I've always put good unit testing support
into MassTransit because I've always believed in testability
and I have to test the framework.
I have to make sure it works.
So I have to write tests against it as well.
And if you look at the testing code base
that's in MassTransit,
I mean, there's thousands of unit tests.
And I've had, if you look at the evolution of them,
I've had to continue to change them as,
a container has become a bigger part
of what people are using.
MassTransit has a built-in test harness.
It has an in-memory transport.
It has in-memory saga repositories.
It has a lot of in-memory message data repositories.
I mean, all of these things
that can support in-memory testing
so that it can be very fast.
And it has a number of fixtures
that allow you to assert that things happen.
You can check to see,
did a consumer publish a message?
Did it consume a message?
Was a message sent?
And so there's a lot of collections
that can be observed.
And because messaging is by its nature asynchronous,
you can't just call a method
and then check a result.
You have to wait,
because some thread in the TPL
is gonna, in the background,
pick up that task.
It's gonna dispatch that message
and then it's gonna,
the consumer's gonna process it.
So you can't just check something
immediately after calling a method
because it isn't instantaneous.
And so MassTransit
has a number of test harnesses
that allow you to set up the in-memory bus,
add your consumers to it,
and then you just start publishing messages.
You say, okay, well, I wanna publish this message.
And then I'm gonna await,
I'm gonna await on harness.Consumed.Any&lt;MessageType&gt;().
And then once that message type has been consumed,
then I'm gonna maybe go and check if something happened
or check if a message was published by that consumer
that is gonna be used by some external system.
And so that's been really helpful to kinda test things.
Now that you're testing sagas and state machines
and repositories and, you know,
things that are injected through dependency injection,
I've had to kinda evolve that test harness strategy
to add support for containers
and add support for, you know, multiple consumers
and make it more consistent
with how people are configuring applications.
And so it's evolved,
but, you know, the testing harnesses are great.
They let you check your ins and outs.
You know, it's up to you to check, you know,
kind of the side effects of whatever your consumers are doing,
whether they're writing to a database
or, you know, however they produce that.
But, yeah, I think the testing support
of mass transit is really good.
And I've done a couple of videos on it
and I've definitely evolved it.
It's improving even better in V8.
It's gonna have some easier test harness structures
that are gonna be more akin to,
you're always testing with a service collection
in your unit test and you're setting it up
just like you would any other way.
So that like your dependencies work and things like that.
Because that's more and more what the requests have been lately
is how do I test this since I have this, you know,
dependency that has to be injected.
So I've focused a lot of the changes in V8, additions,
because the old stuff still compiles,
around making it easier to test container-based
bus instances and consumers and sagas and stuff.
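A rough sketch of that container-based, V8-style test harness usage. The consumer and message names are assumptions, and the required usings (MassTransit, MassTransit.Testing, Microsoft.Extensions.DependencyInjection, NUnit) are omitted for brevity:

```csharp
// Illustrative V8-style test harness, built on the service collection
// just like a real application would be configured.
await using var provider = new ServiceCollection()
    .AddMassTransitTestHarness(x =>
    {
        x.AddConsumer<SubmitOrderConsumer>();
    })
    .BuildServiceProvider(true);

var harness = provider.GetRequiredService<ITestHarness>();
await harness.Start();

await harness.Bus.Publish(new SubmitOrder(Guid.NewGuid()));

// Messaging is asynchronous, so await consumption before asserting.
Assert.That(await harness.Consumed.Any<SubmitOrder>(), Is.True);
Assert.That(await harness.Published.Any<OrderSubmitted>(), Is.True);
```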
Yeah, very nice.
Yeah, I think a combination of using your test harness
and what we were talking about before,
you mentioned side effects, but what we're talking about,
Docker before, spinning things up in Docker,
a combination of using your test harness
to do all the stuff and verify that things have happened.
But then afterwards,
also just directly querying the database
has this side effect actually happened.
And that's kind of like a pattern I quite often use
where, to avoid testing any implementation detail,
as we said, spinning things up in Docker,
the assert side is just
the test doing a direct query to the database, saying,
has it actually changed,
has what I expect to happen
actually happened in the database?
And then combining that with your test harness,
that sounds like a really nice pattern.
Yeah, it's a good way to get there.
And I mean, you just reminded me,
one of the things that I'll test
is like the order example before,
is if I have an order saga,
I'll do the submit order message
in the top of the test harness.
And then after that's happened,
I'll actually use the get status
request response client to say,
okay, well, I'm gonna check and see
if that order is in the correct state.
Did that order get accepted?
Is it in this processing state?
Rather than going to the database
and checking for an entity to be there.
But you can also just query the saga repository
and check for a state as well.
But if I have that request response support in the saga,
I will usually check that with additional tests
that verify, okay, well, I've added an order,
now can I check the status of that order
to verify that part of the behavior?
Gotcha, gotcha.
Does that make sense where the saga is responding
to that request to check its own status?
Yes, yeah.
Initially, where I was thinking about the request
and response, I was thinking about it
as being doing the entry point into the saga,
so that initial first message to kick off the saga.
But from what you're saying,
it sounds like you can query the saga
at any point to find out the status.
Yeah, so if you're, so the saga,
in the case of creating a state machine saga,
you would have an event in your saga
and now you can specify that it's read only
so that it doesn't lock the database,
it doesn't try to do a write of no change
because that can create concurrency issues.
But you would have an event in the saga
such as like a get status event.
And when the request client produces get status
for a particular ID, such as like they hit an API endpoint,
it says, okay, well, this is my order ID,
I wanna get the status of it.
Instead of in the controller going to the database
and trying to find that order,
just send that request with that order ID in it,
the saga would get that event,
it would load that instance of the state
from the database,
it would execute that event behavior,
which would say, oh, okay,
well, the instance is currently in this state,
I'm gonna respond with an order status message,
plugging that state in in the timestamp
and whatever last updated,
whatever model you wanna put in there.
And then that saga event is processed,
it's done and the request client gets that back.
And that's actually the place I use it the most
because I wanna go and ask,
typically that's the use case,
I submit something, I wanna check the state,
I wanna, you know, it's my order accepted,
you know, it's my order been shipped yet.
And that's the most common one,
it's like, has it shipped, has it shipped, has it shipped,
I need it now, I need it now.
But yeah, that's a super common use case
that I find is using the requests
to actually query the state of a saga
because you would do the same thing
with like a domain aggregate.
You wouldn't go out to the database
and query the aggregates table to find out its state,
you would in effect call a method on an aggregate root
to say, okay, what is your order state?
You know, has this order been accepted?
And those are questions
that you would ask an aggregate root,
you would do the same thing
with request response to a saga.
And that way all the locking and concurrency
and methods for loading the data from the database
are the same, it's the saga repository.
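A sketch of such a read-only status event inside a state machine. Names are illustrative, and the exact read-only flag is from memory, so treat it as an assumption:

```csharp
// Illustrative fragment from inside a MassTransitStateMachine<OrderState>
// constructor: answer a status request from the saga's own state.
Event(() => GetOrderStatus, x =>
{
    x.CorrelateById(m => m.Message.OrderId);
    // Marks the event as not modifying the instance, so the repository
    // can skip the write (and the concurrency issues that come with it).
    x.ReadOnly = true;
});

DuringAny(
    When(GetOrderStatus)
        .RespondAsync(context => context.Init<OrderStatus>(new
        {
            OrderId = context.Saga.CorrelationId,
            Status = context.Saga.CurrentState
        })));
```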
Yes, yeah, that makes total sense.
And it makes much more sense,
the whole request response thing
going via eventing now,
now I think about it that way.
So going back to your Google Docs
where you were rapidly typing notes
before we started this call,
I've been quite looking forward to your entry
about Docker and Kubernetes
and how this fits into the overall strategy.
So I'm actually a really big fan of Kubernetes
and I do kind of pretty much use it everywhere.
And I hadn't thought about
that MassTransit would act any differently in Kubernetes.
So that's why it piqued my interest
when you put that entry in the Google Docs.
Would you use MassTransit differently
in a Kubernetes environment?
I don't think it's a matter of differently.
I'm trying to think of a good example
that would fly here.
Kubernetes is definitely enabling,
it takes Docker and puts it in production
and it does it in a way that gives developers
a lot more control on how their applications
are scaled and how they're managed.
And if anything, it's made it nicer
to be able to deploy services that scale horizontally
and deploy multiple instances of container.
It does introduce some questions
cause there's always the question of,
okay, well, if I have one instance of the service,
what happens if I have five instances?
And by default, MassTransit load balances.
So if you start up five instances of the same service,
it's going to have five instances
reading from the same queue
and you're gonna get load balancing
round robin across that queue.
So you immediately get faster consumption across that.
You don't have to set up load balancers and routers.
You're just doing it by creating more instances.
The thing that that brings up for some people though
is when they produce events
that are meant to be received
by every instance of the service,
such as like an invalidate cache event
or something like that.
And that's where they need to specify
like in the configuration, okay, well,
for this consumer,
I want it to have an instance ID of a GUID or something
so that every time it starts up,
every instance of the service
gets its own queue endpoint for that.
That is temporary, maybe it goes away
cause you know, it's just for receiving events
to like invalidate a cache or something.
But you know, there's just some things like that.
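That per-instance endpoint setup looks roughly like this (a sketch against the MassTransit registration API; the consumer name and transport choice are hypothetical):

```csharp
services.AddMassTransit(x =>
{
    // Each running copy of the service gets its own temporary queue for this
    // consumer, so a published cache-invalidation event reaches every
    // instance instead of being load-balanced across one shared queue.
    x.AddConsumer<InvalidateCacheConsumer>()
        .Endpoint(e =>
        {
            e.InstanceId = Guid.NewGuid().ToString("N"); // unique per startup
            e.Temporary = true; // queue is removed when the instance stops
        });

    x.UsingRabbitMq((context, cfg) => cfg.ConfigureEndpoints(context));
});
```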
If anything, it's just, it's a nice natural fit
to create scalability and messaging
cause you can just spin up more instances.
And we see this a lot with like the Kafka consumption
is you'll have, you'll have the same consumer group
because you wanna read, you know,
the set of messages once within a consumer group
but you need to increase your ability to read faster.
And when you spin up more instances,
as long as you've partitioned your Kafka topics correctly,
you know, if you have eight service instances running,
Kafka is gonna manage assigning those partitions
to different instances of the service for you.
So it manages that load
and you're able to consume more quickly
across different partitions.
So it scales really nicely.
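In the Kafka rider, that shared consumer group is just part of the topic endpoint configuration. A sketch (topic, group, and consumer names are made up for illustration):

```csharp
services.AddMassTransit(x =>
{
    x.UsingInMemory((context, cfg) => cfg.ConfigureEndpoints(context));

    x.AddRider(rider =>
    {
        rider.AddConsumer<OrderEventConsumer>();

        rider.UsingKafka((context, k) =>
        {
            k.Host("localhost:9092");

            // Every service instance uses the same group ID ("order-service"),
            // so Kafka spreads the topic's partitions across however many
            // instances are currently running.
            k.TopicEndpoint<OrderEvent>("order-events", "order-service", e =>
            {
                e.ConfigureConsumer<OrderEventConsumer>(context);
            });
        });
    });
});
```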
It's just been a super enabling technology.
And like you, like we've mentioned early on the show,
being able to just spin up Docker containers
and be able to spin up like a persistent service
right next to my services for, you know,
kind of like what we used to call an operational data store
is just, you know, I need a quick store
and it's easy to just spin up Postgres
in that same environment to have like a data store
for like a saga instance or anything like that.
So it just gives you so many options and flexibility
and you can test the exact same thing locally
on your local machine, which is awesome.
Yes, yeah, yeah, definitely.
That's a really interesting point
about the example you gave with invalidating caches.
So for messaging, how I'm currently using it,
I use it like the standard topics and subscriptions
where some business event is published to a topic
and then each area of the business or the code base
that's interested in that event
creates its own subscription
and that area of the business can horizontally scale
and like the Azure service bus just manages
making sure that only one of them gets it.
But as you say, it's kind of
if you actually want all of them to get it,
that's quite an interesting problem, I guess.
Yeah, I mean, some of it has to do
with the conventions of MassTransit.
So MassTransit looks at message type names,
it looks at consumer names,
and it generates a queue name
based on the consumer name by default.
And some people override it,
some people want to make it unique
and there are a lot of configuration options for that.
It's just an awareness of
what do I have to do when I want to do this?
Because like I said, mass transit does
competing consumer load balancing by default.
So the trick is, it's like,
oh, we have this common framework
and we have this event consumer.
And six different services use the same event consumer.
Okay, well, when you register it,
you need to give it a distinct name
because otherwise it's gonna have the same name
and all of those services,
even though they each have a different service name,
are running the same event consumer
from their common enterprise framework.
And it needs a unique name,
otherwise it's just gonna load balance
with all the other instances.
Yeah, in my startup code,
where in the Azure service bus specific bit,
where it does that configure subscription
and it specifies the name of the subscription to create,
I've ended up just doing like name of my consumer.
And then that seems to work quite nicely.
So then I've got my topic,
which is the business name,
which as you said,
kind of just automatically uses the message POCO class name.
And then the subscription is the name of the consumer.
And I just find that works really well.
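Dan's setup sounds roughly like this (a sketch; the message and consumer names are hypothetical):

```csharp
services.AddMassTransit(x =>
{
    x.AddConsumer<OrderSubmittedConsumer>();

    x.UsingAzureServiceBus((context, cfg) =>
    {
        cfg.Host("connection-string");

        // The topic name comes from the message type by convention; each
        // interested area of the business gets its own subscription on that
        // topic, named after its consumer.
        cfg.SubscriptionEndpoint<OrderSubmitted>(
            nameof(OrderSubmittedConsumer),
            e => e.ConfigureConsumer<OrderSubmittedConsumer>(context));
    });
});
```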
Yeah, good.
That's the plan.
Nice, nice.
So another thing you mentioned in the list
is about service meshes.
And so are we talking about like Kubernetes, Istio,
and Linkerd kind of service meshes?
Yeah, I mean, this kind of came up,
I mean, five years ago or maybe even four,
I was like, oh man,
we got to build a service mesh for our next service.
And now I'm just like, why would you want that?
You know, one of the cool things
about both Docker and Kubernetes
is it's easy to just add a service by DNS name
and reference it by DNS name.
So discovery is sort of handled by that.
But everybody got excited about East-West communication
and Istio and doing all this stuff.
And it's very much focused around RPC.
I mean, it's really designed
to kind of handle that interaction.
And so obviously the question is,
well, like how do we have an event mesh?
And I mean, I guess I never really thought
kind of in depth about what that would be.
But when you have published and subscribed
and you've decoupled your producers from your consumers,
that discovery is already handled.
So I don't really need to think about it.
So, you know, I originally had thought
about how I would apply service mesh patterns
to mass transit.
And I never came up with a good way
that was consumable by developers
because all of it was highly complex.
And if anybody's dug into Istio,
hey, guess what?
It's highly complex.
So Linkerd isn't necessarily any better.

Yeah, exactly.
But I think those do lots of different things.
So one thing I would like out of the box
in Kubernetes, which these service meshes add,
is mutual TLS.
But I kind of just want that,
but without all the other good things
that comes along with it.
So the event mesh kind of thing,
is that almost like the actor pattern?
You know, I don't necessarily know.
It's funny you mentioned the actor pattern, though.
I used to have an actor framework that I wrote
when I was all into actor model
and I thought it was going to be awesome.
And then I realized it's so niche and complex
that it's another thing that only three developers
are going to ever know how to do.
But, you know, what I what it came down to
is every time I built an actor model program,
I wanted persistence.
It's like, okay, well, I need my actors to be persistent
because I'm not building a video game.
I need to be able to store this state
out to a disk somewhere.
And hey, guess what?
Saga state machines are basically like persistent actors.
So at that point, I killed my actor framework
and said I'm done with it.
I'll let Akka.NET deal with that from now on.
But yeah, no, no, so an event mesh is,
it's a similar question is like,
when you think about an Istio and stuff,
it allows you to do like routing and like you said,
TLS and translation to some respect, you know?
And I feel like event mesh is trying to be more like
the next enterprise service bus type thing of like, okay,
well, I produced an event in Kafka,
but I want to consume it over here
and Azure service bus and like being that bridge.
And, you know, with mass transit
and the fact that you're writing to an abstraction
and can consume messages from a variety
of different transports,
it could be considered that it would allow you
to have multiple event sources with the same behavior
because you could just create a bus on Azure service bus,
you could create a rider to event hub,
you know, you could use the same consuming logic for that.
So that kind of goes back to the nomenclature of,
is it a JMS for .NET?
And yeah, anything that is a block of JSON,
or a block of something that can be converted to a type
and dispatched to a thing,
yeah, MassTransit can do that.
I mean, in Azure functions,
which is kind of a, you know, the message dispatch
and queuing and everything, it isn't a transport,
but Azure functions, when you do Azure service bus,
it's doing all the underlying work of locking the message
and doing all that for you.
But you can use mass transit with it
because you can just literally say,
okay, well, this is the message I got.
It's gonna be of this, it's from this topic,
dispatch it to a consumer.
And MassTransit just runs the consumption pipeline
and ignores all the transport code of MassTransit
because, you know, it's decoupled,
it's used across the board.
And you can apply that to anything, really.
I mean, I have this long standing backlog item of,
you know what, let's just go full circle
and make it so MassTransit can sit on top of HTTP
and your topics are URLs
and your message is just JSON that you're passing in.
And one of these days, that might happen,
but, you know, cause why wouldn't it?
I mean, it's a block of JSON
that you're converting to a type.
You can dispatch it to a consumer.
Why would I not do that?
That reminds me, I saw in the list of transports,
you've got gRPC in there,
is that quite a new addition?
Yeah, so gRPC support in .NET
has gotten so much better.
And, you know, the in-memory transport
that MassTransit has
is within the same process, same bus instance.
And I've had requests from people, it's like,
well, how could I have a brokerless transport
where I'm able to use my consumers
to just like connect a backend system together.
And so I got a wild hair
and started playing around with gRPC one day.
And I thought, you know, I can make a transport
that just goes over the GRPC bi-directional stream
and just uses it as a really smart socket.
And yeah, it works.
It's a great way to connect.
It's super fast,
cause I mean, essentially it's in-memory, non-durable.
So, but I can publish, I can subscribe,
I can connect consumers.
And I was surprised it worked.
That was one of those weekend projects
that like Monday morning, I'm doing a video on YouTube.
I'm like, I cannot believe this works,
but it actually did, check this out.
And it's only been used by a couple of people.
And I mean, yes, it's released,
but is it something that I would put into production today?
I don't know, I'd be cautious
cause apparently there's some reconnect issues with it
in the case of server failures.
But yeah, it's pretty cool.
But it isn't like it lets you take a generic gRPC request
coming in from like a gRPC server
and then dispatch it to MassTransit.
But again, if you have a type,
you can run MassTransit,
because MassTransit has a receive dispatcher
that you can just register in the container,
and it'll just let you resolve it
and send the type straight through
to the MassTransit pipeline now.
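Configuring the gRPC transport looks roughly like this (a sketch based on the MassTransit gRPC transport; the consumer name and host addresses are examples):

```csharp
services.AddMassTransit(x =>
{
    x.AddConsumer<PingConsumer>();

    // The gRPC transport hosts a lightweight in-process broker that
    // communicates with peers over a bidirectional gRPC stream; no external
    // message broker is required.
    x.UsingGrpc((context, cfg) =>
    {
        cfg.Host(h =>
        {
            h.Host = "127.0.0.1";
            h.Port = 19796;

            // Optionally link other nodes to form a brokerless mesh:
            // h.AddServer(new Uri("http://127.0.0.1:19797"));
        });

        cfg.ConfigureEndpoints(context);
    });
});
```

Because it is effectively in-memory and non-durable, it suits connecting backend services rather than workloads that need durable queues.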
That's fascinating, all the different ways of using this.
So, another item you put in your list,
which I'm kind of scared of bringing this up
because if we get talking about this,
we could be here all day.
Well, I'm just gonna like read out a substring of it.
Rider versus Visual Studio.
The longer sentence you put was
Mac versus Windows, Rider versus Visual Studio.
Why did I switch to a Mac for .NET over a decade ago?
But my eyes just got drawn to the Rider bit
cause I'm a huge Rider fan.
Oh, well good, that's nice to be in good company.
I love Rider.
I mean, I've been on a Mac for years
and when Rider came out
and I realized I didn't have to run a VM
to test Windows stuff anymore,
I was pretty excited about that.
I'd used Mono from the early days,
and I mean, it's good up to a point,
but inevitably you would always have to go back to Windows.
I have not developed .NET on Windows since 2016.
I mean, I would do an occasional test
of different versions of MassTransit
on .NET framework back then.
I have, since MassTransit completely jettisoned
.NET Framework and went full .NET Standard 2.0,
I have not used Visual Studio,
I have not done anything on Windows
and that's probably several years now.
I mean, I use Rider
because I've always been a ReSharper fan,
even though it really bogs down Visual Studio,
and taking that experience
and having it native on a Mac has just been pretty solid.
And like you said, it's a powerful tool.
It's like having ReSharper but fast
because it's built on top of IntelliJ
and it runs on a Mac and it's a native experience.
And since .NET Standard, and .NET 5 and .NET 6 run natively,
and now .NET 6 even runs natively on ARM64,
the new Macs, which I don't have one
because they're like back ordered to infinity
and they don't have a desktop yet,
but I'm okay, I have a Mac Pro, I'm fine,
I have more RAM than I ever need.
So it's awesome,
especially when you're running
bunches of Docker containers.
But yeah, I struggle with people
who fight Visual Studio constantly
cause I'm just like, there's a better way.
And even on Windows, Rider is good.

What do you mean even on Windows?
I've never used it on Windows.
Actually, I used it on Windows back
when it first came out
because again, I was still supporting .NET Framework
at the time, so I would test it.
And I was surprised, it was pretty much a clean transition.
The muscle memory of switching from command keys
to control keys was always a tricky one though.
Yeah, I don't use it anymore
but I did have a MacBook Pro
but I had Windows installed on it.
And the bit that got me was the FN key
was in the corner where the control key would be.
So I would always hit that.
I've used Windows for many, many years.
But like you, I use Resharper for quite a long time.
And then when Rider hit the scene, I'd move to that.
And I am a huge keyboard shortcut nut.
So it's kind of like just basically getting the IDE
to do everything for you
and using all the ReSharper functionality.
And as you say, ReSharper does bog down Visual Studio.
And in Rider, it's just super, super fast.
More recently, I've been doing bits and bobs in Visual Studio
as I've been playing with .NET MAUI,
because it's obviously a bit more up to date,
because .NET MAUI's in preview.
And to be honest, Visual Studio's not that bad
but I do have ReSharper installed,
even if it does slow it down, I don't care.
I've got to have ReSharper installed.
But yeah, it's very cool.
I have Visual Studio Code installed
because it supports live sharing.
That's one thing Rider doesn't support yet is live sharing.
And I found when I'm doing code reviews
like people will contact me
and even people with my own company
that are using MassTransit.
I'll do live sharing with them
because it just makes code reviews
and like doing walkthroughs a lot cleaner.
It's just easier to navigate the code
and see it versus trying to do it over a Zoom session
where you're getting like blurry video and everything.
But yeah, Rider is fantastic.
I can't imagine not having it.
I don't know where it's up to now.
I don't think it's quite ready,
but JetBrains have got Code With Me,
which is kind of a similar thing.
And I just find it such a shame
there's not some standard protocol
which both live share and code with me can talk across.
You've got a team.
Each developer should have the choice
of using the IDE they want to use.
And it feels like it's either
all VS Code, all Visual Studio
or all JetBrains.
And it's a bit of a shame, really,
because if we had some kind of collaboration tool,
especially now a lot of people working remotely
that could just work across these IDEs,
that would be so, so nice.
I find you also lose context with these tools
because quite often if I'm pairing with someone
and I want to switch over to the browser
because we're Googling something
and I want the other person to see.
I quite often use tools to annotate the screen,
to point at a certain bit or to zoom in,
and all of that goes away.
So I kind of lose a lot of context as well.
No, I agree with you though on the,
when you have people collaborating
and even if I'm doing a live share,
we'll typically have a zoom
where one person is sharing
just so you have that flow and that context.
So it's a hybrid kind of thing.
And it's, I guess, having a big monitor helps
because you're able to break it out
or multiple monitors
if you're a multiple monitor person.
Oh, definitely.
I've got three monitors.
I've quite fancied one of those
big ultrawide, stretched, massive ones.
But then it's kind of,
you've got to use something to,
like, you know, if you're screen sharing,
normally I just share my whole screen,
but then you'd have to use something like FancyZones
and only show a particular window or something.
I don't know.
It's all good.
Looking at when mass transit started,
it sounds like it's pretty much 15 years old now.
Are you looking forward to having
a 15-year celebration?
I think it would have to be at the end of the year,
because the start of it was, I think,
in October 2007.
So I think the 15-year celebration
would have to be then, and we didn't
actually release a version of it
until early 2008.
So yeah, we'd have to
do it a bit later in the year.
But we should have
a 15-year celebration.
That's really impressive.
So, shall we do dev tips?
Oh, that'll be a dev tip.
I mean, Rider is my dev tip.
Ah, that's a good dev tip.
It's the best dev tip.
So my dev tip is
a tool called Obsidian,
which I've very recently been using quite heavily.
I've mentioned before, I'm a big fan
of Notion, and I still am.
But I never really felt comfortable
keeping work notes
in Notion's cloud.
I've always used Notion
for more personal notes
and code snippets.
But for local work notes,
I've started using Obsidian.
And I've been really impressed.
It's basically, fundamentally,
a markdown editor,
but it's designed for note-taking.
And it works really well
as a wiki, with links
between different notes and things.
And there are lots of really,
really powerful note-taking
features built in.
It's quite difficult to explain
on an audio podcast.
But there are lots of videos,
and there's a big community
around it.
And I'll include the link
in the show notes.
And in the episode where
I talked about productivity,
I talked a lot
about the power of note-taking.
So if you haven't seen that,
definitely go and
check it out.
But yeah, Obsidian,
I'll include the link
in the show notes.
I've got one for you.
I'm on a Mac,
so this is going to be very Mac-specific.
There are two tools
that are the first things
I install on a new Mac.
One is called
Karabiner-Elements.
And the other is called
Hammerspoon.
We mentioned earlier
in the show
IDE control and keyboard control.
And these are the tools
that give you
that same control
over your Mac.
Karabiner lets you
remap the caps lock key
to a universal hyper key,
because seriously,
who types in all caps?
You should be ashamed
if you type in all caps.
So, I mean,
caps lock becomes a key
for getting to
any application on your system.
That's powered by Hammerspoon,
and I'll be sure to
put links in the show notes
for this episode.
But the ability
to snap windows
to defined regions
of the screen,
across monitors,
whether it's on your
laptop screen
or your big monitor.
The ability to
move windows between screens
at preset sizes,
and to jump between windows,
and to activate
an application,
because if the application
isn't running,
it'll just start it
and put it right
where you want it.
So, yeah,
two of my favorites
for the Mac
are the combination of
Karabiner,
for what I call
a hyper key,
and Hammerspoon,
which lets you
write scripts
to control
your window positioning
on screen.
The keyboard shortcuts
just become muscle memory.
It shaves off seconds every time,
because it automatically focuses
the window you want,
instead of you hunting around
for the right one.
That's nice, mine's similar.
The Escape key is quite far away, and you press it all the time when you're using Vim.
So having it right there on caps lock is perfect.
Yeah, yeah.
But when I switch computers and haven't remapped it yet,
I'm always hitting caps lock instead of Escape.
I have to remap caps lock globally across my computers.
But on the Mac, I use it as the hyper key,
which maps to control-option-command-shift.
And that means when I hold caps lock and hit a key,
it's like, oh, I'm in the browser; oh, I'm in Zoom;
oh, I'm in chat; oh, I'm in messages.
And it's a single key, and it's so much muscle memory that if I don't have it on a Mac,
I'm completely lost.
I have to install a whole bunch of tools on a new computer
before I even know how to use the thing.
That's brilliant.
So before we wrap up, where's the best place for listeners
to reach out if they have any questions?
The best place to get in touch with me is, I mean,
obviously GitHub Discussions if you have a bigger question,
and there's the Discord channel. I'll make sure we have links for those.
I'm active on Stack Overflow. Those are the general places I frequent.
I answer comments on YouTube.
But yeah, the Discord is the best place.
Email isn't great; I don't like unsolicited emails,
because people tend to ask for things like commercial support
without having a commercial agreement.
But yeah, I'll give you my contact details.
Brilliant. We can include all of those in the show notes.
Cool. Thanks so much for joining me.
It's been a great chat, not just about MassTransit,
but everything else too. It's been a really good conversation.
Yeah, it's been a lot of fun. I'm glad to have been on the show.
Cool. Thanks everyone for listening.
This podcast is sponsored by Everstack,
which is my own company,
providing software development and consultancy services.
For more information, visit Everstack.com.
And if you enjoyed the podcast,
please do spread the word on social media.
I use the hashtag UnhandledException,
and I can be found on Twitter at @dracan.
And my DMs are open.
And my blog, danclarke.com, has links to all my social stuff.
And we'll include links to all the things,
and it's a long list,
that we've mentioned in the show notes
for today's episode.
And those can be found at unhandledexceptionpodcast.com.
