gRPC - with Poornima Nayar

Duration: 51m51s

Release date: 29/10/2022

In this episode, I was joined by Poornima Nayar to chat all about gRPC! gRPC is Google's implementation of RPC. Since .NET Core 3.0, gRPC has first-class support in .NET and seems to be the way forward for remote procedure calls. We chatted about what gRPC is, how to use it, what use cases you'd want to use it for, and much much more! Poornima is a .NET developer with over 10 years of experience in .NET and Umbraco. She is passionate about learning new technologies and keeping herself up-to-dat...

Hey everyone, welcome to the Unhandled Exception podcast.
I'm Dan Clarke and this is episode number 44.
So first of all, apologies if my voice sounds a little bit croaky.
My kids have brought something home from school and this week we've all been a little bit rough.
But don't worry, I'll make sure I edit out any coughing or anything.
But of course the show must go on, and today I'm joined by Poornima Nayar to chat all about gRPC.
So welcome to the show, thank you for joining us.
Hi Dan, thank you for having me as your speaker on the show today.
It is a privilege and I'm excited to talk about gRPC.
But for those who don't know me, my name is Poornima Nayar.
I am a freelance .NET developer and currently I'm doing a lot of consulting work,
and I'm also the acting CTO for an agency in Sweden called WebMindSC.
I'm a Microsoft MVP for developer technologies and I'm also an Umbraco MVP.
For those who don't know Umbraco,
Umbraco is a .NET-based open source CMS.
A lot of my day-to-day job involves building web applications and websites with Umbraco.
And I do a lot of community activity as well, both around .NET as well as Umbraco, as you can clearly see.
I am also one of the members of the advisory board for Umbraco Heartcore,
which is the headless Umbraco CMS.
So exciting times there. And non-work me:
I'm a mother.
I have an eight-year-old daughter.
She's turning eight actually next month.
I do a lot of reading - non-fiction, fiction, whatever.
I just love reading, and I'm also a student of Carnatic music, vocals. Singing is my unwinding thing.
If a cup of coffee or sleep can't fix it, the last resort is music.
If my singing cannot fix something, then that problem is beyond me.
So that is me in a nutshell.
That is a lot of stuff.
It's interesting - most guests I have on have some kind of link with music.
A lot of people have guitars in the background.
I've got one there.
But yeah, don't worry.
I won't try to sing, because that will get rid of all the listeners.
So before we dig into gRPC,
I'm going to quickly do this episode's listener mention.
And this one goes to Carl Sargunar, and he tweeted:
what a fantastic intro to Docker in this episode of the Unhandled exception podcast.
Even though I'm not a total Docker newbie, I picked up some awesome tricks.
So thank you for your tweet, Carl.
And if you want to be mentioned on the show, just send a tweet using the hashtag #UnhandledException.
All feedback is greatly appreciated.
And I'm @dracan on Twitter, which is D-R-A-C-A-N.
And don't forget that this podcast now has its own Slack channel.
And if you head over to the website unhandledexceptionpodcast.com,
you'll see a big shiny Slack link there.
And a quick reminder that this podcast is sponsored by Everstack,
which is my own company providing software development and consultation services
for more information, visit Everstack.com.
So I've not used gRPC that heavily,
but coincidentally enough, I just switched a service over from REST to gRPC for a current client.
And I am most certainly sold.
And I think we're going to start using it a lot for the other services moving forward.
But before I jump the gun and say why I like it,
can we start off talking about what GRPC actually is?
And also, I guess what does it stand for, which is always the controversial one?
So before we jump into gRPC, I have my two cents to add about Carl Sargunar.
Carl is a friend of mine from the Umbraco community
and he's a very big fan of Docker as well.
He has been running workshops and he's been doing talks about Docker.
So it's really nice to see Carl mentioned in this podcast here.
Three cheers for you, Carl.
And thank you, Dan, as well. It's fantastic.
So gRPC is Google's implementation of RPC,
RPC being the abbreviation for remote procedure call.
So remote procedure call is when a computer program calls a function, basically.
But that function execution happens elsewhere.
It jumps the network and executes elsewhere.
So I as a programmer call a function as though I'm invoking it locally.
But in reality, it's jumping across the network and getting executed elsewhere.
That is RPC in a nutshell for you.
And the keyword here is procedure.
Whenever I talk about RPC or GRPC, it's very procedure oriented, action oriented.
Like do this, do that, get me this, get me that.
So if you are someone familiar with the Richardson Maturity Model, the pyramid,
I think it comes in at level 0 or level 1.
We are not thinking about resources yet.
We are doing POST methods to get something done for us.
That is RPC.
Now GRPC is Google's implementation of RPC.
RPC is not something new out there.
It's been out for a while.
I think if you go back to 1970s or 1980s, you can trace back RPC implementations then.
But we all might be familiar with what we had with WCF in the .NET Framework.
So that is, I think, like a remote procedure call framework in itself.
But from .NET Core 3.0 onwards, we have support for gRPC in .NET.
So I think, from what I understand, gRPC is a project managed by the Cloud Native Computing Foundation.
And Microsoft has written a library of some sort to support gRPC in .NET.
So I think with .NET Core, .NET 5, .NET 6, 7 and beyond,
gRPC seems to be the favoured framework for remote procedure calls.
WCF remains .NET Framework only.
And I think there's minimal support for WCF in .NET Core, .NET 6 and onwards.
There's also CoreWCF, which is a community project.
But I think if you want official support for remote procedure calls in .NET these days,
gRPC seems to be the favoured framework going ahead.
So starting from .NET Core 3.0 onwards, we have gRPC in .NET.
So that is like a brief history of gRPC, if you ask me.
Yeah, I for one won't be sad to see the back of WCF, that's for sure.
Oh, no, me neither.
I have done, like, this much WCF in production after learning about WCF.
I'm pretty sure I used WCF last year on some very old services, where I was like, oh no, oh God.
And when I started picking up on GRPC, I just wanted to see what GRPC is all about.
That is where it all started for me.
I don't use GRPC in production personally, but ever since I learned about GRPC,
it's been intriguing me, it's keeping me interested, so I'm learning more and more about it.
So I found this presentation by Srihar Hatti on the dotnet channel,
where he went through the basics of gRPC.
And the first fleeting thought was: what, this is so similar to WCF -
the configuration, the way you work with it, is very similar to WCF.
So I think there is that developer familiarity with the developer experience
when it comes to gRPC as well.
So I think Mark Rendle has written an e-book on migrating from WCF to gRPC.
I don't think there's a straight migration path from WCF to gRPC,
but there is developer familiarity.
Plus, if you know .NET Core, ASP.NET Core, you can reuse a lot of that in your gRPC code as well.
And one of the first things to talk about with gRPC is, it is designed for HTTP/2 and beyond.
It takes advantage of the nice features of HTTP/2, like request multiplexing,
the streaming capabilities, which makes it perfect for point-to-point communication.
So if you're thinking about browser-based applications,
that's probably not a good choice for gRPC,
or a good way of consuming gRPC services at the moment,
because the browsers' low-level APIs, I think, still don't support HTTP/2.
So if you're thinking about point-to-point communication,
if you have microservice-to-microservice communication,
if you have your REST API talking to a gRPC service,
that is all a perfect candidate for using gRPC,
because gRPC is incredibly fast.
One of the things I really like about gRPC when I've been using it,
exactly as you said, is because it's RPC and it's a method call,
you can be quite explicit about what that action is,
rather than REST, where you're doing CRUD, where it's POST, GET, PUT, DELETE, whatever -
okay, how do I fit what I want to do into REST?
Well, actually, with RPC, just like you said, it's like a local call -
well, it's not like a local call, but it's a method name,
and you can call it, and you can use DTOs to pass
as a request and response to that method,
and it just becomes much easier.
And also, another thing, and we can talk more about
what it looks like to develop with gRPC,
but the fact that it's typed -
I mean, with a REST API,
I've had so many conversations with developers
where they've been wanting to create -
one of them's got a microservice architecture with different REST APIs,
and they've been wanting to create all these typed HTTP clients,
so it's typed, the thing talking to that REST API,
and that adds a whole bunch of extra work to do.
And actually, with gRPC, you just get that out of the box - it just works.
So I really, really like that.
Yeah, absolutely.
So if we take a step back a bit,
and for the listeners that don't even know what gRPC looks like,
if you were to create a GRPC service,
what does that look like?
What do I need to create?
Sure. First of all, there's good tooling from Microsoft
for gRPC because we have official support.
We have a starting point in the form of a template,
so you can create a gRPC service using your Visual Studio,
and what you would find inside that is obviously a NuGet package
which adds the support, but gRPC is essentially a middleware.
And gRPC favours contract-based API development.
So when you think about the contract,
it is actually defined in something called a proto file.
So it's a file with an extension dot proto.
That is your contract definition.
So you have your service definition in there,
and your procedures or the RPCs listed in there.
So typically, one proto file would have one service definition,
and it would also have the various RPCs in that service definition.
Now, this is one part of the proto file.
If you think about procedures or functions,
it accepts something and it returns something.
That's how functions work.
With GRPC, that's all in the form of protocol buffers.
So protocol buffers is Google's open source mechanism to serialize data.
And for a very long time, I thought protocol buffers meant
the inputs and outputs for an RPC method.
I was proven wrong because the interface definition language,
what you see in proto file itself, is a protocol buffer,
and the incoming request messages and the response messages,
they are also protocol buffers.
And there is a representation of the incoming request
and the outgoing response in the protocol buffers.
So they are in the form of what you call messages
with each message having a set of key value pairs or records
with each record having a data type, a name, and a number attached to it.
It is actually quite difficult to talk about it
rather than just kind of showing it.
But I'll try my best here.
So each message has a few fields
and each field has a data type, a name, and a number.
And protocol buffers - it's very special and niche to Google -
it is language neutral, it's platform neutral.
It's extensible to the point that it's even got backward and forward compatibility.
So with a little bit of TLC right from the beginning
from the design phase onwards of the GRPC API,
just like any other API, you can have a perfectly versionable
and maintainable service from the word go with GRPC.
That's all thanks to protocol buffers.
Again, the language neutrality and platform neutrality means
I can write a service in C#
and I can go generate a strongly typed client,
say probably in C++
and then have the two apps communicating with each other.
That's all because of the marvel we have, that is protocol buffers.
So think of a proto file: you have your service definition,
RPCs, your input messages, output messages,
all that forms a contract of your API.
So if you have a big project, you might have multiple proto files,
with each proto file having its own service definitions,
RPCs, and the messages.
Now one thing to note with RPC in its basic form,
that is GRPC in its basic form, when it comes to RPCs,
it always needs a single input and a single output message.
So in its native form, even if you don't have any fields
in the input message or the output message, that's fine.
You still need something as an input and output.
There are ways to overcome that, that's possible,
but in a simple form, it needs an input and an output
and it can only have one of it.
I cannot have like a hello method, which accepts a hello request
and something else.
If you have a hello request, it can have multiple fields
and it can have values attached to each of that field
in the message if you get it.
Now the messages are transmitted in binary format.
So that's one thing about protocol buffers,
which keeps the message size small.
The second thing is that - think about a JSON blob.
You have the name-value pairs, so it's the field name
along with the value for that field
when you get something out of JSON, when you serialize it to JSON.
But when it comes to protocol buffers, it's slightly different.
Even though you have the name specified in your message,
the actual value is marked against the field number
that you have on the message.
So that's another reason protocol buffers are so small in size.
The field name is quite irrelevant when it comes to messages.
It's only for us developers to actually make things easier.
The underlying framework handles the serialization and deserialization.
When you encode it in binary and then try to decode it,
it's done against the number that it was encoded with.
So all the transmission happens that way.
So that numbering is very important in protocol buffer messages.
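For reference, a minimal proto contract along the lines described above might look something like this - the service, RPC, and field names are purely illustrative:

```proto
syntax = "proto3";

option csharp_namespace = "GreetDemo";

// One service definition with its RPCs...
service Greeter {
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// ...and the request/response messages.
// Each field has a data type, a name, and a field number;
// it's the number, not the name, that gets encoded on the wire.
message HelloRequest {
  string name = 1;
  int32 age = 2;
}

message HelloReply {
  string message = 1;
}
```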
Yeah, that makes total sense.
I guess because this is an audio podcast,
maybe if we take a step back a bit just so that the listeners can visualize this a bit better.
For a service in a proto file, in my mind,
that looks very much like a C# interface.
It's like a class but obviously without bodies.
The syntax is slightly different,
but just for listeners that have never seen this before,
I'm just trying to help visualize.
So it's kind of an interface, and you've got a few details.
These messages just look like C# classes with properties,
with slightly different syntax.
And as you say, it's got a number.
Is it called an ordinal?
I think it's called that.
Yeah, probably.
I don't know that offhand, but I've always said sequence number.
The sequence number is what I've learned it as.
Sequence number - it is a sequence number.
It needs to be unique per field.
Yeah, and it's very important.
So is that related to versioning?
Yes, it is related to versioning.
So changing it -
removing it and adding something new in place of it -
that can break your service.
So that sounds like an important thing to know
if you're changing existing gRPC code.
Absolutely.
But that's like one of the places it can break.
But as I said, protocol buffers are incredibly smart
when it comes to the forward and backward compatibility.
I'm pretty sure we could touch on it
if we get the time to cover all that today.
But if not, I'm pretty sure I've done a talk
about the subject as well.
I can point the listeners to the talk as well.
Magic.
We can include any links like that in the show notes as well.
So for example, with the versioning,
if you wanted to just remove a field,
say there was a field that was redundant in one of your messages,
would you have to keep that in?
Or would you remove it and just make sure you don't reuse that number?
How does that work?
So removing a field is generally not a breaking change
with protocol buffers.
But it could change the behavioral meaning of things
because what happens when you serialise and deserialise protocol buffers is
there's no concept of nullability for fields in protocol buffers.
That's one of the key things.
If I change something or if I remove something on the client side
and send it to the server,
the server then detects that hey, I've not got value for this field.
So let me just put the default value for that in place.
Say if you have an int field with no value sent
and the field removed, when it gets to the server side,
it assigns the default value, which is zero in this case.
Similarly for bool, it assigns a default, which is false.
So behaviourally, it could affect the application.
Similarly on the server side, if you remove something
and things get to the client side, the client gRPC middleware
gives it a default value again.
So zero and default could mean behaviourally different things
to your client or the server.
False and a default bool could be different as well.
So removing could affect the behaviour of the app.
So that's again something to take care of.
But if you remove something and you are sure
that you don't want it in place,
there are ways to make sure that the field
or the field number doesn't get used again.
There's a keyword called reserved for that.
So I can mark a field name as reserved.
I can mark a single field number as reserved.
I can actually mark a sequence of field numbers as reserved as well.
So that someone who comes later than me
doesn't accidentally use it as well.
So there are, what do you say,
fences in place, or protection in place,
to make sure that things cannot go awry very easily
with protocol buffers, which is one of the reasons
I find it quite interesting as well.
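As a rough illustration of the reserved keyword mentioned above, retiring an old field might look like this in the proto file (the field is made up):

```proto
message HelloRequest {
  reserved 2;        // the removed field's number can't be reused
  reserved "age";    // ...and neither can its name
  string name = 1;
}
```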
Because whenever we think of software,
we think forward as well.
That is how we do software development.
We make sure that it is maintainable in the long run.
But gRPC as a framework provides me with that capability
in the framework itself, so that with a little bit of work
I can actually get the design of the API up and running
and have a fully versionable API from the word go,
when usually versioning is an afterthought, most probably,
and it's one of the subjects that people are most scared of
when it comes to design as well.
You need to really think around it.
But gRPC takes care of that for you.
Yeah, Magic.
You mentioned nullability as well.
So does gRPC not support it at all?
Is there any way we can -
in your example with the int,
zero might mean something, so you might want to say,
okay, it's not even set?
Yeah, I think there's a Google,
what do you call it?
Well-known type for that, I think.
So Google well-known types are like different types
which Google have written again,
which come as proto files.
Just like you would import something into your project
using a using statement to bring in another namespace or something,
you can bring in external ProtoFiles into your project as well.
So there are Google Well-known types.
I think there's a Google well-known type for null.
I know that there's a Google well-known type
for datetime offset, datetime.
Similarly, when I spoke about not sending values in
or getting values out, there's an empty data type for that.
There's a lot of things which Google supports out of the box.
I think I have seen a nullable thing somewhere.
I'll be surprised if my memory proves me wrong.
It does sound like something
because we're used to using null in .NET.
Yeah.
And then it sounds like something that would be quite useful.
You did mention types for timestamps as well.
And I know I've seen, when I've used gRPC in the past,
timestamps seem to have this weird format
where it's almost got two properties,
because it's a number of seconds or something.
And that always seems really bizarre,
but in .NET, I know there's a built-in function
to convert between the two.
Yes. So I think I've used the datetime offset.
There is a special type from Google for that as well,
which gives me a datetime offset or something.
It is another Google well-known type,
but you can bring that into your proto file.
So you bring all these well-known types into your proto file,
and then you can start using that data type in your messages
so that your server can handle it
or your client can handle it for you.
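As a small sketch of the .NET side of this - assuming the Google.Protobuf package that the gRPC tooling brings in - converting between a .NET date/time and the Timestamp well-known type looks roughly like this:

```csharp
using System;
using Google.Protobuf.WellKnownTypes;

// The protobuf Timestamp stores seconds + nanos internally,
// but there are built-in conversions to and from the .NET types.
var now = DateTimeOffset.UtcNow;
Timestamp ts = Timestamp.FromDateTimeOffset(now);
DateTimeOffset roundTripped = ts.ToDateTimeOffset();
```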
Magic, magic.
So we spoke about these proto files before,
and sharing them, and that's your contract.
So to share them, you can presumably just put them
in a NuGet package and share them
as you would anything else.
Yeah.
So if your consuming client is also .NET,
then a NuGet package works fine.
But with gRPC,
one of the biggest advantages is,
it is the perfect candidate
if you want to use it in polyglot systems.
That is, if you want to write something in C#
and consume it in a totally different language,
or I could have the service itself in Node
and then consume that within C#,
it is a perfect candidate for that.
And to communicate with gRPC,
we need a strongly typed client,
which means that I have to make sure
I publish my proto file somehow
so that the consuming clients can get hold of it
and create a strongly typed client behind the scenes.
That brings me to the tooling
which we need to talk about before we jump into this.
So whenever you have a gRPC server, to communicate with that,
we need a strongly typed client.
So there is support for gRPC
in every well-known language out there,
so there's a library,
and there's out-of-the-box support
as well for some of the languages out there
with the tooling that comes in.
So it can generate the strongly typed client for you.
So with .NET in particular,
obviously we have the Visual Studio template,
but there's also some tooling in place
which kind of abstracts away
a compiler - a special compiler, the protoc compiler.
This protoc compiler has an understanding of C#,
and what it does behind the scenes
when I have a proto file is
it generates C# code for me.
So on the server,
I can actually tell it what type of C# code
to generate as well.
So if it's a server-side configuration that I'm using,
it generates the code for server-side usage.
And if it's for the client-side that I'm trying to generate the code,
it generates the code for the client-side.
I don't know whether it's the same kind of configuration
for other languages,
but I'm pretty sure it could be the same way it works.
So if I'm generating the code on the server side,
especially for .NET,
what it goes and does behind the scenes is,
for every service that I have in my proto file,
an abstract class gets generated.
And every RPC method that I have in my service
becomes a virtual method in that abstract class.
So that is like basic C# keywords being used,
which means if I need to give it my own implementation,
I can create a class myself,
inherit from the base abstract class,
override the methods,
and give it my implementation.
So that is on the server-side.
But when it comes to the client-side,
it generates something called a strongly typed client
and a method stub.
So if you have an RPC called SayHello in the proto file,
it generates a SayHello stub in the client for me,
so that my client call looks like
the strongly typed client variable dot SayHello.
But in reality that SayHello is actually using
some configuration behind the scenes to jump the network,
go and actually execute the method on my server,
and come back with the response.
So there's a lot of code generation in place for us.
And it's the protoc compiler which helps you do that.
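To make that concrete, here's a rough sketch based on the default greet.proto template - Greeter, HelloRequest and HelloReply are the generated types, and the address is illustrative:

```csharp
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Net.Client;

// Client side: the tooling generates a strongly typed client with a stub per RPC.
var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);
var reply = await client.SayHelloAsync(new HelloRequest { Name = "Dan" });

// Server side: inherit from the generated abstract base class
// and override the virtual RPC methods with your own implementation.
public class GreeterService : Greeter.GreeterBase
{
    public override Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
    {
        return Task.FromResult(new HelloReply { Message = $"Hello {request.Name}" });
    }
}
```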
So if your clients are C# or .NET-based clients,
NuGet packages can work perfectly fine.
Otherwise you need to publish your proto file somehow.
It's up to you how to do that.
I was thinking about minimal APIs the other day.
So you can actually have a minimal API in your gRPC project,
set up using the app.MapGet endpoint,
so that your clients could go and view the proto file
or download a copy, or you could have your proto file
hosted somewhere, say, and you could have a service reference
to that - that's one way of doing it.
But you have to find a way of sharing that proto file.
If it's .NET clients consuming it,
yeah, class libraries and NuGet packages work perfectly.
But with gRPC we have to be a little bit more careful,
because there could be clients in other languages consuming that.
So we need to find a good, or standard, way of doing that.
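A minimal-API endpoint for exposing the raw proto file, as described, might look roughly like this in the gRPC project's Program.cs - the path and service name are illustrative:

```csharp
var builder = WebApplication.CreateBuilder(args);
builder.Services.AddGrpc();

var app = builder.Build();
app.MapGrpcService<GreeterService>();

// Serve the raw contract so clients in any language can download it.
app.MapGet("/protos/greet.proto", async () =>
    Results.Text(await File.ReadAllTextAsync("Protos/greet.proto")));

app.Run();
```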
I think there's also another option where you can -
and I don't know whether, I don't think this is recommended
in production.
I'm not sure.
I don't know the term, but you can set it up so that your
server side is almost like publishing what the protos are,
and the proto can be inferred.
You can basically get it from the server.
It's got a term.
Let me just...
Server reflection.
Yeah, yeah, I think that's it.
Because some of the testing tools like
BloomRPC and Postman and grpcui,
they use server reflection to gather information about
the proto file.
So I think server reflection is probably the term.
Yeah, that was it.
So I've got the code up now and I'm just having a quick look.
And I can see that on my server side,
in Program.cs, I've got -
well, I've actually got an if-development check,
so I've not got it in production -
but then app.MapGrpcReflectionService.
And then that's kind of enabled.
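Roughly what that looks like in Program.cs, assuming the Grpc.AspNetCore.Server.Reflection package and an illustrative service name:

```csharp
builder.Services.AddGrpcReflection();

var app = builder.Build();
app.MapGrpcService<GreeterService>();

// Only expose reflection in development, as mentioned above.
if (app.Environment.IsDevelopment())
{
    app.MapGrpcReflectionService();
}
```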
And as you - in fact,
you've just pointed out something I was going to mention
about Postman.
I think it's quite recent that they've added gRPC support.
And when you go into Postman and you put the URL in,
you get a drop-down,
and you can choose to upload your proto files,
or you can say to infer it using this reflection endpoint.
Or there's different options and it just works magically.
It's fantastic.
Yeah, have you tried grpcui?
I've not - I've not heard of that.
Okay, so with gRPC,
there's this command-line tool called grpcurl.
And grpcui is built on top of grpcurl.
So what it does is, you can install that tool
on your local computer.
And then I can run my service,
and then I can issue a command - grpcui
pointing at localhost and the port.
And it spins up the browser for you
and opens up a nice UI.
So I don't even need to upload my proto file
or refer to proto file,
it just gathers it for you.
It's almost like having Swagger.
I wouldn't say it's Swagger or OpenAPI,
but it's almost like that,
because you can actually then choose your services
to interact with.
I can issue my request commands.
I can even add headers
and then get back the results.
I can actually see the response headers
and trailers and so on.
So when I demo things,
I usually use grpcui,
because it is really seamless for me.
You can bake it into the application
and then use that on your development side.
I'll definitely make sure
I include a link to that in the show notes.
I remember a couple of years ago
when I was doing another project
I was working on,
but it wasn't me that implemented this,
but they were using GRPC there.
And they were consuming the service.
There was stuff in,
as you mentioned, Node and stuff in .NET.
So we had to publish the Node stuff
as an npm package -
the proto files in an npm package,
the same proto files -
and then also a NuGet package.
But it was the same proto files,
so the same pipeline
just made sure that it packed them both out
as artifacts,
then pushed that to the npm package feed
and also the NuGet package feed.
And then both, as you say,
both, regardless of language,
whether it's Node or .NET,
both can just consume it.
So that was really, really nice.
Exactly, yeah.
If you want to actually make your proto file visible
to, say, the developers out there
in the development phase,
you could stick to controllers
or have a minimal API in place.
I think Anthony Giretti has written a blog post
on either using controllers
or minimal APIs -
I can't remember which one.
But when using minimal APIs
to do things like this,
it just felt so natural.
It's a really good use case
and your developers get to see things
before even they consume them as well.
So it's nice that you can bring in
things you already know
about ASP.NET Core into gRPC as well.
You were talking before about the generated files,
how the tooling just generates all these files from the protos.
Those files go into the obj folder,
so they don't go into source control.
Because I know that in the past,
when you've had auto-generated files,
sometimes they go into source control
and you end up with a horrible mess.
Even WCF with the clients -
all of that goes into source control
and it's just a mess.
Whereas with gRPC, the stuff that's in your project,
that's not in your intermediate folders
like your bin and obj folders,
is quite clean.
It's pretty much just the proto files.
And that is so nice.
Yeah, absolutely.
As you rightly said,
it goes into the obj folder,
the intermediate resources.
What happens is
the tooling takes care to make sure that
whatever you have specified in your proto files,
when it gets generated
into the corresponding language
you're thinking about,
all the coding best practices
and standards and the naming conventions,
all of that is taken care of for you.
So I'm talking from a C# perspective.
If you look at the generated code,
everything that is recommended
as a Microsoft standard for C# coding
is in place for the generated code.
That's one thing about the code.
And there are two files
getting generated for each proto file.
So there is a file
which puts all the messages
as plain POCOs in the obj folder.
And there is another file
which lists all the RPCs.
And basically it's a partial abstract class
with virtual methods.
So if you want to actually see
how the different data types
in your message -
that is, the protocol buffer messages -
map to the corresponding C# types
and the .NET types,
this is a good way to learn that as well,
because all those messages are POCOs.
So you can actually see
what is going on there
and your RPCs are abstract classes
with virtual methods,
which by default throw an exception saying unhandled - unimplemented.
Is that an unhandled exception?
No, sorry, not unhandled -
an unimplemented exception.
It's an RpcException.
You see what I did there.
No, not unhandled, sorry -
an unimplemented exception.
So again, it's a good way
to learn about things
and then gives you enough keywords
to Google upon and learn as well
which can get interesting I think.
So I know with all these types and stuff
you've got a concept of a stream as well.
You can choose whether you're using -
I never know how to pronounce this -
unary, or unary, or whatever it is,
but so that seems to be the standard stuff,
but you can also use a stream as well.
When would you use streams versus this unary,
or however it's pronounced?
So streams are when you want to get a stream of data
out from the server,
or you want to send packets of information
to the server.
So there are various different modes of gRPC
apart from unary - which,
that's how I pronounce it, sorry -
unary, which is the normal client-server:
you send a message in, you get back
the response from the server.
But you can have client streaming
in which the client sends in like a stream
of messages to server,
but the server responds back with a single response.
You can have server streaming
which is the opposite of client streaming.
So in server streaming, the client sends a single message
to the server.
The server takes that as the client
indicating that it should start the stream,
and the server starts sending a stream of messages.
And there's a mixture of both,
this is called bi-directional streaming
in which you can send a stream
into the server.
You can send a stream out from the server as well.
So when you are thinking about large volumes of data
but you do have the facility
to break it down into chunks and send it back,
then that's a perfect use case for streaming.
If you want to think about something like a large
file upload from the client,
there are some barriers in place
because with GRPC,
what I read is that it is all binary information
which is good.
The packet size is naturally low.
However,
on the client side and server side,
this message is loaded into the memory
before it is sent out.
And on the server side,
it is loaded into memory again before it is
deserialized as well.
So there is resource usage in both client and server.
So naturally I think there is a limit
on the packet size on the server.
So if you are thinking about a large file upload,
you might be better off breaking it into smaller chunks
and sending it as stream of data.
If you are thinking about a thousand items in a list,
again you have the facility to send that in a stream
and gain on performance as well.
Because the packet size is so small
and the resource allocation and resource usage
gets so small as well.
So if you can actually break your data
into smaller packets and send it to the server
or send back from the server,
then that's a good way of using streaming as well.
Again, making use of the HTTP/2 facility
that we have with streaming.
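A server-streaming RPC on the .NET side looks roughly like this - assuming a proto definition along the lines of `rpc ListItems (ListRequest) returns (stream ItemReply);`, with illustrative names:

```csharp
public override async Task ListItems(
    ListRequest request,
    IServerStreamWriter<ItemReply> responseStream,
    ServerCallContext context)
{
    // Each item goes back as its own small message rather than one huge payload.
    for (var i = 0; i < request.Count; i++)
    {
        if (context.CancellationToken.IsCancellationRequested) break;
        await responseStream.WriteAsync(new ItemReply { Index = i });
    }
}
```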
We are thinking about HTTP/2 here,
but with .NET 6, I think there was at least
experimental-level support for HTTP/3 -
I don't think it was just experimental,
there was support for HTTP/3.
And I think gRPC in .NET was the first
implementation of gRPC
to have official support for HTTP/3
in gRPC as well.
So we are kind of trying to learn the basics
of HTTP here, at least me personally,
and we have support for HTTP/3 in gRPC,
which is like, oh my God,
this is blowing my head away.
Then you've got - like, if the calling code is .NET Framework,
you've got to fall back to HTTP/1.1.
I know for this project,
because we're doing that,
we had to - hold on, let me have one second,
I've still got that Program.cs open.
So on the server side,
I'd set up something called gRPC-Web,
and that's different -
I think that just goes over the HTTP/1.1 protocol.
Yes.
But so by switching that on,
it means that .NET framework can talk to it.
Yes.
Yeah, it's just these little edge cases
where it depends what the calling code is.
And yeah, .NET Framework can be a pain at times.
There is gRPC-Web,
of course, if you want to make use of gRPC
in browser-based applications,
but then you lose out on the client streaming
and bi-directional streaming,
because you can't support that with gRPC-Web,
which works over HTTP/1.1.
Because what it does is,
there's some kind of work it does to convert
between the HTTP/2 and HTTP/1.1 requests,
I think - that is how it works.
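Switching gRPC-Web on server-side is roughly this, assuming the Grpc.AspNetCore.Web package (service name illustrative):

```csharp
var app = builder.Build();

// Translate HTTP/1.1 gRPC-Web calls before they hit the gRPC endpoints.
app.UseGrpcWeb();

app.MapGrpcService<GreeterService>().EnableGrpcWeb();

app.Run();
```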
But coming back to browser-based applications
and using this from .NET Framework,
potentially what .NET 7 is going to give us
with gRPC might actually be a problem solver as well.
What you have with gRPC in .NET 7,
the biggest feature is JSON transcoding.
So suppose you have a gRPC service,
then you can annotate the proto file,
by bringing in a proto file from Google
and having the Google HTTP annotations.
What you can do is,
you can invoke your RPC calls
using HTTP verbs over a URL.
Now, this is where I told you
that this is going to get interesting
because GRPC is all about procedures, actions
while with JSON transcoding,
one big part of it is,
you can develop restful APIs
on top of your GRPC APIs.
This is like, okay,
this is an interesting irony for me.
When I talk about GRPC,
it is all about procedures.
And then this thing comes in where it says
you can develop restful or HTTP APIs
using GRPC, which is very interesting.
So what you effectively get is,
for every RPC,
you can choose to annotate it
with some special annotations,
give it a URL pattern
which it can listen on,
and specify which HTTP verb
it can listen for as well.
And I can browse and connect to that endpoint
like a restful endpoint.
So I can have a get issued,
I can have a post issued,
I can have a put, delete or patch issued
or even head and options issued.
But the underlying thing is that
you only need these annotations in your proto file.
You don't need to do that on your code.
The code is still
the single implementation that you give it
for your GRPC service.
It's the same code,
but you are talking to the same code
using GRPC
as well as a restful way.
So that's one way
you can actually consume GRPC
in your browser-based applications.
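A rough sketch of what wiring this up looks like in .NET 7, assuming the Microsoft.AspNetCore.Grpc.JsonTranscoding package and an illustrative route:

```csharp
// In the proto file (illustrative):
//
//   import "google/api/annotations.proto";
//
//   rpc SayHello (HelloRequest) returns (HelloReply) {
//     option (google.api.http) = { get: "/v1/greeter/{name}" };
//   }
//
// In Program.cs - the same gRPC implementation now also answers plain HTTP/JSON:
builder.Services.AddGrpc().AddJsonTranscoding();
```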
I know that from Preview 4,
when it was released, to the .NET 7 RC1,
things have changed.
There's support for server streaming as well,
which is quite nice to see.
But obviously client streaming and bi-directional streaming
still not supported because
we are talking about browser-based APIs here.
So that is one interesting
thing.
It was actually an experimental feature in .NET 6
called the gRPC HTTP API.
Because of the popularity
it's actually made its way into
.NET 7 as a full-fledged supported feature.
Which is again a nice thing to talk about,
because we are recording in October,
the month of Hacktoberfest.
So talking about community -
community popularity about a feature
making it into .NET 7 is a great thing.
Yeah, and .NET 7 is out
next month?
So we're not far away at all.
So the .NET Conf agenda is out.
I am looking forward
to the various presentations.
I'm quite excited to learn about
what's upcoming in .NET 7
because it just gets better and better.
Oh, no, definitely.
Once a year now as well.
I know.
It's predictable, that's the difference.
You know which ones are the
long-term supported versions and so on.
So it's quite easy to make a decision
on what version of .NET your project goes on
as well.
So it's quite good.
I'm still trying to get my head around REST
on top of gRPC.
At first I was thinking,
why?
Oh, I can talk about that a little.
We're talking about tech
and software -
there's a use case for everything.
That's where I stand on it.
One example is that
you don't want to create a REST API
with all that duplicated code.
That's a perfect use case.
If you want your gRPC apps
to be consumed in a simple way
by your browser clients, it's perfect.
We're talking about level 0,
level 1 of the Richardson Maturity Model here -
that level of HTTP API
is perfect for it.
But I'm thinking more of
a RESTful application that
has HATEOAS in place
as well.
How could you do that?
Or could you even do that
with JSON transcoding?
Presumably you'd have to shape
the response in a certain way
so that you've included the URLs
and everything.
Exactly.
Could I have certain resources
for just the gRPC aspect?
Could I have certain resources for
just that aspect?
Because the two types of request
are different -
for one it's the gRPC call,
and for the other, the JSON
comes in from the request.
I could distinguish between the two
and make sure that
when the response goes back, there are
certain things included in it.
Maybe interceptors
could play with the response messages
and the request messages.
That could be a way
of enriching the response messages
and sending back a
HATEOAS-compliant
JSON response.
That's interesting.
I thought that
would be interesting.
What are interceptors?
We've not covered that, have we?
No, we've not covered that.
I've not covered interceptors much
in my talks, but interceptors
are small pieces of code
that you can use to intercept the call.
It's exactly what
the name says:
it intercepts something.
If you want to run some code before
your call is sent to the server,
or before the response is sent
from the server back to the application,
you can have interceptors in place.
They work for every request -
rather like middleware,
which works for every HTTP request -
but this is at the gRPC level.
What's most interesting is that you have interceptors
on both the client and the server.
If it's on the client,
I can intercept
the requests before they go to the server.
It can actually
tinker with the requests
being sent -
the headers, authentication, and so on.
And on the server, I can intercept
the incoming requests
and intercept the responses
before they go back to the client.
So you have that capability on both sides.
It's another thing I want to
play with and see.
There are a lot of interesting
possibilities there,
with logging, validation, and so on.
It's an interesting concept.
It is an interesting concept, yeah.
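A hypothetical server-side logging interceptor in C# gives a feel for it - the class and its registration are just a sketch:

```csharp
using System.Threading.Tasks;
using Grpc.Core;
using Grpc.Core.Interceptors;
using Microsoft.Extensions.Logging;

public class LoggingInterceptor : Interceptor
{
    private readonly ILogger<LoggingInterceptor> _logger;
    public LoggingInterceptor(ILogger<LoggingInterceptor> logger) => _logger = logger;

    // Runs around every unary call on the server, before and after the actual RPC.
    public override async Task<TResponse> UnaryServerHandler<TRequest, TResponse>(
        TRequest request,
        ServerCallContext context,
        UnaryServerMethod<TRequest, TResponse> continuation)
    {
        _logger.LogInformation("Starting gRPC call {Method}", context.Method);
        var response = await continuation(request, context);
        _logger.LogInformation("Finished gRPC call {Method}", context.Method);
        return response;
    }
}

// Registered once, so it applies to every gRPC call (gRPC-level, unlike HTTP middleware):
// builder.Services.AddGrpc(options => options.Interceptors.Add<LoggingInterceptor>());
```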

Another concept -
which I've just written down,
and which I know you mentioned to me -
is this thing of deadlines,
which is fascinating.
So we have streaming capabilities
with gRPC,
and streaming can often be
a long-running operation.
There are two concepts:
one is the deadline,
and one is the cancellation token.
The deadline,
as the name says,
is a deadline. I give you a deadline
by which you have to cancel
whatever you are doing, and you stop everything.
I can give it
a preconfigured deadline,
saying that after this point in
time, you have to stop
processing.
What's interesting is that,
with a little bit of configuration,
you can have the deadline adhered to
on the server as well. You can
trickle the deadline down through your server,
for the long-running operations.
Because if I don't pass it on,
the deadline doesn't get
respected further down.
But you can
do that. The cancellation token,
from what I understand, is more
on the client side. The client
has more control over it.
I can send a cancellation token along,
and if I
choose to cancel that long-running
call, I can
do that.
So they serve a
similar purpose, but
on different sides
of the application.
The cancellation token
makes it feel more like a
local function call.
It's all in the client's
control. When you connect
to a gRPC service, what
a client needs is a gRPC channel.
That gRPC channel is long-lived -
it's quite an expensive
object. It's the one
that you configure, and from which you create
a gRPC
client.
It's the one that you configure.
That gRPC channel
is what helps you
with everything.
Once you have your gRPC channel
and the client, the client is in control:
I can send
a cancellation token along with everything
that I send,
and then do whatever I
need.
With the deadline, I like the idea
that it flows through to the service.
I never understand whether that's
downstream or upstream.
Whatever
articles I read say the same -
I'd say downstream.
Yes, downstream.
Because you are
trickling it down through the levels,
so it's downstream. That's how I
think of it.
I know it's one of those things with the service mesh stuff
for Kubernetes -
Istio, I was reading the other day -
I get really confused.
I think it's downstream.
As you say, the flow is
downstream.
I think of it like water flowing down.
I don't know.
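A rough sketch of both sides, reusing the illustrative Greeter service from earlier - the generated client methods take an optional deadline and cancellation token:

```csharp
// Client side:
using var cts = new CancellationTokenSource();
var channel = GrpcChannel.ForAddress("https://localhost:5001");
var client = new Greeter.GreeterClient(channel);

var reply = await client.SayHelloAsync(
    new HelloRequest { Name = "Dan" },
    deadline: DateTime.UtcNow.AddSeconds(5),   // server should stop work after this point
    cancellationToken: cts.Token);             // and the client can abort whenever it likes

// Server side: honour whatever flowed in with the call.
public override async Task<HelloReply> SayHello(HelloRequest request, ServerCallContext context)
{
    await DoSomeLongRunningWorkAsync(context.CancellationToken);  // hypothetical helper
    return new HelloReply { Message = $"Hello {request.Name}" };
}
```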
Cool.
So before we wrap up, let's do our
last-minute tips.
Yes, so here's a last-minute tip.
It's an article I read
about the experimental WebTransport support
over HTTP/3 in Kestrel.
The whole idea behind it
is a new
draft specification for a transport protocol,
similar
to WebSockets.
It's for real-time applications, but
it has streaming capabilities.
It relates
to gRPC in a way,
and it mentioned
web hubs in passing.
It helps you create real-time applications.
So I've been
looking at it from that perspective.
It's a real-time
transport protocol
that is similar
to WebSockets,
but with bidirectional
data streams.
With WebSockets there's a lot of work involved -
if you want to open another
connection before you can send
data.
This seems to make use
of the HTTP/2 and HTTP/3 features,
the request multiplexing and the
streaming capabilities.
I can share that article with you.
I only skimmed the article, but I thought
it was fun.
It seems to relate to
a lot of things.
There's also the Web PubSub service,
which is interesting.
I can share that article for the
podcast show notes,
so that everyone can have a read.
That sounds interesting.
I've not played with it much -
I've not had the time to
go down that route - but
the idea sounds promising,
and there's sample code and apps
in place.
I thought I'd be happy to mention it.
So you've caught me a little cold
with my own question here.
But I'll put
some links, some things,
in the podcast show notes.
My tip is keeping
things like my config -
my PowerShell profile,
my Terminal config,
my Chocolatey config,
all the different things -
in the same GitHub repo.
I find my configuration is much more organised,
and I'm not
worried about losing it.
If I'm on another machine, I
just pull it down from GitHub.
All my changes are
much more intentional:
if I'm
using, I don't know,
whatever - Vim - and I want to
change something, or Git,
I change it in the .vimrc,
or the .gitconfig,
whatever it is, and I
create a commit explaining it.
It's really nice having it all
in my GitHub repo, with all the config,
and I'm not worried about losing
the history. I'm
much more motivated
to make tweaks and make my set-up
nicer. I'll include
a link to the blog post where I
talk about how I do it, using
symlinks. It's these dotfiles -
.gitconfig,
or .whatever - because a lot of the files
start with a dot.
But, from what I understand,
the concept of dotfiles is
where you create
symlinks, so that you can put
all your configs for different things
in the same GitHub repo. When I did it, I didn't
call it dotfiles, but there's a whole community
around dotfiles, and that's cool.
Oh, wow!
That's interesting, yes.
That's my tip. So, before we wrap up,
where's the best place for listeners
to reach you if they have any questions?
You can reach me on Twitter.
My name is
Poornima Nayar -
it's a long name
to spell, I get it, but
obviously it will be in the show notes.
And I have to say, I've never,
ever known
anyone who is so quick
at replying to Twitter DMs.
Oh no!
The reason is, as I've explained:
if I leave things like that,
they often get forgotten -
the message, along with
the
hard work
someone has put in to organise something.
So I don't want conference organisers
or podcast organisers
to be delayed
because I'm not responding.
I'll just do it right away - because
it has happened in the past: someone
replied, asked for something,
I didn't reply in time, and it
got forgotten.
I have to say, I wish
everyone was as quick
as you, because it's difficult otherwise -
especially when you're organising recordings,
the latency between organising something
becomes really hard work.
So, thank you.
I don't like uncertainty.
I'm a planner, I do
things
in a very specific way,
and I like to plan,
because I like things planned -
I don't like others having to
plan around me.
It's a privilege
to be part of a conference
as a speaker,
or a podcast, like being here
on the show - it's a privilege.
The least I can do is
be responsive, so yes, I get
back to you really quickly, because it's Twitter
and Twitter pings me, but that's fine.
The privilege is all mine.
So thank you very much
for joining me on the show - it was
a lot of fun, and I really enjoyed
the discussion with you about gRPC.
Yes, absolutely, I had
a blast on the show, I loved
the discussion - it was really
good fun.
And let me check that I remembered to press
record, because,
for the listeners, I should say
I've been
experimenting with a new
platform for recording, so hopefully
I did actually press record... phew, that's
a relief.
And thank you all for
listening, and a reminder
that this podcast is sponsored by Everstack,
which is my own company
providing software development and
consultation services.
For more information, visit
Everstack.com. And if you enjoy
the podcast, please help
me spread the word on socials,
using the hashtag
#UnhandledException, and I can be
found on Twitter,
and my blog
at danclarke.com is linked from
the site, and we'll
include links to everything
we've mentioned today
in the show notes, which can be
found at
unhandledexceptionpodcast.com
