Dan Farrelly, Tony Holdstock-Brown - Inngest, Easy Asynchronous Workflows
Duration: 64m50s
Release date: 03/06/2024
This week we have the co-founders of Inngest, Dan Farrelly and Tony Holdstock-Brown. Inngest is an event-driven workflow platform that makes it incredibly easy to build asynchronous workflows in your application. We talk about the history of Inngest, how it works, and how you can use it to build your own workflows.
- https://twitter.com/djfarrelly
- https://github.com/djfarrelly
- https://www.linkedin.com/in/djfarrelly/
Episode sponsored By Clerk (https://clerk.com)
Become a paid subscriber on our Patreon, Spotify, or Apple Podcasts for the full episode.
People who tried to do this on serverless,
like run asynchronous work on serverless,
found that none of the platforms really had great solutions for it.
Tony uses this phrase:
serverless is the lowest common denominator.
If you can build it for serverless,
it can go anywhere.
And that's really interesting, because serverless is stateless
by default.
Hello, and welcome to the DevTools FM podcast.
It's a podcast about developer tools
and the people who make them.
I'm Andrew, and this is my co-host Justin.
Hey everyone,
we're really excited to have Dan and Tony on the podcast.
So Tony is the CEO of Inngest
and Dan is the CTO.
I'm so excited to have you both here.
Big fan of the work Inngest is doing.
So yeah, this is a great opportunity.
Before we dive in and talk a bit more about Inngest,
could you tell us a little more about yourselves?
Yeah, of course.
Go ahead, Tony.
Cool, sounds good.
Yeah, my name is Tony,
one of the founders of Inngest.
I've been into computers since I was a kid.
I learned how to code pretty much by accident.
When you're a musician,
when you're 12 years old,
you're into a bunch of things at the same time,
and you're like, oh, this is cool.
I got into coding, into Java,
and all of that, when I was a kid.
And I ended up going into technology,
and I've been doing technology ever since.
Wow, that's awesome.
So yeah, super, super.
Getting into programming at a young age,
that's really good,
to have that sort of head start.
Yeah, and I moved from London to the Bay Area
about ten years ago now.
And I've been working in tech since.
That's great.
I'm Dan, the CTO.
Tony and I crossed paths a while back, and we'll get into that.
I'm from New York,
and I learned programming out of necessity, really.
And it made me realize that I thought it was really cool.
It takes me back to the days
when I was building crappy websites
as a kid,
and then I started selling things
through a little business I started.
I went to school, but for something else entirely.
I was into music.
And then I got into programming on the side,
and I just loved the work.
And I met Tony along the way,
and then we went our separate ways for a bit.
We got back together several years later
and started this thing together.
But in the meantime I went on
to become the CTO at a company called Buffer.
They do social media
scheduling software and all of that.
It was a great experience,
and it's part of the experience
that Tony and I have brought with us since we've been working together.
But that's a bit about us.
That's cool, we actually use Buffer.
That's funny.
That's awesome.
I don't think any of my code is still in there,
but I'm sure some is.
When you're pretty young,
you're like, oh no, oh no.
I always figured
I'd left behind some mistakes, some bugs.
So, let's dive in.
What is...
let's start with what Inngest is.
What is it, and where did it come from?
I'd been working in healthcare before this,
and it's amazing, it's really cool.
It's super rewarding work,
but it's so hard,
and I really respect the people who do it.
Healthcare is wild,
and there's a lot you have to consider.
Patients are...
every patient is unique,
and you have to build really complex workflows to handle that.
It sounds simple at first:
you know, when a patient does X, you do Y.
But if there are contraindications,
or they're not on a treatment,
you get all of these branches,
and all of that logic explodes,
and developers have to take care of it.
It gets really hard.
And a good way to solve that
is with events in general,
because events are literally
something that happens in your system,
or in the real world.
And then you start gluing all these different things together
as events, and queuing systems,
and state, and your code becomes a big challenge.
And it turns out that
event-driven infrastructure
is really good
at handling events at scale.
SQS has been doing it
since 2004.
But it's really, really hard,
because the fundamental infrastructure
doesn't solve the application problems,
so you're left on your own for things like
flow control,
and the way
that one queue feeds into another queue,
and the code becomes a big challenge.
So I really wanted to solve that problem
and make things declarative,
so that you can say things like:
when this happens,
do these particular things as soon as possible,
and wait for this other thing to happen,
for up to 24 hours,
and if it doesn't happen,
do X, Y, and Z.
And just waiting for something to happen
is so hard to build in a system like Kafka.
So, yeah,
it came out of necessity,
because the challenges of building this
yourself are real.
I think, to share a little bit about what Inngest is, we want to take everything Tony just shared and turn it into a developer platform.
You have that experience of building these systems, and typically devs will reach for queues or event streams, and as Tony mentioned, that's fairly low-level infrastructure.
Everything else is left up to you to build.
So all the other stuff around it: you have to build retries, picking the work back up, as Tony mentioned, scheduling it.
And if you're going to run these things, maybe you want your code to be interruptible, like, pause this code while something happens, or resume it when it makes sense, because these are all long-running processes that you have to build on top of the commodity primitives, like the queues that we all have.
There are some fun building blocks out there, but it's still all on you.
So those are just the basics you need to get going.
And that doesn't even cover, you know, depending on what you're building: managing your concurrency, maybe idempotency, maybe you need throttling, maybe you need rate limiting, maybe you need to debounce on something, maybe you need batching.
And all of that flow control you're doing is also just, like, on you, you know, and it's always something you build on top of the infrastructure.
So, you know, at the end of the day, Inngest aims to be a developer platform that lets developers build more reliable code, faster and more easily.
So, you know, we started there,
and then when you actually go to put these systems into production, it's really hard.
You don't have, like, any of the logic, any observability into those systems to recover from widespread issues, so you end up cobbling together custom scripts to do it, or you're battling dead-letter queues or something.
So you go down that path, and there's this whole long tail of problems at the end, a problem that, honestly and unfortunately, gets re-solved at every single company.
But if you don't have the experience, you're going to make mistakes, and it just ends up being, like, reinventing the wheel, and a lot of those things are hard to get right and to make reliable.
So, you know, that's it in a nutshell, there.
Yeah.
I think, on top of that, just quickly:
it's actually an SDK that you write your code with, right.
And then you define a function that says, when this event happens, run my function.
You break it into steps, and each step is a code-level transaction that will either run and commit, or retry.
And then you don't have to worry about the event queue at all.
So it's pretty simple, pretty easy to get started.
You really don't need to learn much, but it's also really powerful.
And it doesn't take away your control over the complex things you need to do, if you want it.
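To make that concrete, here's a rough sketch of what an event-triggered Inngest function with steps can look like with the TypeScript SDK. The event name, IDs, and helper functions are made up for illustration; the exact API shape is in the Inngest docs.

```typescript
import { Inngest } from "inngest";

// Hypothetical helpers standing in for your own application code.
async function createInvoice(orderId: string) {
  return { id: `inv_${orderId}` };
}
async function sendReminder(invoiceId: string) {
  console.log("reminding about", invoiceId);
}

const inngest = new Inngest({ id: "my-app" });

// Run this function whenever a (made-up) "shop/order.created" event arrives.
export const handleOrder = inngest.createFunction(
  { id: "handle-order" },
  { event: "shop/order.created" },
  async ({ event, step }) => {
    // Each step is a code-level transaction: it is retried independently, and
    // its result is memoized so a later crash won't redo work that succeeded.
    const invoice = await step.run("create-invoice", () =>
      createInvoice(event.data.orderId)
    );

    // Pause without holding any compute; Inngest stores the state and
    // resumes the function here when the sleep is over.
    await step.sleep("wait-before-reminder", "24h");

    await step.run("send-reminder", () => sendReminder(invoice.id));
  }
);
```

The point from the conversation is that the control flow reads top to bottom as ordinary TypeScript, while the queueing, retry, and state machinery sits behind each step call.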
That's really cool.
I'd like to take a quick break to thank our sponsor for the week, Clerk.
Clerk offers drop-in authentication and user management for your apps, so that you can build your apps faster and not get bogged down integrating auth into your app.
Auth is a lot to get right.
There are magic links that you want for login, there's multi-factor, social logins.
If you're trying to build an enterprise app, you have to worry about all these different enterprise identity providers.
And you don't want to deal with that.
You just want to build your app.
And that's what Clerk lets you do.
Clerk lets you do everything you need to add authentication and user management to your app with as little hassle as possible.
They have really cool components that make adding auth UI to your app really easy.
And speaking of which, I moved one of the websites I use for discovering music over to it recently, and I've used it on the next one too.
Both times it's been really painless.
But the best part of Clerk is their pricing.
With their free tier you get up to 10,000 monthly active users, and you don't pay for a user's first day.
Head over to clerk.com to check it out.
Or you can go listen to episode 75, where we interview one of the co-founders, Braden.
Don't want to hear these ads anymore?
You can become a member through one of the different channels we offer.
With your membership you get the full episodes and you help support the show.
But if you want to find another way to support the show, you can head over to shop.devtools.fm, where you can buy some merch, like the hat I'm wearing right now.
And with that, let's get back to the show.
That reminds me of...
there's a conversation that comes up a lot on Twitter where people are like, where is the Rails of JavaScript, or we look back at what we had in the PHP days and ask how we get back there.
And it's kind of tongue-in-cheek, obviously.
But it does make me think:
if you think about the Rails and PHP days, particularly when it came to background work, you had libraries that could just pick up background jobs.
And you had more packaged, ready-made solutions.
And sure, they probably had scaling challenges, and maybe they don't scale the way enterprises need.
But for a lot of startups, especially back then, they were the default choice.
And then we moved away from that kind of nice built-in setup, I think.
We started breaking apart monolithic systems, and then we started gluing things together and adopting cron jobs and all of that.
And our operations got more complicated.
And now we're starting to see more commoditization of these services behind a simple API.
Talking with you both, I think you do a really good job of taking this whole background-jobs approach and not just patching it up, but rethinking it as much as possible.
There's a lot to say there, there's a lot to say, and it's a really good point.
There are a couple of meta observations I think are worth making before answering, about the direction the industry has been heading, which is super interesting.
And one of those observations is that the table stakes for building a product have gotten a lot bigger over time, while the way we build on infrastructure has gotten a lot better.
Yeah, totally.
State sucks. useState sucks, reducers suck,
how do you make all of this work? And there's the whole signals thing, where people are like, finally, reactivity in the browser, that's cool.
And state sucks in the back end, like, if you want to do something, you want to schedule something in advance, cool, schedule a job,
then you need to store that job ID, metadata around that job ID, just in case you need to cancel it in the future.
And if you need to cancel it, you need to add another API handler, load everything, properly cancel, maybe reschedule.
You end up building your own system on top of the underlying primitives.
That sucks, you shouldn't necessarily have to do that at all.
It takes so much time, effort, and it will be buggy, and there's no observability.
That's the hardest thing, I think, like, figuring out what goes wrong in a system like that is extremely challenging.
So, I think it makes sense to start with the infrastructure, and that's what we've been doing for the last 15 years,
but it also makes sense to go one step further and solve our problems as developers.
Yeah.
And that's really at the end of the day, just like, I mean, developer experience, like, developers demand more, there's more demand on them.
You know, we just need to, that's, to the point of this podcast, right?
It's like, developers love their tools, give them better tools, give them better abstractions.
You know, that's what we need to do.
Yeah, ease of use in DX in these cases is huge, because like, we had the Temporal episode like two years ago,
and I'm like, inspired, I'm like, I'm going to go use this on a personal project, and then, no, no.
If you've ever tried to do that, it's not a fun time.
You have to start different Docker things.
You have to know how everything works.
So, just like, starting with that type of product seems super hard, but it seems like you guys with Inngest have really focused on making it easy to set up, easy to use locally.
So, can you speak to some of the things that you've done to make the DX good in those cases?
Yeah.
Yeah, I can jump in there.
I think one of the key things that was early is like, you know, Tony and I went through several iterations of what worked for a developer, what made sense, right?
You always had to do that with any new product.
And one of the things that was a major unlock was our development server, our dev server.
So, basically, what our dev server is, is, it's our open source binary.
You run it on your machine, and it runs the entirety of basically Inngest on your machine, in a single command.
So, you can just grab the binary, or you can npm install it and, you know, npx it, and it spins up and actually gives you a UI.
So, the UI can allow you to, like, actually have some visibility into the events happening, the functions, what failed, what are the errors, easily retry, send test events, replay things.
And, like, that actual unlock was major, because mostly people are dealing with that situation, like you talked about Andrew, which is like, any local setup for any of these types of systems is painful.
People were, like, running Kafka on the machine, and then running, you know, how many consumers to replicate production.
It becomes cumbersome, right?
The fans start spinning.
We upgrade our Macs, you know, whatever it is.
And, you know, so we needed to try to make that less painful, right?
And, I mean, the feedback loop is pretty bad, depending on the programming language that you're using is, like, I have queues running, I have these different, you know, queues calls this queue, calls this queue updates this job state.
You know, how do you get visibility into that?
You're hacking away with it.
You're clicking on things, you're stopping containers, starting containers.
It should just be as easy as, you know, run a binary, and you can run on your machine.
You can drop a binary in CI and run integration tests on it.
If you want, you should be able to do that.
So, that was really the key.
And the same thing, the other half, that's the SDK.
And it needed to feel native in each language.
Like, it needs to feel like I'm just writing TypeScript or JavaScript, or I'm just writing Go or Python.
We don't want to, like, have, like, a new programming model that you might come up with, or might have a different way of working with things.
And I think that's really key, because developers can, like, look at a complex thing, instead of thinking about, oh, this is four queues, and one or two database models, and this S3 bucket for more state, it's just, I can read my code.
I can just read it top to bottom and understand what this is doing.
And I think that's what's really key.
And, you know, I think where we approach it from that dev server, an SDK example.
And that's, I think, one of the things that resonates with people when they just get it.
They're like, when I get the dev server running, it's just, I understand what's happening.
Like, I can see my code running in a way that, like, traditionally background logic, like, you never see it, right?
And that's what makes it harder, especially if you've been on the front end or you've been somewhere else.
You just can't visualize something that's long running for days or weeks.
But if you can put it in the UI, if you can make it interactive, it's a major unlock for devs.
Yeah.
There's also a few principles that we really care about.
We shouldn't really replace programming primitives.
We shouldn't really do things that are not idiomatic to your language.
That sucks.
No developer wants to do that.
And so making sure that we give you all the power of the systems, but really simply so that you almost only have to learn a few primitives without learning the underlying infrastructure is basically the whole goal.
So it sounds really cliche.
It sounds like business speak, you know, time to value, but genuinely do time to value.
Making sure that it's super easy for people to get it up as quickly as possible is key.
And friction sucks across everything that we do in life.
So we should make it easier for developers.
So yeah, I think you guys take the crown from the Vite dev server, where you can, like, see all the stuff.
You guys' dev server.
When I saw it on the page, I was like, wow, that's the selling point right there.
Like, I want this now.
Like you could build out this entire infrastructure on your own and you still wouldn't have this like core super useful piece of technology.
So good job there.
I'm definitely going to check you guys out because of that.
Thanks.
Thanks.
Super kind.
I think one interesting thing about that is like, you know, that was one of, you know, there's, there's arguments and conversations about shift left or shift right and whatnot and serverless.
I know you've listened to a couple episodes where you talked about that kind of like, you know, oh, we, we got a test on AWS, right?
Whether, you know, whether you're using some frameworks to do that or using the, you know, one of the AWS's frameworks that you can use.
But, you know, at the end is like the quicker that you can get things going on your local machine and you can control it and not have to pay for compute or whatever resources that you're using or the delay of feedback loop in that is like,
I just want to drop this in CI.
Why can't I, right?
Why can't I just run this locally program against it, run 10 different services?
Like, you should be able to do that.
So that in and of itself is, like, the less that you ask of devs, like, oh, you don't even have to have an account to start testing.
You can just start running it, right?
And I think that is such a key thing to just get people like, like Tony's, it's time to value in business speak, but it really is about like developer experience of like, if you get some joy out of it, like, oh, okay, this is easy.
I can work your, any developer with their tools when they get that like a little bit of, you know, kind of success.
So they understand it like that is a major unlock.
And that just kind of start to feed back into like, okay, I can be productive in this.
I can actually do something.
I can build something cool with this versus like, I'm battling the system for days or weeks because I've heard this is cool, but I don't get it.
Am I, is it my, is it a skill issue?
No, probably it's not.
There's a, there's a big theme that I see here in the like sort of next generation of like what I call like high, like high bar DX companies.
And kind of the first spot where this, this sort of pattern really stood out to me was Deno's KV.
So it's like, there's so many service like developer tool service providers that they kind of operate as a SAS.
You have a portal, you go in, you have to configure things, you know, it's like traditional SAS sort of set up,
even if you have an SDK, it's like, you can't just dive into it and like do something, you're always in this other world.
And then I was playing around with Deno KV and it's just like, it just feels like, oh, like the Deno filesystem APIs, it's just Deno.openKv().
And it just works locally.
And, you know, they did all this work to make it so it's like, oh, you have a SQLite back end if you're running it locally.
But if you're running it on, you know, Deno Deploy, there's this whole FoundationDB backing, and you can at least use the API transparently, you know,
and this is like, it gives you different, like different guarantees, you know, based on where you're running, but that transparency.
And I think that that notion of we're seeing this trend of like, we want to focus less on infrastructure and more on what we're doing.
And then going even further, like what you are doing is like, we want to focus just on what we're doing.
And like less on anything else is like, let me just start with the code and just start where I'm at, do the thing that I'm trying to do.
And then I can like upgrade into this larger service and get all these extra capabilities of the business.
Yeah, I think it's a phenomenal direction.
And I think like you and recent and Dino, and there's like a few companies that are just like at the pinnacle of this like developer experience journey,
which I think is hugely important for industry.
Yeah, it makes sense.
Like it's also like, really, in some ways, you point out something that's really astute, and it makes a lot of sense.
Some companies really don't want to do anything but focus on their entire product and business, no infrastructure, no mess.
And as soon as one team does that, it almost forces all the other teams to do that as well, otherwise they move slower.
And so it's almost become non negotiable in a way, because if you don't do it and somebody else is doing that, then you move slower than them.
And that's not good.
And so it's almost become a requirement in a way, which means not only has the table stakes of what we need to build to go more complex.
The way that we build it has almost had to change to keep up with how fast technology and products have to wait right now.
So it's super interesting to see.
It's super interesting to observe just like how this works on a individual level for developers, but also like a macro level for how we build tools as people together.
Kind of crazy.
Yeah, super interesting.
It's like, we've seen it over decades where the abstractions continue to improve.
We saw like jump 20 years ago, start with the cloud, and that was big time at that point.
Right.
And then we've seen other things and it generally like the abstractions always kind of move up, right?
Like, we're not managing Linux servers in our closet like we used to.
Sometimes you can, if you want to go for it, like, that's great.
But like Tony said, things move like there's more software devs, there's more software.
There's more of a demand on these things of like how can you get as much done, right?
And that's where those frameworks like Rails came out, right?
It's like, let's build something that we can be a little bit more productive in, but then things continue to evolve, right?
And I think that's even more cloud or the promise of serverless, I'll say, kind of came out of.
And, you know, in some of those situations, it's like, yeah, you kind of have to evolve.
The entire experience has to evolve and software devs are expensive, right?
Like, I don't want to, when I was the CTO buffer, I didn't want to have my team.
Like, we had taken a path off the venture path and we were a profitable company never going to raise again.
So we had to, like, I was trying to always improve the productivity of my team so that one, they're happy, all right?
So they're not like banging their head against the wall every day with their development environment.
But then also, too, is like, so they can like build cool things and build cool things that, you know, were helpful for our customers.
And there was always this like refinement of like, you know, we saw the DevOps kind of like movement happen and whatnot.
So, you know, there's always going to be a push and they're just kind of that wave that you talked about was is and is going to continue to evolve over time.
And if the bar will continue to be pushed, I think.
So, you know, where does it end?
I don't know.
You know, we don't know what things look like, but it'll just keep getting getting kind of like keep ratcheting up and new things will exist, right?
Because the demand will get higher on all of us, every engineer.
Another thing that I wanted to ask about is SDK design.
So in a product like yours, like having a good SDK is paramount because this is your product interface.
And it's a tricky thing.
I have a startup in a similar space doing a slightly different thing, but thinking about what the shape of the interface that you expose, what the SDK looks like is paramount.
And I've done a lot of research about like startups in similar spaces as ours, like how they all express that.
So I'm curious, it's like, what was your journey to get to the sort of SDK that you'll have now?
Yeah, there was a bit of a journey.
No joke.
Firstly, some observations.
Like, we also like to just try and learn as quickly as possible and observe and see what happens and try and find the fundamental facts about what we're building.
So we started with DAGs, DAGs, not what's up for developers writing code.
Just frankly, not.
We don't think about code in the serment DAGs and steps.
We think about code as procedural logic, and that's what we want to write.
And then secondly, like, one of our main requirements as a team and a company and a product for developers is that we think about what you are trying to build and the problems that you encounter instead of the fundamental queuing technology.
So how can we make it easy for you to build what you need to build, and then we'll handle all of the stuff that makes that difficult.
And that has a few implications, right?
You should be able to really easily define functions and say when they should run.
You should really, really, really easily be able to say, I want to set concurrency limits for each of my users because I run a multi tenant business.
And each of my users should be able to run 10 things at once with a global limit of maybe 100.
That has been impossible in every queuing system forever, which is absolutely bananas because multi tenant environments have been here for like decades.
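For reference, the per-tenant concurrency Tony describes is expressed as function configuration in Inngest rather than as queue topology. A minimal sketch, assuming a made-up event and an accountId field as the tenant key:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "my-app" });

// Stand-in for your own application code.
async function runImport(accountId: string) {
  console.log("importing data for", accountId);
}

export const importData = inngest.createFunction(
  {
    id: "import-data",
    concurrency: {
      limit: 10,
      // Expression evaluated against the triggering event; each distinct
      // accountId gets its own limit of 10 concurrent runs.
      key: "event.data.accountId",
    },
  },
  { event: "app/import.requested" },
  async ({ event, step }) => {
    await step.run("run-import", () => runImport(event.data.accountId));
  }
);
```

The global cap Tony mentions (say, 100 across all tenants) is layered on as an additional, wider-scoped concurrency limit rather than something you build yourself; the exact options are in the concurrency docs.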
And so focusing on simplicity is really important.
I think finally, before Dan, your thoughts, I suppose.
We mentioned this before, but we really, really, really do not want to mess with the sort of primitives that you expect from your own language.
We want to keep things basically as, there's a word for it, there's completely blanking right now.
We want to keep things basically as ordinary as possible.
So we shouldn't be really changing any of the fundamentals of TypeScript.
We shouldn't be adding anything crazy.
We shouldn't be changing the way that you write code.
We shouldn't be swapping out the way that contexts or your Go functions work.
You should be able to just use the language as is and make things idiomatic.
That's the word, on my word.
And keeping that is really, really important.
So we think a lot about what developers are trying to solve instead of what we are trying to solve for them.
And I think that is a really, really, really big mind shift difference.
Because every tool fundamentally can get sidetracked by we are solving this problem instead of our developers are solving this problem and we need to help them do that.
And that's huge.
I think to follow what Tony said, I think what is interesting is like, you know, one of the things that we also saw early on in the journey was
the one little area that was like kind of like people started speaking loudly and chirping about what we were building was
people trying to do this on serverless, like do anything, right?
Like run asynchronous work on serverless.
And all the platforms really didn't have great solutions for it.
And it kind of is like, Tony uses this phrase like serverless is the lowest common denominator.
If you can build it for serverless, then it can go anywhere.
And that's really interesting because serverless is stateless by default.
So ingest functions, the state is held externally.
So if your server crashes or your serverless function terminates, the state is stored and we resume your code and resume it from the point of failure.
And we kind of get you there with memoization, right?
So we take that approach: when you define your Inngest function, you don't need to worry about that.
What's happening behind it, under the hood.
But, you know, you can see that our code is open.
You can understand how it works.
You can understand how the SDK, how it works in the back end as well.
So, you know, with that, it's like, if you want to run on serverless, you need to kind of keep it as simple as possible, right?
You need to just implement it in the language primitives.
And what's cool is the side effect is that, like, you can run Inngest functions, like you can deploy Inngest functions to Deno.
You can deploy them to Vercel.
You can deploy them to a Docker container.
You can use bun.
You can use AWS, you know, Lambda.
So it kind of, and even, like, other runtimes that are, you know, different from typical Node or, you know, Go or whatever, like, you know, running it in Cloudflare Workers or something.
It doesn't matter because it's just, it's just JavaScript, right?
And I'm sure you could, we haven't really experimented with it, but I'm sure you can run it in like a lot of other places or use Wasm and things like that, which is pretty, pretty fun.
So anywhere that you can really run it, like, that's the point.
So it should be a small enough footprint.
And that's the approach we took with it.
But, you know, it doesn't mean that, you know, that there's not other things that we might experiment with in the future, right?
That we might do differently.
But it's, it's really working for devs and meeting them where they already are, because they're already deploying their code of this platform.
We say, keep doing that.
Run your ingest functions wherever you want.
It doesn't matter.
So that's why we also defaulted to, like, we invoke things via HTTP.
It doesn't mean that we'll always invoke via HTTP.
We won't, we'll, we'll, we'll have other methods, but it at least allows you to, like, have a workload in any cloud.
It doesn't matter and move it wherever you want or run it locally, run it in CI.
It doesn't, you know, anything, anywhere you want to run it is fine.
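Concretely, the HTTP model Dan describes usually means exposing your functions through a small serve handler in whatever framework you already deploy. A hedged sketch using the Next.js adapter; the import paths and function list are placeholders:

```typescript
// app/api/inngest/route.ts (Next.js App Router). Adapters exist for other
// frameworks and runtimes; the idea is the same everywhere: Inngest invokes
// your functions over plain HTTP wherever this app happens to be running.
import { serve } from "inngest/next";
import { inngest } from "@/inngest/client";        // your Inngest client
import { handleOrder } from "@/inngest/functions"; // functions defined elsewhere

export const { GET, POST, PUT } = serve({
  client: inngest,
  functions: [handleOrder],
});
```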
Yeah.
I think, like, actually, like all of this is a meta point thinking about the principles of what you do when you build your system.
And for us, one of the principles is like optionality and flexibility and making it easy for people to develop and not have to change things,
not have to spin up new info, not have to deploy a new set of services, be able to change clouds if they want to at any particular point.
Maximizing for optionality is, is really, really key.
And I think it's really helpful to figure out as a, as a developer overall.
And we talked to, like, everyone's talked about that for years, you know, code reuse or making sure that things are customizable.
We're thinking about interfaces and, like, Joe Armstrong, amazing RIP.
Erlang talked about this in, in contracts between different systems, being the most important piece, like just thinking about, yeah, optionality and making sure that things are flexible is really good.
We have a, we have a number of folks that are, you know, multi-language, right, which means that, you know, they're polyglot teams, whatever phrase you want to use there.
And they have long running complex processes that might invoke a Python function, right, that might be type type script that is calling a Python function.
So you're, you, we have these teams that can then compose their systems in however they really want.
So it allows people, if that's what they're using to be, you know, have purpose driven, like maybe your, your ML team is building with Python, that's fine, that's great.
You know, they're using certain things that are really great in that, in that ecosystem.
So why not just keep that there, write a Python ingest function and call it from your JavaScript code.
And it'll all just kind of work together, like you don't need to kind of think about how those two are interacting.
And I think that's just kind of interesting also, because, you know, it just kind of dives into what, what, what Tony just kind of kind of talks about as well.
Yeah, it's, it seems like writing your code in, in this paradigm really changes how you think about writing your back end code, like, or, like, as Tony was saying, we like to write things procedurally,
but it seems like if you like took this to its logical conclusion in your code base, you'd be writing lots of like really modular functions that you like interspers.
So like, what, what patterns do you think the framework stresses?
And then also when I was reading through the docs, you guys had a lot on idempotency.
Like, how does that all play into this too?
Maybe I can take the first one.
I'm just in terms of patterns.
Because it's similar to what we've been talking about previously, which is like making sure the SDK solves your problems instead of solves the underlying infrastructure problems.
I think one of the things that we try and do is map what happens in your system as closely to code as possible.
An example might be really basic, like appointment scheduling.
When somebody books something, run some, run some code in the future to remind them of their appointment, like two hours before their appointment, unless they cancel their appointment.
In which case, you don't want to send that reminder.
So you can add declarative cancellation configuration to say: when a cancellation event is received from this user ID or for this appointment ID, automatically cancel the function, without you managing any state.
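A rough sketch of that declarative cancellation, with invented event names and fields for the appointment example; the exact matching syntax is documented by Inngest:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "clinic-app" });

// Stand-in for your own notification code.
async function sendReminder(appointmentId: string) {
  console.log("reminding about appointment", appointmentId);
}

export const appointmentReminder = inngest.createFunction(
  {
    id: "appointment-reminder",
    // If a cancellation event with a matching appointmentId arrives while
    // this run is paused, the run is cancelled automatically, with no job
    // IDs or cancellation state for you to manage.
    cancelOn: [
      { event: "clinic/appointment.cancelled", match: "data.appointmentId" },
    ],
  },
  { event: "clinic/appointment.booked" },
  async ({ event, step }) => {
    // Sleep until shortly before the appointment, then send the reminder.
    await step.sleepUntil("until-reminder-time", event.data.remindAt);
    await step.run("send-reminder", () =>
      sendReminder(event.data.appointmentId)
    );
  }
);
```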
And that sort of stuff is really, really helpful for figuring out and sort of defining how functions work.
I think, yeah, largely trying to make things as simple as possible for the users is really key.
And that means you don't necessarily need to learn too many patterns up front.
You need to think about what your code is trying to express and then sort of map that to the primitives that are available.
And you don't really need to learn too many primitives because it's stuff that I think people have already been thinking about for a long time.
There are a few, don't get me wrong.
If you're doing something extremely high volume and you're sending like thousands of events per second, maybe you want to batch and group things to reduce your costs.
A couple of other users were sending, you know, tens of thousands of things per second and then introduced batching.
And the cost just went down by like a thousand, which is cool.
Awesome. Super helpful.
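The batching Tony mentions is, again, a function-level option rather than new infrastructure. A small sketch with invented names and limits:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "analytics-app" });

// Stand-in for your own storage layer.
async function saveClicks(rows: unknown[]) {
  console.log("saving", rows.length, "click rows");
}

export const recordClicks = inngest.createFunction(
  {
    id: "record-clicks",
    // Instead of one execution per event, collect up to 100 events (or
    // whatever arrives within 5 seconds) and handle them in one run.
    batchEvents: { maxSize: 100, timeout: "5s" },
  },
  { event: "site/link.clicked" },
  async ({ events, step }) => {
    // "events" is the whole batch: one bulk write instead of 100 separate runs.
    await step.run("bulk-insert", () => saveClicks(events.map((e) => e.data)));
  }
);
```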
But realistically, you, you can just think about what you're trying to build, how that works on a foundational level and then start running steps, which are code level transactions, just like a database transaction for code, really.
And you're, and you're good to go.
There are some patterns for sure that you get introduced to along the way as you get more advanced.
So, fan out where one event can do 10 different things in parallel or where one parent function can invoke many child functions or do event coordination to say,
I want to check for the presence or absence of these 10 events in the future.
Totally possible, all relatively easy because it's just a line of code.
Getting started should be simple.
I think the tough thing about building developer tools that's relatively abstract is you can never predict what developers are trying to build.
And so trying to make something that handles every case is hard.
So it comes back to the primitives that's available, really.
Idempotency, though.
It's a fun one.
Yeah.
Yeah.
And you know what?
This is a difficult one.
So we have a couple of ways that you can handle it, and just to take a step back:
idempotency, right?
And everyone pronounces it a little differently, you know, devs, when they learn it, whatever.
But basically it means like, I can run this code as many times as I want, and it'll have the same effect.
Right?
So if you've ever used like an upsert or written an upsert, that's basically it.
I'm not going to create multiple of these, right?
But you need to think about that when you're building systems, right?
The systems might trigger multiple events or multiple messages, and you need to be able to handle those things.
So at the basic level, like every developer building systems, especially distributed systems, like has to think about this.
But at the basic level, like, you know, we do this in our docs, and like as Tony talked about, it's like, you know, we try to give up the patterns or share the patterns,
but like, we don't know what you're building.
But here are the things that you can use that resonate with how you think in your head about how you're building the system.
So when people think about idempotency, they want to make sure that something does not execute multiple times, right?
So the ideal, like the easiest way to handle idempotency, is don't run the code again if you don't need to, if it has already been fulfilled.
So we offer a couple of ways, either at the producer level, really, when you're sending the event from your system for your back end, the user signed up, or the order was completed,
you can attach an ID on to there, and we will dedoop those things and we'll prevent it from running.
Or on the consumer side, on your function, if you can't touch that producer code, you can say, you know what, give me these unique keys, and let me take, you know, any data from the payload, any keys that might exist.
Basically every event is just a JSON object, and in that JSON object, which can be nested, there might be a user ID in there, and you might say, oh, this function should only run once per user ID.
You can just write, you know, event.data.userId, and make that your idempotency key.
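Both of the places Dan describes for handling idempotency end up as roughly one-liners. A hedged sketch, with made-up event names and fields:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "shop-app" });

// Stand-in for your own email code.
async function sendWelcome(userId: string) {
  console.log("welcome email to", userId);
}

// Producer side: attach an id when sending the event; duplicate sends with
// the same id are deduplicated before any function runs.
export async function publishOrderCompleted(orderId: string, userId: string) {
  await inngest.send({
    name: "shop/order.completed",
    id: `order-completed-${orderId}`,
    data: { orderId, userId },
  });
}

// Consumer side: key idempotency off a field in the event payload, so this
// function runs at most once per user within the deduplication window.
export const sendWelcomeEmail = inngest.createFunction(
  {
    id: "send-welcome-email",
    idempotency: "event.data.userId",
  },
  { event: "app/user.signup" },
  async ({ event, step }) => {
    await step.run("send-email", () => sendWelcome(event.data.userId));
  }
);
```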
So we just make those things kind of, kind of easy to grab, and very modular, so you're like, hey, it's just a key, you don't need to define it up front, you can define it later, it's the flexibility, right,
and those that same key pattern is what we use to do the multi tenant keys, the multi tenant concurrency that Tony talked about earlier, which is, you know, how do I, how do I make sure that only that each user does not take up too much resources,
just run this function, and just pass me this, this whatever key, it could be a user could be an account, it could be a bucket, it could be a tenant name, or some other other kind of keys so really like, we just provide some of these helpers,
and, you know, but we, we have to encourage developers to think about these things, you know, down like as they're writing their code, because, you know, when you write an ingest function, you know, we've kind of hinted at this, you define your code, and you define it into these steps,
and each step is an atomic unit, and each step is invoked and retried independently, and the state from each step is held by Inngest, so that upon failure of a step we can resume it at that point, or if your code tells us step.sleep for three days,
your function stops running, we hold on to that state, and we re-invoke your function, and the memoization resumes it at the point where your code stopped executing.
So with that, it's like each step you still like, need to think about it being idempotent or how am I handling idempotency in this situation so it's a tricky topic, but we try to at least offer some of these tools that allow people to maybe work around it or handle this in their own system in the way that they want to and that's
I think what's hard is, like, when you're designing the SDK, it's how do you anticipate the flexibility that someone can write, and combine maybe two things together, or compose some sort of compound key; you need to be able to be flexible, so yeah, it's a tricky problem, and no one really likes handling it, so,
but it's a bear, it's always going to be there.
Distributed systems are fun, though.
Always, always.
Speaking of distributed, I have a broader question, this had kind of popped up when you were talking about calling like flows across language boundaries, like somebody's calling something from Python that was written in JavaScript or vice versa.
I've seen, I've worked in multiple companies that had background jobs, and usually there's some configuration folder they've stuck somewhere with a bunch of YAML files defining these jobs, which, you know, thank God y'all aren't doing that.
A, a thing that I could see that could be tricky for your approach is that say you have a polyglot team and they're defining a bunch of ingest functions that do things on events.
It may be easy for them to lose sight of what will actually happen on an event as more and more of this code gets spread throughout different projects in their code base.
So it's like, oh, order created event comes in and now a bunch of things are happening and I don't really know everything that's happening from that.
So how do you provide users visibility into, you know, what would happen, or do you? And, yeah, is there any way that they know, like, what they can call, or, you know, what events they can invoke, or how they come into that flow?
Yeah, this is an interesting one, definitely.
I think like fundamentally, yes, once you, once you look at events and you sort of build an architecture map is totally possible to do that.
It really depends on where you are.
In the Dev server, you register individual apps and those apps run based off of based off of events.
Some people might have a huge system and they might have Java, Python, TypeScript or running together and it's unlikely that people will bring up all services when they're working on one individual component.
In the cloud, it is certainly possible to say, hey, this event triggers these 20 functions and this is what you should expect to happen based off of these things happening.
The Dev server possible to, it's possible to get that mapping and we can build an architecture mapping for you.
But it really depends on how far you go and how much you run locally as a developer working on your entire system.
So it's kind of like that is a true problem and a true thing that happens with events.
But it's also kind of nice at the same time because events offer this interesting abstraction and they decouple your code such that when something happens, you can just say, cool, hook into this particular event and run this function.
And that's super nice.
Actually, it's super nice and it is good that you end up with this problem of like, wow, I'm doing so much when this particular thing happens.
Awesome.
Instead of having to read maybe a 2,000-line controller or a big handler and be like, wow, a lot is happening here and we don't know what that is.
So it's kind of easier to describe the system in this way.
It turns out because you can just say, hey, show me all the show me all the subscribers show me all the functions that are interested in this event and build a mapping, which is super nice.
So, yeah, the decoupling is pretty interesting.
I think one interesting aspect of like a multi language polyglot company continues to be types.
And this is an almost heated debate amongst everyone, and Thrift is no longer in the conversation.
Yay.
But types, whether it's Protobuf, whether it's JSON Schema, OpenAPI, whether it's, you know, any of the other things that have been going on, type safety across multiple different languages is kind of insane.
And, you know, the Python typing stuff is super cool.
But at the same time, making sure that you that you do things in a type safe ways is super important.
I think that's that's probably the thing that that we end up caring about most in the future, because the system maps.
Yep, cool.
Totally possible to build these architecture diagrams, but doing things in a safe way, especially across versioning of events, is really interesting.
Because unfortunately, when you build something great, two months time, the requirements can change, you end up changing the event payload.
That means that things inevitably break or you need to handle versioning and cross disparate systems yourself, making sure that that is good before you go to production and that you run CI and it can fail if any of the events are unexpected is extremely important.
And that's a huge challenge in in many systems that we that we definitely want to solve.
And one thing that we do have is when you when you use ingest, you basically can see in a global sense, no matter how many apps you have, whatever language they are, you know, two, one, five, whatever cloud you can see where they're running and whatnot, you can see the global list of functions and what events they're triggered by.
And then the reverse mapping is possible so you can see all the events, the raw payloads that are coming through and what functions they trigger.
So the data is there.
And then we've built in already a, you know, from day one, we knew that if you're working with events, you should be able to at least have a standard field of where you are can attach a version, right?
We care if using Semver or some date string doesn't matter.
You can use that and then you can key off that in different systems, right?
And we encourage people to use.
We don't require them, but we encourage them to set these versions.
So before we wrap up the podcast, we like to talk about the future a little bit.
So what are you guys working on it ingest?
What's next? What do you want to ship?
If you can talk about it.
Oh, yeah, yeah, yeah, yeah, I can talk about it for sure.
There's a ton.
There's a ton.
Fundamentally, again, it all comes back to the principle of making developers lives easier.
Hmm.
And a lot of the stuff that we want to work on is cool.
The technology is, um, fun and fascinating.
But it doesn't matter if it's not making developers' lives better.
So, uh, a few different things upcoming. Connect, which is an alternative to serve: you swap out serve for connect.
Serve serves the API for handling Inngest functions; with connect,
we'll connect out to us,
um, and then we'll keep the TCP connection open for low latency, um, fast execution, um, smart.
And then we'll also do load balancing, because we can handle the orchestration on our side and we know how many connect servers you have running, and the capacity.
We can do all of the orchestration, um, which is cool.
There's something really, really awesome.
Um, alongside that, the observability stack is being completely refreshed and our UI is getting so much nicer.
Um, with new designers and folks that have been on board.
Um, that's super exciting.
They also added something to the design, which is cool.
Uh, so step.waitForEvent allows you to pause your functions and wait for a new event to come in.
And they will completely stop running, and then keep going when it arrives.
Inside that, there is an expression that you can write to say, I only want to run this function.
For example, an order gets created and the order is over $500.
The function will stop; as soon as an event comes in where it's over $500,
the function resumes with that event.
That expression is cool.
We use these expressions everywhere inside the UI so that you can search for function runs and events.
To get information.
Um, we're also adding a look-back to waitForEvent.
So you can be like, also look back on all the past events from the last hour to solve race conditions, which is amazing and super cool.
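The waitForEvent pattern with an expression, as in the over-$500 order example, looks roughly like this; the event names and the exact expression syntax are illustrative:

```typescript
import { Inngest } from "inngest";

const inngest = new Inngest({ id: "shop-app" });

// Stand-in for your own CRM / notification code.
async function notifySales(userId: string) {
  console.log("big order from", userId);
}

export const flagBigSpenders = inngest.createFunction(
  { id: "flag-big-spenders" },
  { event: "shop/cart.created" },
  async ({ event, step }) => {
    // Pause here, holding no compute, until a matching order event arrives.
    // "event" refers to the triggering event and "async" to the incoming one.
    const bigOrder = await step.waitForEvent("wait-for-big-order", {
      event: "shop/order.created",
      timeout: "24h",
      if: "async.data.userId == event.data.userId && async.data.total > 500",
    });

    if (bigOrder === null) {
      // Timed out: no qualifying order showed up within 24 hours.
      return;
    }

    await step.run("notify-sales", () => notifySales(bigOrder.data.userId));
  }
);
```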
Um, so that's fun and people want that because race conditions are a thing.
Um, the observability stuff is just so good though.
The new designs that our design has been working on.
Um, fantastic and cannot wait because, uh, making sure that developers can see what's going on and easily debug.
Super important.
So, um, we don't want to hold them back, especially when they're trying to debug things.
That's like our high stress environment, you know, and so making that look nicer, feel nicer.
Um, it's going to be so good.
There's some stuff in the background, which we can talk about distributed systems things, but it's more for us and less for our developers.
No one will know that we're putting it in place.
And so, uh, yeah, I don't know if we want to go into it, but it's really cool.
Yeah, there's a whole underlying like beast behind the system where it's like we run it multi-tenant.
Right. So there's, there's like a, a lot of fun things that we get to learn underneath about the system to make all these things that work.
Right. So the actual queuing infrastructure underneath that allows you to run them like multi-tenant queuing, or just we handle throttling at our level, right?
Rather than you handling it at the worker level, that stuff will just get better, faster, more, you know, kind of more scaling.
And that I think is the fun part of what we're, what we get to do internally in our team, because we get to handle the biggest distribution, which is great.
Do you guys use Inngest to build Inngest?
100%. Partially. Partially, yes.
Yeah, there's a lot of stuff that actually runs on it, like retries, sort of replays, a bunch of stuff, the cancellation, which are just straight-up Inngest functions.
A lot of the stuff that you do in the API turns out to be Inngest functions.
But unfortunately, you can't use the queue to build the queue, because we needed to build the underlying queuing system.
So we have to build the queuing system regardless.
And we'll do that for you.
Yeah, there's, there's a lot of fun.
We encourage everyone to dog food the product, right, at the company.
So engineers pick up things, we write all sorts of functions so that we can do things, like, you know, we ship stuff ourselves: we don't use Intercom, we use Inngest, and we use Resend
as our email sender, but basically, like, we build all our flows in that code.
And we dog food, all these different things.
And that I think is really, really cool.
And the more that you can do that, the more that you can understand what the developers, like their challenges are or the ergonomics, and you can get frustrated and then try to fix that and channel that.
And that is so important, right?
If you don't work on your own product.
That's also why we do a support rota with every engineer, right?
If your engineers are shielded from the actual engineers or the developers that are using your product, they won't know how to make it better, right?
And I think that that, like the separation, especially in developer tools is, is not doing anyone's service.
So you need those people close and dev rel or sales is not going to communicate all the frustration perfectly back to the engineering team that can solve those problems.
How do you bring them closer?
And that's, you know, doing the support rota, doing support in general, and the dog food.
So it's completely important.
So, you know, we always want to find new and fun ways that we can use it and test out our system.
So it's great though.
So one of the last things we like to ask to close out is just a directionally future facing question.
So I think for this, a good, a good question will be to ask just like about this space and industry.
So we talked about that transition from like, you know, these monolithic frameworks in the early like 2010s or whatever into the sort of like more cloud world, more serverless world and kind of we're here.
High class DX, really, really high bar for quality and software.
We're expecting more, we're expecting the ship more complex features.
Where do you think the like next phases of the industry will be like what are going to be the next important things and how do you see yourself playing into that if you do it all?
Such a good question.
From a developer point of view, I think it's particularly industry.
I think it's particularly interesting how abstractions built and in a way commoditise the underlying things beneath it.
So AWS commoditise infrastructure.
Very cool. Awesome.
Also databases. RDS is cool.
You know, neon is cool. Planet scale is cool.
All of these things are amazing.
And now people don't need to worry about the underlying infra.
Things you mentioned, Dino KV, super cool.
You don't need to worry about how you store data.
And stuff like us with Inngest sort of commoditises a bunch of underlying infrastructure with regards to queuing, events, and state, so that you can just write code.
And that in a way means that you don't care about the clouds or the infrastructure or the execution environment because they're stateless, which is super cool.
And you can move from ECS to Kubernetes to Lambda to Cloudflare to, you know, if you wanted to go bare metal, you can if you want to.
And I'm really curious what happens in the future because the trends all point in that direction of people building more and more abstractions.
Taking it to the extreme.
I suspect that as people build better and better DSLs over the top of what we've been doing natively, of which there are lots of really, really, really cool examples.
It will become easier to compose these systems to a point where it should be a few lines of code will get you so so far.
And that is wild, especially because the thing that people talk about nowadays, you know, with like a co-pilot being able to write your code for you is kind of wild.
And if you can abstract everything to a few lines of code, what does that mean for the future is a really, really, really interesting thought.
And I don't know if anyone can actually predict that because it's crazy.
So, yeah, from the product engineer, from the developer's perspective, I think like things get faster, easier.
And that is awesome across all fronts, back end front and, you know, the signal stuff that we talked about, amazing.
From an engineering point of view, so much is changing as well, fundamentally, in terms of like, you know, computation with stuff, low level libraries, like the things that people are doing with research right now in terms of how to make numbers faster or differentiation faster in AI, ML workloads as wild.
So, like, there's so many different directions. It's kind of crazy. Super cool. Everything is fascinating. You learn it all. It's like Pokemon.
Yeah, I think I'll add one, one thing there is like, you know, we've seen, I think there's different people approaching it in different ways.
But I don't think the developers want to fight with infrastructure, right?
And I think there's people clapping back against serverless.
That was, we have not seen the final definition of this, right?
We will continue to see that evolve in ways that might feel more like somewhere in between, right?
Like, I just need these resources. I need to define this. This is my capacity, etc.
Define it in your code.
And, you know, we've seen this also, like some people use the approach of like, you know, infrastructure as code, or, which is, you know, an improvement, but not quite there.
Like, there's other layers, and then there's framework defined infrastructure as you might hear, people use that phrase.
And that stuff, that's cool as well.
But, you know, there's layers and layers that you can go there.
Like, we don't want to be defining YAML.
We don't want to be writing.
I want this, I want three S3 buckets. Give me them.
You know, so I think those kinds of layers, just like you mentioned earlier, we'll continue to kind of drive that up, and who knows what the next thing is; like, naturally, if you're like, oh, if we keep just going this way, everything's going to be no code.
But I don't believe that.
I think we're still going to be writing hard things to solve hard problems.
And, you know, LLMs are cool and stuff, but they're based on language, which is inaccurate, like, you know, it is not complete.
So we will see further things that evolve around there about how to wrangle these things.
And so, and what other models might exist in the future.
So how do you build systems that have determinism around things that, as the providers themselves say, are not deterministic?
So there's a, there's a lot of interesting things that will, that will change, I think, with how we build software in the next decade.
And, but those problems will be like, those problems will shift, right?
It won't be on the infrastructure level at some point, right?
It'll be, it just keeps moving up.
So I think that in the future we'll be approaching problems in different ways.
Of course, as we did, you know, we can look back 10 years ago and make that same statement.
I'm saying it right now, UML will finally have its day.
Do you ever see, did you ever see any of the Bret Victor videos back in the day, like Worry Dream, is it Worry Dream?
Is that his website? Worry Dream.
Yeah. And he's got, like, the Smalltalk stuff, there's, like, a thing, like a great demo from, like, the 80s.
And I forget, like, this IDE, they're super integrated and you can, like, change variables and see the effects of it as it happens.
And like, dude, all of that stuff is so cool.
But I don't know, the direction in which we're going in is, it's just like straight up procedural code, just as is and low code, no code.
Or so hard to, so hard to get right as developers, you know, I think we're so entrenched in the way that we work, that it's going to be hard to change.
But that stuff is super cool.
Like he's done so much research, huge fan.
Yeah. What a man?
Well, that wraps it up for our questions on this episode.
Thanks for coming on guys.
This was a really fun talk about a whole land of queues that I didn't know existed.
And it looks like you guys have found a really cool solution to that.
So thanks for coming on.
Awesome. No, thank you for having us. Huge fans again.
Of all you both do.
Yeah. Thanks, Justin. Thanks Andrew.
Yeah, Tony Dan. Complete pleasure.
You're really pushing forward like a lot of DX in this space.
And it's great to see, not only because like it's needed.
This is like kind of an unfortunate area that's like nice to have a good experience around, but also because it's like, we just need to continue to push the DX on the floor across the industry.
So I love to see y'all as, like, a shining example, and I appreciate the work you're doing.
Thank you. Thank you so much.
devtools.fm:DeveloperTools,OpenSource,SoftwareDevelopment
A podcast about developer tools and the people who make them. Join us as we embark on a journey to explore modern developer tooling and interview the people who make it possible. We love talking to the creators of front-end frameworks (React, Solid, Svelte, Vue, Angular, etc), JavaScript and TypeScript runtimes (Node, Deno, Bun), languages (Unison, Elixir, Rust, Zig), web tech (WASM, Web Containers, WebGPU, WebGL), database providers (Turso, Planetscale, Supabase, EdgeDB), and platforms (SST, AWS, Vercel, Netlify, Fly.io).