Brendan O'Brien - n0, Iroh and the Future of Peer to Peer
Duration: 66m19s
Release date: 07/07/2025
This week we're joined by Brendan O'Brien (b5), founder and CEO of n0, the company behind Iroh - a peer-to-peer networking library that prioritizes reliability and "just works." Iroh enables developers to establish direct, authenticated connections between any two devices using only their public keys, achieving near 100% connection success rates. We discuss the pragmatic approach to P2P networking, why they chose to focus solely on the transport layer, and how Iroh is already running in production on hundreds of thousands of devices.
- https://twitter.com/b5
- https://github.com/b5
- https://github.com/n0-computer
- https://iroh.computer/
- https://www.iroh.computer/docs
- https://github.com/n0-computer/iroh
- https://github.com/n0-computer/iroh-examples
- https://github.com/n0-computer/awesome-iroh
- https://perf.iroh.computer/
- https://discord.gg/n0
- https://n0.computer/
The modern mode of shipping is shipping half an app. We ship front ends, and we call them front ends because there's a back half of the house that you don't own. And the core thesis here is: everything we're talking about boils down to shipping the whole app, putting the client and the server in the same place. And yes, that's hard.
Hello, welcome to DevTools FM, a podcast about developer tools and the people who make them. I'm Andrew, and this is my co-host Justin.
Hey everyone, we're really excited to be talking with Brendan O'Brien. So Brendan, you're known as b5 online, and I've actually seen you at a few in-person meetups, in Brooklyn and also in Berlin at the local conference, which was really fun. So you're the founder and CEO of n0. Or how do you say it, "number zero"?
We use both. It's where we started, but these days n0 is the main thing; we've got a lot going on.
Cool, cool, cool. So you're working on Iroh, this Rust-based peer-to-peer networking library. We're really excited to talk about that today. It'll be a technical episode, but I think it's going to be really, really interesting. So I'm excited to dig in.
But before we talk about that, would you like to tell our listeners a little bit more about yourself?
Totally, yeah. I go by b5 professionally; I'm more internet-native than I am meatspace-native. I'm very lucky: I've been working in and around peer-to-peer systems, and really buying into this notion of what we now refer to as user agency, for the better part of 10 or 11 years at this point. So it's been the pleasure and honor of my career to be able to build and work in open source on a daily basis, to try to push on the nuances of this stuff, to work in this space, to create and build in the open, and to make that a pillar of good values, which in this day and age feels like a breath of fresh air. That's what brought me here, to this work on Iroh.
What led you to build n0 and Iroh?
Totally. My co-founder and I met while working on the IPFS project, the InterPlanetary File System. We decided to start n0 really as a kind of answer to IPFS, where we realized that what we felt was a really good set of ideas deserved really hardcore engineering. And we thought it would be a step-function change if we brought Google Chrome-grade engineering to the problem. That meant a systems language, and cutting things down to a few core pieces. And we could only do that because the IPFS project had really opened up this opportunity, this incredible space for rethinking a lot of this, and that's what got us started. We began the project basically saying: we're going to build the most performant, soup-to-nuts networking library we possibly can, and we're going to try to do that in 2025, or in 2023 when we started, with as many of the lessons we'd learned from other projects like WebRTC and libp2p as we could.
So what was the moment where you decided this step change was something really important to you? Just to draw that out a little more, because peer-to-peer is pretty... I don't want to say niche, because obviously there's a long history of peer-to-peer applications, but it's not the kind of space most people jump into.
Yeah, I think that's absolutely fair to say. If you think about the amount of traffic that goes to regular client-server services, that's the vast majority of the internet. So peer-to-peer is absolutely niche, and it's absolutely a small space. My involvement started with some R&D work back in 2016, where we began to realize, coming out of this set of themes, that there are big swaths of the internet that are just gated by DNS. You can turn off a whole bunch of the internet, you can shut off a lot of access, just through control of DNS. And those are the addresses, the things you use URLs to describe and refer to on the internet. And we started turning that problem over in our hands.
And bit by bit, that became the key question for us.
Some projects do a little bit of everything, and Iroh takes a somewhat different approach. Can you dig into what that approach is?
Totally, yeah. I think, and this isn't to throw shade, because there's more than one way to do this... one of the things we talk about is having a modern library. We've seen this sort of trend over the last ten years of work in this space, I'd say, where... the metaphor I like is that you're trying to build an electric car. The world runs on gas and you want to popularize a new way of doing the same thing. You still want to be a car, you still want to be a thing that connects to roads and drives around. But you have a bootstrapping problem: to have an electric car you need charging stations, the equivalent of gas stations, infrastructure that doesn't exist yet. And then you need factories to build all of it, and you need all the parts that go into a car that you can't make yourself. So now you're building an electric car and a carburetor, which makes no sense. The metaphor falls apart when I start talking about internal combustion, but the point is that you've seen this boil-the-ocean approach for a long time: hey, we'll build a few of the pieces, we'll do the connections, and then we'll rebuild every layer of the stack above the raw connection, all of it in user space. We'll take on authorization problems, synchronization, and on and on. The thing that emerged internally as a slogan for this is "Nginx, powered by Postgres." You don't see that in other spaces; you don't expect the Nginx team to go build databases. That would be ridiculous. They build reverse proxies, and that's hard enough. When you want to reinvent the stack and people are fired up about it, it's very tempting to try to build the entire car. We don't have to do that. We don't have to boil the ocean. We can just build the one narrow thing, one very low-level, very reliable piece that does exactly one job: you have a public key, and you can dial a device by that public key. That's the connection layer, and that's the thing Iroh does. Everything above that is a layer that's open for the community to interpret, and a lot of people have jumped into that space, lots of others have shown up to fill it in, which is really fun and exciting to watch. But that's our whole approach: don't boil the ocean, just nail the first primitive we felt we needed, a really reliable connection, and keep trying to improve it. Everything else is left to the community.
OK, so Iroh is a project for making these good peer-to-peer connections easy. Can you give us a bit of a lay of the land, both of Iroh itself and of the rest of the ecosystem you've built around it? We'll get into specific aspects later, but I'd like a good overview of everything you're doing.
Absolutely. We describe Iroh as peer-to-peer QUIC. QUIC is the protocol underneath HTTP/3, and a lot of internet traffic now runs over QUIC. That buys Iroh a bunch of things we like. It's encrypted by default, it's based on standards, and we use the actual IETF specs as our reference. The project is really focused on giving you that abstraction: anything you want to be dialable, that public key is discoverable and dialable from anywhere. We also define a small spec that, among other things, lets you build what we call a protocol, identified by an ALPN, an application-layer protocol negotiation string. That's how we build all of our own protocols. Protocols are like HTTP middleware: you can compose them, reuse them, and we ship examples. We have examples for broadcast messaging, gossip, messages to all the nodes subscribed to a topic; you can build pub/sub patterns out of those kinds of primitives. And then there's a whole ecosystem of other experiments and integrations with existing things, like a protocol for moving content-addressed data around. The place you start in production is the endpoint; that's what gives you your node. You attach protocols to the endpoint, and that's how you compose functionality, the way you would in an HTTP app, to get the features you want to ship.
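To make the protocol-composition idea concrete, here is a minimal, dependency-free Rust sketch of dispatching incoming connections by ALPN string. The ALPN names and handler shapes here are invented for illustration; this is not Iroh's actual API.

```rust
use std::collections::HashMap;

// A handler takes the raw bytes of an incoming stream and returns a response.
type Handler = fn(&[u8]) -> Vec<u8>;

/// Toy router: maps ALPN identifiers to protocol handlers, the way an
/// endpoint dispatches incoming connections to composed protocols.
struct Router {
    handlers: HashMap<&'static str, Handler>,
}

impl Router {
    fn new() -> Self {
        Router { handlers: HashMap::new() }
    }

    /// Register a protocol under an ALPN string (builder style).
    fn accept(mut self, alpn: &'static str, handler: Handler) -> Self {
        self.handlers.insert(alpn, handler);
        self
    }

    /// Dispatch a connection based on the ALPN it negotiated.
    fn handle(&self, alpn: &str, payload: &[u8]) -> Option<Vec<u8>> {
        self.handlers.get(alpn).map(|h| h(payload))
    }
}

fn echo(payload: &[u8]) -> Vec<u8> {
    payload.to_vec()
}

fn ping(_payload: &[u8]) -> Vec<u8> {
    b"pong".to_vec()
}

fn main() {
    // Compose two toy protocols on one "endpoint".
    let router = Router::new()
        .accept("example/echo/0", echo)
        .accept("example/ping/0", ping);

    assert_eq!(router.handle("example/ping/0", b""), Some(b"pong".to_vec()));
    assert_eq!(router.handle("example/echo/0", b"hi"), Some(b"hi".to_vec()));
    // Unknown ALPN: no handler registered, so the connection would be rejected.
    assert_eq!(router.handle("example/unknown/0", b""), None);
}
```

The point of the shape is the composition: each protocol is an independent unit keyed by its ALPN, and an application mixes and matches them on one endpoint.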
Let's dig into the technical details for a bit. You mentioned that Iroh makes these very reliable connections between two peers. What's happening under the hood to make that work? I saw in the docs there's a reference to magic. What's your approach to how that happens?
Totally. Hole punching, for those who don't know what it is: the hole-punching problem comes from IPv4. What we have in our home routers is network address translation, and it means the device behind your router doesn't have a public IP address that's easy to dial. You have to punch a hole in the network's firewall, which sounds a little evil, but it's fine when it's your own connection. That's how we make these connections across the internet. Iroh takes a very modern approach to how we do that. It's also very pragmatic, in that the emphasis is that it should work 100% of the time, even when hole punching just isn't possible for you. If you're working in a corporate environment, it's often easy to see that it won't work there. In that case, you'll still talk over an encrypted relay. There are relay servers both sides can reach, and they can facilitate the connection. It's not direct anymore, but it works. The thing that's magic is that, combined with those networking techniques, Iroh will dynamically pick the best path through the internet for connections like that. You can imagine the failure mode that often shows up in apps: you start a request, you walk out of your house, and you drop off your home Wi-Fi. Iroh is built not to have that problem. And we decided not to lock you into our relays: if you don't want to use them, you don't have to, but we work really hard to guard against failure. What we really want is a bulletproof connection between the two devices.
Sometimes that's the server, right?
But it's useful in either context.
Yeah, I think a lot of people listening to this will probably have experienced
is like you're on your home Wi-Fi and you like walk out of range and it
switches over to like cellular network and you like lose connectivity or
you can't access something that you're previously accessing or you know,
whatever happens happens all the time.
So it's probably a much harder problem than it seems like to solve that.
It's pretty gnarly, to say the least. If you want to see a whole bunch of jaded engineers who have tackled this problem head-on, just jump into our Discord and they'll happily regale you.
So I want to dig a little bit more into QUIC. You're building on top of QUIC, which, like you mentioned, is what's under HTTP/3. QUIC is interesting. It's built on UDP, so it's not reliable the way TCP's reliability mechanisms are at that layer: packets can drop and that's okay, and you just send more of them. So why QUIC versus a traditional TCP plus TLS stack, I suppose?
Yeah.
I think this is my favorite question to answer, because if you look at the intent behind QUIC, it really mimics what we're trying to do in Iroh. The core sort of base statement for the work around QUIC, when the project started, was basically: we have this problem inside of TCP called head-of-line blocking. We get these situations where we can't actually have nice independent streams, and so one big packet of data can interrupt that whole TCP stream and clog things up.
And the reason that head of line blocking is a real problem is because the
TCP protocol has ossified.
We can't actually iterate on that design because middle boxes through
the internet have through convention figured out ways to rely on the
unencrypted parts of TCP packets.
And so we now have this world where we like straight up can't upgrade TCP.
It's not possible.
And so what QUIC sort of said from the jump was: all right, let's see if we can back up and redo this in a way that gives us the capability and capacity to iterate, which in practical terms means moving the congestion controller out of kernel space and into an encrypted packet payload.
And so yes, this is the part of the podcast that gets deeply technical.
But like, the core thing that QUIC is trying to do is say: hey, let's just encrypt it all. If it's encrypted, then the middle boxes can't touch it. They can't mess with it. And if they can't touch it or mess with it, then we can have more control over the way that data flows through the internet.
And that's the way that we're going to build a better HTTP in the long run.
And being totally honest, QUIC today is not as fast. I don't know if you've seen some of these conversations where TCP is compared to QUIC in a data center: QUIC is not as quick.
And the reason for that is you have all of these kernel optimizations of TCP.
And so there's way less conversation between the kernel and user space
when you're actually doing these connections.
And so yeah, you're going to get a faster connection.
But now in a latency-sensitive context, like pretend you're trying to get a phone to talk to a laptop in the wild, QUIC is way better today.
Right.
It's a better protocol, as we're talking about, because it's got these way better head-of-line blocking characteristics. You get this new trick called cheap streams, where you can now say: I want to have 100 different streams. And in a browser tab context, that's a huge deal, because that's all of the bundler stuff that Vite or webpack is spitting out for you these days.
And so if you want to do 100 requests in parallel over four connections,
that's going to be fine.
And nothing's going to sort of clog up any of those four pipes.
And that for us is like this spiritual alignment with what we're trying to do in Iroh at the higher level. It's like, we want to take the QUIC concept to its natural conclusion, which is like: great.
And now that should work across any device, anywhere.
And we should be able to do a whole swaths of the internet this way.
And we should pass that on to developers so they can build apps
that do cooler stuff in the wild.
Right.
That's the kind of core step function for why we were sort of all in on QUIC.
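The independent-streams point can be sketched in plain Rust: each stream reassembles its own packets, so a delayed packet on one stream never blocks delivery on another. This is a toy illustration of the idea behind QUIC streams, not a real implementation; all names are invented.

```rust
use std::collections::HashMap;

/// Toy per-stream reassembly: each stream keeps its own ordered buffer,
/// so a missing packet on one stream never blocks another stream.
struct Mux {
    // stream id -> (next sequence number to deliver, out-of-order holdback)
    streams: HashMap<u64, (u64, HashMap<u64, Vec<u8>>)>,
}

impl Mux {
    fn new() -> Self {
        Mux { streams: HashMap::new() }
    }

    /// Receive one packet; return whatever bytes are now deliverable
    /// in order on that stream alone.
    fn recv(&mut self, stream: u64, seq: u64, data: &[u8]) -> Vec<u8> {
        let entry = self.streams.entry(stream).or_insert((0, HashMap::new()));
        entry.1.insert(seq, data.to_vec());
        let mut out = Vec::new();
        // Flush every contiguous chunk starting at the next expected seq.
        while let Some(chunk) = entry.1.remove(&entry.0) {
            out.extend_from_slice(&chunk);
            entry.0 += 1;
        }
        out
    }
}

fn main() {
    let mut mux = Mux::new();
    // Stream 1 is missing packet 0 (delayed somewhere in the network)...
    assert_eq!(mux.recv(1, 1, b"world"), b"");
    // ...but stream 2 delivers immediately: no cross-stream head-of-line blocking.
    assert_eq!(mux.recv(2, 0, b"hello"), b"hello");
    // When stream 1's packet 0 finally lands, both chunks flush in order.
    assert_eq!(mux.recv(1, 0, b"hello "), b"hello world");
}
```

In TCP, that first missing packet would have stalled everything behind it on the single ordered byte stream; per-stream sequencing is what makes 100 parallel requests over one connection workable.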
Yeah, that's wild. QUIC is definitely not ossified. It's definitely moving fast.
I noticed there was one thing in the spec that's relatively new, but I feel like it's up y'all's alley: multipath QUIC. And I don't fully understand it. It's like a single connection, but it can happen over multiple networks. So are y'all preparing for that?
Are you planning on using it?
Do you already use it?
Like, what is the state of the spec?
I don't even know.
This is a great question.
Thank you for, thank you for this tee up.
So our team has been working on multipath for at least six months now. We have been grinding on it in the core of our project, and it is the biggest thing we are shipping with our 1.0 release, which is scheduled for the back half of this year, the last couple of months of the year.
In a nutshell, multipath is basically a QUIC connection that is aware of multiple paths through the internet and can make use of them both.
And so if you can imagine if you had an IPv4 and an IPv6 connection,
you could use them both and you could use them both to just get the bytes
as fast as possible from A to B.
And so this is the idea that a single logical connection can actually leverage
multiple paths at the sort of like sending packets layer through the internet.
Now, where this gets really interesting for us: we mentioned those relays earlier, right? That is one path data can take through the internet to talk between two Iroh connections, sorry, two Iroh endpoints, to use the proper parlance.
Multipath would allow us... basically, in the current version of Iroh, all of that work is happening outside of the congestion controller. So there's this thing that is aware of multiple paths, and it kind of figures out what's happening while QUIC is sort of doing its own thing, but QUIC isn't natively aware of this idea of multiple paths.
And so what we want to do is have multipath with a labeled path for the relay and for a direct connection, and ideally direct paths for both IPv4 and IPv6,
and have all of that flow through the congestion controller
so that it's aware of all of these paths and it can leverage them all.
And so there are worlds in the future
where you could be sending some data over the relay
and some data over direct connection, depending on what's faster
and have that just be like dynamically switching.
Where this gets really interesting is
there is nothing in the spec that says that we can't leverage
different types of definitions of paths.
WebRTC could be a path.
And so we could actually have devices that can talk to browsers
via a multi path congestion controlled system
and have that all just be dynamic.
Wi-Fi Aware could technically be a multipath path. That's this new spec that has emerged; if you've ever used AirDrop on an Apple device, that's the core of the Wi-Fi Aware thing.
It basically leverages the idea that all of our modern devices
have two Wi-Fi antennas
and you can actually skip the router
and just send data directly between the devices
without going through the router.
That's how AirDrop is fast
and that's why it's faster than other stuff.
That exists on...
Apple has a proprietary protocol for that,
but recent European regulations have forced that to be uniform
across Android and iOS.
And so there's actually a world where Iroh can be aware of
a path between an Android device and an iOS device
using a combination of WebRTC, IPv6,
or encrypted relay and Wi-Fi aware
and it will just figure out the right connection,
do the right thing and get you the fastest possible
sort of connectivity between those two devices.
And all of this is under an abstraction
you don't have to think about.
You just dial, you think in streams
and all of that gunk I just mentioned can just be stuff you heard was cool on a podcast
and you never have to worry about it.
And that's kind of the sort of core reason
why we really think multipath is like...
There's some like congruencies here
that are starting to line up now that we've been at the project
for a couple of years
and it's just really fun to sort of see things like this line up
where the spec is highlighting
this thing that we really wanted to do the whole time.
Implementing it is hard.
Opening and closing paths is really gnarly.
Doing it in Rust is even more fun,
but that's kind of where we're at these days.
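As a toy illustration of the "one connection, many paths" idea, a scheduler might pick whichever measured path is currently fastest. A real multipath congestion controller weighs far more than round-trip time; the path names and the RTT-only policy here are invented for the sketch.

```rust
/// Candidate paths a single logical connection might know about.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Path {
    Relay,
    DirectV4,
    DirectV6,
}

/// Pick the path with the lowest measured RTT (in ms), ignoring paths
/// that haven't been measured yet (e.g. a hole punch still in flight).
fn best_path(measurements: &[(Path, Option<u32>)]) -> Option<Path> {
    measurements
        .iter()
        .filter_map(|(p, rtt)| rtt.map(|ms| (*p, ms)))
        .min_by_key(|(_, ms)| *ms)
        .map(|(p, _)| p)
}

fn main() {
    // Early in the connection only the relay has been measured, so it wins.
    let early = [(Path::Relay, Some(80)), (Path::DirectV4, None)];
    assert_eq!(best_path(&early), Some(Path::Relay));

    // Once hole punching succeeds, a direct path takes over dynamically.
    let later = [
        (Path::Relay, Some(80)),
        (Path::DirectV4, Some(12)),
        (Path::DirectV6, Some(15)),
    ];
    assert_eq!(best_path(&later), Some(Path::DirectV4));
}
```

The interesting part of multipath QUIC, as described above, is that this choice moves inside the congestion controller, so the connection can even split traffic across paths rather than just switching between them.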
So one of the cool things about Iroh
is that it uses content addresses rather than IP addresses.
This harkens back to your IPFS days: an interplanetary file system isn't going to work with just normal IP addresses.
So what are some of the benefits
of using content addressable storage
and what have been some of the challenges?
Totally.
I should highlight that it's optional to use content-addressed storage.
Most of our streaming use cases don't actually use it
because streaming is really...
It's really hard to hash a stream.
I don't know if you've ever tried.
It's really, really difficult.
But content addressing is basically that.
You take the content that you want to move around,
you hash it and you refer to it by its hash.
That's content addressing.
And when you refer to something by its hash,
what you can do is this magical trick
when you get it on the other side,
you can calculate the hash of the thing you got.
And if they don't match,
then you know somebody lied to you.
They gave you data that you didn't ask for.
That's a really great primitive
for moving data around without needing to trust the origin.
And this is a lovely primitive
that allows us to say,
hey, we can get data from wherever we need
because we know the hash that we're looking for.
Now, hashes are super ugly.
I think they are not human readable things.
And so we have lots of... this is kind of where we're really excited to be shipping as a library: we don't think it's a win, UX-wise, for users to see hashes.
Even those of us who are used to using git day in, day out, we think in branches. We don't think in commit hashes.
Like really only in times of desperation
are we passing around the hashes.
And so we think that that's kind of the right approach
and we work a lot with projects
that are building on top of IRO
to say, hey, how do you think about content?
How do you label things?
And let's use that as the system to do pointers
to this really powerful primitive
of content address data transfer.
We have a really, really, really intense version of that built on the BLAKE3 hashing algorithm
that allows us to incrementally verify data
as it's coming in down to the kilobyte
and we do all kinds of stuff in there.
We can also take data and stream that in
from multiple providers.
So you get this classic like, hey,
I know five people have this data set I want.
Please cut up the request across five people
and just fetch it incrementally from them all.
Rebalancing that as you figure out who has what
or who has a faster connection.
And so like a lot of that is sort of handled for you.
And the idea in the Iroh ecosystem is you're just like: cool, let's use our blobs protocol for that. We'll set it up. And that's the same as it would be installing OAuth in an HTTP app. You just bring that in, wire it up, and get to work.
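The fetch-and-verify loop described above fits in a few lines of Rust. Note the stand-in hash: a real system like iroh-blobs uses a cryptographic hash (BLAKE3, with incremental verification), while this sketch uses std's non-cryptographic DefaultHasher purely to stay dependency-free.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in content hash. This is NOT collision-resistant; a real system
// would use a cryptographic hash such as BLAKE3.
fn content_hash(data: &[u8]) -> u64 {
    let mut h = DefaultHasher::new();
    data.hash(&mut h);
    h.finish()
}

/// Fetch-and-verify: we asked for the content address `want`, and some
/// untrusted peer sent `data`. Recompute the hash on arrival; a mismatch
/// means the peer sent bytes we didn't ask for.
fn verify(want: u64, data: &[u8]) -> Result<Vec<u8>, &'static str> {
    if content_hash(data) == want {
        Ok(data.to_vec())
    } else {
        Err("hash mismatch: untrusted peer sent the wrong bytes")
    }
}

fn main() {
    let original = b"the data set I asked for";
    let addr = content_hash(original); // this is the "content address"

    // An honest peer's response verifies...
    assert!(verify(addr, original).is_ok());
    // ...and a tampered response is caught without trusting the origin.
    assert!(verify(addr, b"something else entirely").is_err());
}
```

This is the primitive that makes "fetch from whoever has it, rebalancing across five providers" safe: every chunk can be checked against the address you already hold.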
Is that sort of on the same layer? You mentioned earlier that you're using public keys to identify connection points rather than IP addresses. Is the public-key dialing the same thing as connecting to a piece of content-addressed data, or are those different layers?
Distinct layers.
Yeah.
And so the connection is always between two public keys. In this world, because we're peer-to-peer, we don't traditionally have TLS certificates, right? You don't have the whole, hey, I'm a descendant of this origin domain, chain of trust. So instead, what you do is you just create a key and you say: cool, this is my key. And every packet is encrypted with that key.
And so, if you're familiar with mTLS, which is mutual TLS, this is kind of in that flavor of things.
And we actually do use TLS
under the hood, like the real version of it.
And we're using Edwards curve keys to sign things.
And so whenever you're having a conversation in IRO,
it is always not optionally end to end encrypted
using those keys.
And the nice thing is
those keys can then become primitives
for building authorization schemes.
And so you can say like, hey,
I'm gonna, I have this user profile
and this user profile possesses the following keys.
That key is my phone,
that other key is my laptop.
And now when you talk about them in that way,
every single byte through that connection
is authenticated, right?
And that's like a really exciting jump off point
for authentication.
But that is a lower level primitive
than the content address stuff.
That's up at the protocol level.
And so you would pick that. You'd be just as justified saying: I want to do video streaming instead, no content-addressed data transfer, just video streaming. That's totally a valid use case.
Gotcha, cool, that makes sense.
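A minimal sketch of the "keys become an authorization primitive" idea from above: a user profile is just a set of public keys, one per device, and because every byte on the connection is already authenticated against the remote key, checking membership is enough. Keys here are plain strings and the profile store is a hypothetical in-memory map; none of this is Iroh API.

```rust
use std::collections::HashMap;

/// Toy profile store: public key -> (user, device label).
struct Profiles {
    keys: HashMap<String, (String, String)>,
}

impl Profiles {
    fn new() -> Self {
        Profiles { keys: HashMap::new() }
    }

    /// "This user profile possesses the following keys."
    fn register(&mut self, user: &str, device: &str, key: &str) {
        self.keys
            .insert(key.to_string(), (user.to_string(), device.to_string()));
    }

    /// The connection already authenticated the remote key; who is it?
    fn authorize(&self, remote_key: &str) -> Option<&(String, String)> {
        self.keys.get(remote_key)
    }
}

fn main() {
    let mut profiles = Profiles::new();
    profiles.register("alice", "phone", "key-a1");
    profiles.register("alice", "laptop", "key-a2");

    // A connection authenticated as key-a2 is Alice's laptop.
    assert_eq!(
        profiles.authorize("key-a2"),
        Some(&("alice".to_string(), "laptop".to_string()))
    );
    // An unknown key gets no access.
    assert_eq!(profiles.authorize("key-mallory"), None);
}
```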
So maybe just like to form sort of a holistic picture
of what you're building here.
Can you sort of just walk us through what happens
when there are like two devices
that are connecting on IRO?
Like everything from like the initial handshake
to, you know, maybe establishing the connection
or like falling back to a relay.
Like what is the path?
Like what happens?
Yeah, totally.
I'll do the fast version of this
because the slow version will take an hour and a half.
But in summary, what happens is
you're gonna get some instruction.
We'll start with the most interesting case,
which is just I want a connection to that key.
And we'll call that Bob's key.
We'll use the classic Bob and Alice, well known in cryptography circles.
So I'm gonna be Alice, I'm gonna try and dial Bob.
Bob... somehow I got a key from Bob. Usually, and this is where we are not like a classic peer-to-peer project, we think it's completely valid to put Bob's key in a database and get it from an API endpoint. That's totally fine.
So you went, you did an API call and you said,
okay, the app told me to go dial Bob and Bob has this key.
You now need to discover the IP address of that key.
And so the first thing that'll happen
is some sort of discovery system.
IRO has a number of discovery systems built into it.
They're pluggable.
Some of them actually will tell you about keys
that it's found.
And so that's like MDNS.
So if you want to just do local offline only,
I only want to do local network connectivity.
That's our MDNS stuff.
And it will actually tell you like, hey,
here's a stream of keys that I'm finding.
But in this case, let's assume that the common case
where Bob is actually a device
that is not in my local house,
but is instead across the country.
And so what I'll do is I'll first go through
a DNS discovery service by default.
And so I will actually take Bob's key, I will go to dns.iroh.link, and I will look up a literal, actual DNS record. And that's the magic of mapping a public key to a dialable set of details.
In the background, before this ever started,
Bob went online, so Bob's endpoint connected.
And Bob put a DNS record on this service.
Bob signed a packet that said, hey, I'm Bob.
I've gone and talked to the relay server.
I know that my public dialing details are here
and my home relay,
the relay that I use if you wanna talk to me is this one.
And that'll be, usually those are geographically named.
So that'll be like US West.
And that'll be like, cool: Bob's home relay is US West on the public relay network. And Bob will put that in the DNS record.
My lookup as Alice will discover: hey, I wanna talk to Bob, Bob's IP address is X and Bob's home relay is Y.
I will then create a connection to the home relay
and send the first packets.
I will just start literally sending the data
that I'm trying to send to Bob.
And so that'll just go.
And so what is happening here
is we're trying to get the fastest time to first byte
that we possibly can.
And so that data is going to,
we're just gonna start sending data.
That first packet is always gonna flow through the relay
because the relay knows how to talk to Bob.
And so Bob's gonna get that packet.
In parallel, we are gonna start
interactive connectivity establishment,
which is the whole dance
if you've ever heard of stun, turn and ice.
If those concepts are familiar to you, I'm sorry,
you've had to learn about hole punching
and your life is a little bit worse for it.
If you haven't heard those concepts, don't worry about it.
What's happening in parallel
is a direct connection is being established.
And the moment that we have a direct connection,
packets will just start flowing over it.
And so this gets you a really nice fast
time to first byte through an encrypted relay
and gets you all of the niceness of, like: cool, once the thing switches over to direct, you can actually see it happen.
And so that's kind of the core
of how you start sending data to Bob.
Under the hood, that primitive that you're working with,
the thing that you're trying to send
is you're just opening a quick connection.
This is the same as if you were trying
to dial through quick.
And so you usually would say,
I want either a unidirectional or a bidirectional stream.
Bidirectional is gonna feel more like a web socket.
Unidirectional is gonna feel more request response.
And that's gonna actually,
it's not totally true.
Unidirectional just means I just want to shoot data at you
and you can't respond.
But yeah, that's kind of the corporate.
You're not thinking much about UDP.
You're not dealing in the lower-level gunk.
You can really tweak that and set it up to be lossy
if you super want to.
But the experience that you're getting at the high level feels way more HTTP-like: hey, I wanna take this data, I wanna send it over there. But just the low levels of HTTP, the parts where... actually, I guess TLS, to be super specific.
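Putting the walkthrough together, a toy model of the relay-first, upgrade-to-direct behavior might look like this. The states and names are illustrative, not Iroh's actual types.

```rust
/// Which route a packet takes right now.
#[derive(Debug, PartialEq, Clone, Copy)]
enum Route {
    Relay,
    Direct,
}

/// Toy connection: the first packets always flow through the home relay
/// (fast time-to-first-byte) while hole punching runs in parallel; the
/// moment a direct path exists, packets switch over to it.
struct Connection {
    direct_ready: bool,
}

impl Connection {
    fn dial() -> Self {
        // Discovery (e.g. a DNS lookup of the remote key) told us the home
        // relay, so we can start sending immediately.
        Connection { direct_ready: false }
    }

    /// Hole punching finished: a direct path is now available.
    fn hole_punch_succeeded(&mut self) {
        self.direct_ready = true;
    }

    /// Every packet takes the best route available at send time.
    fn route_packet(&self) -> Route {
        if self.direct_ready { Route::Direct } else { Route::Relay }
    }
}

fn main() {
    let mut conn = Connection::dial();
    // First byte goes out via the relay; nothing waits on hole punching.
    assert_eq!(conn.route_packet(), Route::Relay);

    // Hole punching completes in parallel...
    conn.hole_punch_succeeded();
    // ...and subsequent packets transparently use the direct path.
    assert_eq!(conn.route_packet(), Route::Direct);
}
```

The two-pronged start is the point: sending through the relay immediately buys time-to-first-byte, and the direct path replaces it as soon as it exists.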
Makes a lot of sense.
So you have to have that initial relay connection. My understanding, from just quickly glancing at the docs,
I thought that the relay connections
were mostly just for fallback
if you couldn't establish a direct connection.
But so that initial connection does happen through relay
and then it establishes the direct connection.
Gotcha.
Yeah. I mean, you can totally do connections with no relay at all.
So like when you discover peers locally,
that conversation never crosses the relay
cause you have a direct connection.
Like why would you?
But yeah, in like the majority of these cases,
I'd say, and now just to give you a sense,
we are very upset if ever more than a kilobyte
of data is sent over the relay
in the process of establishing a connection.
Like, that means something is wrong. So usually it's the first couple of packets, and then we've moved off to a direct connection.
Like this happens very fast.
But this is the modern internet.
It's really critical that initial data flows as fast as latency allows.
And so that's how we guarantee that, right?
We want this to be competitive
with existing, normal, non-peer-to-peer libraries,
which means sub-second connections,
which means none of this waiting
that you would normally experience.
And so that's why we always flow through the relays at first.
It's the reliable path,
and it's also what we use as a fallback.
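The relay-first, then-migrate behavior described here can be modeled as a tiny state machine. This is an illustrative sketch of the behavior, not Iroh's implementation; the type and method names are made up:

```rust
// Illustrative only: a toy model (not iroh's actual code) of the behavior
// described above. Traffic starts on the relay path so the first packets
// flow immediately, then migrates to a direct path once hole punching
// succeeds, keeping the relay as a fallback.

#[derive(Debug, PartialEq, Clone, Copy)]
enum Path {
    Relay,
    Direct,
}

struct Connection {
    path: Path,
    relay_bytes: u64,
}

impl Connection {
    /// New connections always begin on the relay: it's the reliable path.
    fn new() -> Self {
        Connection { path: Path::Relay, relay_bytes: 0 }
    }

    /// Called when hole punching confirms a working direct path.
    fn direct_path_ready(&mut self) {
        self.path = Path::Direct;
    }

    /// Called if the direct path dies; fall back to the relay.
    fn direct_path_lost(&mut self) {
        self.path = Path::Relay;
    }

    /// Only bytes sent while on the relay path count against the relay.
    fn send(&mut self, bytes: u64) {
        if self.path == Path::Relay {
            self.relay_bytes += bytes;
        }
    }
}

fn main() {
    let mut conn = Connection::new();
    conn.send(512);             // first packets cross the relay
    conn.direct_path_ready();   // hole punch succeeded
    conn.send(10_000_000);      // bulk data flows directly, no relay cost
    assert_eq!(conn.path, Path::Direct);
    // Mirrors the rule of thumb above: more than a kilobyte over the
    // relay during setup means something is wrong.
    assert!(conn.relay_bytes <= 1024);
    println!("relay bytes during setup: {}", conn.relay_bytes);
}
```

The assertion at the end mirrors the "we're upset if more than a kilobyte crosses the relay" rule of thumb from the conversation.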
But the thing about our connections
is they're constantly transitioning based on needs.
And I think this really highlights
the practicality of what you're doing.
We especially see a lot of conversations
on Bluesky and Mastodon or wherever
about what it means to be decentralized,
and a lot of just pedantic arguments
about how these things shape up.
And you've taken a very practical approach here.
It's like, okay, we want to facilitate
these direct connections in as many cases as we can,
but we also don't want to sacrifice performance to do so
or make it incredibly difficult for the user to use.
So I like the practical trade-off.
Yeah, I built on top of a peer-to-peer stack
for five years
and we had this one giant if statement
at the bottom of our stack.
And it was basically like,
hey, does P2P feel like working today?
If yes, let's use that.
And if no, just use HTTP.
And that's a nightmare, right?
Like, that's objectively not okay.
Like, you need to be able to use Iroh
in 100% of contexts.
Like, it needs to be a reliable communication channel.
Right? And if it's not,
then we're just creating problems for people.
We're not actually sort of helping move the ball forward.
And so it really is that if statement
that we've been trying to attack
for the better part of three years.
So we kind of covered it in this,
but I'll sort of ask this question anyway
and we can kind of transition.
So you're using DERP servers.
I think DERP stands for
Designated Encrypted Relay for Packets or something.
Tailscale designed that, or made that?
Yeah, originally.
We started there.
We no longer actually use DERP as spec'd.
And so it's been,
that's just more of a technical detail.
And it's a byproduct of us going all in on QUIC.
And so now we actually do everything
inside of the QUIC protocol.
And so like, yes, we have encrypted relays,
but like, yeah,
in many ways it just looks like a QUIC connection.
Yeah, yeah.
Yeah, I was gonna,
I guess I was gonna ask sort of how you balance
centralisation with reliability
in terms of relays,
but I guess we sort of covered that.
It's like establish initial connections
and then fall back if you absolutely need to.
Yeah, and there's one detail we didn't cover.
We made a very specific choice
to address the relays by URLs
and to publish the relay code
in addition to the Iroh client code.
So both pieces of the software
are available as open source,
and you can spin up your own thing as you'd like.
I'm with you, I sort of like
the decentralisation hand-wringing.
I'm kind of academically here
for an online mud fight,
but in practical terms,
we really just wanted it to function
before you have the debate about the whole thing.
And so at best Iroh is federated,
because it relies on these relay servers, right?
But you can deploy your own network.
And almost all of our big production users
have their own relay servers.
Most of them start off
on the public relay servers that number zero runs,
and then they sort of graduate to their own set
and build their own.
We have a service where we run that for people,
but it's also just open source,
and so we work a bunch with projects
that will do more exotic stuff,
like fuse a relay server
with some of their existing infrastructure,
or there are others who will hybridise,
where the public number zero network
is a fallback.
And so then you have true
federation across organisations in that sense.
And so yeah, we can still have the decentralisation
blathering match,
but at the end of the day,
we really focus on just:
can we get the thing to function?
Speaking of functioning,
you guys have boasted some pretty high connection rates.
I've seen 200K thrown around for active connections.
What have been some of the biggest challenges
getting to that scale?
And what surprised you about real world usage patterns?
Every single app that we have worked with
and every single developer team that we've worked with
has like a dramatically different story
around their network.
And it's been wild to sort of experience that.
Just to give you some examples.
Yeah, like, everybody we've worked with
in the streaming video space
often has just a silly number of concurrent connections,
like a bonkers number, because of the liveness property
of apps that have voice and video chat built into them.
We very quickly had to scale the relay servers
to be able to do a million concurrent connections.
Like, that was a prerequisite for working in that space.
In contrast, some of the more data intensive stuff,
we've been doing a lot of work lately
in and around the AI and like AI training,
training of actual LLMs.
And they really exchange like a lot of data
in these bursty rounds.
And so each node will be producing
on the order of tens to low hundreds of megabytes of data
that needs to be broadcast to everybody,
but in very specific moments.
And like, we ran into this pattern problem.
We were like, oh shoot, we had way too much data
actually flowing over the relays.
That one-kilobyte thing I talked about earlier:
these machines are so high-powered
that they were able to jam tens of megabytes
into the pipes before the direct connection
could even come up.
And so we just had a completely different
deployment pattern and usage pattern.
And then like also just like stuff
that showed up out of the wild
that we really didn't expect
as kind of these off-label use cases.
We're getting a lot of people using Iroh now
even in the cloud,
because it has service portability, right?
You just take that key and you move it around anywhere.
Some of that could be running on GCP,
some of that could be running on AWS,
some of that could be running on Hetzner
and you just talk to services by key.
And as long as you like,
you can get the throughput that you need,
it's a really convenient way to get portability in the cloud.
And we're all of a sudden like,
that's not like peer-to-peer,
like, we don't have hole-punching problems there.
These are publicly dialable devices.
But oddly,
a lot of the folks we're working with
do have hole-punching problems,
and a lot of it's configuration of the firewall
and your VPS and all of this
sort of infrastructure gack of DevOps
that Iroh really helps gloss over,
where now it doesn't really matter
if your app is being deployed to an employee's laptop
or to a cloud VM.
They both just sort of like get keys
and talk to each other
and that has made for like a really interesting
sort of usage pattern.
So we've seen some wild stuff.
Like it's been absolutely wild scaling this thing for real.
The last thing I will say on that topic though is
the coolest bit about peer-to-peer networks
is the scaling characteristics.
You can get this thing called sublinear scaling
out of peer-to-peer,
where it is genuinely possible to run
tens of millions of devices
on a Raspberry Pi as the relay server,
which is just like a bonkers thing
because you have to kind of invert the model of thinking.
The client is the server now, it can do server things.
And so every time you onboard a new user
you're adding a server,
you're adding capacity to your network
and when those people are talking
over direct connections,
that has no usage bill to you, the service provider.
And so as the dev,
you are able to get bonkers scaling characteristics
and that is just,
that is formally known as sublinear scaling,
which is the idea that as you add capacity,
the total cost per unit goes down.
And most of the time
when we talk to people,
when they think about peer-to-peer,
they're quietly, in the back of their head,
thinking of BitTorrent,
and they're like, well, that made my computer hot.
And we are well beyond that world.
We're in a world where the amount of traffic
that a single node in some of these apps will do
is smaller than the ads you would be downloading
in another app, right?
Where it's just like,
each node can do one request an hour,
and if you have a million users
online at a given point
and they all do a request every hour,
that's a million requests an hour your network can serve.
And so the capacity,
just the scaling characteristics
are completely different
from anything that you see in a normal sort of space.
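The back-of-the-envelope arithmetic behind that claim can be written out. The helper names are made up for illustration; the numbers are the episode's own (a million users online, one request each per hour, roughly a kilobyte of relay traffic per connection setup):

```rust
// Back-of-the-envelope sketch (illustrative numbers and names, not n0's)
// of the scaling argument: in a client-server app every request lands on
// the provider's servers, while in a peer-to-peer app only the tiny
// connection-setup traffic crosses the provider's relays.

/// Requests per hour the provider must serve in a client-server model.
fn central_requests_per_hour(online_users: u64, reqs_per_user: u64) -> u64 {
    online_users * reqs_per_user
}

/// Relay bytes per hour in a p2p model, assuming (as in the episode)
/// setup costs about a kilobyte and everything else flows directly.
fn relay_bytes_per_hour(new_connections: u64, setup_bytes: u64) -> u64 {
    new_connections * setup_bytes
}

fn main() {
    // A million users online, one request each per hour:
    let central = central_requests_per_hour(1_000_000, 1);
    // the client-server provider serves a million requests an hour...
    assert_eq!(central, 1_000_000);

    // ...while the p2p provider only pays for connection setup:
    let relay = relay_bytes_per_hour(1_000_000, 1024);
    // about a gigabyte per hour of relay traffic, total, for the
    // same million users, regardless of how much data they exchange.
    assert_eq!(relay, 1_024_000_000);
    println!("central: {central} req/h, relay: {relay} bytes/h");
}
```

The per-user cost to the provider stays roughly constant (one setup per connection) while the network's serving capacity grows with every user, which is the sublinear-scaling point made above.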
And that's where the real magic shows up.
And sadly, with peer-to-peer networks,
and I think this is part of why they kind of die,
you don't get to see that benefit
when you're like 10 nodes,
just starting off,
building it out from scratch.
Where that magic really shows up
is when you hit some of these scaling moments.
And if you're prepared for it,
you all of a sudden just watch your costs level off,
and they no longer go up.
And that's like a really wonderful moment.
But like the number of teams
that get to see that moment is quite low.
And so I think you don't see a ton of advocacy
for peer-to-peer in the wild,
because it's just really hard to get there.
Yeah, that's something to think about.
When I was first thinking
of the network topology,
I was just thinking, you know, traditionally:
oh, Bob wants to contact Alice.
Bob contacts the relay.
The relay helps establish a direct connection to Alice.
But something that I didn't think about
is, if there are multiple people in the channel
and someone else is already connected to Alice,
is it possible to hop between them?
I mean, are you doing these
sort of dynamic topologies,
or is it really just trying,
as much as you can,
to establish a direct connection
between the two users?
Two devices.
And this is where we have lots of space to iterate.
So some of our protocols do actually have
that hopping characteristic
where things will move through peers
and some of them don't, right?
We have leader election algorithms,
where it's like, hey, you want to run Raft consensus,
you want to pick one node,
you want it to be the sort of driver of the conversation.
And so this is the fun part
about not boiling the ocean,
and saying, no, let's leave that open as a design space.
All of those, even if you're doing hops between nodes,
those are still pairwise connections, right?
And so the primitive you need
at the end of the day is being able
to dial one device to another,
even if you're building a broadcast system.
I won't bore you with the academia,
but all of our connectivity on the internet
is based on this device talking to that device.
Actually,
if you get really specific, it's this port and that port
discussing something with each other.
But yeah,
I think you're sort of at the doorstep of
why we got into this game in the first place.
Yes, peer-to-peer is the way that we express our goal,
but we really think about ourselves
as a user agency company,
not as a peer-to-peer connectivity company.
Just to get a little meta for a second,
I don't know,
are you all familiar with the dead internet theory?
Just to quickly elucidate:
it's this idea that the public internet is kind of dead.
It's just full of robots, and people have made it
really difficult to know
if you're actually talking to a person,
or what the motivations of those people are.
And what we're sort of seeing,
whether you come from the very privacy-conscious side of things,
or you're just kind of wandering
through the internet these days,
wondering whether this is AI or not,
is that you're experiencing a bit of a loss of agency.
You're experiencing this like,
oh, you know, I can't
do all the things that I used to do,
and I've sort of lost a degree of control.
And a lot of that is really characterized these days
in terms of like, hey,
being more privacy-conscious and de-Googling
and all this stuff.
Sure, but like,
I think in practical terms,
what a lot of people are doing is they're just retreating
into the cozy web,
which is like Discord channels,
WhatsApp groups, Signal, and whatnot.
And that's you claiming agency back.
You're saying, hey,
I want to be able to take control
over who's part of this conversation
and how this works between people.
You're seeing this in Bluesky at the top level,
where it's like, hey,
we want to be really concerned
about who runs and controls
these public systems of dissemination.
But the reason we care about P2P
is our core thesis, which is that
if you want to take that conversation
to its natural conclusion,
those cozy conversations,
those Discord channels,
are really where the internet is going.
That's where we're all going to be sort of building
and working and iterating.
And so that means the world's going to look more like Signal.
Like, the internet isn't dead,
it's just changing,
we're changing the way that we talk to each other.
And so we need systems for building stuff
that looks and behaves more like Signal
and less like Facebook, right?
And for that,
we really need to come up
with this missing primitive,
which is:
how do I have an encrypted end-to-end connection
to the person that I'm talking to, right?
Specifically,
we think that's the step-function change
that we're trying to anticipate and be a part of.
And we think this electric engine
is just fundamentally missing from the networking stack.
We need to be able to do this stuff.
We need to be able to construct these apps
that know how to dial our friends
without ever dialing some home server, right?
And like,
what developers choose to do with that,
once it's there,
is up to them to decide.
How to use that technology is their call.
But for us, we're like,
this needs to exist and it needs to be robust
and it needs to be sort of on the level
that is comparable with HTTP, right?
And that's why
we're doing what we're doing:
we're really trying to facilitate
this set of tools that will allow developers
to build that next generation of apps.
It has these incredible scaling characteristics.
It's possible for some kid in a college dorm room
to ship an app to tens of millions of users
that will cost them on the order of hundreds
to thousands of dollars.
That's a really cool byproduct.
But it also means that that person could ship an app
that's competitive with big incumbents,
because it is both lighter
and ships more capabilities to the end user's device.
And so that's the internet I want to live in.
And so that's sort of the change
that we're trying to facilitate, where I can have
not necessarily more privacy, but just more control.
Yeah, strong echoes there with the local-first movement.
I mean, we ran into each other at the local-first conference,
and that's been a big conversation in that space:
agency and control
and connection. You know, "local first,
not local only" is the thing that I like to say.
It's still important to have connection,
but sort of maintaining some agency
in that space is important.
So I like your framing on this.
And I think a good transition here is like
looking at local first apps and edge computing.
What do you see or like the next big opportunities
for peer-to-peer networking?
So like what is the opportunity space here
in the next few years?
Yeah, and to build on that
sort of long-winded rant:
if you really zoom out and go back
to what the internet did for us
in the beginning, we initially got the ability
to associate according to interest instead of place, right?
Like, if you think about why the internet
was suddenly so popular: you could talk
to people that shared your interests,
not just people who live in the same town as you.
And I think now with this like cozy web conversation,
thanks Maggie, who is a friend of the local first movement,
Maggie Appleton is the author of the cozy web article
that I'm drawing a lot of this from.
I think the cozy web is actually the reinvention
of place on the internet.
I think we're in this space where we're now saying,
hey, actually we want some of that place back.
We want a context where we can say,
I know who I am in that space.
I know my identity and I know who I'm talking to.
And that can feel like a familiar watering hole.
And to me, that reinvention of place
and having that be driven through direct connections
is the real opportunity.
Is the moment where it's like, okay, can we ship apps
that like kind of feel more like chat apps
but can bloom in functionality a lot?
And can all of a sudden say, oh cool,
let's just flip some video and voice chat in there.
And let's just add something new,
let's jump out of this Telegram and into some game
that is only a part of this thing, inside of this cozy,
place-making space.
And that's where we really want to see
iteration and improvement.
I think that's where we see a bunch of UX
and some of these networking primitives
really interacting in interesting and meaningful ways.
And so we're really gung-ho on that space in particular.
And that is
straight-up local first, right?
Like, I went to Local-First Conf
because of this,
and came back from it super excited.
That community is talking a lot
and struggling a lot with sync;
that's a core challenge right now.
And in talking with a lot of local-first folks,
it's just like, well,
if somebody could ship us good peer-to-peer,
that would be nice.
And this is part of why we're
excited about this evolution.
There's a maturity now that wasn't here five, six years ago,
where people really wanted to invent
that whole stack themselves,
really wanted to do all the peer-to-peer networking themselves.
And it kind of didn't work well,
and it ended up being too big.
You spent too much of your innovation budget.
And so now I think you're seeing this modern approach
particularly sort of like typified
by the local first movement
where we are now back in a space of composition.
We're willing to work together.
And our team is just kind of hard-limited
to this: cool,
we're gonna do the connection part,
and everything else is gonna be collaborations.
And we have other people who are like,
hey, we're doing the sync part,
and the networking thing is not a problem.
And to me, that's really where we start to see that
accumulating to a benefit for the end user, in the long run.
And so yeah, I really think a lot of things rest on
these types of communities.
Local first is a great example of it.
Getting their act together,
shipping things that are usable at scale
by, like, the massive TypeScript community.
And really making sure that
we're putting good tools in people's hands,
that they're grokkable.
It's really hard to take P2P QUIC
and turn it into a TypeScript API
that's easy to consume, that works in the browser.
But that's where we need to go.
Yeah, there's this drama going on
in the gaming community right now
also, where it's like, gamers wanna be able
to own their games
past when game companies deem them non-profitable.
We live in an era where, like,
probably the last 20, 30 years of games, it's just gone.
Like there's nowhere for it to go.
Like it's either held up in a company
or it's a service that somebody
doesn't wanna run anymore.
So I see us moving towards this more peer-to-peer thing,
being able to compose these things
and ship those things.
Maybe it'll improve longevity for those things,
cause it makes me immensely sad
like when TikTok was gonna go away.
It's like, that's like a slice of culture
and like history that's just like gone
because of how we build applications right now.
And I think you've really hit the nail on the head, right?
Like, I think the modern way of shipping an app
is to ship half an app.
Like, you ship the front end, right?
And we call them front ends
because there's a back of house that you don't get, right?
And the core thesis around peer-to-peer,
like, forget peer-to-peer,
all we're saying is: ship the whole app, right?
Like, put the client and the server
in the same binary.
Yes, it's hard.
You don't get the web platform development experience
unless you wander through a whole bunch of wasm gack,
and we're working on it.
But like at the core,
the game server is like such a great example.
We have a couple of projects
that are sort of like really building on top of this.
I would shout out Jumpy,
which is from the Fish Folk folks
and the Bones game engine in Rust.
And recently, a community member contributed
a Godot engine plugin.
And like this, the core thesis is just like,
I don't want the game server to go away.
I want to, when I buy stuff,
I want to own it and I want it to function.
So that, like, hey,
you saw this in the gaming community:
can I have LAN parties back, right?
Like, you remember how we just can't have LAN parties anymore?
Which is ridiculous,
that we can't all sit in the same room
and play the same thing, right?
And we have a whole bunch of people
in our community doing this.
Like, someone just shipped a UDP socket layer
on top of dumbpipe,
which is this socket tool that we've built,
specifically so they can play Counter-Strike
with their friends.
Like it's just like,
that was what they wanted.
And they're like,
can we have all of this machinery
just fake a UDP socket
so that I can play Counter-Strike with my friends?
And like, yeah.
It's weird that a whole generation has come up
only consuming half an app, right?
Like, yeah, it's a bonkers notion.
So, building P2P apps is hard.
What advice would you give to developers
that have been burned by this type of technology in the past?
Oh, the jaded folks who have said,
don't do it that way, you'll never make a dime.
You are right to feel that way.
You are right to be upset
about the last time that you tried this stuff.
But this time it works, I promise.
But I think it's this boil-the-ocean thing
that we really have to get past, right?
Like, there are use cases for Iroh
that I wouldn't recommend today,
because we don't have robust protocols built to do those.
Like, I wouldn't try and kick off
a really great streaming video thing
unless you're really prepared
to spend some innovation budget today.
A lot of the streaming video projects that we work with
have really sophisticated dev teams
doing all the video stuff,
and they just need the networking layer.
And so I think
we will work more directly with the community
to signal
when we think certain use cases are opening up.
Right now, if you wanna transfer files,
if you wanna build chat apps,
if you wanna build any of these things,
we are there; we are in a great place.
You wanna do voice calls, like, just straight voice calling:
that's kind of the latest thing
that's making major progress,
that you could probably cobble together.
But I think there are three layers
that everyone needs to be aware of.
There's us; you know, actually, there's four.
There's the operating system people,
the syscall development folks.
We don't talk about them much, but they deserve a shout-out.
On top of them are the primitives people,
people who build databases and networking tools.
That's us, but that
also includes the entire database community.
And then, for us in particular, one layer up
are the middleware developers.
These are protocol developers.
These are people who are going to take
our bonkers Rust stuff,
and usually they're the ones putting
grokkable, workable APIs on it
and getting you up into a higher-level language.
And so I don't expect that people working with Iroh
in the long run will be touching Rust.
My hope is that they're actually touching TypeScript.
And the way that they're touching TypeScript
is that a really great protocol developer has said,
well, hey, I've identified a whole bunch of people,
and they all want to do local-first development.
They want state sync.
They probably want to do a little bit
of video and audio calling,
but they need a really good auth story.
And so you're getting these stacks that feel a lot like,
you know, the TanStacks,
the Next.js's of the world,
that have thought through the use case.
Or Supabase is another example,
where it's like, we have an API drawn up
that is really tailor-made to what you're trying to do.
And that's where I think most developers
should be consuming Iroh.
You know,
eventually, if we do it right,
it'll just say, like, "Iroh inside."
I'm sure there's some trademark around that,
but, you know,
it's a similar idea, where
you'll know that there's Iroh down there somewhere,
and some really great protocol developer
has done the work of taking Iroh's primitives
and turning them into something really useful for you
higher up the stack.
If you're the type of person
who's comfortable writing distributed systems,
who is interested in Rust, yeah,
you should be one of the sort of small thousands
of people that are working directly with us.
And we are constantly shipping tools
for you to do invariant testing and metrics collection
and checks and balances on like,
hey, how well does this thing work
when there are 5,000 nodes and 20% of nodes churn
and like a lot of these really difficult distributed systems
problems that we cover in great detail
with the protocol developer community.
But I think if you're looking
to take advantage of this stuff,
you should just start looking for frameworks
that include Iroh as a dependency.
And those are the ones that you should try to consume.
I'm not recommending that everybody stop and learn Rust
so that they can take advantage of some of this
fanciness, right?
I instead think that we should be focusing
on this composition over a boil-the-ocean approach.
And so there's lots of really great projects
that we are constantly working with
to deliver some of this value, too.
But my hope is that at maturity,
you're not having to
consume Rust directly.
It's a great language,
and it's critical that we write Iroh in it.
Like, Iroh runs on an ESP32,
and that's only because there's no garbage collector
and the whole thing is memory-safe.
And we put a lot of time and energy
into making this a really robust library.
But consuming that, like, ah, it would be a lot nicer
if you had just HTTP fetch as the API
that you worked with.
And so that's the kind of thing that we wanna see.
And so you really just gotta pick the altitude
that you're working at.
If you're a distributed systems engineer,
we wanna talk to you yesterday.
If you're an app developer,
go and talk to your framework
and say, hey, do you do peer-to-peer?
And if not, how would you go about doing that?
You know, 'cause for a whole bunch of them,
it's not even possible.
They'll rely on specific primitives
and concepts that are just inherently centralized,
which isn't bad, right?
We can pluralize those things.
But there are places where you'll get,
like, 10x the benefit
because you're building on a CRDT,
and all of a sudden CRDTs just naturally work everywhere.
And they have this lovely property, and great,
you get a win because you picked the right tech stack
and it hit the right maturity.
But yeah, so that was kind of long-winded.
I don't know what else to say.
That's great, no, that's fantastic.
I have one question that I wanna ask
that we normally talk about.
So there's a lot of components of Iroh that are open source.
And we talk to a lot of founders on the podcast
about building sustainable open source businesses.
And I kinda want to talk to you about the same thing.
And there's another thing
that I think you overlap with.
You said you're at the infrastructure layer,
and you consider the protocol people
to be a layer above.
But a lot of the protocol companies
that I've seen really struggle to survive.
And I think, you know,
given your past working in the P2P space
and looking at IPFS and stuff,
there's a lot of dead companies in that space.
So how are you thinking about the business of Iroh?
And like you're building all this great technology,
but how do you make it sustainable?
And how do you, you know,
ensure the long-term viability of this thing
that we obviously really deeply need?
I'm so glad you asked.
Some people like don't have the guts
to like ask the money questions.
My job is largely the money questions.
My favorite anecdote:
there's an investor at Redpoint
who wrote a blog post.
I can't remember her name,
but I would attribute the quote to her,
that doing an open-source startup
is like trying to hit two home runs in the same company.
Right?
You first have to get the open-source project
to be successful and you have to get it adopted.
And then you need to figure out a monetization scheme.
And, like, I've been having this lovely debate
with PVH from Ink & Switch,
where he had a great rant
at the local-first conference:
the metaphor of building
an open-source project these days is,
you bake a bunch of pies
and you wander down to the local farmer's market.
And you put all your pies on your stand,
and you put up a sign that says $0,
and you give them all away.
And then you go home and you say,
well, why didn't I make any money on my pies?
Right?
Like, everybody's very happy.
You're giving it all away.
But, you know,
I had a business card stuck on my cart
that sort of said,
oh, I would do services for you if you asked me nicely.
But I never bothered to show anybody that.
And I think that's what
both of these things,
the two home runs and the metaphor
around naming a price
for the value you create in the world,
point at: the necessity of really
getting this dual modality right.
In open source, we fundamentally are exchanging value
in a way that does not pass through dollars, right?
We don't, we want to just give our software away.
We, that's the way that we feel creates
the most benefit in the world.
But we still need to turn our code into food
and to be able to do that,
we have to pass through dollars.
And so with a networking stack,
that challenge is quite acute,
because if you can imagine for a second
a, like, commercial license for HTTP:
you're kind of dead in the water, right?
Like, no one's gonna put that in their stack.
And so we've taken a different approach
that is much more around the,
hey, peer-to-peer is really hard to hold right.
And so we have a complementary service called N0DES,
at n0des.iroh.computer.
And for 5,500 bucks a month,
you get a direct connection with the core team,
and we will make sure that you're holding Iroh right.
We will spin up a network for you.
So all the relays run for you.
And we will do blue, green deployments for you.
We'll collect anonymized metrics for you.
And we will analyze that traffic
and understand where it is and isn't working.
And we'll work directly with your teams
being like, hey, this new protocol just hit v0.90
cause we just did this little release.
You should bump to that, you should change this,
you should hold this differently.
And that, we have meaningful revenue
and a meaningful growing customer base from that.
The products are distinct.
Iroh is an open source project.
All aspects of it are given away,
including the relay servers
that we're running on the commercial side.
On the commercial side,
you are getting direct access to the team
that is building it.
You are talking to us in a way
that we worked really hard to get that number
into a uniform box where we're not sizing you up
based on the kind of company that you are
and trying to charge you something different
based on who you are.
And that, for us, has been really hard to get right.
It took us a long time of actually just doing something
that looked a lot more like high powered consultancy
where we were working directly with companies
just to figure out lighthouse customers,
hey, you're trying to deploy this, how does this work?
And we've been scaling down the cost of this.
In the next six or seven months,
we're gonna launch something that looks a lot
like a Vercel for peer-to-peer
where we will take your code,
we will build your code,
we will test your code in a simulated distributed system.
We will enforce all of those invariants for you.
And then when your code passes and you merge to main,
we'll deploy that around the world.
And those will be running
and those will form the backbone
of your applications network.
And we're gonna do that sort of
for on the order of 50 bucks a month.
And the theory there is,
hey, we should really be kind of like leaning on this idea
of being able to deploy these services.
Now that product is really pointed directly
at the protocol developer community.
It's not necessarily directed at the,
like I want to deploy an app community.
These are people who are really familiar
with the notion of invariant testing.
They want to sort of assert that like,
regardless of the amount of churn,
the following should never happen
or like the pathological cases,
the network should never do X, Y, and Z.
It's really complicated, right?
Oh, we have built systems for testing that internally.
And so we're just taking all the tools we've built
to make Iroh, and we're pushing them up the stack.
And we're making like, okay, cool.
We do all of that today through collecting telemetry
and defining the invariants over telemetry.
And so that has been like a trick that we've learned
that we can sort of now fit into this uniform interface
that will get you a push to deploy style feel
and will hopefully allow us to iterate a lot faster.
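The "invariants over telemetry" idea described above can be sketched roughly like this. All names, event shapes, and the sample invariant here are illustrative assumptions, not part of Iroh's actual tooling: a simulated run with churn emits a telemetry log, and invariants are asserted over the whole log afterwards rather than against live state.

```python
# Hypothetical sketch: check "the following should never happen" properties
# over a recorded telemetry log from a simulated distributed-system run.
from dataclasses import dataclass, field

@dataclass
class Event:
    time: float
    node: str
    kind: str            # e.g. "connect", "disconnect", "deliver"
    payload: dict = field(default_factory=dict)

def invariant_no_delivery_after_disconnect(log: list[Event]) -> bool:
    """Pathological case: a node must never deliver data after disconnecting."""
    disconnected: set[str] = set()
    for ev in sorted(log, key=lambda e: e.time):
        if ev.kind == "disconnect":
            disconnected.add(ev.node)
        elif ev.kind == "connect":
            disconnected.discard(ev.node)
        elif ev.kind == "deliver" and ev.node in disconnected:
            return False
    return True

def check_invariants(log, invariants):
    """Return the names of all invariants that fail over this log."""
    return [inv.__name__ for inv in invariants if not inv(log)]

# Simulated run with churn: node "a" disconnects, then erroneously delivers.
log = [
    Event(0.0, "a", "connect"),
    Event(1.0, "a", "deliver", {"bytes": 64}),
    Event(2.0, "a", "disconnect"),
    Event(3.0, "a", "deliver", {"bytes": 16}),   # the violation
]
failures = check_invariants(log, [invariant_no_delivery_after_disconnect])
print(failures)  # prints ['invariant_no_delivery_after_disconnect']
```

The appeal of this shape is that the same invariant functions can run unchanged against telemetry from a simulator or from anonymized production metrics.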
One of the biggest things that we see
in peer-to-peer development is just like,
it's slow as all get out.
Like, I love maintaining the iroh.computer website
cause I just merge to main and it's live.
And that's the iteration cycle, right?
And that's just incredible.
Where we wanna be in a year
and where we need to be in a year is you push to main
and that is live, deploying to million-strong node networks
where it's cool, that's an update.
And now we've fixed some pathological use case.
And we need that iteration speed.
And so we're working on that as quickly as we can
to sort of run that to ground.
All of that is gonna be like,
that's all in our view convenience, right?
Like, we're not taking away any capability from you.
We're not saying, hey, you can't use Iroh for X, Y, and Z.
We're not taking away any protocols,
but we are saying like, hey, yeah,
if you want the expertise
of the team that built this, expressed as a product,
okay, you're gonna pay us a monthly fee for that.
We're gonna run infrastructure for you.
And that sort of like coupling of infrastructure,
automatic deployment to testing
and like all of this sort of like stuff
that frankly I talked to a lot of developers
and they say that they will do it
and they just don't like doing it
because it's a lot of work to like,
you know, go in and deal with all that crap.
They'd much rather write code.
We're building tools that do that for them.
So far, that's been successful.
Honestly, it's taken us a long time
to sort of get to that space
and like cracking the sustainability challenge
in an open source project has been,
is the single like hardest thing that I've worked on
as a founder in the open source world.
We always knew we were gonna do open source.
We're always like just hard committed
to like, we're gonna use a very, very, very permissive license
for all of our code, and we just sort of worked backward from there
and it's like, okay, great.
Now we have to figure out revenue.
And like, I think that may sound familiar.
It may not be an open source specific problem,
but yeah, we're very happy with where we're at
in terms of the product offering
this complementary N0DES service
that rolling out has been a lot of fun.
And I don't feel like we're taking an extractive relationship
with the community.
To me, that's always the litmus test.
Like, does it feel like we're actually
like taking something away?
Like we're taking some like piece
that like would actually be really useful
if everybody had it, but we're just like
keeping it on our side of the fence
to make sure that people pay us.
Yeah, we're not doing that.
And I don't feel like we're doing that.
And so I'm quite happy with where we've landed.
The high road and also the hard road.
Dude.
Yeah, but yeah.
That being said, I have kind of changed my tune
a little bit when I talked to folks
these days developing open source.
I think LLMs have really changed things
like as open source maintainers,
we are getting different challenges now.
But we get a lot more issues sent to us
from people who are talking to an agent
that has hallucinated something
that now becomes our problem.
Or even worse, they've like filed a pull request
that claims to do X, Y or Z
and we can't actually verify that it does that
without a careful read.
And that's creating way more like signal
than we have time to sort of ingest
and work on as a team.
And so I think that you will,
I think that even open source is like due
for some contours and changes
as a culture of practice in the new world.
I don't know what that looks like yet.
I don't have any answers there,
but we're seeing it, and we're keeping an eye on it.
Love to talk about it in a year
once we've learned something,
but I don't know if that's helpful.
Well, that wraps it up for our questions.
It seems like you've built a really cool thing
and the services you plan to offer
in the path to 1.0 also seem very exciting.
So thanks for coming on the show and talking about it.
Much appreciated guys.
Thank you so much for having me.
I really appreciate it.
Your podcast is lovely.
Thank you for all you do.
Thank you so much.
Yeah, and Brendan, y'all do fantastic work.
I'm excited to see how Iroh develops
and hopefully I can play with it soon.
But yeah, thanks for coming on
and yeah, good luck with 1.0.
Thanks so much guys. Have a great day.
Next episode:
Les infos glanées