Juri Strumpflohner - NX, Nrwl

Duration: 61m28s

Release date: 29/04/2022

Join us this week with Juri Strumpflohner, the head of DX and Europe Engineering at Nrwl.

Nrwl maintains Nx, a next-generation build system with first-class monorepo support and powerful integrations.

Join us as we dive deep into the feature set and learn all about how they make builds lightning fast.


Tooltips

Andrew

Justin

Juri

From a bird's eye perspective, it's not about having separate repositories,
where each team works in isolation, but rather having one single repository,
where multiple teams can work inside the same repository.
Hello, and welcome to the DevTools FM podcast.
It's a podcast about developer tools and the people who make them.
I'm Andrew, and this is my co-host Justin.
Hi everyone, our guest today is Juri Strumpflohner.
Juri is the Director of Developer Experience at Nrwl
and joins us today to talk about Nx.
Juri, would you like to tell our listeners a little bit more about yourself?
Yeah, hi. As you said, my name is Juri.
I'm from Northern Italy.
I've been with Nrwl for a good two years already.
Most recently I've taken over the role of Director of Developer Experience,
so I can fully focus on making Nx and everything around it better
and then interact more with the community, which I'm really enjoying.
That's awesome.
What is the role of director of developer experience?
What does your day-to-day look like?
I guess it probably depends a lot from company to company.
We are around 30 people, but still the whole DevRel section is pretty small.
So it's me, plus a lot of engineers who have traditionally been at Nrwl
and have also been doing that on the side, like speaking at conferences,
because they just enjoy it.
We have a lot of Google developer experts on our team.
It's part of their kind of thing they do even outside of work.
I guess the whole thing is fully focusing on developer relations,
making sure we feed the feedback
from the community back into our development teams.
But it's also about being very close to the actual engineering, specifically for Nx.
So I also basically go through and, like, check out new features,
take part in feature brainstorming sessions
to make sure that, also from the developer experience point of view,
things are kind of in line with what I know from the community.
So it's all around that, plus content production,
and also creating integrations in Nx and stuff like that.
So pretty interesting so far.
That's cool.
Well, one more question before we start talking about Nx.
So Nrwl is kind of, like, part of it is developing this open source tooling,
but part of it is also like a consultancy.
So how does the business work?
Yeah, exactly.
Like, basically Nx is our, if you want, 20% project.
So we mostly, like, do 80% of consulting and 20% work on open source,
which is Nx, Nx Cloud,
and people can also choose to do more, like, docs and content production,
that sort of area.
But yeah, right now, at least, it's mostly focused on, like,
those 20 to 30% of our time.
We're kind of trying to balance that out,
like, trying to slowly invert that even, right?
So focusing more and more on Nx.
But the whole, like, consulting business for us is also important,
specifically to kind of be there in the real world scenarios, right?
Because a lot of what Nx is today is because, like, we work at clients,
we see the issues that come up there,
especially with larger scale repositories and stuff like that.
And so then we kind of feed that back into Nx.
And that's hugely valuable.
So there will always be some consulting part at Nrwl in general, but yeah.
Right now, that is a big chunk of where,
like, we obviously get the money,
because Nx is basically free, right?
So it's open source, you can just start using it.
That's super interesting.
So, like, Nx was literally born out of a customer need
of managing these large monorepos company to company?
Yeah, basically, our co-founders,
Jeff Cross and Victor Savkin,
they had been on the Angular team inside Google.
And so basically, when they, like, quit their job there
and, like, thought about founding their own company
and helping businesses outside of Google,
they obviously wanted to take some of the things
that they learned inside Google, like,
for instance, how to handle, like, large scale monorepos
with them, right?
And kind of implement that in a way that is really approachable,
also for, like, everyone outside.
Google is kind of known for their big, big monorepo, right?
With Blaze, which is their internal tool,
which is known as Bazel outside.
But it's really a complex set of tools that they use.
They can afford it, because they're huge.
And so their goal was kind of to take that with them,
to kind of philosophy and try to give that
into the open source world and basically kick it up from there.
So, yeah, it kind of started from there.
It started very small, initially actually as an extension
to the Angular CLI even, because Angular already has kind of a CLI,
which is very minimal.
So it started as an extension to that,
and then they quickly figured, like,
there's really, like, the need for something like standalone, right?
And so from there, it really evolved.
And nowadays, it's basically a standalone development tool
that you can use.
I suspect that's kind of where the naming came from,
since Angular uses, like, the NG.
So NX kind of goes in line with that whole naming scheme.
It's kind of a play on the 10x developer.
Like, we have the joke, it's not a 10x developer,
it's an Nx developer, right?
You get even more productive than 10x.
So if we could take a step back
and just talk about NX from just a broad term.
So if people are loosely familiar with Monorepos
or maybe not even super familiar,
how would you go about explaining, like, what NX is
and what problem it seeks to solve in their project?
Mm-hmm.
Yeah, I think, like, there are, like, two things here, right?
Like, there's one thing, it's like, what is Nx
and, like, what does it do and how can it help,
also in the case of Monorepos.
And then there's more the generic approach of, like,
what is a Monorepo even, right, and why might you need it, right?
So, at that point, like, I might even plug, like,
we have a page that we created, like, Monorepo.tools,
which is kind of an approach where we try to explain it,
like, from the perspective of someone asking, what even is a monorepo?
So they got pretty, like, popular recently in the JavaScript ecosystem in general, right,
although they have been around for quite a while.
We try to break it down, what are the tools that are available
and, like, why you might need it.
But, like, from a bird's eye perspective, it's, like,
not having separate repositories, right,
where, like, every team works in isolation,
but rather having one single repository
where multiple teams might work inside the same repository, right?
So that's kind of the Monorepo approach.
And there are different, like, things why you might need that, right?
That we could really probably talk the entire hour about that.
But most of the time, it is around, like, code sharing, right,
making, like, allowing for easier refactorings across the code base
and lowering the friction for that code-sharing approach in general.
Obviously, if you have multiple repositories,
like, you need to have, like, CI set up for every repository,
you need to have a publishing process in place
and, obviously, like, then deal with version conflicts as you share code.
And specifically, as we have seen from our own experience,
within bigger and larger enterprises,
what often happens is they don't even have a registry,
and they copy and paste code between, like, different repositories,
which is, like, super bad, right?
But, like, that's how it goes, right?
You have to ship features quickly,
and so sometimes, like, those work-rounds are being made.
So, monorepos can help mitigate that a bit.
And to the second question, like, what is Nx
and how can it help in those scenarios?
Nx has been designed from the ground up
to kind of support that, like, scenario of a monorepo,
even at a very large scale, right?
Although, specifically in the last years,
we made some quite huge improvements in the sense that
you can even use it in a very, very small environment.
For instance, we have even startups that,
just like, start with one application, right?
You start quickly prototyping it,
and Nx has a couple of features that allow you to move quickly
as you create your application with those generators.
So, what happens then is, like, you start with a single app,
just use some of the CLI features from Nx,
and then all of a sudden you think, like,
well, we might create a second app there,
because, like, it's handy, like, we want to deploy it separately.
And all of a sudden, you have two, three apps,
maybe a mobile app in there,
and you have a small monorepo, right?
So, it doesn't always need to be huge, right?
Although, we definitely have, like, clients
that have, like, 200 plus projects in the same NX monorepo.
So, it can really scale.
But, like, we really cover, kind of, the thing
where you can quickly start with a small thing,
benefit from those code-sharing features at a small scale,
and then, as you need to scale up,
because, like, your startup maybe grows, right?
We can stay with you, basically, along that way.
So, that's more or less, like, where Nx can help out.
For our listeners who might be familiar
with a tool like Lerna,
how does Nx compare to that,
or to other tools they might have used before?
What sets it apart?
Yeah, I think there are a couple of things.
Lerna operates more at a higher level,
so it allows you to have different packages
inside the same monorepo, the same workspace.
And it helps you
with processes like installing the npm packages,
that kind of package-level management,
and also with the publishing process,
like the version increments,
publishing packages
that depend on each other, and so on.
And Nx does similar things,
but it also goes further.
There are other structures
that can help you.
For example, the dependency graph,
where you can see
which project, in the same monorepo,
depends on which other projects.
From our perspective, a monorepo
might also be one
where you have a couple of projects
that don't talk to each other at all,
so you just get some benefits,
maybe from shared CI, and so on.
But, generally,
you have different areas,
and you have a lot of libraries
that interact with each other,
so there are dependencies between them.
And that's when you need more support
than just something that helps you
build them
or run npm install.
So you need an idea
of what the dependencies are,
something that even
helps you
avoid dependencies between projects
that shouldn't depend on each other.
So, things like that.
And then, obviously, there's the part
that Lerna doesn't really have,
at least at the moment:
the integrations
with tools
that we think are really good
and are the best practices
if you want to be productive
in development, like Cypress,
Jest, linting setups,
because those configs
can be pretty cumbersome.
Some people like that,
and it's certainly fun to play around
with configs and figure out which kinds of configs
work well in a monorepo setup.
But, as you know,
that's not really what you're paid for,
you're paid to ship features,
and that's where Nx can step in:
we battle-test those setups
and make sure they work well
in those scenarios.
So, Nx takes the tools
that we use
as devs, and sets them up
following best practices?
Yeah, exactly.
That's, in general,
one part of Nx.
Nx is very modular,
so there's a core part,
which has things like
the graph part,
which understands how your workspace is structured,
and that's really helpful
for further optimizations,
for the builds,
for understanding what changed in a certain PR
or something,
to be faster.
And then, on the other side, there are the plugins,
which let us provide,
well, the plugins for Angular,
for React, for Next.js,
and all of that.
We maintain the core plugins;
we're just not that many people,
so we can't support everything ourselves.
And there's a huge community
that also provides plugins,
and a way for you to build
your own.
But the idea is that there are core plugins,
and all these plugins
come with generators
that you use to,
for instance, set up an application.
And that setup
already comes preconfigured:
you have TypeScript,
you have Jest, ESLint,
Prettier, and things like that,
even Storybook.
And you can keep going,
because it's not just an initial setup,
which, I think, is the cool part:
it's not just that you can start quickly,
but also,
as you develop,
it helps you, so you can generate components,
routes, libraries,
as you go, and you can keep
benefiting from those generators to iterate quickly.
And it's not always
just about moving fast,
but also about the consistency behind it.
For example, we work with organizations
where they want to have
Storybook set up in a certain way,
they want everything in their repo
to always
be structured in a certain way:
components should look like this,
libraries and builders
should have this structure.
What Nx also lets you do is customize:
you can, for instance,
create your own generators
that enforce that structure,
and that helps with consistency,
especially with builds,
with onboarding, and as you scale.
That might not be important
if you're small, but it is once you scale.
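For readers who want to see what that generator idea looks like in code, here is a rough, hedged sketch of a small workspace-local generator using the devkit helpers of that era (Tree, generateFiles, formatFiles, names from @nrwl/devkit). The schema fields, template folder, and target path are invented for illustration, and package names have moved around in later Nx versions, so treat this as a sketch rather than Nrwl's reference implementation.

```typescript
// tools/generators/ui-component/index.ts (hypothetical path)
import {
  Tree,
  formatFiles,
  generateFiles,
  joinPathFragments,
  names,
} from '@nrwl/devkit';

// Hypothetical options; the generator's schema.json would declare these.
interface Schema {
  name: string;    // e.g. "button"
  project: string; // library to generate into, e.g. "ui"
}

export default async function uiComponentGenerator(tree: Tree, options: Schema) {
  const { fileName, className } = names(options.name);

  // Copy the ./files template folder into the target library, substituting
  // placeholders like __fileName__ and <%= className %> along the way.
  generateFiles(
    tree,
    joinPathFragments(__dirname, 'files'),
    `libs/${options.project}/src/lib/${fileName}`,
    { fileName, className, tmpl: '' }
  );

  // Format everything the generator touched.
  await formatFiles(tree);
}
```

At the time of this episode, a generator like this would typically be run through the workspace-generator command or wired into a plugin; either way, the structure an organization wants is encoded once and reused.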
One of the things that I find
really difficult with generators
is discoverability.
People aren't necessarily
always aware, at the project level,
that, hey, if you're creating a new component,
we'd like you to do it
with a code generator,
instead of
copying and pasting the component
that you saw over here
and then changing it into what you want.
So, this is probably not necessarily an Nx-specific thing.
But how do you go about socializing the fact that these generators are there and that they can do all of this?
That's a very good point.
It's something we've worked on a lot, to make sure that people discover it more easily.
For example, we've recently been working on our docs,
so if you go to nx.dev,
you can browse the packages,
the packages that you use,
like React, Angular, Next.js, and so on.
That gives you a list of the generators that are already there,
and you can explore them further and further.
That helps a lot with discoverability.
But the main one, the one I use the most and that I wish people would install, is Nx Console.
It's something that we develop at Nrwl.
It's an IDE integration with Visual Studio Code.
There are also community-developed extensions for WebStorm;
there are a couple of plugins that do similar things.
It's the best way to explore it.
You install it in VS Code,
and it has a generate command that opens a dropdown.
And you can search for component or route generators.
You can really explore what's possible.
You get forms,
and you can explore things much more visually.
For generators, that's what I'd suggest:
you go explore those forms,
and you see what happens.
And then you get more familiar with it.
Because it also shows you the command,
so you can copy the command and write it yourself.
The next time, you're faster:
you just run the command directly
and tweak things.
And off you go.
But for discoverability, it's an incredible product.
That's a really interesting approach.
I've seen this idea on Twitter
of making a GUI for your CLI.
It's a lot more inviting.
But a GUI for a CLI doesn't always really make sense.
It's like, where do you deploy that?
Do you run a local server?
There are all these questions you have to answer.
As a VS Code extension,
it makes a lot of sense.
It's already an environment that's set up.
So why not for your CLI?
It's a cool idea.
We even had that at one point,
I remember the first version.
I wasn't even at Nrwl yet.
It was a standalone product.
But as you say,
it didn't feel right,
because then you launch this product,
it's a GUI, and you have to select the workspace.
It didn't feel bad,
but in practice there wasn't much adoption.
But when we integrated it
as an extension,
a VS Code extension,
it lives in your workspace.
And it's not just the generation part
that it gives you,
it also augments the files
with things like
CodeLens,
to enhance the workspace.
It helps you navigate.
It's super useful.
You mentioned a few things
around builds, configuration, React, Angular.
Does Nx help
with those builds?
I've built a design-system
CLI tool,
which is almost a build system
for a design system,
and we handle
the Jest setup and the bundling.
Does Nx help you do that kind of thing?
Exactly.
With those plugins
that you install on top of Nx,
the React one or the Angular one,
they come with the generators,
but also with executors.
Executors are basically task
runners,
for building,
for testing.
They come with the plugins.
You can create your own,
and you can even hook one up
to an npm script,
if you have a package-based project,
that works too.
They used to be called builders,
because they abstract things away a bit
and hide some of the configuration.
The React builds, for instance,
use webpack
under the hood,
at least for now.
So if we see that there's something better,
we can change what's under the hood
without you having to redo everything.
That's one of the benefits.
You can configure
a few options.
And there are escape hatches:
you can also hook in
your own webpack config,
tweak it,
and feed it into Nx.
Some people need
that kind of customization,
but a lot of cases are covered by the options.
We've tried
to build our best practices into it,
to make sure the output is
what you'd expect.
And we can swap out different tools
under the hood.
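To make the executor model concrete, here is a hedged sketch of a minimal custom executor. The option names, file path, and wrapped shell command are invented for the example; the only parts taken from the real @nrwl/devkit of that era are the ExecutorContext type and the convention of returning a success flag.

```typescript
// tools/executors/echo-build/impl.ts (hypothetical path)
import { ExecutorContext } from '@nrwl/devkit';
import { execSync } from 'child_process';

// Hypothetical options, normally described by this executor's schema.json.
interface EchoBuildOptions {
  outputPath: string;
  command: string;
}

export default async function echoBuildExecutor(
  options: EchoBuildOptions,
  context: ExecutorContext
): Promise<{ success: boolean }> {
  console.log(`Building ${context.projectName} into ${options.outputPath}`);
  try {
    // Run whatever tool this target wraps; Nx mostly cares about the declared
    // inputs/outputs and the success flag it gets back.
    execSync(options.command, { stdio: 'inherit' });
    return { success: true };
  } catch {
    return { success: false };
  }
}
```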
The other big point is the upgrade part.
If, in the future, we wanted to use
esbuild rather than webpack,
that would be possible.
In Nx, there's a mechanism
called migrations.
When you upgrade from one version
to the next,
you don't just bump your workspace to the next version,
there's a command
that walks you through
the corresponding migrations.
It takes you from the version
that you are on
to the next one.
In terms of Nx itself,
but also, if you use the React plugin,
the React version
that comes with it.
We were even able,
a while back,
to move folks from webpack 4
to webpack 5
without them having
to do much of anything.
We rewrote the webpack configs,
and we even migrated the code,
the configuration files and the source files,
if there were things that needed to change,
like imports.
It's a very powerful mechanism.
It comes from the enterprise
environment,
because enterprises
want to stay up to date,
not just
for the latest React version,
but also
because of security issues.
They have to keep up,
so you need to stay on top of it,
but at the same time, we know it's difficult
to actually do,
to get everything migrated.
So those migrations are super useful
for that.
That almost
sounds like a database migration.
So as a plugin author, you'd release
a new version,
and you'd ship
a codemod,
or some kind of change,
along with it,
maybe to update a template?
Exactly.
The plugin you have installed
provides its specific migrations,
but you have to do the same as an author.
Again,
it can be as simple as changing a package version,
but for things like webpack versions,
it could well be React,
or Storybook,
which can also be upgraded
through Nx.
It can change things
for the integrated tools,
the way they should be,
without you having to deal with it.
But in the case of Storybook,
we also had to change
the stories people had written,
because the story format changed.
That's when you go deeper:
you have to do AST manipulations,
things like that, and you have to
write those yourself.
In Nx, you have the infrastructure
to identify
the version you're coming from and going to,
and you list all those migrations,
and Nx runs them for you,
but you do have to write the code yourself.
You get an entry point, a function that receives
the context information,
like access to the workspace file tree,
so you can read and update
files from there.
There are utilities that Nx
already ships with
for inserting things
at certain positions in a file
in the workspace, and for dealing
with configuration files in the workspace
there are helpers as well,
so it's not like you have to write everything
from scratch every time,
but depending on the migration,
it can be more complex
or simpler.
But yes, as an author,
you usually write those yourself.
You don't always have to, but yes, you usually do.
So if you're doing a complex migration
and you have to do an AST transform
or something,
are there tools that you recommend,
or do you have a setup,
or do you use
something like a JS codemod tool, Babel,
or something like that;
is that up to the developer,
or does Nx
give you a framework for tackling
the more complex ones?
Yeah, Nx already provides
that framework.
Nx comes with a devkit,
and that comes with
the tools you need
to write those migrations.
What we've often done is,
there's a package called TSQuery
which is really helpful
for querying TypeScript ASTs.
You can pull those in
as you need them.
We, the Nx core team,
use it for our plugins,
but the scaffolding
of how these migrations
are registered and how the manipulations
should be done,
that's already there. You can just write them
and build on top of it.
Actually, for plugin authors,
I recommend looking at
the Nx core plugins
and how they migrate files.
That's the best way to learn.
A lot of the scenarios
have already been covered,
like rewriting webpack configs,
or things like that,
it's all there. You can really just
copy-paste and adapt it for your own needs.
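As a hedged illustration of the AST-assisted migrations being described, the sketch below uses TSQuery to find import declarations of a hypothetical old package name inside a devkit Tree and rewrites them with plain string slicing. The package names and the naive directory walk are invented for the example; real migrations would skip node_modules and use the devkit helpers, and the migrations shipped in the Nx core plugins remain the better reference.

```typescript
// A hypothetical migration: rename imports of 'old-ui-lib' to 'new-ui-lib'.
import { Tree } from '@nrwl/devkit';
import { tsquery } from '@phenomnomnominal/tsquery';
import type { ImportDeclaration, SourceFile } from 'typescript';

// Walk the virtual file tree and collect .ts/.tsx files (naive: a real
// migration would exclude node_modules and generated output).
function collectTsFiles(tree: Tree, dir = '.', acc: string[] = []): string[] {
  for (const child of tree.children(dir)) {
    const path = dir === '.' ? child : `${dir}/${child}`;
    if (tree.isFile(path)) {
      if (path.endsWith('.ts') || path.endsWith('.tsx')) acc.push(path);
    } else {
      collectTsFiles(tree, path, acc);
    }
  }
  return acc;
}

export default async function renameImports(tree: Tree) {
  for (const path of collectTsFiles(tree)) {
    const source = tree.read(path, 'utf-8');
    if (!source || !source.includes('old-ui-lib')) continue;

    const ast: SourceFile = tsquery.ast(source);
    // Grab all import declarations, then keep the ones importing 'old-ui-lib'.
    const imports = (tsquery(ast, 'ImportDeclaration') as ImportDeclaration[])
      .filter((imp) => imp.moduleSpecifier.getText(ast) === `'old-ui-lib'`);
    if (imports.length === 0) continue;

    // Rewrite from the end of the file backwards so earlier offsets stay valid.
    let updated = source;
    for (const imp of [...imports].reverse()) {
      const start = imp.getStart(ast);
      const end = imp.getEnd();
      updated =
        updated.slice(0, start) +
        updated.slice(start, end).replace(`'old-ui-lib'`, `'new-ui-lib'`) +
        updated.slice(end);
    }
    tree.write(path, updated);
  }
}
```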
We've talked about
the developer-facing features
you get with Nx, but one of the things
the marketing page calls out
is that it's super fast
at doing a lot of things.
What makes Nx so fast,
and what is it actually doing to make things fast?
Obviously, as you scale,
that's something you want.
It's also a big pain point
that I've heard about more than once
and read about in articles about monorepos:
you can get started very easily,
but as you grow,
it becomes a real pain,
because PRs and CI runs
take four hours.
And that might be
a bit of an extreme case,
but then you're just not able to merge
features quickly anymore.
That's where we started out,
with the affected commands.
We talked about the dependency graph
that Nx builds up behind the scenes,
so it knows
which project depends on which.
And when you work on
a feature and create a PR
and push it up,
you can run the affected
tests.
So the PR
gets compared against a base,
which is master or main.
You can do a diff, see
which projects are touched,
and then, with the dependency graph,
figure out what actually needs to be retested.
And that already cuts off a whole
chunk of the work.
If you have a big monorepo
and you only change a couple of files,
there's no need to run the tests for everything.
And that works for test,
for lint, for build.
You can create your own targets,
and they automatically
work with these affected commands.
You can set them up however you like.
That's usually what I define
as a first step,
so you always should do that;
that's basically the basis
where you start.
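A conceptual sketch of the affected idea just described (the general technique, not Nx's actual implementation): given a dependency graph and the set of projects whose files changed against the base branch, walk the reverse edges to collect everything that could be impacted, and only re-run targets for those projects.

```typescript
// Toy model of "affected": which projects must be re-tested when some change?
type ProjectGraph = Record<string, string[]>; // project -> projects it depends on

function affectedProjects(graph: ProjectGraph, changed: Set<string>): Set<string> {
  // Invert the graph: project -> projects that depend on it.
  const dependents: Record<string, string[]> = {};
  for (const [project, deps] of Object.entries(graph)) {
    for (const dep of deps) {
      (dependents[dep] ??= []).push(project);
    }
  }

  // Walk upwards from every changed project.
  const affected = new Set(changed);
  const queue = [...changed];
  while (queue.length > 0) {
    const current = queue.pop()!;
    for (const dependent of dependents[current] ?? []) {
      if (!affected.has(dependent)) {
        affected.add(dependent);
        queue.push(dependent);
      }
    }
  }
  return affected;
}

// Example: changing shared-ui forces app-a and app-b to be re-tested, but not api.
const graph: ProjectGraph = {
  'app-a': ['shared-ui', 'utils'],
  'app-b': ['shared-ui'],
  'api': ['utils'],
  'shared-ui': ['utils'],
  'utils': [],
};
console.log(affectedProjects(graph, new Set(['shared-ui'])));
// -> Set { 'shared-ui', 'app-a', 'app-b' }
```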
The next big thing we introduced,
last year more or less, is
local computation caching.
And local computation caching is basically
just the concept of, you run that command
with those flags, those environment variables,
and it includes those source files.
Let's compute a hash out of those.
Store it in a local cache folder,
in node_modules or wherever you specify.
And so next time you come around
and execute the same command, we won't execute it again.
So what might happen is
you have executed that already locally,
then you run an affected command on CI,
or even on a local machine,
that happens to include
that other project that you executed previously.
So then that project would just be pulled out of the cache.
So now you have the affected command
cutting off a whole branch of work,
but even out of those, say, five projects
that ended up in the affected branch,
maybe just two of them would be rebuilt,
because the other ones would already be cached.
So there's an additional
improvement and speed improvement there.
And the cool part about the whole caching
approach is that it's really completely transparent.
So you as a developer,
we print out at the end of the command
that this has been pulled out of the cache,
or we print something like, well out of those five
projects too have been pulled out of the cache,
but you don't notice at all.
If we wouldn't print it out, apart from the speed improvement,
because it's basically instant, you wouldn't notice.
We basically restore the
whole console log that usually
gets printed out, let's say by Jest, for instance,
with all the colors and everything
in the same order you would usually have
plus the potential artifacts.
Let's say in a build output you have the disk files
that get produced, those would be just pulled out as well.
So that's what I see,
that's the second step.
In fact by now, we introduced that one and a half years ago,
and by now that's the default.
So you don't even have to enable that,
that's just like it happens locally, it caches the commands.
Because it just makes sense.
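A conceptual sketch of the computation-caching idea just described (again the general technique, not Nx's actual code): hash the command, its flags, the relevant environment variables, and the contents of the input files; if a cache entry exists for that hash, replay the stored output instead of re-running the task.

```typescript
import { createHash } from 'crypto';
import { existsSync, mkdirSync, readFileSync, writeFileSync } from 'fs';
import { join } from 'path';

interface TaskInputs {
  command: string;             // e.g. "build my-lib"
  flags: string[];             // e.g. ["--configuration=production"]
  env: Record<string, string>; // environment variables that influence the task
  sourceFiles: string[];       // paths of files feeding into the task
}

function computeHash(inputs: TaskInputs): string {
  const h = createHash('sha256');
  h.update(inputs.command);
  h.update(inputs.flags.join(' '));
  h.update(JSON.stringify(inputs.env));
  for (const file of inputs.sourceFiles) {
    h.update(readFileSync(file)); // file contents are part of the cache key
  }
  return h.digest('hex');
}

// Returns the cached terminal output if this exact task ran before,
// otherwise runs it and stores the output for next time.
function runWithCache(
  inputs: TaskInputs,
  run: () => string,
  cacheDir = '.task-cache'
): string {
  mkdirSync(cacheDir, { recursive: true });
  const entry = join(cacheDir, computeHash(inputs) + '.log');

  if (existsSync(entry)) {
    return readFileSync(entry, 'utf-8'); // cache hit: replay stored output
  }
  const output = run(); // cache miss: actually execute the task
  writeFileSync(entry, output);
  return output;
}
```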
We even at some point, we had
affected tests, and then
we had a --only-failed flag.
What would happen is, you run
affected tests, and then maybe five failed,
well, let me just rerun those.
So you'd do --only-failed,
but once we had the computation cache,
we could just drop that flag:
the cache would pull everything else out,
so rerunning all of them is as fast as rerunning only the failed ones.
That's a pretty cool feature I feel.
And then the next step is obviously
to then distribute it.
The computation cache is just local
for your workspace.
And so then if you want to really share it
with your team members, and specifically also with CI,
because CI is usually the pain point,
then you share it.
And that's where Nx Cloud comes in.
And that's where you basically can export that cache,
that local computation cache,
that folder is simply being synced
with the cloud-based storage,
and then synced to our developer machines,
depending on which ones basically
attach to that same storage.
And so that is hugely beneficial,
specifically in CI as I mentioned.
Interesting, so this is a kind of a case
where if like you built something locally,
and maybe you were the first person on your team
to build something,
and it hasn't changed for anybody else,
then they would like pull down the cache
remotely on that thing that you built.
Exactly.
That's pretty magic.
Yeah, exactly.
So even your co-workers
wouldn't have to re-execute it, right?
And specifically as I mentioned in CI,
what happens is you might have a whole set of PRs,
and usually a PR train,
a merge train that gets into main or master.
And so the PRs before you might
execute the same commands
on the same set of source files,
granted you didn't touch them, right?
But as mentioned before, they might re-enter
your affected tree even if you didn't really touch them,
right? They just happen to be dependencies.
So if the PRs before you already executed the builds
or the tests on those, you would benefit, right?
So those are like some of the main
like advantages of that.
And then the next step, if you want,
like in that set of that scale,
so basically affected, then you have the caching.
And then on top of that
is the whole distributed task execution.
That's the kind of top feature we have right now.
Because that is a pain point
you often see in CI.
Does that same like
graph extend to node modules?
So say like you updated React,
does it then run all affected packages
that depend on React and then so on and so forth?
Yeah, so dependency graph
also has the node modules in there.
And that's, yeah, that's useful
specifically for the affected part.
But also in some scenarios,
like for even like,
preventing the inclusion of certain packages
in certain libraries. Let's say you have some libraries
that you don't want to have them
import the React package,
for instance, because they should be pure,
they should be just typescript based, whatever, right?
And you don't want a developer to accidentally pull that in.
Now the dependency graph has that information, right?
And so we have, for instance, a lint rule
that prevents you from doing that.
Or another use case is like,
which is pretty interesting one,
is like if you have for instance,
I don't know, a Node-based app in your Nx workspace,
or even now with remix, right?
You want to actually run it on a server side.
Now, if you're in a monorepo,
in a monorepo that is structured with Nx,
you have just one package.json at the very root, right?
So there's a single version policy
across the entire monorepo,
which is pretty important as well.
And so what happens if you build a remix app or a node app,
you can actually generate a package JSON
based on what that project imports.
So it can generate your dependencies automatically, right?
So if you happen to pull in certain library,
it would just also add that to the
dependencies of that generated package.json,
because then, if you set it up in a Docker container,
you would have exactly those dependencies that are needed on the server side
for that Node project to run.
So the whole dependency graph in general is like,
it initially has been used for exploring,
right, and doing those affected commands,
but there's a whole lot of interesting features
that you can then build on top of it
once you have that knowledge, right?
Because you can directly access it from your plugin,
you can extend it, so it's pretty cool.
Yeah, that's pretty awesome.
Drawing those boundaries
in application structure is pretty
tough sometimes,
so it's nice to have tools to assist with that.
You mentioned there's a single package.JSON
in the root
and that it's versioned
uniformly.
So how does publishing actually
work with NX?
So if you have a single package.JSON,
it means you have
ostensibly, like in your package.JSON,
you define what the module name should be
and all that stuff.
So somehow
for these subprojects,
there's a dynamic package.JSON
that's generated for them
that defines what dependencies they have
and then takes care of all the publishing
and everything. So can you just talk about
how that process works a little bit?
Yeah, sure.
The single package.JSON at the root level
is basically for dependencies
inside your workspace, inside your NX monorepo.
So what it means is simply that
if you usually act in a monorepo,
you will have one version of React.
Or if you use Angular and so on,
you have one single version.
And that is different from some
other tools, like Lerna,
for instance, where every package has its own package.json
and potentially can even have different versions.
So obviously, if you have
a single version, you need to coordinate.
If you want to upgrade, you upgrade
all the things that are in a monorepo.
That might be more work initially sometimes.
But it is also beneficial, especially
with those migrations, because
usually the team or the person that migrates
then also knows what needs to be fixed.
Because usually it's just dependent on
well, in React 18 or whatever, they change this
thing. And so if that error
type comes up, I already know
you need to fix this.
So in that migration process, much easier for the person
just to go through. Obviously needs more coordination.
But the goal is to be better
compatible even between those libraries.
Because otherwise you might end up in weird situations
where one app imports a library
that should be used
with React 18, while another one
uses a previous version.
So there might be some weird behaviors
at runtime when you deploy.
But that is really just for the dependencies
in your workspace. Now, the libraries
inside the Nx workspace
usually don't need to be published,
so you can directly depend on them.
Because there is a main package,
like a main TS config base at the root
of the whole workspace, that links
in those libraries. And so you can really just
import them. So what I mean is that
the application really just pulls in the source
of the library
and bundles it in,
and you deploy it with the app. You can, however,
which is the rarer use case,
use publishable libraries.
So you might have a couple of those
where you want to have them on npm
or even within your organization, it's usually not a case
that huge organization have one
nx monorepo, but they have a couple of them, right?
And so some pieces you might even want to publish
because you want to reuse them
even outside order
like areas where they don't use an x, for instance.
So just traditionally publish them to npm,
internal register, stuff like that.
So in that case, you just generate a publishable
library and that would have
a package JSON.
Now that wouldn't mean that that single library would have
its own Node modules folder, but that would have a
package JSON that maybe define a couple of scripts
that are needed in there, right? Or
the version that you want to publish then, right?
Like things like that.
And the whole generation of the dependencies,
well, that happens still. If you want, for instance,
a publishable React library, right?
So that would also take care
in the build process of that library
to also add basic dependencies
that the library imports into that
package JSON dynamically. If you want,
like you could disable that behavior even, but
usually you want to have that, right?
And so still, in the end, what you end up with
in the dist folder is the library,
with this package.json, with this version in there, right?
And so you can just do an npm publish as you're
usually accustomed to.
Now, that's it, like
nx usually just goes
to the bundling step, to the build step.
So we don't have
something in there in nx that does the publishing itself.
So like, I don't know,
update the versions based on
semantic versioning or
doing changelog generation or, I don't know,
push up to npm registries.
What usually happens is like, you create your custom
scripts in nx workspace that does that, right?
Which you can then again hook
into those Nx targets and then
run affected publish, for instance,
so that only those packages that changed will be published.
So that totally works, so you can hook in your own tool chain.
Now we're currently exploring
that area, so we want to add some support
for publishing, but the thing we stayed out
of it is because it's very, very custom, right?
So a building
is pretty, like you can pretty standardize that,
like in terms of like what output you want to produce,
how you want to build certain things in an optimal way.
While publishing is a bit more custom,
like, do you want to use conventional
commits? Do you want it to increase
the version number automatically?
We're currently exploring how far do we want to go
and help developers, and where do we
want to stay away and say, ok, we go until,
I don't know, increment the version number,
if you want to have
changelog generation based on conventional commits,
well, that's something we add on top of it, right?
So that's something we're currently
right now exploring,
because we definitely would also kind
of want to add that in, because then we
really would close the circle, right?
So you really can set up a new project,
create a couple of publishable libs,
especially for open source projects,
have the publishing process well in there,
not just to build and testing, right?
And so you can really have the whole experience
and be very quick at actually publishing packages at NPM.
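To illustrate the bring-your-own-publish-step approach described here, a custom script wired up as an Nx target might look roughly like the sketch below. The dist path and version handling are invented for the example, and this is plain Node tooling rather than an Nx API; a real setup would typically also handle version bumps and changelog generation before calling npm.

```typescript
// tools/scripts/publish.ts (hypothetical), exposed as a "publish" target so it
// can participate in the affected machinery and only run for changed projects.
import { execSync } from 'child_process';
import { readFileSync } from 'fs';
import { join } from 'path';

function publish(distDir: string, tag = 'latest'): void {
  // The build step already produced a package.json with generated dependencies.
  const pkg = JSON.parse(readFileSync(join(distDir, 'package.json'), 'utf-8'));
  console.log(`Publishing ${pkg.name}@${pkg.version} with tag "${tag}"`);

  // Hand the actual upload off to npm, run from inside the built output folder.
  execSync(`npm publish --tag ${tag}`, { cwd: distDir, stdio: 'inherit' });
}

// Usage (after compiling, or via ts-node): publish the built UI library.
publish(process.argv[2] ?? 'dist/libs/ui-components');
```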
Yeah, the last time I looked
at NX, I came to the conclusion
that NX is really like catered
towards more like a collection,
a monorepo, that's a collection of
apps rather than like
a traditional Lerna monorepo where it's just like,
you have like 12 node packages
that might be interdependent, you publish them
and people consume them.
For context, I actually do run a tool
that does monorepo publishing called auto
that kind of solves all these problems.
Yeah, and it is a very hard,
it is a hard problem.
Yeah, yeah,
if you want to, feel free
to reach out. It is definitely
not a simple problem. So, yeah,
the way that NX handles
package JSON is very interesting.
Like when I looked last, it was a lot of like
manual configuration
to like define the dependencies between
things, but it seems like that's become
a lot more automatic in the recent days.
Yeah, yeah, absolutely,
absolutely. And also to your point,
in terms of like the apps and lips kind of structure,
that is also something you can now choose.
Like, well, initially, you really had like,
okay, you have apps and libraries and it was very
much, like, targeted towards, like, app publishing,
right, so that use case. Now, you can
actually, when you set up an NX workspace,
you can choose like the core or open source
preset, even. And there, you would really
have like just a packages folder with
just a bunch of libraries in there, right. So, it would be
kind of the same setup. So, you can really
kind of choose and that's also one thing
that we wanted to have is kind of more the
flexibility of like, well, I don't need an app,
right, so I just need packages because
I'm really in that open source scenario, right,
where I just want to have a bunch of like
packages that happen to be in a small
monorepo, because they just belong together, right,
and then happen to
set up a workflow that allows me to publish
them quickly.
That's really cool, yeah.
Something that we were dealing with
at work is we have
this app Lib setup,
right, and it's a traditional
app, but there's a few packages
that we would like to publish
independently, like our UIs,
UILib, or Component Library.
But we don't want to do
the traditional like mono repo setup
because there's a lot of
requirements that it instills on you
about the structure of your application,
you know.
So it seems like maybe something like
NX is more suited to that use
case, it's like, oh, well, you're building an application
but you want to just then publish
some, you know,
packages from that, it's like, it'd be
a little bit easier to...
Yeah, exactly. The use case we typically
have is that, like, things like,
I don't know, the component design
library for an organization, right, so
the organization might have like the main
monorepo, where they develop their apps in,
but then they have the component library that is being
developed within that monorepo, right, but the
component library is still kind of independent,
right, because they are small components reusable
to a high degree, so those packages are
then usually the case where they also want
to patch them up, publish them to an internal
repository, such as, like, other
folks that are outside of the monorepo
can just benefit from it. Like, then in the end
it just becomes like a React package,
an Angular package, whatever you happen to use, right.
So yeah, that's totally
a use case we often see, like, it's not so rare.
But while we've been talking about monorepos,
we've all kind of just been, at least
me and Justin, have been thinking
JavaScript monorepos, but does NX
support more than JavaScript? Can you have a polyglot
monorepo? Yeah, totally.
So we happen to be mostly
in the front-end space, because, like,
the whole team started out with Angular, and now
React, Next.js, Node and that kind of stuff,
so it's very JavaScript heavy.
But for instance, we have our internal monorepo,
where we use, like, Java backends,
Node backends, within the same Nx monorepo.
So, NX itself doesn't really care.
A lot of the plugins that we provide
happen to be JavaScript focused.
But for instance, there are plugins from the community
that allow you to run .NET projects,
Go projects, and other kind of things
within the same NX monorepo.
So it doesn't really matter.
Nx, what it does, like, at its core,
is just figure out dependencies, and execute
those dependencies based on what changed, right.
So that's what it does. It's basically a task
runner, if you want, that comes
with caching and that kind of thing.
So you can benefit from that.
But that's it, right.
It really depends on what plugin you happen to use.
So if you have a plugin that supports Go,
or Java backends or .NET backends,
whatever you happen to use, or Vue.js,
you just plug it in, and that's it,
and you can go for it.
Recently, we are currently trying to market
a little bit more and create more content
around it, but we even allow nowadays
to extend the dependency graph.
So if you have, for instance, Go,
which has its own dependency structure
within different Go libs,
so how you follow from one lib to the other,
you can even parse that on your own,
create your own part of the dependency graph,
and plug that back into Nx,
and that way you even have those dependency graphs
integrated, so Nx would even know
what your Go structure looks like.
So the functionalities are there.
What we are currently trying to do is kind of
educate people, like, teach them, produce material
around it, so that people see, like, the potential
that are in there.
But Nx itself happens to be written
in TypeScript, in JavaScript, but you can
plug in, like, even a polyglot
processor, if you want, so definitely.
That's awesome.
So we just got a few more questions left.
One thing I'd like to go back
and touch on, because I think it's a pretty cool idea,
but we didn't talk about it much, was
Nx's distributed task execution.
Can you detail, like, what you might use
that for, and, like, what it's actually doing?
Yeah, so
that basically ties
into the caching and the affected execution,
which we talked about before.
And it's solving the problem of when you have,
like, multiple agents on CI,
what you want to do is you want to parallelize
as much as possible, right?
But then, Nx is able,
like, with the dependency graph, to see, like,
which projects you have.
Now, if you run affected tests, for instance,
which runs, like, I don't know, 100 test
targets that are being executed,
what might happen is that some of those take
longer, some are shorter, right?
And so if you just balance and parallelize them,
like, equally,
you will end up in a suboptimal situation
where, like, some of the processes are just waiting
in an idle state, like, those basically
different CI processes, right?
And so you have to wait until all of them are complete
and then you go ahead, right?
DTE can help you with that, because, like, DTE kind of
learns how long certain
tasks take, or took in the past.
And so it allows you to balance that out
automatically. So you don't have to kind of
care about, like, parallelizing stuff,
right, and then sharding, like, on CI,
the different processes.
DTE kind of takes that on.
It knows the structure, it knows the dependencies,
it knows how previous runs went,
because, like, they went through that pipeline as well.
And so it basically does the parallelization
and the splitting up, right?
Like: it doesn't make sense to split up those tests,
because they are pretty fast, so let me just group them
into one kind of process that runs
them sequentially within the same thing,
right, while the others are being parallelized out.
And so what you end up, like, is a balanced run
where all of the runs basically take more
or less the same time, right?
So it's basically an optimal use of your CPU time on CI.
So that's kind of the idea.
And our main design goal there
was to be able to set up super easily.
Because what we see, for instance, is, like,
in companies, for the parallelization on CI,
you need to kind of have some knowledge, right?
You need to kind of know, like, how do you do that
in, like, GitHub, right, or CircleCI
or GitLab.
And so you nearly need to have a person
dedicated to it, to take some time to think
things through and kind of figure out,
like, how this works best.
And with DTE, it's really just, like,
setting a couple of flags and starting
the DTE agents, and that would then kind of
pull in the tasks and do the parallelization.
And, again, it's similar to the
cache restoration, where, as I mentioned before,
we, like, restore the console logs in the same order,
the output in the same order.
With DTE, it's the same kind of thing.
So if you then go to, like, your CircleCI
and you look at the logs, they look the same
as if you had executed them in one single run,
right, so you can really see sequentially what
happened. And that's something we really pay
a lot of attention to, and which ties a bit
into the whole developer experience part,
right, of Nx, where, obviously, we could just,
like, pass back the result and say, like,
okay, this run happened to be in, like,
that first process up there, in that different
kind of setup, in a different container,
so go there and see the logs there, right?
But what we want to do is, like, reassemble
everything in the same order as you kind of
piped it in, such that, like, for example,
a developer, especially for debugging purposes,
can super easily spot and figure out,
like, what happened, right?
Which, obviously, it needs a lot of work, right?
Like, there's that extra mile you have to do
to go basically to make sure that the integration
back in works.
That's super cool. So, like,
you don't even have to set up sharding,
it's just, like, oh, it's got this.
Exactly. Exactly.
And you can totally try it out.
Like, what we did, like, for instance,
I think it was two months ago, like, with Nx Cloud.
And, you see, Nx Cloud, as I mentioned,
is kind of our paid addition on top of Nx,
right? Our commercial edition, if you want.
But then, like, what we did is, like,
two months ago, we really opened it up,
such that it's basically free for everyone.
Because we said, like, everyone gets
500 hours of saved compute time on Nx Cloud,
which is basically a lot more
than most, like, repositories use.
Like, unless you are, like, a super large company.
But then you can obviously most probably afford
to pay for it, right?
But, like, others are totally able to use it.
And so with that, you have the distributed caching
as well as the DTE already integrated, right?
So it's, like, we wanted to
really make it a no-brainer, just opt in.
And when you set up the Nx workspace, just connect
to Nx Cloud and have the caching going,
even without you noticing a lot, right?
So you can straight away benefit from it.
I'm assuming that this is something that would have to be integrated
in a plugin, right?
So, for example,
it seems like you, like,
it would still need some idea
of how to split up the test, you know,
like if you're using Jest.
But, I mean, if you're using a Jest plugin,
then I'm assuming that, you know,
that knows everything about Jest anyway,
and you can provide this capability of, like,
doing whatever.
So if you were writing a new plugin
for, like, Vitest or something,
you would, like, add this capability
as well to, like, you know, pass
the specific flags to make it split up or whatever.
So really all you're thinking about as an NX consumer
is, like, here's the flags that I would pass
to a plugin that's compatible with this feature.
Right? Something like that?
Not really.
Like, the actual, like, the sharding,
like the whole DT setup, for instance,
that works out of the box.
If you wrap it in, like,
even if you package it up in a complete NX Executor,
which is, like, the full blown version,
you have a plugin that you develop
and that gets certain parameters in,
and, like, you work with those.
And those are configuration-based.
So NX is able to even read them, right?
And even, like, the DT and NX Cloud setup
is able to read it.
What we also have, however: even if you execute Node scripts,
like, you can, like, wrap them
in a very simple executor,
which we call run-commands,
which is really just, like, one line,
you have a command entry that invokes, like, Node
and then a relative path
that points to your
script file, right?
But since this is also wrapped
and you have an output folder that is specified,
Nx knows, okay, this command needs to be executed,
right? This is the output it produces,
so you can measure that, right?
You can measure, like, how long it took,
which is then used as information
for the next runs and for the splitting.
And it knows where, like, the output is
happening, right? So it can cache that
and move it into the cache folder.
So you can really wrap,
like, even your own custom Node tooling,
whatever you have, even locally in your workspace,
in those commands, and it would just work out of the box.
So it's not something where you need,
like, as a plugin author,
to expose certain things to Nx
such that it knows how to actually optimize it.
It's treated like a black box, right?
It has input parameters, flags,
you execute that, that's the output, that's it.
That's really how it looks from the outside.
Cool.
Nice. That's awesome. That's really cool.
Cool. So with that, I think we'll move on
to tooltips.
My first tooltip is about a feature
in VIT.
For those who don't know, VIT is a
Next Generation ESM
build tool for building websites
that are really fast.
I've been using it recently
to build a storybook alternative
because while I hold a special place
in my heart for storybook,
it is quite an old project and has
a bit of baggage in the code
from how we used to build things.
And I think with all these great tools
that we have today, you could build
a better version of a storybook
with a lot less overhead of
all of those tools.
And so to accomplish a lot of
the things that I've had to be doing,
I have to code a lot of virtual modules.
So you might ask yourself, what's a virtual module?
It's basically a module that doesn't exist on disk:
instead of
generating code for, say,
like all of the
exports in a story, I can instead
create a virtual module
that to the
build system, to ES build,
it looks like a normal file
and I can say, hey, here's my virtual module,
it's called @foo,
and then this
Vite plugin will return the content
of that virtual file.
So if you were
looking at the plugin, you literally return
a string with
some exports and it's just straight up javascript.
So this
pattern is a little hard
to wrap your head around at first, but for
every time I've tried to build
a documentation generator, this is the way
that I do it, is that a virtual module
in the build tool
and it's a super powerful way to do
things. So if you've never
heard of that concept, I'd encourage
you to go to their docs, check out what you can do
with it and maybe try to play around with it yourself.
That's all. This is a pretty
important concept broadly.
So I first
like ran into this concept
way back when I was working a lot
in the Vue ecosystem. So Vue.js
has a single file component,
and out of that single file component
it has this template, it's got
the actual javascript that it runs and it's got
some styles and it actually breaks those
down into virtual modules
and back in
especially like
webpack 3, webpack 4 days,
it was not
necessarily super trivial to do that, you had to get
creative with how you wrote your plugin, so
it's nice that they expose this first-class
API for it.
Yeah, that was the cool part about it, it was super
simple to set up. I just copy and pasted
this little plugin function and I was off
to the races, whereas webpack you kind of have to
figure out what hook you want
to tap and what part of the pipeline
is best and that's
a lot.
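For reference, a minimal Vite virtual-module plugin following the convention from the Vite plugin docs looks roughly like this; the module id virtual:stories and the generated exports are made up for the example.

```typescript
// vite.config.ts (sketch)
import { defineConfig, Plugin } from 'vite';

const VIRTUAL_ID = 'virtual:stories';  // what user code imports
const RESOLVED_ID = '\0' + VIRTUAL_ID; // Vite convention: prefix resolved ids with \0

function storiesVirtualModule(): Plugin {
  return {
    name: 'stories-virtual-module',
    resolveId(id) {
      // Claim the virtual id so nothing tries to load it from disk.
      if (id === VIRTUAL_ID) return RESOLVED_ID;
    },
    load(id) {
      // Return plain JavaScript source as a string; in a real tool this would
      // be generated from the discovered story files.
      if (id === RESOLVED_ID) {
        return `export const stories = ['button.stories.tsx', 'card.stories.tsx'];`;
      }
    },
  };
}

export default defineConfig({
  plugins: [storiesVirtualModule()],
});

// Elsewhere in app code: import { stories } from 'virtual:stories';
```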
So my first tool tip of the day
is this project called YJS.
YJS is
a CRDT
implementation. Essentially
if you need to do
real time editing, real time collaborative
editing of something, this project
can help you out.
So if you want to have a shared text
field or something like that
then yeah, definitely check it out.
CRDTs
are non trivial
or operational transforms
if you're taking a different approach
it's a really hard
space, it's a really hard space to do well
but this project gives
a lot of good APIs
and yes, it's pretty nice.
So definitely recommend checking out if you need
to do any real time editing stuff.
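For a taste of what Yjs looks like in practice, here is a tiny self-contained sketch of two documents converging on a shared text. The document and field names are arbitrary, and in a real app the updates would travel over a provider (WebSocket, WebRTC) rather than being applied by hand.

```typescript
import * as Y from 'yjs';

// Two peers, each with their own copy of the document.
const docA = new Y.Doc();
const docB = new Y.Doc();

const textA = docA.getText('shared-field');
const textB = docB.getText('shared-field');

// React to changes arriving on peer B.
textB.observe(() => {
  console.log('peer B now sees:', textB.toString());
});

// Peer A types something.
textA.insert(0, 'Hello from peer A');

// Ship A's state to B as a binary update (normally done by a network provider).
Y.applyUpdate(docB, Y.encodeStateAsUpdate(docA));

// Edits on B are merged back into A the same way; both sides converge.
textB.insert(textB.length, ' ...and B agrees');
Y.applyUpdate(docA, Y.encodeStateAsUpdate(docB));
console.log('peer A now sees:', textA.toString());
```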
You've mentioned CRDTs much more than
once on this podcast, Justin.
Are you building a real time collaborative
editing piece of software?
Are you just very interested in the space?
I plead the fifth.
I'll talk about it when I have something.
Yeah, we already
talked about it during the podcast.
It's Nx Console. I wanted to plug it
specifically because a lot of people don't know about it,
and I feel like it's super powerful,
especially obviously if you use Nx,
it integrates there and gives you
that discoverability aspect where
if you're new, you don't know, you can
visually explore what are the capabilities
that you have there.
So yeah, definitely check it out.
If you're a WebStorm user, as I mentioned,
there are even some plugins there,
community plugins,
that do basically most of the same,
maintained by the community itself.
It's really cool.
It's such a great concept, as Andrew was saying
earlier, it's a great way to have
a UI for a CLI
is by doing it via a VS Code extension.
That's pretty smart.
You're clever.
This tooltip falls right in line with my last tooltip.
For my storybook alternative that I'm building
one of the things I need to do
is extract the stories from a file.
So there's a couple of ways
I could go about that.
I could just be real stupid. I could just use
FS, read the file, try to use some
regexes to parse what
exports are, but that's not going to
work in the end. That's a bad solution.
The next step, you might go to
the most popular tool
that would do this for you.
What you really want is an AST
that you can then go, okay let's go through
the AST, get all the exports, and then we're off to the races.
The leading
choice for that right now would probably
be Babel.
But Babel, it's built on JavaScript
and it's pretty slow.
My tool is already built on
Vite, so the whole point of it is to be
as fast as possible.
If I throw some Babel in there, it's
eventually going to bog down the rest
of the build speed, and I want sub-second
startups.
So I went to the bleeding edge
and went for SWC. SWC
is a rust-based
compiler
for the web.
It's sort of like ESBuild, but the biggest
difference between ESBuild and
SWC is
SWC actually exposes
the AST so you can do
things like extracting those exports
or even doing transformations.
And since it's built on Rust, it's still
really fast.
So it goes really well with Vite and this whole
thing that I'm trying to build.
And currently, for my little test
storybook, it starts up in
250 milliseconds.
So it is really fast.
I'm excited to finish building this on the weekends.
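As a hedged sketch of the export-extraction idea described here: @swc/core exposes parseSync, which returns the module AST, and export-related items can be filtered out of the parsed body. The exact node-type names reflect my understanding of SWC's AST and can differ between SWC versions, so verify against the version you install.

```typescript
import { parseSync } from '@swc/core';

const source = `
export const Primary = () => '<button>Primary</button>';
export default { title: 'Button' };
const internal = 42; // not exported
`;

// Parse without transforming; this gives us the AST directly.
const parsed = parseSync(source, {
  syntax: 'typescript',
  tsx: true,
});

// Export-related module items have types such as 'ExportDeclaration' (names may
// vary slightly by SWC version), so a prefix check is enough for a sketch.
const exportItems = parsed.body.filter((item) => item.type.startsWith('Export'));

console.log(`found ${exportItems.length} export statements`);
for (const item of exportItems) {
  console.log(item.type, 'at span', item.span.start, '-', item.span.end);
}
```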
Nice, nice.
Yeah, I'm excited to see what you come
with.
Cool.
Yeah, I must say, I never approached
SWC from the AST perspective, never
touched that part.
We recently started using that in Nx as well.
We have a new package,
@nrwl/js,
which is really, like, we call it
JS, which might be a bit misleading,
but we wanted to have the best
JavaScript packaging experience,
but specifically also for TypeScript, so it covers
both of them.
And there we have both options, you can use
SWC to compile your TypeScript or TSC
obviously, which would be the default
way to go.
And you can even switch back and forth.
And the speed aspect is really awesome, absolutely.
Super fast.
Yeah,
I was looking at the AST, and they
do have Type information in there, so
that's pretty interesting. I also use this
to extract JSDoc comments
from the AST, so cool things.
Cool, I need to check that out.
They're working on a Rust-based
API too, so
if you want to do Rust-based AST manipulation
then you can.
Yeah, they're probably going to break all my code,
but that's fine. I know a person
who recently learned a lot of Rust,
he could probably help out.
So,
my last tooltip of the day is
this really, really interesting project
that I stumbled across. It's very much
in line with my last tooltip.
It's called WebSocketPie.
We'll
include the link in the show notes.
WebSocketPie,
the idea behind this project
is to be able to give you a real-time
multi-user application
without having to
really think too much about the server
or the server configuration.
You set up a standard server,
and then the client sort of manages
all the connections,
and so, essentially,
so long as you have the correct
room name and the right version,
then you'll automatically connect
and you can send messages.
There's a predefined format,
but if you were looking for a really,
really easy way to make a game lobby
or a chat app
or something like that,
this would be a really interesting
sort of mechanism for that.
Now, this is more of a hobbyist thing
if you're communicating
sensitive information
or security is a big thing,
maybe not this,
because it does rely a lot
on clients
doing the configuration
setup. This is really just to get started
really easy,
but if you're making a game or something,
it would be really interesting to check out.
Yeah, my next pick is
Obsidian. This is not really
programming-related
or dev tooling, but I've been using this
recently. I'm very interested in the
space of personal knowledge databases
and how to keep notes and stuff,
especially now that I've transitioned
more in content production,
doing that much more actively,
this is super helpful, and I feel like
I've tried different approaches, but this is the one
that clicked most for me
in the sense of how I build up my
knowledge base and how I can connect
notes between them, to approach
it much more from a graph perspective.
Usually what happens is
the biggest fear is usually, okay, I store
my notes somewhere, how do I
find them? What you usually do is
create folder structures,
but then they're super rigid,
then you start using tags, but
what tag did I use last time?
In this case, what I
figured is what I usually remember
is context. I might not remember the name
of someone, but I know I was on their show,
I talked to them at that, or they did
that talk, so I find a talk
and from there I find a link to the name of the person.
In that kind of thing,
it's really powerful
and I was even thinking about
exploring it more in the sense of
partially even exporting it to
your blog platform, to have
something like a digital garden where you have
small notes, you push them up,
to not always just have those big,
long blog posts, but rather extract them
and have some small notes
connected in there. It's really cool
space, also a very big community behind it,
large forum and chat room
that they have. It's pretty awesome.
Justin, bring up
hipstersmoothie.com.
Yes, Juri,
it is very easy to export your
Obsidian MD notebook to a
statically built site, and that's what this is.
Put a digital garden at the top.
I need to check that.
Awesome.
You can check out the repo. They have their own solution
that they sell, but it's pretty
trivial to set this thing up yourself.
Me and Justin are big Obsidian people,
or at least we were.
Justin also created, yeah.
I use their
paid solution, so this
my notes on the site is
through their posted solution,
which is pretty good.
Nice.
You had one last one on here.
Do you want to share this one, Juri?
Yeah, this is something I stumbled upon
most recently, so I'm not
super deep into Vim. I started using it
as probably most developers do,
on and off, continuously:
you jump into Vim and then it doesn't
stick and you back out again,
but it did stick now for me,
at least for the last two years, mostly
within Visual Studio Code, there's
a VS Code Vim plugin,
so you have the Vim shortcuts,
but still not the full Vim environment.
In general, now,
this LunarVim, or LVim,
came up.
It's pretty cool, so I started exploring it a bit,
but really just since last week,
for those quick things where you're
in your terminal and you just want to
kick off, open up a project, quickly explore it.
It's really powerful.
It's based on Neovim,
so it sets you up already with a couple of plugins
that get you started
quickly. If you're not super deep
into Vim, it's super nice to get started with.
Does this actually come with a
GUI, or is it just using
Neovim through the terminal, just pre-configured?
Yeah, it's pre-configured.
You spin it up from within your terminal
and then you have already a tree structure
set up, code syntax
highlighting. It basically pre-installs
some of the plugins for you as you open up files.
So it's pretty nice,
but this is pretty fresh,
like I started exploring it last week.
It's pretty cool.
Yeah, yeah, that's awesome.
Definitely having more opinionated Vim
setups is a good thing,
because Vim, there's a long history there
and it can be very
very hard
to get into and get all your configuration
set up for everything. It's cool though.
Yeah, I think that about wraps it up for the episode.
Thanks for coming on, Juri.
It was cool to talk to you about Nx.
We talked about it in our first episode
and it was cool to take this deeper dive
and learn more about it.
Yeah, it feels like we've come full circle.
Juri, thanks for being on. It was great.
Awesome. Yeah, thanks for having me.
Well, that's it for this week's episode
of DevTools FM. Be sure to follow us on YouTube
and wherever you consume your podcasts.
Thanks for listening.

devtools.fm: Developer Tools, Open Source, Software Development

A podcast about developer tools and the people who make them. Join us as we embark on a journey to explore modern developer tooling and interview the people who make it possible. We love talking to the creators front-end frameworks (React, Solid, Svelte, Vue, Angular, etc), JavaScript and TypeScript runtimes (Node, Deno, Bun), Languages (Unison, Elixor, Rust, Zig), web tech (WASM, Web Containers, WebGPU, WebGL), database providers (Turso, Planetscale, Supabase, EdgeDB), and platforms (SST, AWS, Vercel, Netlify, Fly.io).