James Garbutt - e18e

Duration: 54m 1s

Release date: 12/05/2025

In this episode, we talk with James Garbutt about e18e, a community-driven initiative focused on improving the performance of JavaScript packages across the ecosystem.

We discuss:
• The goals and vision behind e18e
• What's slowing down the JS ecosystem
• Why performance work is often invisible—and how to fix that
• The importance of community coordination in open source
• How developers can get involved in improving the packages they rely on

If you care about build times, bundle sizes, and the health of the JavaScript ecosystem, this episode is for you.

This episode is sponsored by WorkOS (https://workos.com) and Mailtrap (https://l.rw.rw/devtools_4)

🔗 Links
• e18e Website: https://e18e.dev/
• GitHub: https://github.com/e18e
• Discord: https://discord.gg/e18e
• James on Twitter: https://twitter.com/jgarbutt
• Full episode + transcript: [link to your site if available]

🎧 Subscribe to Devtools.fm for more conversations with the people behind the tools developers use every day.

The hot path, in terms of the dependencies that most people pull in, doesn't really need to support that far back.
So the whole thing was to create alternatives,
rather than trying to get rid of whatever it is that's pulling all of those in.
Hello, and welcome to devtools.fm.
This is a podcast about developer tools and the people who make them.
I'm Andrew, and this is my co-host, Justin.
Hey, everyone.
We're really excited to have James Garbutt on with us.
So, James, you are one of the minds behind this movement called E18E,
which is an initiative to clean up and speed up and just improve the JS ecosystem.
So, we're really excited to chat about that today.
But before we get started, would you like to tell our listeners a little bit more about yourself?
Yeah, thanks for having me, by the way.
Yeah, so I spend a big chunk of my time leading the E18E initiative
and I maintain quite a lot of open source projects as well.
So, a lot of my time is split across all these different things.
You know, things like chokidar, and a whole bunch of others you've probably passed by or tried.
So, I'm sort of hopping between all these things.
But yeah, E18E is definitely my main focus at the minute
and probably will be for a while, I imagine anyway.
But yeah, see where it all goes.
Yeah, it's quite a big lofty initiative and kind of amorphous.
So, can you explain to us what E18E is and where it started?
Yeah, so, some time ago, I think maybe a year and a half ago or something,
there was basically a social thread somewhere
where Bjorn, who was working on Astro at the time, and Anthony, working on Vite,
and I was sort of chipping away at cleaning up dependencies in a whole bunch of things.
We all found each other in a thread of all places
and just started chatting about maybe we should have a space to discuss this stuff.
You know, and we had people like Marvin doing the Speeding up the JavaScript ecosystem
blog posts as well at the same time.
So, maybe we should at least create a discord or something
and we can all work together on performance related open source tasks.
And so, from that, we created a discord to at least discuss this stuff,
which then turned into the E18E project,
where we can get more people involved and create issues
and make it easier for everyone else to pick up work, basically,
just to expose the performance problems more, I imagine.
Well, yeah, it all came out of just various people trying to improve performance
in all sorts of different projects and not knowing about each other's work, basically.
So, it's been really good to connect those people
and just provide a space for new contributors to get involved as well.
I feel like when this initially came up,
before the initiative happened and discord and all that other stuff,
it was like, there was like a tweet or something about,
oh, well, if you install this package,
actually you get like this explosion of dependencies.
And there was like a lot of discourse on that.
And, you know, it kind of
was, like, the JavaScript drama of the day, as there usually is.
And so, were you already working on this problem
before that had happened?
Is this something that was already on your mind?
Or was there, like, a moment where you were like, OK,
I can get in and do something about this?
Or, like, how did your involvement start?
Yeah, so it was the big drama at the time.
Long before that, I'd sort of been contributing individually
to a bunch of projects to reduce their dependency trees.
And I kind of started doing that maybe like four years or so ago.
And there's like a trail in my GitHub of various attempts of creating,
you know, like lint plugins and other tools to help you out with this stuff.
And none of them really took off.
So, you know, I carried on doing the PRs myself.
But that was long before E18E.
And then E18E started
and maybe like a few months in or five to six months in.
This drama began,
but it did help a lot actually to add some visibility to this stuff.
And we got a lot of people involved
that wanted to help out because they'd seen this.
But yeah, ultimately, the dependency tree stuff
is because certain projects do actually need to support
really old versions of Node, for example.
You know, the hot path, in terms of the dependencies
that most people pull in, doesn't really need to support that far back.
So the whole thing was to create alternatives
rather than trying to get rid of whatever it is that's pulling all those in.
Which I think has mostly been successful.
In a lot of cases, we've contributed upstream as well.
So, you know, sometimes you don't need to create an alternative,
because quite a lot of maintainers are happy to, you know,
increase the engine constraint, for example.
But there's always going to be some use cases
where packages like the one that had the crazy dependency graph
do actually have a use somewhere,
just not in the sort of mainstream projects.
You know, the common use case doesn't need to support Node 1, for example.
So as long as we have alternatives,
we can massively reduce the dependency tree depth, essentially.
And if you really do need that stuff,
you can still pull in, you know, the one that supports old versions.
But yeah, ultimately, the drama did help
drive a lot of the effort.
But we were already working on a lot of this stuff for a long time before then.
When people think about performance,
I don't think dependency graphs are the first thing that comes to mind.
I think people think of actual runtime performance.
So why was this dependency tree problem
the first one that people went after?
And what are the real-life benefits?
Does reducing the tree actually get you those?
Yeah, it's different for every project,
because we're trying to improve runtime performance,
but also the developer experience, from a development point of view.
So install size, for example,
matters more for developer tooling than for runtime,
because you'll generally bundle things,
so a lot of that gets stripped out anyway.
And if you have a deep dependency tree,
it adds complexity on the DX side,
but it won't necessarily cost you much at runtime.
But then you have packages that sit somewhere in the middle,
where they're fine,
but they also have a lot of code that could be modernised,
because the platform has moved on since they were written.
If we can improve those packages,
or create alternatives that lean on more modern primitives,
and things like that, that are built on what's built in,
you improve runtime performance and DX
in a lot of cases.
And there are cases where,
if you improve the bundles,
those improvements carry through to the runtime too.
But yeah, we still have a lot of
cleanup surface where it's really just DX.
But it's still important,
because you might be pulling in,
maybe, hundreds of megabytes on install,
when you really only need a few megabytes,
or something like that.
It just doesn't matter quite as much.
When it comes to people having tools,
or using or understanding the tools,
what are the simplest tools you have for that?
Like, how do you know
what state your project is in,
how healthy it is?
Yeah, at the minute, that's one of the tasks we're trying to work on,
partly to make this stuff easier for beginners to approach,
and to give people a picture of the scale of what an install actually pulls in.
Right now we're taking stock of what's out there,
things like pkg-size.dev, which tells you how big the install is, and, you know,
Bundlephobia-type tools that tell you what the bundle size is, and things like
that. But yeah, at the minute it's very much a lot of tools and not much connecting them,
so there's work in progress on creating, like, a unified library and CLI that
joins all of these together, because you should be able to run it against your own project
and just get summaries, or reports, or something that tells you if there are any warnings
or whatever. But yeah, admittedly it's a bit awkward right now, you kind of just have to know
the big list of tools and go through them one by one. It is documented, but it could
obviously be better if it all just worked together. But yeah, check out the resources page,
because it's super useful for this. We have things like the lint plugins and so on,
which have helped, and tooling on the optimisation side to go with the build plugins and the
bundle analysers and things like that. It's cool to see you guys working on the
bundle stuff and the bundle analysis. That's an area I think is really under-explored
in performance tooling. There are, like, the two webpack ones that have been around forever at this point
and haven't really changed at all, and nothing has pushed the DX forward. I'm glad to see
that might develop further at some point. Yeah, and in fact, one of the things we want
to keep building on is having prototypes where you can
browse your node modules all in one place,
and see, well, the warnings and things.
So, if we add more features to that,
a lot of this should be solved just by using the UI.
So, if we can get bundle analysis in there, that would be really nice.
But I think that's the direction we're heading.
Just because, like you say, there are already a lot of bundle analysers,
but there's a lot more tooling now than there used to be.
And, you know, the more interesting things we can do,
like detecting duplicates, and ES modules versus CommonJS,
and all that kind of stuff, and even suggesting alternative
dependencies, and so on.
So, I'm interested to see where that ends up.
But yeah, we have a lot of tools to build.
Yeah, I'm actually looking at this right now.
And that stuff will be available in the UI as well.
If you haven't seen it, there's a tool called Are the Types Wrong.
It's super useful because it checks that your type definitions are actually resolvable by TypeScript, with the export maps and everything like that.
If you can just gather all of that up and surface it in the node modules UI, you've already got a good overview of any potential problems.
I think it's great to just keep building more and more detail into that kind of thing.
Like you said, the UI is super nice, like everything he builds.
I threw one of my own packages into it and just looked at its tree.
I've always liked just expanding down to the lowest layer.
You know, you go to the bottom of the dependency tree, see what's in there, and you'll find something that's odd.
This episode is brought to you by WorkOS.
WorkOS add enterprise features to your app without the overhead.
Single sign-on with any provider, directory sync that just works, role-based access control, audit logs, admin portals, a secure credential store, you name it.
They have modern APIs, well designed SDKs and documentation that respects your time.
It's built for developers, backed by great support, and trusted by teams shipping fast and scaling up.
Just focus on your product and let WorkOS handle the enterprise stack.
You can get started at WorkOS.com.
WorkOS also has a podcast called Crossing the Enterprise Chasm.
It's really worth a listen if you're in a startup and you're looking at serving a market or moving to enterprise customers.
Thanks again to the WorkOS team for sponsoring us.
Part of the E18E initiative is to identify packages in the ecosystem that are heavily dependent upon
and they're either not maintained or they're targeting a really old version or there's some other reason why you want to optimize them away.
But this has to be like, some of these packages may be relatively straightforward in their implementation, but some of them I'm assuming aren't.
As you take on a new package, you're taking on a maintenance burden of maintaining that library.
So how do you all balance the sort of ecosystem need for a new package versus the maintenance burden of taking that on?
Yeah, so this is basically one of the things that we try to do, obviously, is contribute upstream above all else.
So if we can work with the existing maintainers to improve what already exists, then we'll choose to do that.
But it's not always possible because some maintainers are long gone, you just can't contact them.
And some maintainers have different requirements, like on the version of Node they need to support or something else.
So yeah, sometimes there is a valid reason to create a fork or an alternative.
But to sort of help with the maintenance burden when that does happen, a few packages, for example, have been hosted by some of the organizations that are in our community.
So for example, there's an ES Tooling org, which contains a couple of packages that are built by the community, but owned by the ES Tooling organization, on purpose, so there's not just one maintainer.
And it's the same with Tiny Libs. Tiny Libs has maybe like six libraries, which are each individually owned by someone who originally created it, but the organization maintains all of them.
So we can share the burden and basically ensure that it's not going to die off anytime soon.
So yeah, I think a solution to that is just making sure that there's always an option for new maintainers to move their package to one of these orgs and get the help basically so that they're not alone.
And we've seen quite a few contributors that want to create a new package but don't want to be the only one maintaining it.
So they're really happy to have the option to join an org and have some help.
But that's, you know, that's also why we try not to switch to things that are not battle tested, because we've seen this a few times where it's all good someone can create an alternative package.
But, you know, if it's got no users yet and no one's really tried it in the wild or any of that stuff, we can't really promote that people move to it.
So a big chunk of the work that we do in the communities to reach out to, you know, larger projects to try these new tools out.
And if they do work in some sort of beta version or something, then, you know, we can gradually move them over to them.
But it's, you know, it's always going to result in a level of trust at some point in that you just have to trust that the maintainers are not going to disappear.
The same that we did with the existing packages.
But ideally, most of them have more than one maintainer, at least.
Can't always be true, but, you know, in a lot of cases, it is.
And, you know, quite a lot of the community members work together well enough that most of them are maintained by multiple members in the community.
Not many of them are like solo ventures or whatever.
So yeah, hopefully it just carries on that way.
One tough topic when it comes to publishing packages is supporting different node versions.
Part of that big, big drama back then was supporting all the way down to like some of the oldest node versions.
And that's a hard thing to do if you're running a project.
It really ups the maintenance burden if you want to be like, oh, this last version supports all the way back to this old unsupported version.
So what's your take on supporting node versions like that?
And like, when we hit EOL for, like, Node 18, how long do you think we should keep supporting Node 18?
Yeah, it's a tough one because I don't.
So I believe that there probably are people somewhere that do need to support their old node.
So, you know, like I said earlier, we don't want to get rid of the packages that let them do that.
But then most packages, I would expect, can assume that you're using like node 16, at least, or something like that.
But yeah, you mentioned node 18 falls out of long term support soon.
And yeah, it will be a great day when that happens.
But at the same time, obviously, you can't just switch off it immediately.
Like, there will be a lot of projects that still have to support it for a while.
And that's okay.
But it does mean that quite a lot of projects have the option to create a new major version or something that does require node 20 and above.
You know, I don't believe that most of us should.
So most of us should not have to support very old nodes, you know, like 0.8 or whatever.
But quite a lot of us will have to support node 18, for example, even when it's not long term support.
So I think that's fine.
I think just naturally over time, we should keep an eye on it and when we can, you know, bump the constraint.
Because quite a lot of these projects as well, didn't purposely support old node.
They just left the constraint like that and have been changing, like keeping the code compatible since.
But really, you know, when you bring it up, quite a lot of projects have said, oh, well, why don't we just bump the version up and they have, you know, it deletes a lot of code, because you can use built in functions and things that didn't exist before.
So yeah, my recommendation would always be just try use a more modern node.
But I fully understand that's not always possible, depending on, you know, what platform you're working in and who it's for and things like that.
So it's different for every person.
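For reference, the "engine constraint" being bumped here is just a field in package.json; a minimal sketch, where the package name and version range are only illustrative:

```json
{
  "name": "some-package",
  "version": "2.0.0",
  "engines": {
    "node": ">=20"
  }
}
```

Publishing that in a new major signals that older Node versions are no longer supported, which is what lets maintainers delete compatibility code and lean on newer built-ins instead.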
So, I mean, thinking about like, OK, so there's packages that depend on all versions of node.
There's packages that depend on other packages that potentially have like a bunch of dependencies themselves, but what are the other like big areas of cleanup that you see in the ecosystem, like things that just people can do to really just improve, you know, their packages.
Yeah.
So the really big one that we've been pushing a lot recently is moving away from dual packages.
You know, quite a lot of packages out there in the wild basically ship double of the code because they want to support common JS and ES modules.
And in some cases, still, umd, for example.
So yeah, we've been doing a fairly big push to move a lot of packages to be ES modules only.
And I would recommend that for anyone if you can, you know, I, some, obviously, some packages still need to support common JS.
But if, if you're lucky enough that you don't need that, then you should switch to ES modules, especially now that the latest node versions can require it in common JS.
So, you know, you're not losing many users by dropping the common JS support because they can still require the module.
But think things like that.
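As a rough illustration of the ESM-only setup being described (the package name and file paths are assumptions, not from the episode), a dual package would ship both a CJS and an ESM build, while an ESM-only package can look like this:

```json
{
  "name": "my-lib",
  "version": "1.0.0",
  "type": "module",
  "exports": {
    ".": "./dist/index.js"
  },
  "files": ["dist"]
}
```

With no second CommonJS build to generate and publish, the install ships roughly half the code, which is where the savings come from.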
And there's an ongoing discussion about source maps as well, but that affects like install size, not runtime performance.
So it's, it's fairly low on the priorities, but would make a big difference in a quite a lot of packages ship source maps.
And the source maps can often be much bigger than the code itself.
So if you didn't ship that, you know, modules would be a lot smaller, but there needs to be some solution to, you know, be able to pull source maps in when you want to debug this stuff.
So there's still discussion going on around that.
But yeah, in general, if you're maintaining some project, I would just recommend browsing node-modules.dev, for example, and seeing what your dependency tree looks like, and just being more aware of what you're pulling in.
Keeping eye out for like duplicate dependencies, for example, you know, different versions of the same thing.
You could have a look for a mixture of common JS and ES module dependencies and just see if there's more modern alternatives to some packages, smaller ones, even.
So, you know, I go back to Tiny Libs example, like Tiny Libs has quite a lot of packages that are more modern and faster and smaller than the older ones they replaced.
So, yeah, if you just browse around the Tiny Libs org and on JS as well, both have a lot of alternatives like that.
But also, you can just run tools for that. We have a lint plugin as well in E18E.
So you can run that and it'll suggest replacement dependencies basically.
It'll detect what you're importing and just, you know, suggest maybe you could use this thing instead.
And that's not always another dependency.
It could be a built-in thing that browsers have, for example, natively.
So, yeah, I think a lot of it is just being more aware of your dependency tree and also just being more aware of how the platform has changed, you know, we ship more features to browsers and node all the time.
And often people don't realize that something is built in now that they're pulling a package for.
You could drop the package and just use the built-in function.
So, definitely keep an eye on, just like, changelogs and stuff like that, release announcements and things.
And then you'll, you know, you'll, you'll end up with a much cleaner code base because of it.
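To make that kind of swap concrete, here's the shape of dropping a tiny utility package in favour of a platform built-in; the "flatten an array" case is just an illustrative pick, not one called out in the episode:

```js
// Before: many projects pulled in a small utility package just to
// deep-flatten an array.
// After: Array.prototype.flat has been built into Node and browsers for years.
const flat = [1, [2, [3]]].flat(Infinity);

console.log(flat); // [1, 2, 3]
```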
Yeah, hopefully when the require ESM stuff drops, we have to worry about a lot less of this.
It's like, it's hard to anticipate what the new problems will be.
It's like, we're, we're very versed in this old problem that we've all struggled with for so long.
Like, I'm interested to see if there's like an unanticipated thing coming because I know for the longest time we didn't ship require ESM because the node people are like, oh, it's bad for performance.
Like having to guess if this file is CJS or ESM.
So maybe some of those will come to bear their head, but I don't know.
Yeah.
And, you know, there's still, there's still quite a few people that prefer common JS.
So I don't think it's, maybe, maybe never, maybe it'll just be a long time, but I don't think all packages will move to it.
I think you will end up with some maintainers that genuinely prefer to use common JS.
And for example, from what I remember, there's some OpenTelemetry stuff, for example, that's not as easy to do in ES modules, because you need to hook into, like, the require resolver or something like that.
You can't really do as easily in ES modules, but things like that obviously will get resolved over time.
But yeah, there'll still be a preference for a few people to use common JS.
So I think it'll just be a mixture for a long time.
But as long as you've got alternatives, you know, I think you'll just choose the one that you prefer.
And hopefully most people choose ES modules, because it does help bundlers as well and things like that, because it's, you know, way easier to tree-shake than CommonJS, just because it's less dynamic.
But yeah, we have yet to see like what the performance of like the module resolution and stuff like that will be in comparison.
As far as we've seen so far, it's fine, but we need to move more packages to it and, you know, help the node team, especially just try out more, especially the require stuff and get more feedback,
because they will iterate on that as well.
So it's, yeah, it's not close to being done yet.
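For context on the require(ESM) feature discussed above: in recent Node releases, a CommonJS file can load a synchronous ES module with a plain require() call. A minimal sketch, where the package name is hypothetical:

```js
// consumer.cjs - a plain CommonJS file.
// On recent Node versions (22.12+ without a flag, slightly older releases
// behind --experimental-require-module), require() can load an ES module
// directly, as long as the module graph has no top-level await.
const { greet } = require('some-esm-only-package'); // hypothetical package

console.log(greet('world'));
```

This is why dropping the CommonJS build loses fewer users than it used to: CJS consumers on a modern Node can still require the ESM entry point.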
This episode is sponsored by Mailtrap, an email platform that developers love.
Get high deliverability, industry-best analytics, and live 24/7 support.
You can get 20% off all plans with the promo code DevTools.
Check out the details in the description below.
So speaking of bundling and performance of ESM, barrel files are a big topic when it comes to performance.
So why are barrel files so bad for performance and yeah, yeah.
Why are they so bad?
Terrible.
So it's a good way of seeing this actually is in browsers.
So let's say you, you have like an ES module that's a barrel file.
So and a barrel file, by the way, is just some file that reexports a bunch of the exports of some other files.
You know, like you might have in an index file.
But let's say you have one in a browser, especially, and you've just pulled it from unpackage or something, some CDN.
From the module resolution in your browser, it will have to send a request for all of those modules that it's re-exporting, even if you don't use them.
So in a bundle, you can, like, you can kind of get rid of some of that, but it does mess with tree shaking a lot as well.
So you are better off just importing from the exact modules, basically, what you want.
And there are lint plug-ins to detect this stuff now.
I think Biome has it built in, and a few others, you know, there's an ESLint plugin, because it's just proven to be really bad for tree shaking.
But if you run your code natively in browsers, it's even worse, obviously, because it has to load those files.
So it's, yeah, it's generally just not good for performance, but it is nice for usability sometimes.
So that's why we've ended up here.
But I think that will change over time, because, especially with export maps, for example, you can more easily split modules up now and export specific things rather than needing one big index file.
So we'll see what happens, though, because there's still a lot of them in the wild, unfortunately.
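A minimal sketch of the pattern being described; the file names are made up for illustration:

```js
// utils/index.js - a barrel file that just re-exports other modules.
export { parseDate } from './parse-date.js';
export { formatDate } from './format-date.js';
export { slugify } from './slugify.js';

// consumer.js
// Importing through the barrel loads the whole index; in a browser, every
// re-exported module gets requested even if only one export is used.
import { slugify } from './utils/index.js';

// Importing the exact module avoids that, and is easier to tree-shake.
import { slugify as slugifyDirect } from './utils/slugify.js';
```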
Yeah, before export maps, it was just, it was so hard to do, like, multiple exports without barrel files, right?
Like, you could throw a JS file at the root of your project and then point it at something in a dist folder, but dual packaging hazards abound and it's not a fun time.
Export maps have made it better, but the syntax is a little... it's a high bar for most, and it's very, very easy to mess up.
Yeah, 100%.
And that's, I think Publint helps with that.
Going back to that, you know, it's just something to lint the export map, basically, and help you out, because it is a complex thing.
A lot of people still get it wrong.
Simple stuff like putting the types line after the module line itself, means that sometimes the types never get loaded, for example.
So using a lint tool, or, like, Are the Types Wrong, will help you figure that out, but it is a complex thing in general.
And there's import maps as well, but I haven't seen many of those in the wild.
So, yeah, hopefully there's just good documentation on it.
Seeing a little bit more of those in the Deno world, because they're leveraging stuff like that.
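A sketch of the export-map pitfall mentioned above: condition order matters, and the "types" condition has to come before the ones that resolve to JavaScript. The paths here are purely illustrative:

```json
{
  "name": "my-lib",
  "exports": {
    ".": {
      "types": "./dist/index.d.ts",
      "import": "./dist/index.js",
      "default": "./dist/index.cjs"
    }
  }
}
```

If "types" is listed after "import" or "default", resolvers that match conditions in order can pick the JavaScript entry first and the declarations never get used, which is exactly the class of mistake Publint and Are the Types Wrong flag.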
I wanted to ask, when I was going through the site on the speed up section, there is one thing that I had noticed.
So you have this call out about avoiding generator functions for hot code paths.
And I was curious, like, how much of a problem that is in practice, like, what the practical slowdowns are for relative code.
And also, like, this is, the call out is very specific, like, hey, some JavaScript engines are not going to optimize this, using this kind of code could lead to slow execution.
But like, internal execution, or internal implementation of an engine is a moving target, right?
Like, V8 is always changing, JavaScript Core is always changing.
So how do you, when you are making these suggestions, how do you, like, keep abreast of the performance changes in the space to, you know, build up your recommendations over time?
And like, if someone else is, like, building a code base that relies on these features, how do they keep track of it?
What's your advice there?
Yeah, it's that specific page is basically a very difficult set of advice to keep up to date.
I'm not sure there's, like, a final answer to this or anything in that.
Like you said, the engines change over time.
So I think on the same page, we specify that you should avoid chaining array methods, you know, like map and filter and things like that.
But I think, from what I remember, V8 actually optimises a bunch of that under the hood.
So if you do two filters, for example, I think it's smart enough that it can figure out that it's one iteration or something like that, in some cases.
So it is a difficult thing to document because the engine might push an update, you know, next week that optimises that.
So I think we probably should update that page anyway, just to mention that it is a moving target, like you said.
And as far as I'm aware, async and generators still do have some performance issues compared to, like, sync code.
But the engines probably would benefit from you doing this stuff so that they can have opportunity to optimise it right.
So yeah, I think it's good advice, but at the same time, it's difficult to keep track of when the engines themselves have increased performance of this stuff.
So I think it would be better that we can present it as like advice rather than definitely don't use this, if that makes sense.
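As a rough illustration of the kind of hot-path rewrite that page is gesturing at (this is not code from the episode, just a sketch), a generator-based helper versus a plain loop:

```js
// Generator version: nice to read, but historically harder for engines to
// optimise on very hot paths.
function* evenNumbers(values) {
  for (const value of values) {
    if (value % 2 === 0) yield value;
  }
}

// Plain-loop version: builds the result in a single pass with no iterator
// machinery, which is the shape that advice prefers for hot code.
function evenNumbersArray(values) {
  const result = [];
  for (const value of values) {
    if (value % 2 === 0) result.push(value);
  }
  return result;
}
```

Whether the difference matters is, as James says, a moving target, so it reads best as advice to measure rather than a hard rule.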
Yeah, I know like benchmarks are fraught in a lot of different ways, right?
It's like really easy for benchmarks to misrepresent data, but like this kind of thing seems like the situation where benchmarks would actually be super valuable.
It's like here's the data collected on like these code paths of like here's async, here's generators, here's another, you know, type of structure and just looking at like the performance implications of these code paths.
I've stumbled across features like this in the past where it's like, okay, don't use this feature, it's slow.
And then it changes over time, but you still have that idea in your head.
It's like, oh, it's slow. And it's hard to update that mental model.
And it's hard to talk about too, just because it's hard to visualise and what is the magnitude of the slowdown.
Like, I don't know. It's always a challenge.
Well, a good example of this, I think as well is the various colour libraries.
You know, we've got chalk, picocolors, ansis, and a bunch of other libraries that do ANSI, like, terminal colours.
And you know, ultimately terminal colours are just some escape sequences.
So all of these libraries, when they compete on benchmarks, they're just competing on how fast they can concatenate some strings basically, you know, because under the hood, all of them will join the escape sequence with like the text that you want to make red, for example, and then another escape sequence.
And it's how fast you can do that.
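To illustrate the point, this is roughly all a terminal-colour helper does under the hood; a simplified sketch, not any particular library's implementation:

```js
// 31 switches the foreground to red, 39 resets it to the default colour.
const red = (text) => `\u001b[31m${text}\u001b[39m`;

console.log(red('error: something went wrong'));
```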
And so when I see benchmarks for stuff like that, I do think like we should avoid relying on benchmarks for those things, because the engine might change how, you know, the string dot split performs versus iterating through a string and looking for the character you want to split by.
So the benchmark might be worthless a week later if the engine pushes some update that optimises it.
And then, you know, library A might have been faster than library B.
And then next week, library B is faster than library A.
So yeah, in a lot of those cases, I think micro benchmarks in particular where you're just measuring some sort of fundamental operation are not very useful.
But then using benchmarks more to benchmark, like, more complex code between libraries and stuff like that, or even just to keep track of change over time, you know, against different versions of Node, for example,
probably a really good idea as maintainers.
You know, if you have a, if you maintain a library that cares a lot about performance, probably a good idea to have some regular run benchmark just to test that, you know, like a new version of node hasn't come out that makes it slower or something.
It would be good to see more libraries doing more of that, I think, and more apps and things.
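For maintainers who want that kind of regular check, here's a minimal sketch using tinybench (one of the TinyLibs packages mentioned later in the episode); the two implementations being compared are just placeholders:

```js
import { Bench } from 'tinybench';

// Two stand-in implementations of the same operation.
const joinWithSplit = (s) => s.split(',').join(';');
const joinWithLoop = (s) => {
  let out = '';
  for (const ch of s) out += ch === ',' ? ';' : ch;
  return out;
};

const input = 'a,b,c,d,e'.repeat(1000);
const bench = new Bench({ time: 100 });

bench
  .add('split/join', () => joinWithSplit(input))
  .add('manual loop', () => joinWithLoop(input));

await bench.run();
console.table(bench.table());
```

Running something like this in CI against each new Node release is the kind of regression tracking James suggests, rather than treating any single micro-benchmark number as a lasting truth.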
A long time ago, Douglas Crockford wrote JavaScript: The Good Parts, you know, talking about, like, oh, well, actually only use this subset of JavaScript.
And that was, I mean, that was before like ES6 and all that stuff.
So this has been a while.
And as I was reading through the sort of performance recommendations, I was like, I was kind of thinking, do we need another JavaScript, the good parts?
Should we, like as a community rewrite that book and like talk about it in modern terms?
And I was curious if there was like one or two things, if we like, we're all writing a book like that, what are one or two things that you would like want to make sure 100% were added?
Oh, that is a tough one.
Like, you know, because I'm a big fan of ES modules, for example, but like I explained earlier, I can't really say, oh, everybody should use ES modules.
I mean, they should, but, you know, if you prefer a different syntax, maybe not.
But yeah, there's, there's things like just different primitives and different built in functions and stuff you can use now that you couldn't before.
Like using sets and maps and things like that.
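A tiny example of the kind of built-in primitive that would make that hypothetical book; this is purely illustrative:

```js
// De-duplicating used to mean indexOf checks in a loop; Set does it directly.
const unique = [...new Set(['a', 'b', 'a', 'c'])]; // ['a', 'b', 'c']

// Map allows arbitrary keys (including objects) and preserves insertion
// order, which used to need ad-hoc workarounds with plain objects.
const lastSeen = new Map();
lastSeen.set({ id: 1 }, Date.now());
```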
But also, you know, there's so many proposals out there that are introducing new features that I wish were a thing already so that we could get rid of a lot of older sort of mechanisms and things.
You know, there was a proposal, unfortunately shot down recently, that was adding tuples and records to JavaScript.
And it, you know, it would have made a lot of templating engines a lot faster, for example, because they would be able to compare two sets of template strings
between renders, basically, a lot easier.
But yeah, I've mentioned like a lot of stuff around these newer primitives.
But then at the same time, like it's, the language is changing all the time, people are adding proposals still all the time and new standards are coming out.
So it's very difficult to know what to recommend to people.
And you know, I have friends that don't work in front end and working, you know, like C sharp, for example, still joke that every day we have a new framework or something.
And every day we've got a new syntax or something.
But yeah, that's probably a good thing sometimes, just to keep things moving.
Yeah, that's a double-edged sword.
There's lots of movement, but it can be a lot to handle.
I don't think I ascribe to that joke as much anymore, like sure, maybe like five, eight years ago, there were a new framework around every corner, but it's a little bit different now.
Like, I take that comment with a grain of salt.
Yeah.
And, you know, quite a few of them are dependent on the same stack these days.
So, you know, everybody uses Vite, for example.
Well, most frameworks.
And there's, like, Pooya, who runs UnJS, working on Nitro, which has H3, this little web server inside it, that I think multiple frameworks are going to be dependent on soon as well.
So it's really good to see that collaboration and that even if there is a new framework every day, they're all at least using a lot of the same underlying technology.
You know, and actually communicating with each other and collaborating.
So it's not like the old days.
So in the E18E Universe, there's three large projects going around.
So there's TinyLibs, UnJS, and ES tooling.
What are, like, the differences between all of them, and does E18E own any of them, or are you just contributors to various things?
So, basically, first of all, one of the early sort of intents that we had when making E18E was that it shouldn't really own anything.
It's just an initiative and a community, you know, to provide a space for people to collaborate.
And that's kind of why we have the ES tooling org, as well, for example, which holds a bunch of E18E focused tools, but it isn't E18E.
And similarly, like, TinyLibs existed before E18E, because it had tinybench in it at the time, I think, which is a really cool package for benchmarks.
And UnJS existed before, but they're all basically organizations, as in GitHub organizations, that happen to be working on things that are in this space.
So ES tooling generally holds tools the E18E community has come up with, like the lint plugin, for example, that help out suggesting module replacements and things like that.
And then UnJS was already being built by Pooya, who works on a lot of Nuxt things, and it's basically a big collection of packages that are all really focused and well built.
And a lot of them are so that you can have an abstraction on top of the various cloud providers out there.
So unstorage, which is one of the libraries, lets you have a storage layer regardless of which cloud you're hosted on.
So stuff like that's really cool.
TinyLibs has like TinyExec and TinyBench and a bunch of other packages.
And so that's its own org with its own maintainers that have, well, I happen to run that one now as well.
We have our own discord for that, you know, separate from E18E, because it existed before that.
And there's like maybe five maintainers that maintain all of that.
So yeah, we're in the E18E community, we're trying to promote these organizations because they've got good aims and well good goals.
But we don't own any of them, you know, the community as a whole is like contributing to a lot of them.
They just happen to be well aligned to what we're doing.
But then we do have projects that we're working on.
You know, I have like a github board somewhere that I use to keep track of what the big ones are we're doing.
And those are more like, you know, a big contribution somewhere that's taken months, that the community is working on and there's maybe like two or three people out of the community owning it.
So a good example of that is like the new prettier CLI, which I've been reviewing some of it today.
It's like someone measured it in some benchmarks, it was like 10 times faster than the current prettier CLI.
And that is going to be available, like in prettier itself, like as the official one, it's not a separate thing you install.
It will be the new CLI at some point.
So, you know, the community is working on that at the minute.
And that's massive project that's been going on for maybe like three months in the community.
But Fabio that started it has been working on it for maybe like four years.
So, yeah, we have a bunch of ongoing work like that that's really big that will change how we use a lot of these tools massively.
But again, we don't, like, we as E18E don't own it, we just contribute to some existing projects, you know, and try and tell people about them.
So that's the aim, really.
So for folks who maybe you're hearing about this for the first time and want to do some analysis on their projects, maybe they want to get involved or they want to help improve the ecosystem.
Where do you suggest they get started?
If you want to get involved in contributing, for example, we have an E18E issues repo that has like loads of open issues for a lot of cleanup and speed up stuff, you know, where we've already defined what needs doing, but someone needs to pick the work up.
So if you want to help out and you want to contribute somewhere that will have an impact, you know, on a lot of people, it's a really good place to just pick an issue up and try it out.
And, you know, there's loads of people in the community that will help guide you through that and will do code review and things like that.
So an example recently was a collaboration with Eleventy, where we have an issue, like an umbrella issue, in the repo to just investigate if Eleventy can be made smaller or faster.
And a couple of people from the community saw the issue and joined in and started investigating it, and there's maybe been like four or five PRs landed in Eleventy.
And a new release recently, I think they named it like the E18E release.
But that was really cool to see, you know, that they landed all these PRs and it got like 20% smaller or something like that.
So yeah, if you want to get involved on that side of things, just have a look through the issues or join the discord or both.
And there's always people willing to help out and point you in the right direction.
But as a maintainer of your own projects as well, if you maintain open source projects and you want some help looking into the performance,
you know, join the discord and share the project and people will probably join in and contribute.
And if you just want to make your own project, not necessarily an open source one, faster and lighter,
have a read through the E18E website; there are a lot of good tools on the resources page.
But use the lint plugin as well, because that gives you a lot of easy wins, in that it'll just suggest alternative dependencies that have already been battle tested and proven out.
So in a lot of cases, it's a drop in replacement.
So you could already have like speed gains and, you know, better memory performance, for example, small memory footprint or install size or whatever, just by replacing a dependency.
So I definitely recommend just running the linter and seeing what it reports, stuff like that, basically.
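As an example of the kind of drop-in replacement such a linter suggests, here's the shape of a swap from chalk to picocolors, two of the colour libraries mentioned earlier; the surrounding code is made up:

```js
// Before:
// import chalk from 'chalk';
// console.log(chalk.red('build failed'));

// After: picocolors exposes the same basic colour helpers with a much
// smaller install size and no dependencies.
import pc from 'picocolors';

console.log(pc.red('build failed'));
```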
Well, that wraps it up for our questions this week. Thanks for coming on James. This has been a particularly inspiring episode to me, I will probably spend my night looking at dependency graphs and trying to ship smaller packages.
So thanks for coming on and inspiring me and our audience.
Yeah, no worries. Hopefully, you know, people get more interested and involved in performance changes because for too long it has been low priority.
So it would be good to see more people care about this stuff.
Yeah, absolutely. Thanks again James for coming on and thanks for your work for the community.
I'm excited that this initiative exists and hopefully we'll see some good stuff in the future.
Yeah, always thanks for having me.
