Read the Tea Leaves | Software and other dark arts, by Nolan Lawson

16 Jun

Selfish reasons for building accessible UIs

Posted by Nolan Lawson in accessibility, Web. Tagged: accessibility. 6 Comments

All web developers know, at some level, that accessibility is important. But when push comes to shove, it can be hard to prioritize it above a bazillion other concerns when you’re trying to center a <div> and you’re on a tight deadline.

A lot of accessibility advocates lead with the moral argument: for example, that disabled people should have just as much access to the internet as any other person, and that it’s a blight on our whole industry that we continually fail to make it happen.

I personally find these arguments persuasive. But experience has also taught me that “eat your vegetables” is one of the least effective arguments in the world. Scolding people might get them to agree with you in public, or even in principle, but it’s unlikely to change their behavior once no one’s watching.

So in this post, I would like to list some of my personal, completely selfish reasons for building accessible UIs. No finger-wagging here: just good old hardheaded self-interest!

Debuggability

When I’m trying to debug a web app, it’s hard to orient myself in the DevTools if the entire UI is “div soup”:

<div class="css-1x2y3z4">
  <div class="css-c6d7e8f">
    <div class="css-a5b6c7d">
      <div class="css-e8f9g0h"></div>
      <div class="css-i1j2k3l">Library</div>
      <div class="css-i1j2k3l">Version</div>
      <div class="css-i1j2k3l">Size</div>
    </div>
  </div>
  <div class="css-c6d7e8f">
    <div class="css-m4n5o6p">
      <div class="css-q7r8s9t">UI</div>
      <div class="css-u0v1w2x">React</div>
      <div class="css-u0v1w2x">19.1.0</div>
      <div class="css-u0v1w2x">167kB</div>
    </div>
    <div class="css-m4n5o6p">
      <div class="css-q7r8s9t">Style</div>
      <div class="css-u0v1w2x">Tailwind</div>
      <div class="css-u0v1w2x">4.0.0</div>
      <div class="css-u0v1w2x">358kB</div>
    </div>
    <div class="css-m4n5o6p">
      <div class="css-q7r8s9t">Build</div>
      <div class="css-u0v1w2x">Vite</div>
      <div class="css-u0v1w2x">6.3.5</div>
      <div class="css-u0v1w2x">2.65MB</div>
    </div>
  </div>
</div>

This is actually a table, but you wouldn’t know it from looking at the HTML. If I’m trying to debug this in the DevTools, I’m completely lost. Where are the rows? Where are the columns?

Now here’s the same UI written with semantic HTML:

<table class="css-1x2y3z4">
  <thead class="css-a5b6c7d">
    <tr class="css-y3z4a5b">
      <th scope="col" class="css-e8f9g0h"></th>
      <th scope="col" class="css-i1j2k3l">Library</th>
      <th scope="col" class="css-i1j2k3l">Version</th>
      <th scope="col" class="css-i1j2k3l">Size</th>
    </tr>
  </thead>
  <tbody class="css-a5b6c7d">
    <tr class="css-y3z4a5b">
      <th scope="row" class="css-q7r8s9t">UI</th>
      <td class="css-u0v1w2x">React</td>
      <td class="css-u0v1w2x">19.1.0</td>
      <td class="css-u0v1w2x">167kB</td>
    </tr>
    <tr class="css-y3z4a5b">
      <th scope="row" class="css-q7r8s9t">Style</th>
      <td class="css-u0v1w2x">Tailwind</td>
      <td class="css-u0v1w2x">4.0.0</td>
      <td class="css-u0v1w2x">358kB</td>
    </tr>
    <tr class="css-y3z4a5b">
      <th scope="row" class="css-q7r8s9t">Build</th>
      <td class="css-u0v1w2x">Vite</td>
      <td class="css-u0v1w2x">6.3.5</td>
      <td class="css-u0v1w2x">2.65MB</td>
    </tr>
  </tbody>
</table>

Ah, that’s much better! Now I can easily zero in on a table cell, or a column header, because they’re all named. I’m not wading through a sea of <div>s anymore.

Even just adding ARIA roles to the <div>s would be an improvement here:

<div class="css-1x2y3z4" role="table">
  <div class="css-a5b6c7d" role="rowgroup">
    <div class="css-m4n5o6p" role="row">
      <div class="css-e8f9g0h" role="columnheader"></div>
      <div class="css-i1j2k3l" role="columnheader">Library</div>
      <div class="css-i1j2k3l" role="columnheader">Version</div>
      <div class="css-i1j2k3l" role="columnheader">Size</div>
    </div>
  </div>
  <div class="css-c6d7e8f" role="rowgroup">
    <div class="css-m4n5o6p" role="row">
      <div class="css-q7r8s9t" role="rowheader">UI</div>
      <div class="css-u0v1w2x" role="cell">React</div>
      <div class="css-u0v1w2x" role="cell">19.1.0</div>
      <div class="css-u0v1w2x" role="cell">167kB</div>
    </div>
    <div class="css-m4n5o6p" role="row">
      <div class="css-q7r8s9t" role="rowheader">Style</div>
      <div class="css-u0v1w2x" role="cell">Tailwind</div>
      <div class="css-u0v1w2x" role="cell">4.0.0</div>
      <div class="css-u0v1w2x" role="cell">358kB</div>
    </div>
    <div class="css-m4n5o6p" role="row">
      <div class="css-q7r8s9t" role="rowheader">Build</div>
      <div class="css-u0v1w2x" role="cell">Vite</div>
      <div class="css-u0v1w2x" role="cell">6.3.5</div>
      <div class="css-u0v1w2x" role="cell">2.65MB</div>
    </div>
  </div>
</div>

Especially if you’re using a CSS-in-JS framework (which I’ve simulated with robo-classes above), the HTML can get quite messy. Building accessibly makes it a lot easier to understand at a distance what each element is supposed to do.

Naming things

As all programmers know, naming things is hard. UIs are no exception: is this an “autocomplete”? Or a “dropdown”? Or a “picker”?

If you read the WAI-ARIA guidelines, though, then it’s clear what it is: a “combobox”!

No need to grope for the right name: if you add the proper roles, then everything is already named for you.
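For example, a minimal combobox sketch might look something like this (simplified, and omitting the keyboard handling and dynamic toggling a real combobox needs):

<label for="fruit">Fruit</label>
<input id="fruit" role="combobox" aria-expanded="true" aria-controls="fruit-listbox">
<ul id="fruit-listbox" role="listbox">
  <li role="option" aria-selected="true">Apple</li>
  <li role="option">Banana</li>
</ul>

With those roles in place, the DevTools accessibility tree (and your test tooling) will report a “combobox” with “option” children – no bikeshedding required.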

As a bonus, you can use aria-* attributes or roles as a CSS selector. I often see awkward code like this:

<div
  className={isActive ? 'active' : ''}
  aria-selected={isActive}
  role='option'
></div>

The active class is clearly redundant here. If you want to style based on the .active selector, you could just as easily style with [aria-selected="true"] instead.
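For example, a small CSS sketch (the custom property is just for illustration):

[role='option'][aria-selected='true'] {
  background: var(--selected-bg, #dbeafe);
}

The styling hook and the accessibility information are now the same attribute, so they can never drift out of sync.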

Also, why call it isActive when the ARIA attribute is aria-selected? Just call it “selected” everywhere:

<div
  aria-selected={isSelected}
  role='option'
></div>

Much cleaner!

I also find that thinking in terms of roles and ARIA attributes sharpens my thinking, and gives structure to the interface I’m trying to create. Suddenly, I have a language for what I’m building, which can lead to more “obvious” variable names, CSS custom properties, grid area names, etc.

Testability

I’ve written about this before, but building accessibly also helps with writing tests. Rather than trying to select an element based on arbitrary classes or attributes, you can write more elegant code like this (e.g. with Playwright):

await page.getByLabel('Name').fill('Nolan')

await page.getByRole('button', { name: 'OK' }).click()

Imagine, though, if your entire UI is full of <div>s and robo-classes. How would you find the right inputs and buttons? You could select based on the robo-classes, or by searching for text inside or nearby the elements, but this makes your tests brittle.
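For contrast, here’s a sketch of what that brittle version might look like (the class names are made up):

await page.locator('.css-9z8y7x6 input').fill('Nolan')

await page.locator('div.css-5w4v3u2').click() // hopefully this is still the "OK" button after the next refactor

One rename of those generated classes and both lines break, even though nothing changed for actual users.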

As Kent C. Dodds has argued, writing UI tests based on semantics makes your tests more resilient to change. That’s because a UI’s semantic structure (i.e. the accessibility tree) tends to change less frequently than its classes, attributes, or even the composition of its HTML elements. (How many times have you added a wrapper <div> only to break your UI tests?)

Power users

When I’m on a desktop, I tend to be a keyboard power user. I like pressing Esc to close dialogs, Enter to submit a form, or even / in Firefox to quickly jump to links on the page. I do use a mouse, but I just prefer the keyboard since it’s faster.

So I find it jarring when a website breaks keyboard accessibility – Esc doesn’t dismiss a dialog, Enter doesn’t submit a form, ↑/↓ don’t change radio buttons. It disrupts my flow when I unexpectedly have to reach for my mouse. (Plus it’s a Van Halen brown M&M that signals to me that the website probably messed something else up, too!)

If you’re building a productivity tool with its own set of keyboard shortcuts (think Slack or GMail), then it’s even more important to get this right. You can’t add a lot of sophisticated keyboard controls if the basic Tab and focus logic doesn’t work correctly.

A lot of programmers are themselves power users, so I find this argument pretty persuasive. Build a UI that you yourself would like to use!

Conclusion

The reason that I, personally, care about accessibility is probably different from most people’s. I have a family member who is blind, and I’ve known many blind or low-vision people in my career. I’ve heard firsthand how frustrating it can be to use interfaces that aren’t built accessibly.

Honestly, if I were disabled, I would probably think to myself, “computer programmers must not care about me.” And judging from the miserable WebAIM results, I’d clearly be right:

Across the one million home pages, 50,960,288 distinct accessibility errors were detected—an average of 51 errors per page.

As a web developer who has dabbled in accessibility, though, I find this situation tragic. It’s not really that hard to build accessible interfaces. And I’m not talking about “ideal” or “optimized” – the bar is pretty low, so I’m just talking about something that works at all for people with a disability.

Maybe in the future, accessible interfaces won’t require so much manual intervention from developers. Maybe AI tooling (on either the production or consumption side) will make UIs that are usable out-of-the-box for people with disabilities. I’m actually sympathetic to the Jakob Nielsen argument that “accessibility has failed” – it’s hard to look at the WebAIM results and come to any other conclusion. Maybe the “eat your vegetables” era of accessibility has failed, and it’s time to try new tactics.

That’s why I wrote this post, though. You can build accessibly without having a bleeding heart. And for the time being, unless generative AI swoops in like a deus ex machina to save us, it’s our responsibility as interface designers to do so.

At the same time we’re helping others, though, we can also help ourselves. Like a good hot sauce on your Brussels sprouts, eating your vegetables doesn’t always have to be a chore.

2 Apr

AI ambivalence

Posted by Nolan Lawson in Machine Learning, NLP. 26 Comments

I’ve avoided writing this post for a long time, partly because I try to avoid controversial topics these days, and partly because I was waiting to make my mind up about the current, all-consuming, conversation-dominating topic of generative AI. But Steve Yegge’s “Revenge of the junior developer” awakened something in me, so let’s go for it.

I don’t come to AI from nowhere. Longtime readers may be surprised to learn that I have a master’s in computational linguistics, i.e. I studied this kind of stuff 20-odd years ago. In fact, two of the authors of the famous “stochastic parrot” paper were folks I knew at the time – Emily Bender was one of my professors, and Margaret Mitchell was my lab partner in one of our toughest classes (sorry my Python sucked at the time, Meg).

That said, I got bored of working in AI after grad school, and quickly switched to general coding. I just found that “feature engineering” (which is what we called training models at the time) was not really my jam. I much preferred to put on some good coding tunes, crank up the IDE, and bust out code all day. Plus, I had developed a dim view of natural-language processing technologies largely informed by my background in (non-computational) linguistics as an undergrad.

In linguistics, we were taught that the human mind is a wondrous thing, and that Chomsky had conclusively shown that humans have a natural language instinct. The job of the linguist is to uncover the hidden rules in the human mind that govern things like syntax, semantics, and phonology (i.e. why the “s” in “beds” is pronounced like a “z” unlike in “bets,” due to the voicing of the final consonant).

Then when I switched to computational linguistics, suddenly the overriding sensation I got was that everything was actually about number-crunching, and in fact you could throw all your linguistics textbooks in the trash and just let gobs of training data and statistics do the job for you. “Every time I fire a linguist, the performance goes up,” as a famous computational linguist said.

I found this perspective belittling and insulting to the human mind, and more importantly, it didn’t really seem to work. Natural-language processing technology seemed stuck at the level of support vector machines and conditional random fields, hardly better than the Markov models in your iPhone 2’s autocomplete. So I got bored and disillusioned and left the field of AI.

Boy, that AI thing sure came back with a vengeance, didn’t it?

Still skeptical

That said, while everybody else was either reacting with horror or delight at the tidal wave of gen-AI hype, I maintained my skepticism. At the end of the day, all of this technology was still just number-crunching – brute force trying to approximate the hidden logic that Chomsky had discovered. I acknowledged that there was some room for statistics – Peter Norvig’s essay mentioning the story of an Englishman ordering an “ale” and getting served an “eel” due to the Great Vowel Shift still sticks in my brain – but overall I doubted that mere stats could ever approach anything close to human intelligence.

Today, though, philosophical questions of what AI says about human cognition seem beside the point – these things can get stuff done. Especially in the field of coding (my cherished refuge from computational linguistics), AIs now dominate: every IDE assumes I want AI autocomplete by default, and I actively have to hunt around in the settings to turn it off.

And for several years, that’s what I’ve been doing: studiously avoiding generative AI. Not just because I doubted how close to “AGI” these things actually were, but also because I just found them annoying. I’m a fast typist, and I know JavaScript like the back of my hand, so the last thing I want is some overeager junior coder grabbing my keyboard to mess with the flow of my typing. Every inline-coding AI assistant I’ve tried made me want to gnash my teeth together – suddenly instead of writing code, I’m being asked to constantly read code (which as everyone knows, is less fun). And plus, the suggestions were rarely good enough to justify the aggravation. So I abstained.

Later I read Baldur Bjarnason’s excellent book The Intelligence Illusion, and this further hardened me against generative AI. Why use a technology that 1) dumbs down the human using it, 2) generates hard-to-spot bugs, and 3) doesn’t really make you much more productive anyway, when you consider the extra time reading, reviewing, and correcting its output? So I put in my earbuds and kept coding.

Meanwhile, as I was blissfully coding away like it was ~2020, I looked outside my window and suddenly realized that the tidal wave was approaching. It was 2025, and I was (seemingly) the last developer on the planet not using gen-AI in their regular workflow.

Opening up

I try to keep an open mind about things. If you’ve read this blog for a while, you know that I’ve sometimes espoused opinions that I later completely backtracked on – my post from 10 years ago about progressive enhancement is a good example, because I’ve almost completely swung over to the progressive enhancement side of things since then. My more recent “Why I’m skeptical of rewriting JavaScript tools in ‘faster’ languages” also seems destined to age like fine milk. Maybe I’m relieved I didn’t write a big bombastic takedown of generative AI a few years ago, because hoo boy.

I started using Claude and Claude Code a bit in my regular workflow. I’ll skip the suspense and just say that the tool is way more capable than I would ever have expected. The way I can use it to interrogate a large codebase, or generate unit tests, or even “refactor every callsite to use such-and-such pattern” is utterly gobsmacking. It also nearly replaces StackOverflow, in the sense of “it can give me answers that I’m highly skeptical of,” i.e. it’s not that different from StackOverflow, but boy is it faster.

Here’s the main problem I’ve found with generative AI, and with “vibe coding” in general: it completely sucks out the joy of software development for me.

Imagine you’re a Studio Ghibli artist. You’ve spent years perfecting your craft, you love the feeling of the brush/pencil in your hand, and your life’s joy is to make beautiful artwork to share with the world. And then someone tells you gen-AI can just spit out My Neighbor Totoro for you. Would you feel grateful? Would you rush to drop your art supplies and jump head-first into the role of AI babysitter?

This is how I feel using gen-AI: like a babysitter. It spits out reams of code, I read through it and try to spot the bugs, and then we repeat. Although of course, as Cory Doctorow points out, the temptation is to not even try to spot the bugs, and instead just let your eyes glaze over and let the machine do the thinking for you – the full dream of vibe coding.

I do believe that this is the end state of this kind of development: “giving into the vibes,” not even trying to use your feeble primate brain to understand the code that the AI is barfing out, and instead to let other barf-generating “agents” evaluate its output for you. I’ll accept that maybe, maybe, if you have the right orchestra of agents that you’re conducting, then maybe you can cut down on the bugs, hallucinations, and repetitive boilerplate that gen-AI seems prone to. But whatever you’re doing at that point, it’s not software development, at least not the kind that I’ve known for the past ~20 years.

Conclusion

I don’t have a conclusion. Really, that’s my current state: ambivalence. I acknowledge that these tools are incredibly powerful, I’ve even started incorporating them into my work in certain limited ways (low-stakes code like POCs and unit tests seem like an ideal use case), but I absolutely hate them. I hate the way they’ve taken over the software industry, I hate how they make me feel while I’m using them, and I hate the human-intelligence-insulting postulation that a glorified Excel spreadsheet can do what I can but better.

In one of his podcasts, Ezra Klein said that he thinks the “message” of generative AI (in the McLuhan sense) is this: “You are derivative.” In other words: all your creativity, all your “craft,” all of that intense emotional spark inside of you that drives you to dance, to sing, to paint, to write, or to code, can be replicated by the robot equivalent of 1,000 monkeys typing at 1,000 typewriters. Even if it’s true, it’s a pretty dim view of humanity and a miserable message to keep pounding into your brain during 8 hours of daily software development.

So this is where I’ve landed: I’m using generative AI, probably just “dipping my toes in” compared to what maximalists like Steve Yegge promote, but even that little bit has made me feel less excited than defeated. I am defeated in the sense that I can’t argue strongly against using these tools (they bust out unit tests way faster than I can, and can I really say that I was ever lovingly-crafting my unit tests?), and I’m defeated in the sense that I can no longer confidently assert that brute-force statistics can never approach the ineffable beauty of the human mind that Chomsky described. (If they can’t, they’re sure doing a good imitation of it.)

I’m also defeated in the sense that this very blog post is just more food for the AI god. Everything I’ve ever written on the internet (including here and on GitHub) has been eagerly gobbled up into the giant AI katamari and is being used to happily undermine me and my fellow bloggers and programmers. (If you ask Claude to generate a “blog post title in the style of Nolan Lawson,” it can actually do a pretty decent job of mimicking my shtick.) The fact that I wrote this entire post without the aid of generative AI is cold comfort – nobody cares, and likely few have gotten to the end of this diatribe anyway other than the robots.

So there’s my overwhelming feeling at the end of this post: ambivalence. I feel besieged and horrified by what gen-AI has wrought on my industry, but I can no longer keep my ears plugged while the tsunami roars outside. Maybe, like a lot of other middle-aged professionals suddenly finding their careers upended at the peak of their creative power, I will have to adapt or face replacement. Or maybe my best bet is to continue to zig while others are zagging, and to try to keep my coding skills sharp while everyone else is “vibe coding” a monstrosity that I will have to debug when it crashes in production someday.

I honestly don’t know, and I find that terrifying. But there is some comfort in the fact that I don’t think anyone else knows what’s going to happen either.

18 Jan

Goodbye Salesforce, hello Socket

Posted by Nolan Lawson in Life. 6 Comments

Big news for me: after 6 years, I’m leaving Salesforce to join the folks at Socket, working to secure the software supply chain.

Salesforce has been very good to me. But at a certain point, I felt the need to branch out, learn new things, and get out of my comfort zone. At Socket, I’ll be combining something I’m passionate about – the value of open source and the experience of open-source maintainers – with something less familiar to me: security. It’ll be a learning experience for sure!

In addition to learning, though, I also like sharing what I’ve learned. So I’m grateful to Salesforce for giving me a wellspring of ideas and research topics, many of which bubbled up into this very blog. Some of my “greatest hits” of the past 6 years came directly from my work at Salesforce:

Let’s learn how modern JavaScript frameworks work by building one

Salesforce builds its own JavaScript framework called Lightning Web Components, which is a little-known but surprisingly mighty tool. As part of my work on LWC, I helped modernize its architecture, which led to this post summarizing some of the trends and insights from the last ~10 years of framework design. Thanks to this work, LWC now scores pretty respectably on the js-framework-benchmark (although I still have some misgivings about the benchmark itself).

This work was also eye-opening to me, because it was my first time working as a paid open-source maintainer. Overall, I think it was a great choice on Salesforce’s part, and I wish that more companies were willing to invest in the open-source ecosystem, or at least to open up their internal tools. In LWC, we had plenty of external contributors, we got direct feedback from customers via GitHub issues, and it was easy to swap notes with other framework authors (notably in the Web Components Community Group). Plus I believe open source tends to raise the bar of quality for any project – so it’s something companies should consider for that reason alone.

A tour of JavaScript timers on the web

On the Microsoft Edge team, I learned a ton about browser internals, including little-known secrets about why certain browser APIs work the way they do. (Nothing better than “I wrote the spec, let me tell you what’s wrong with it” to get the real scoop!)

This post was a brain-dump of all the ways that JavaScript timers like setTimeout and setImmediate work across browser engines. I was intimately familiar with this topic, since Edge (not Chromium-based at the time) had been working to revamp a lot of their APIs such as Promise and fetch.

The original inspiration was a conversation I had with a colleague during my early days at Salesforce, where we debated the most performant way to batch up JavaScript work. This post still holds up pretty well today, although new fanciness like scheduler.yield() and isInputPending() isn’t covered.

Shadow DOM and accessibility: the trouble with ARIA

As part of my work at Salesforce, I was heavily involved in the Accessibility Object Model working group, partnering with folks at Igalia, Microsoft, Apple, and elsewhere to help fix problems of accessibility in web components. This led to a slew of posts on this topic, but ultimately my proudest outcome is not my own, but rather the Reference Target spec spearheaded by Ben Howell at Microsoft and now prototyped in Chromium.

After ~2 years in the working group, I was honestly starting to lose hope that we’d ever find a spec that the browser vendors could agree on. But eventually Ben joined the group, and his patience and tenacity won the day. I didn’t even contribute much (I mostly just gave feedback), but I’m still proud of what the group accomplished, and I’m hopeful that accessibility in web components will be considered a solved problem in a couple years.

My talk on CSS runtime performance

Big companies have big codebases. And big codebases can end up with a lot of CSS. Most of the time you don’t need to worry about CSS performance, but at the extremes, it can become surprisingly important.

At Salesforce, I learned way more than I ever wanted to know about how browsers handle CSS and how it affects page performance. I fixed several performance bottlenecks and regressions due to CSS (yes, a CSS change can make a page slower!), and I also filed bugs on browsers that made CSS faster for everyone. (Attributes are now roughly as fast as classes, I’m happy to say.) I also gave this fun talk at Perf.now summarizing all my findings.

Memory leaks: the forgotten side of web performance

Salesforce is a huge SPA, and as such, it has its share of memory leaks. I found that the more I looked for, the more I uncovered. At first I thought this was a Salesforce-specific issue, but then I built fuite and slowly realized that all SPAs leak. It’s more like a chronic condition to be managed than a disease to be eradicated. (If you can find an SPA without memory leaks, I’ll give you a cookie!)

I continue to maintain fuite, and I still occasionally hear from folks who have used it to fix memory leaks in their apps. Since I wrote it, Meta also released Memlab, and the Microsoft Edge team made tons of memory-related improvements to the Chromium DevTools. I still strongly feel, though, that this field is in its infancy. (Stoyan Stefanov has a great recent talk on the topic, pointing out how critical yet under-explored it is.)

The balance has shifted away from SPAs

My work on memory leaks also led me to question the value of SPAs in general. With all the improvements in browsers over the years, I came to the conclusion that MPAs are the right architecture for ~90% of web sites and apps. SPAs still have their place, but their value is dwindling year after year.

Since I wrote this post, Chrome and Safari both shipped View Transitions, and Chrome shipped Speculation Rules. With this combo, you can preload a page when the user hovers a link and then smoothly animate to it once they click. This was the whole raison d’être of SPAs in the first place, and now it’s just built into the browser.

SPAs are not going away, but their heyday is over. I think someday we’ll look back and be amazed at how much complexity we tolerated.

Conclusion

I’m grateful to Salesforce and all my wonderful colleagues there, and I’m also excited to start my next chapter at Socket. More than anything, I’m excited by the crew that I’ll be joining – John-David Dalton is a former colleague from both Microsoft and Salesforce, and Feross Aboukhadijeh is someone I’ve admired for years. (I’ve spent enough hours hearing his voice on the JS Party podcast that we practically feel like old friends.)

It’s hard to predict the future, but I know that, whatever happens, I’ll be talking about it on this blog. I’ve been running this blog for 14 years through 6 different jobs, with topics ranging from NLP to Android to web development, and I don’t see myself slowing down now. Here’s to a great 2025!

30 Dec

2024 book review

Posted by Nolan Lawson in Books. Leave a Comment

2024 was another light reading year for me. The fact that it was an election year probably didn’t help, and one of my resolutions for 2025 is to spend a heck of a lot less time keeping up with the dreary treadmill of the 24-hour news cycle. Even videogames proved to be a better use of my time, and I wouldn’t mind plugging another 100 hours into Slay the Spire next year. But without further ado, on with the books!

Nonfiction

The Inner Game of Tennis by W. Timothy Gallwey

I spent a lot of time this year competing in Super Smash Bros – going to locals, practicing my moves, and eventually competing at Seattle’s biggest-ever Smash tournament. I got 385th place out of 888 entrants, which is not too shabby given the world-class caliber of talent on display. It’s a strange use of my time if you consider videogame tournaments to be dumb, but I had a lot of fun, met some great folks, and learned a lot about competition and the esports scene.

At one of my locals, I was introduced to The Inner Game of Tennis, a sort of self-help book for tennis pros written in the 70’s. At first glance it doesn’t have much to do with videogames, but as it turns out it’s one of the best books you can read to get better at anything – sports, music, public speaking, you name it. It’s a short but dense book – there’s so much wisdom packed into so many brief, pithy sentences that you’ll probably have to re-read several paragraphs before it sinks in.

If you’re familiar with mindfulness or meditation then much of it may feel like old hat, but I still found it helpful for the immediate applications to one’s backswing (or ledgedash, in my case). It’s a good pairing with Thinking, Fast and Slow for the concept of two modes of thinking – in this case, how to quiet your conscious mind so that the wisdom of your unconscious can shine through.

Plagues Upon the Earth and The Fate Of Rome by Kyle Harper

The covid years got me interested in humanity’s experience with disease throughout history. These two books both pack a wallop, showing how disease and (maybe to a lesser extent) climate change ravaged the Roman Empire.

End Times: Elites, Counter-Elites, And The Path Of Political Disintegration by Peter Turchin

Turchin’s model of how elites become complacent about “immiseration” of the poorer classes, leading to opportunistic “counter-elites” leveraging popular outrage to pursue their own power, sounds pretty familiar.

Deep Work by Cal Newport

I always need a reminder that real work happens when you give yourself space and time for creativity. A good pairing with John Cleese’s classic talk.

Fiction

World Made by Hand by James Howard Kunstler

My favorite fictional book I read this year. Paints a very compelling vision of a deindustrial future, but without a lot of the pessimism or nihilism that you might expect from the genre. In the end, it’s actually a very hopeful and uplifting book, and the characters are vivid and multi-textured. Strongly recommended if you like post-apocalyptic fiction or cli-fi.

The Death of Attila and The Firedrake by Cecilia Holland

Cecilia Holland is one of those authors whose work is bafflingly unknown. Many of her books are out-of-print, and if it hadn’t been for the recommendation of my mother, I’d have never heard of her.

If you like historical fiction, though, and if you appreciate intense attention to historical details, then her books are a great read. I love little touches like the Huns speaking Hunnish (although nobody knows what it sounded like!) or one of William the Conqueror’s knights speaking Burgundian but not (Norman) French. These are the details that a lesser author would gloss over.

I’d recommend starting with The Firedrake since it’s a bit shorter and faster-paced. I’m looking forward to devouring all of her books, regardless of which time period they’re set in.

The Constant Rabbit by Jasper Fforde

A supremely silly book, and occasionally a bit too cliché and on-the-nose with its metaphors, but still a fun read. If you like Douglas Adams or Kurt Vonnegut then you’ll probably find a lot to enjoy in its dry humor.

Sea of Tranquility by Emily St. John Mandel

I’m always a bit disappointed when speculative fiction assumes the same customs and culture of our time but transplants them onto a whiz-bang sci-fi future – just a change of scenery. But some of the time travel and metaphysical bits in this book are pretty neat. It’s a bit like a compressed Cloud Atlas in how it’s structured.

1 Dec

Avoiding unnecessary cleanup work in disconnectedCallback

Posted by Nolan Lawson in Web, web components. 1 Comment

In a previous post, I said that a web component’s connectedCallback and disconnectedCallback should be mirror images of each other: one for setup, the other for cleanup.

Sometimes, though, you want to avoid unnecessary cleanup work when your component has merely been moved around in the DOM:

div.removeChild(component)
div.insertBefore(component, null)

This can happen when, for example, your component is one element in a list that’s being re-sorted.

The best pattern I’ve found for handling this is to queue a microtask in disconnectedCallback before checking this.isConnected to see if you’re still disconnected:

async disconnectedCallback() {
  await Promise.resolve()
  if (!this.isConnected) {
    // cleanup logic
  }
}

Of course, you’ll also want to avoid repeating your setup logic in connectedCallback, since it will fire as well during a reconnect. So a complete solution would look like:

connectedCallback() {
  // setup logic
  this._isSetUp = true
}

async disconnectedCallback() {
  await Promise.resolve()
  if (!this.isConnected && this._isSetUp) {
    // cleanup logic
    this._isSetUp = false
  }
}

For what it’s worth, Solid, Svelte, and Vue all use this pattern when compiled as web components.

If you’re clever, you might think that you don’t need the microtask, and can merely check this.isConnected. However, this only works in one particular case: if your component is inserted (e.g. with insertBefore/appendChild) but not removed first (e.g. with removeChild). In that case, isConnected will be true during disconnectedCallback, which is quite counter-intuitive:

However, this is not the case if removeChild is called during the “move”:
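Continuing the sketch above, but with an explicit removeChild first:

div.removeChild(component)
// logs: disconnected, isConnected = false
div.insertBefore(component, null)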

You can’t really predict how your component will be moved around, so sadly you have to handle both cases. Hence the microtask.

In the future, this may change slightly. There is a proposal to add a new moveBefore method, which would invoke a special connectedMoveCallback. However, this is still behind a flag in Chromium, and the API has not been finalized, so I’ll avoid commenting on it further.

This post was inspired by a discussion in the Web Components Community Group Discord with Filimon Danopoulos, Justin Fagnani, and Rob Eisenberg.

20 Oct

Why I’m skeptical of rewriting JavaScript tools in “faster” languages

Posted by Nolan Lawson in performance, Web. Tagged: javascript. 22 Comments

I’ve written a lot of JavaScript. I like JavaScript. And more importantly, I’ve built up a set of skills in understanding, optimizing, and debugging JavaScript that I’m reluctant to give up on.

So maybe it’s natural that I get a worried pit in my stomach over the current mania to rewrite every Node.js tool in a “faster” language like Rust, Zig, Go, etc. Don’t get me wrong – these languages are cool! (I’ve got a copy of the Rust book on my desk right now, and I even contributed a bit to Servo for fun.) But ultimately, I’ve invested a ton of my career in learning the ins and outs of JavaScript, and it’s by far the language I’m most comfortable with.

So I acknowledge my bias (and perhaps over-investment in one skill set). But the more I think about it, the more I feel that my skepticism is also justified by some real objective concerns, which I’d like to cover in this post.

Performance

One reason for my skepticism is that I just don’t think we’ve exhausted all the possibilities of making JavaScript tools faster. Marvin Hagemeister has done an excellent job of demonstrating this, by showing how much low-hanging fruit there is in ESLint, Tailwind, etc.

In the browser world, JavaScript has proven itself to be “fast enough” for most workloads. Sure, WebAssembly exists, but I think it’s fair to say that it’s mostly used for niche, CPU-intensive tasks rather than for building a whole website. So why are JavaScript-based CLI tools rushing to throw JavaScript away?

The big rewrite

I think the perf gap comes from a few different things. First, there’s the aforementioned low-hanging fruit – for a long time, the JavaScript tooling ecosystem has been focused on building something that works, not something fast. Now we’ve reached a saturation point where the API surface is mostly settled, and everyone just wants “the same thing, but faster.” Hence the explosion of new tools that are nearly drop-in replacements for existing ones: Rolldown for Rollup, Oxlint for ESLint, Biome for Prettier, etc.

However, these tools aren’t necessarily faster because they’re using a faster language. They could just be faster because 1) they’re being written with performance in mind, and 2) the API surface is already settled, so the authors don’t have to spend development time tinkering with the overall design. Heck, you don’t even need to write tests! Just use the existing test suite from the previous tool.

In my career, I’ve often seen a rewrite from A to B resulting in a speed boost, followed by the triumphant claim that B is faster than A. However, as Ryan Carniato points out, a rewrite is often faster just because it’s a rewrite – you know more the second time around, you’re paying more attention to perf, etc.

Bytecode and JIT

The second class of performance gaps comes from the things browsers give us for free, and that we rarely think about: the bytecode cache and JIT (Just-In-Time compiler).

When you load a website for the second or third time, if the JavaScript is cached correctly, then the browser doesn’t need to parse and compile the source code into bytecode anymore. It just loads the bytecode directly off disk. This is the bytecode cache in action.

Furthermore, if a function is “hot” (frequently executed), it will be further optimized into machine code. This is the JIT in action.

In the world of Node.js scripts, we don’t get the benefits of the bytecode cache at all. Every time you run a Node script, the entire script has to be parsed and compiled from scratch. This is a big reason for the reported perf wins between JavaScript and non-JavaScript tooling.

Thanks to the inimitable Joyee Cheung, though, Node is now getting a compile cache. You can set an environment variable and immediately get faster Node.js script loads:

export NODE_COMPILE_CACHE=~/.cache/nodejs-compile-cache

I’ve set this in my ~/.bashrc on all my dev machines. I hope it makes it into the default Node settings someday.

As for JIT, this is another thing that (sadly) most Node scripts can’t really benefit from. You have to run a function before it becomes “hot,” so on the server side, it’s more likely to kick in for long-running servers than for one-off scripts.

And the JIT can make a big difference! In Pinafore, I considered replacing the JavaScript-based blurhash library with a Rust (Wasm) version, before realizing that the performance difference was erased by the time we got to the fifth iteration. That’s the power of the JIT.

Maybe eventually a tool like Porffor could be used to do an AOT (Ahead-Of-Time) compilation of Node scripts. In the meantime, though, JIT is still a case where native languages have an edge on JavaScript.

I should also acknowledge: there is a perf hit from using Wasm versus pure-native tools. So this could be another reason native tools are taking the CLI world by storm, but not necessarily the browser frontend.

Contributions and debuggability

I hinted at it earlier, but this is the main source of my skepticism toward the “rewrite it all in native” movement.

JavaScript is, in my opinion, a working-class language. It’s very forgiving of types (this is one reason I’m not a huge TypeScript fan), it’s easy to pick up (compared to something like Rust), and since it’s supported by browsers, there is a huge pool of people who are conversant with it.

For years, we’ve had both library authors and library consumers in the JavaScript ecosystem largely using JavaScript. I think we take for granted what this enables.

For one: the path to contribution is much smoother. To quote Matteo Collina:

Most developers ignore the fact that they have the skills to debug/fix/modify their dependencies. They are not maintained by unknown demigods but by fellow developers.

This breaks down if JavaScript library authors are using languages that are different (and more difficult!) than JavaScript. They may as well be demigods!

For another thing: it’s straightforward to modify JavaScript dependencies locally. I’ve often tweaked something in my local node_modules folder when I’m trying to track down a bug or work on a feature in a library I depend on. Whereas if it’s written in a native language, I’d need to check out the source code and compile it myself – a big barrier to entry.

(To be fair, this has already gotten a bit tricky thanks to the widespread use of TypeScript. But TypeScript is not too far from the source JavaScript, so you’d be amazed how far you can get by clicking “pretty print” in the DevTools. Thankfully most Node libraries are also not minified.)

Of course, this also leads us back to debuggability. If I want to debug a JavaScript library, I can simply use the browser’s DevTools or a Node.js debugger that I’m already familiar with. I can set breakpoints, inspect variables, and reason about the code as I would for my own code. This isn’t impossible with Wasm, but it requires a different skill set.

Conclusion

I think it’s great that there’s a new generation of tooling for the JavaScript ecosystem. I’m excited to see where projects like Oxc and VoidZero end up. The existing incumbents are indeed exceedingly slow and would probably benefit from the competition. (I get especially peeved by the typical eslint + prettier + tsc + rollup lint+build cycle.)

That said, I don’t think that JavaScript is inherently slow, or that we’ve exhausted all the possibilities for improving it. Sometimes I look at truly perf-focused JavaScript, such as the recent improvements to the Chromium DevTools using mind-blowing techniques like using Uint8Arrays as bit vectors, and I feel that we’ve barely scratched the surface. (If you really want an inferiority complex, see other commits from Seth Brenith. They are wild.)

I also think that, as a community, we have not really grappled with what the world would look like if we relegate JavaScript tooling to an elite priesthood of Rust and Zig developers. I can imagine the average JavaScript developer feeling completely hopeless every time there’s a bug in one of their build tools. Rather than empowering the next generation of web developers to achieve more, we might be training them for a career of learned helplessness. Imagine what it will feel like for the average junior developer to face a segfault rather than a familiar JavaScript Error.

At this point, I’m a senior in my career, so of course I have little excuse to cling to my JavaScript security-blanket. It’s part of my job to dig down a few layers deeper and understand how every part of the stack works.

However, I can’t help but feel like we are embarking down an unknown path with unintended consequences, when there is another path that is less fraught and could get us nearly the same results. The current freight train shows no signs of slowing down, though, so I guess we’ll find out when we get there.

13 Oct

The greatness and limitations of the js-framework-benchmark

Posted by Nolan Lawson in performance, Web. Tagged: benchmarking. Leave a Comment

I love the js-framework-benchmark. It’s a true open-source success story – a shared benchmark, with contributions from various JavaScript framework authors, widely cited, and used to push the entire JavaScript ecosystem forward. It’s a rare marvel.

That said, the benchmark is so good that it’s sometimes taken as the One True Measure of a web framework’s performance (or maybe even worth!). But like any metric, it has its flaws and limitations. Many of these limitations are well-known among framework authors like myself, but aren’t widely known outside a small group of experts.

In this post, I’d like to both celebrate the js-framework-benchmark for its considerable achievements, while also highlighting some of its quirks and limitations.

The greatness

First off, I want to acknowledge the monumental work that Stefan Krause has put into the js-framework-benchmark. It’s practically a one-man show – if you look into the commit history, it’s clear that Stefan has shouldered the main burden of maintaining the benchmark over time.

This is not a simple feat! A recent subtle issue with Chrome 124 shows just how much work goes into keeping even a simple benchmark humming across major browser releases.

So I don’t want anything in this post to come across as an attack on Stefan or the js-framework-benchmark. I am the king of burning out on open-source projects (PouchDB, Pinafore), so I have no leg to stand on to criticize an open-source maintainer with such tireless dedication. I can only sit in awe of Stefan’s accomplishment. I’m humbled and grateful.

If anything, this post should underscore how utterly the benchmark has succeeded under Stefan’s stewardship. Despite its flaws (as any benchmark would have), the js-framework-benchmark has become almost synonymous with “JavaScript framework performance.” To me, this is almost entirely due to Stefan’s diligence and attention to detail. Under different leadership, the benchmark may have been forgotten by now.

So within that context, I’d like to talk about the things the benchmark doesn’t measure, as well as the things it measures slightly differently from how folks might expect.

What does the benchmark do exactly?

First off, we have to understand what the js-framework-benchmark actually tests.

Screenshot of the vanillajs (i.e. baseline) “framework” in the js-framework-benchmark

To oversimplify, the core benchmark is: render a large table of rows (1,000 or 10,000 of them), then update every 10th row, select a row, swap two rows, remove a row, append more rows, and finally clear the table.

This is basically it. Frameworks are judged on how fast they can render 10k table rows, mutate a single row, swap some rows around, etc.

If this sounds like a very specific scenario, well… it kind of is. And this is where the main limitations of the benchmark come in. Let’s cover each one separately.

SSR and hydration

Most clearly, the js-framework-benchmark does not measure server-side rendering (SSR) or hydration. It is purely focused on client-side rendering (CSR).

This is fine, by the way! Plenty of web apps are pure-CSR Single-Page Apps (SPAs). And there are other benchmarks that do cover SSR, such as Marko’s isomorphic UI benchmarks.

This is just to say that, for frameworks that almost exclusively focus on the performance benefits they bring to SSR or hydration (such as Qwik or Astro), the js-framework-benchmark is not really going to tell you how they stack up to other frameworks. The main value proposition is just not represented here.

One big component

The js-framework-benchmark typically renders one big component. There are some exceptions, such as the vanillajs-wc “framework” using multiple web components. But in general, most of the frameworks you’ve heard of render one big component containing the entire table and all its rows and cells.

There is nothing inherently wrong with this. However, it means that any per-component overhead (such as the overhead inherent to web components, or the overhead of the framework’s component abstraction) is not captured in the benchmark. And of course, any future optimizations that frameworks might do to reduce per-component overhead will never win points on the js-framework-benchmark.

Again, this is fine. Sometimes the ideal implementation is “one big component.” However, it’s not very common, so this is something to be aware of when reading the benchmark results.

Optimized by framework authors

Framework authors are a competitive bunch. Even framework users are defensive about their chosen framework. So it’s no surprise that what you’re seeing in the js-framework-benchmark has been heavily optimized to put each framework in the best possible light.

Sometimes this is reasonable – after all, the benchmark should try to represent what a competent component author would write. In other cases… it’s more of a gray zone.

I don’t want to demonize any particular framework in this post. So I’m going to call out a few cases I’ve seen of this, including one from the framework I work on (LWC).

Again, none of this is necessarily good or bad. Event delegation is a worthy technique, v-memo is a great optimization for those who know to use it, and as a Svelte user I’ve even worked around the whitespace issue myself. Some of these points (such as event delegation) are even noted in the benchmark results. But I’d wager that most folks reading the benchmark are not aware of these subtleties.

10k rows is a lot

The benchmark renders 1k-10k table rows, with 7 elements inside each one. Then it tests mutating, removing, or swapping those rows.

Frameworks that do well on this scenario are (frankly) amazing. However, that doesn’t change the fact that this is a very weird scenario. If you are rendering 8k-80k DOM elements, then you should probably start thinking about pagination or virtualization (or at least content-visibility). Putting that many elements in the same component is also not something you see in most web apps.

Because this is such an atypical scenario, it also exaggerates the benefit of certain optimizations, such as the aforementioned event delegation. If you are attaching one event listener instead of 20k, then yes, you are going to be measurably faster. But should you really ever put yourself in a situation where you’re creating 20k event listeners on 80k DOM elements in the first place?

Chrome-only

One of my biggest pet peeves is when web developers only pay attention to Chrome while ignoring other browsers. Especially in performance discussions, statements like “Such-and-such DOM API is fast” or “The browser is slow at X,” where Chrome is merely implied, really irk me. This is something I railed against in my tenure on the Microsoft Edge team.

Focusing on one browser does kind of make sense in this case, though, since the js-framework-benchmark relies on some advanced Chromium APIs to run the tests. It also makes the results easier to digest, since there’s only one browser in play.

However, Chrome is not the only browser that exists (a fact that may surprise some web developers). So it’s good to be aware that this benchmark has nothing to say about Firefox or Safari performance.

Only measuring re-renders

As mentioned above, the js-framework-benchmark measures client-side rendering. Bundle size and memory usage are tracked as secondary measures, but they are not the main thing being measured, and I rarely see them mentioned. For most people, the runtime metrics are the benchmark.

Additionally, the bootstrap cost of a framework – i.e. the initial cost to execute the framework code itself – is not measured. Combine this with the lack of SSR/hydration coverage, and the js-framework-benchmark probably cannot tell you if a framework will tank your Largest Contentful Paint (LCP) or Total Blocking Time (TBT) scores, since it does not measure the first page load.

However, this lack of coverage for first-render goes even deeper. To avoid variance, the js-framework-benchmark does 5 “warmup” iterations before most tests. This means that many more first-render costs are not measured: first, the “cold” (pre-JIT) performance of the framework’s code, and second, any one-time setup costs that only run during the first render.

For those unaware, JavaScript engines will JIT any code that they detect as “hot” (i.e. frequently executed). By doing 5 warmup iterations, we effectively skip past the pre-JITed phase and measure the JITed code directly. (This is also called “peak performance.”) However, the JITed performance is not necessarily what your users are experiencing, since every user has to experience the pre-JITed code before they can get to the JIT!

This second point above is also important. As mentioned in a previous post, lots of next-gen frameworks use a pattern where they set the innerHTML on a <template> once and then use cloneNode(true) after that. If you profile the js-framework-benchmark, you will find that this initial innerHTML (aka “Parse HTML”) cost is never measured, since it’s part of the one-time setup costs that occur during the “warmup” iterations. This gives these frameworks a slight advantage, since setting innerHTML (among other one-time setup costs) can be expensive.
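That pattern looks roughly like this (a simplified sketch, not any particular framework’s code):

const template = document.createElement('template')
// one-time "Parse HTML" cost, incurred during the warmup iterations
template.innerHTML = '<tr><td class="col-md-1"></td><td class="col-md-4"><a></a></td></tr>'

function createRow() {
  // cheap per-row work afterwards: clone the already-parsed DOM
  return template.content.firstElementChild.cloneNode(true)
}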

Putting all this together, I would say that the js-framework-benchmark is comparable to the old DBMon benchmark – it is measuring client-side-heavy scenarios with frequent re-renders. (Think: a spreadsheet app, data dashboard, etc.) This is definitely not a typical use case, so if you are choosing your framework based on the js-framework-benchmark, you may be sorely disappointed if your most important perf metric is LCP, or if your SPA navigations tend to re-render the page from scratch rather than only mutate small parts of the page.

Conclusion

The js-framework-benchmark is amazing. It’s great that we have it, and I have personally used it to track performance improvements in LWC, and to gauge where we stack up against other frameworks.

However, the benchmark is just what it is: a benchmark. It is not real-world user data, it is not data from your own website or web app, and it does not cover every possible definition of the word “performance.”

Like all microbenchmarks, the js-framework-benchmark is useful for some things and completely irrelevant for others. However, because it is so darn good (rare for a microbenchmark!), it has often been taken as gospel, as the One True Measure of a framework’s speed (or its worth).

However, the fault does not really lie with the js-framework-benchmark. It is on us – the web developer community – to write other benchmarks to cover the scenarios that the js-framework-benchmark does not. It’s also on us framework authors to educate framework consumers (who might not have all this arcane knowledge!) about what a benchmark can tell you and what it cannot tell you.

In the browser world, we have several benchmarks: Speedometer, MotionMark, Kraken, SunSpider, Octane, etc. No one would argue that any of these are the One True Benchmark (although Speedometer comes close) – they all measure different things and are useful in different ways. My wish is that someday we could say the same for JavaScript framework benchmarks.

In the meantime, I will continue using and celebrating the js-framework-benchmark, while also being mindful that it is not the final word on web framework performance.

28 Sep

Web components are okay

Posted by Nolan Lawson in performance, Web, web components. 9 Comments

Every so often, the web development community gets into a tizzy about something, usually web components. I find these fights tiresome, but I also see them as a good opportunity to reach across “the great divide” and try to find common ground rather than another opportunity to dunk on each other.

Ryan Carniato started the latest round with “Web Components Are Not the Future”. Cory LaViska followed up with “Web Components Are Not the Future — They’re the Present”. I’m not here to escalate, though – this is a peace mission.

I’ve been an avid follower of Ryan Carniato’s work for years. This post and the steady climb of LWC on the js-framework-benchmark demonstrate that I’ve been paying attention to what he has to say, especially about performance and framework design. The guy has single-handedly done more to move the web framework ecosystem forward in the past 5 years than anyone else I can think of.

That said, I also heavily work with web components, both on the framework side and as a component author. I’ve participated in the Web Components Community Group and Accessibility Object Model group, and I’ve written extensively on shadow DOM, custom elements, and web component accessibility in this blog.

So obviously I’m going to be interested when I see a post from Ryan Carniato on web components. And it’s a thought-provoking post! But I also think he misses the mark on a few things. So let’s dive in:

Performance

[T]he fundamental problem with Web Components is that they are built on Custom Elements.

[…] [E]very interface needs to go through the DOM. And of course this has a performance overhead.

This is completely true. If your goal is to build the absolute fastest framework you can, then you want to minimize DOM nodes wherever possible. This means that web components are off the table.

I fully believe that Ryan knows how to build the fastest possible framework. Again, the results for Solid on the js-framework-benchmark are a testament to this.

That said – and I might alienate some of my friends in the web performance community by saying this – performance isn’t everything. There are other tradeoffs in software development, such as maintainability, security, usability, and accessibility. Sometimes these things come into conflict.

To make a silly example: I could make DOM rendering slightly faster by never rendering any aria-* attributes. But of course sometimes you have to render aria-* attributes to make your interface accessible, and nobody would argue that a couple milliseconds are worth excluding screen reader users.

To make an even sillier example: you can improve performance by using for loops instead of .forEach(). Or using var instead of const/let. Typically, though, these kinds of micro-optimizations are just not worth it.
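
To make that concrete, here’s a minimal sketch of the kind of micro-optimization I mean (my own illustration, not benchmark-backed numbers):

// The "optimized" version: indexed for loop, hoisted length, var throughout.
function sumFast(nums) {
  var total = 0;
  for (var i = 0, len = nums.length; i < len; i++) {
    total += nums[i];
  }
  return total;
}

// The readable version: forEach with let. A hair slower in some engines,
// and almost never the reason your app feels slow.
function sumReadable(nums) {
  let total = 0;
  nums.forEach((n) => {
    total += n;
  });
  return total;
}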

When I see this kind of stuff, I’m reminded of speedrunners trying to shave milliseconds off a 5-minute run of Super Mario Bros using precise inputs and obscure glitches. If that’s your goal, then by all means: backwards long jump across the entire stage instead of just having Mario run forward. I’ll continue to be impressed by what you’re doing, but it’s just not for me.

Minimizing the use of DOM nodes is a classic optimization – this is the main idea behind virtualization. That said, sometimes you can get away with simpler approaches, even if it’s not the absolute fastest option. I’d put “components as elements” in the same bucket – yes it’s sub-optimal, but optimal is not always the goal.
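
(For the unfamiliar, the heart of virtualization is just a bit of arithmetic: figure out which rows are near the viewport and only render those. A rough sketch, with names of my own invention:)

// Given a fixed row height, compute the range of rows worth rendering:
// everything in the viewport plus a small buffer above and below.
function getVisibleRange(scrollTop, viewportHeight, rowHeight, totalRows, buffer = 5) {
  const first = Math.max(0, Math.floor(scrollTop / rowHeight) - buffer);
  const last = Math.min(totalRows - 1, Math.ceil((scrollTop + viewportHeight) / rowHeight) + buffer);
  return { first, last }; // rows outside this range stay out of the DOM
}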

Similarly, I’ve long argued that it’s fine for custom elements to use different frameworks. Sometimes you just need to gradually migrate from Framework A to Framework B. Or you have to compose some micro-frontends together. Nobody would argue that this is the fastest possible interface, but fine – sometimes tradeoffs have to be made.

Having worked for a long time in the web performance space, I find that the lowest-hanging fruit for performance is usually something dumb like layout thrashing, network waterfalls, unnecessary re-renders, etc. Framework authors like myself love to play performance golf with things like the js-framework-benchmark, and it’s a great flex, but it just doesn’t usually matter in the real world.
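
If “layout thrashing” is new to you, it usually looks something like this – a hedged sketch of the bug and the batched fix:

// Thrashing: each iteration reads a layout property (forcing a synchronous
// layout) and then writes a style (invalidating layout again).
// `items` is an array of elements.
function resizeAllSlow(items) {
  for (const el of items) {
    const width = el.offsetWidth;        // read – forces layout
    el.style.height = width / 2 + 'px';  // write – invalidates layout
  }
}

// The fix: batch all the reads first, then do all the writes.
function resizeAllFast(items) {
  const widths = items.map((el) => el.offsetWidth);
  items.forEach((el, i) => {
    el.style.height = widths[i] / 2 + 'px';
  });
}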

That said, if it does matter to you – if you’re building for resource-constrained environments where every millisecond counts: great! Ditch web components! I will geek out and cheer for every speedrunning record you break.

The cost of standards

More code to ship and more code to execute to check these edge cases. It’s a hidden tax that impacts everyone.

Here’s where I completely get off the train from Ryan’s argument. As a framework author, I just don’t find that it’s that much effort to support web components. Detecting props versus attributes is a simple prop in element check. Outputting web components is indeed painful, but hey – nobody said you have to do it. Vue 2 got by with a standalone web component wrapper library, and Remount exists without any input from the React team.
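
For what it’s worth, here’s roughly what that check looks like – a minimal sketch, not any particular framework’s actual code:

// If the custom element exposes a property with this name, set the property;
// otherwise fall back to setAttribute.
function setPropOrAttribute(element, name, value) {
  if (name in element) {
    element[name] = value;
  } else {
    element.setAttribute(name, value);
  }
}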

As a framework author, if you want to freeze your thinking in 2011 and code as if nothing new had been added to the web platform since then, you absolutely can! And you can still write a great framework! This is the beauty of the web. jQuery v1 is still chugging away on plenty of websites, and in fact it gets faster and faster with every new browser release, since browser perf teams are often targeting whatever patterns web developers used ~5 years ago in an endless cat-and-mouse game.

But assuming you don’t want to freeze your brain in amber, then yes: you do need to account for new stuff added to the web platform. But this is also true of things like Symbols, Proxies, Promises, etc. I just see it as part of the job, and I’m not particularly bothered, since I know that whatever I write will still work in 10 years, thanks to the web’s backwards compatibility guarantees.

Furthermore, I get the impression that a wide swath of the web development community does not care about web components, does not want to support them, and you probably couldn’t convince them to. And that’s okay! The web is a big tent, and you can build entire UIs based on web components, or with a sprinkling of HTML web components, or with none at all. If you want to declare your framework a “no web components” zone, then you can do that and still get plenty of avid fans.

That said, Ryan is right that once something is blessed as “the standard,” it inherently becomes a mental default that needs to be grappled with. Component authors must decide whether their <slot>s should work like native <slot>s. That’s true, but again, you could say this about a lot of new browser APIs. You have to decide whether IntersectionObserver or <img loading="lazy"> is worth it, or whether you’d rather write your own abstraction. That’s fine! At least we have a common point of reference, a shared vocabulary to compare and contrast things.

And just because something is a web standard doesn’t mean you have to use it. For the longest time, the classic joke about JavaScript: The Good Parts was how small it is compared to JavaScript: The Definitive Guide. The web is littered with deprecated (but still supported) APIs like document.domain, with, and <frame>s. Take it or leave it!

Conclusion

[I]n a sense there is nothing wrong with Web Components as they are only able to be what they are. It’s the promise that they are something that they aren’t which is so dangerous.

Here I totally agree with Ryan. As I’ve said before, web components are bad at a lot of things – Server-Side Rendering, accessibility, even interop in some cases. They’re good at plenty of things, but replacing all JavaScript frameworks is not one of them. Maybe we can check back in 10 years, but for now, there are still cases where React, Solid, Svelte, and friends shine and web components flounder.

Ryan is making an eminently reasonable point here – as he does throughout the rest of the post – and on its own it’s a good contribution to the discourse. The title is a bit inflammatory, which leads people to wield it as a bludgeon against their perceived enemies on social media (likely without reading the piece), but this is something I blame on social media, not on Ryan.

Again, I find these debates a bit tiresome. I think the fundamental issue, as I’ve previously said, is that people are talking past each other because they’re building different things with different constraints. It’s as if a salsa dancer criticized ballet for not being enough like salsa. There is more than one way to dance!

From my own personal experience: at Salesforce, we build a client-rendered app, with its own marketplace of components, with strict backwards-compatibility guarantees, where the intended support is measured in years if not decades. Is this you? If not, then maybe you shouldn’t build your entire UI out of web components, with shadow DOM and the whole kit-n-kaboodle. (Or maybe you should! I can’t say!)

What I find exciting about the web is the sheer number of people doing so many wild and bizarre things with it. It has everything from games to art projects to enterprise SaaS apps, built with WebGL and Wasm and Service Workers and all sorts of zany things. Every new capability added to the web platform isn’t a limitation on your creativity – it’s an opportunity to express your creativity in ways that nobody imagined before.

Web components may not be the future for you – that’s great! I’m excited to see what you build, and I might steal some ideas for my own corner of the web.

Update: Be sure to read Lea Verou’s and Baldur Bjarnason’s excellent posts on the topic.

18 Sep

Improving rendering performance with CSS content-visibility

Posted by Nolan Lawson in performance, Web. 7 Comments

Recently I got an interesting performance bug on emoji-picker-element:

I’m on a fedi instance with 19k custom emojis […] and when I open the emoji picker […], the page freezes for like a full second at least and overall performance stutters for a while after that.

If you’re not familiar with Mastodon or the Fediverse, different servers can have their own custom emoji, similar to Slack, Discord, etc. Having 19k (really closer to 20k in this case) is highly unusual, but not unheard of.

So I booted up their repro, and holy moly, it was slow:

There were multiple things wrong here, but the biggest problem was sheer scale: the picker was rendering a <button> and an <img> for every single custom emoji – about 20k of each.

Now, to my credit, I was using <img loading="lazy">, so those 20k images were not all being downloaded at once. But no matter what, it’s going to be achingly slow to render 40k elements – Lighthouse recommends no more than 1,400!

My first thought, of course, was, “Who the heck has 20k custom emoji?” My second thought was, “*Sigh* I guess I’m going to need to do virtualization.”

I had studiously avoided virtualization in emoji-picker-element, namely because 1) it’s complex, 2) I didn’t think I needed it, and 3) it has implications for accessibility.

I’ve been down this road before: Pinafore is basically one big virtual list. I used the ARIA feed role, did all the calculations myself, and added an option to disable “infinite scroll,” since some people don’t like it. This is not my first rodeo! I was just grimacing at all the code I’d have to write, and wondering about the size impact on my “tiny” ~12kB emoji picker.

After a few days, though, the thought popped into my head: what about CSS content-visibility? I could see from the trace that lots of time was being spent in layout and paint, and plus this might help with the “stuttering.” This could be a much simpler solution than full-on virtualization.

If you’re not familiar, content-visibility is a new-ish CSS feature that allows you to “hide” certain parts of the DOM from the perspective of layout and paint. It largely doesn’t affect the accessibility tree (since the DOM nodes are still there), it doesn’t affect find-in-page (⌘+F/Ctrl+F), and it doesn’t require virtualization. All it needs is a size estimate of off-screen elements, so that the browser can reserve space there instead.

Luckily for me, I had a good atomic unit for sizing: the emoji categories. Custom emoji on the Fediverse tend to be divided into bite-sized categories: “blobs,” “cats,” etc.

Custom emoji on mastodon.social.

For each category, I already knew the emoji size and the number of rows and columns, so calculating the expected size could be done with CSS custom properties:

.category {
  content-visibility: auto;
  contain-intrinsic-size:
    /* width */
    calc(var(--num-columns) * var(--total-emoji-size))
    /* height */
    calc(var(--num-rows) * var(--total-emoji-size));
}

These placeholders take up exactly as much space as the finished product, so nothing is going to jump around while scrolling.
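
On the JavaScript side, each category just needs its dimensions exposed as those custom properties – something like this sketch (the property names match the CSS above; --total-emoji-size is assumed to be defined once, higher up in the stylesheet):

// Tell CSS how big this category will be, so contain-intrinsic-size can
// reserve an accurately-sized placeholder while it's offscreen.
function sizeCategory(categoryElement, emojiCount, numColumns) {
  const numRows = Math.ceil(emojiCount / numColumns);
  categoryElement.style.setProperty('--num-columns', String(numColumns));
  categoryElement.style.setProperty('--num-rows', String(numRows));
}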

The next thing I did was write a Tachometer benchmark to track my progress. (I love Tachometer.) This helped validate that I was actually improving performance, and by how much.

My first stab was really easy to write, and the perf gains were there… They were just a little disappointing.

For the initial load, I got a roughly 15% improvement in Chrome and 5% in Firefox. (Safari only has content-visibility in Technology Preview, so I can’t test it in Tachometer.) This is nothing to sneeze at, but I knew a virtual list could do a lot better!

So I dug a bit deeper. The layout costs were nearly gone, but there were still other costs that I couldn’t explain. For instance, what’s with this big undifferentiated blob in the Chrome trace?

Whenever I feel like Chrome is “hiding” some perf information from me, I do one of two things: bust out chrome:tracing, or (more recently) enable the experimental “show all events” option in DevTools.

This gives you a bit more low-level information than a standard Chrome trace, but without needing to fiddle with a completely different UI. I find it’s a pretty good compromise between the Performance panel and chrome:tracing.

And in this case, I immediately saw something that made the gears turn in my head:

What the heck is ResourceFetcher::requestResource? Well, even without searching the Chromium source code, I had a hunch – could it be all those <img>s? It couldn’t be, right…? I’m using <img loading="lazy">!

Well, I followed my gut and simply commented out the src from each <img>, and what do you know – all those mystery costs went away!

I tested in Firefox as well, and this was also a massive improvement. So this led me to believe that loading="lazy" was not the free lunch I assumed it to be.

Update: I filed a bug on Chromium for this issue. After more testing, it seems I was mistaken about Firefox – this looks like a Chromium-only issue.

At this point, I figured that if I was going to get rid of loading="lazy", I may as well go whole-hog and turn those 40k DOM elements into 20k. After all, if I don’t need an <img>, then I can use CSS to just set the background-image on an ::after pseudo-element on the <button>, cutting the time to create those elements in half.

.onscreen .custom-emoji::after {
  background-image: var(--custom-emoji-background);
}

At this point, it was just a simple IntersectionObserver to add the onscreen class when the category scrolled into view, and I had a custom-made loading="lazy" that was much more performant. This time around, Tachometer reported a ~40% improvement in Chrome and ~35% improvement in Firefox. Now that’s more like it!
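
In case it’s useful, here’s roughly the shape of that observer – a simplified sketch, not the exact code in emoji-picker-element (the .scroll-container selector and the rootMargin value are made up):

const scrollContainer = document.querySelector('.scroll-container'); // hypothetical selector

// Add the .onscreen class the first time a category comes near the viewport,
// which activates the background-image rule in the CSS above.
const observer = new IntersectionObserver((entries) => {
  for (const entry of entries) {
    if (entry.isIntersecting) {
      entry.target.classList.add('onscreen');
      observer.unobserve(entry.target); // the images only need to load once
    }
  }
}, { root: scrollContainer, rootMargin: '100px' }); // start loading slightly ahead of time

for (const category of document.querySelectorAll('.category')) {
  observer.observe(category);
}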

Note: I could have used the contentvisibilityautostatechange event instead of IntersectionObserver, but I found cross-browser differences, and plus it would have penalized Safari by forcing it to download all the images eagerly. Once browser support improves, though, I’d definitely use it!
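
For reference, the event-based version would look roughly like this – a sketch, assuming the cross-browser support that isn’t quite there yet:

// Fires when the browser starts or stops skipping rendering of an element
// with content-visibility: auto; skipped === false means it's now relevant.
for (const category of document.querySelectorAll('.category')) {
  category.addEventListener('contentvisibilityautostatechange', (event) => {
    if (!event.skipped) {
      category.classList.add('onscreen');
    }
  });
}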

I felt good about this solution and shipped it. All told, the benchmark clocked a ~45% improvement in both Chrome and Firefox, and the original repro went from ~3 seconds to ~1.3 seconds. The person who reported the bug even thanked me and said that the emoji picker was much more usable now.

Something still doesn’t sit right with me about this, though. Looking at the traces, I can see that rendering 20k DOM nodes is just never going to be as fast as a virtualized list. And if I wanted to support even bigger Fediverse instances with even more emoji, this solution would not scale.

I am impressed, though, with how much you get “for free” with content-visibility. The fact that I didn’t need to change my ARIA strategy at all, or worry about find-in-page, was a godsend. But the perfectionist in me is still irritated by the thought that, for maximum perf, a virtual list is the way to go.

Maybe eventually the web platform will get a real virtual list as a built-in primitive? There were some efforts at this a few years ago, but they seem to have stalled.

I look forward to that day, but for now, I’ll admit that content-visibility is a good rough-and-ready alternative to a virtual list. It’s simple to implement, gives a decent perf boost, and has essentially no accessibility footguns. Just don’t ask me to support 100k custom emoji!

17 Sep

The continuing tragedy of emoji on the web

Posted by Nolan Lawson in Web. Tagged: emoji. 3 Comments

Pop quiz: what emoji do you see below? [1]

Depending on your browser and operating system, you might see:

From left to right: Safari on iOS 17, Firefox 130 on Windows 11, and Chrome 128 on Windows 11.

This, frankly, is a mess. And it’s emblematic of how half-heartedly browsers and operating systems have worked to keep their emoji up to date.

What’s responsible for this sorry state? I gave an overview two years ago, and shockingly little has changed – in fact, it’s gotten a bit worse.

In short: operating systems ship emoji fonts that lag behind the latest Unicode releases, most browsers just use whatever font the OS provides, and updates arrive on wildly different schedules (if at all).

As a result, every website on the planet that cares about having a consistent emoji experience has to bundle their own font or spritesheet, wasting untold megabytes for something that’s been standardized like clockwork by the Unicode Consortium for 15 years.

My recommendation remains the same: browsers should bundle their own emoji font and ship it outside of OS updates. Firefox does this right; they just need to switch to an up-to-date font like this Twemoji fork. There is an issue on Chromium to add the same functionality. As for Safari, well… they’re not quite evergreen, and fragmentation is just a consequence of that. But shipping a font is not rocket science, so maybe WebKit or iOS could be convinced to ship it out-of-band.

In the meantime, web developers can use a COLR font or polyfill to get a reasonably consistent experience across browsers. It’s just sad to me that, with all the stunning advancements browsers have made in recent years, and with all the overwhelming popularity of emoji, the web still struggles to render them.

Footnotes
  1. I’m using Codepen for this because WordPress automatically replaces native emoji with <img>s, since of course browsers can’t be trusted to render a font properly. Although ironically, they render the old flag (on certain browsers anyway): 🇲🇶
  2. For posterity: using Wikimedia’s stats for August 12th through September 16th 2024: 1.2% mobile Safari 15 users / 32% mobile Safari users = 3.75%.
