
One Roundtrip Per Navigation

What do HTML, GraphQL, and RSC have in common?



May 29, 2025


How many requests should it take to navigate to another page?

In the simplest case, a navigation is resolved in a single request. You click a link, the browser requests the HTML content for the new URL, and then displays it.

In practice, a page might also want to display some images, load some client-side JavaScript, load some extra styles, and so on. So there’ll be a bunch of requests. Some will be render-blocking (so the browser will defer displaying the page until they resolve), and the rest will be “nice-to-have”. Maybe they’ll be important for full interactivity but the browser can already display the page while they load.

Okay, but what about loading data?

How many API requests should it take to get the data for the next page?


HTML

Before much of web development moved to the client, this question didn’t even make sense. There was no concept of “hitting the API” because you wouldn’t think of your server as an API server—it was just the server, returning HTML.

In traditional “HTML apps”, aka websites, getting the data always takes a single roundtrip. The user clicks a link, the server returns the HTML, and all the data necessary to display the next page is already embedded within that HTML. The HTML itself is the data. It doesn’t need further processing—it’s ready for display:

<article>
  <h1>One Roundtrip Per Navigation</h1>
  <p>How many requests should it take to navigate to another page?</p>
  <ul class="comments">
    <li>You're just reinventing HTML</li>
    <li>You're just reinventing PHP</li>
    <li>You're just reinventing GraphQL</li>
    <li>You're just reinventing Remix</li>
    <li>You're just reinventing Astro</li>
  </ul>
</article>

(Yes, technically some static, reusable and cacheable parts like images, scripts, and styles get “outlined”, but you can also always inline them whenever that’s useful.)


“REST”

Things changed as we moved more of the application logic to the client side. The data we want to fetch is usually determined by the UI we need to display. When we want to show a post, we need to fetch that post. When we want to show a post’s comments, we need to fetch those comments. So how many fetches do we make?

With JSON APIs, a technique known as REST suggests exposing an endpoint per conceptual “resource”. Nobody knows exactly what a “resource” is, but usually the backend team will be in charge of defining this concept. So maybe you’ll have a Post “resource” and a Post Comments “resource”, which lets you load the data for the post page (the post plus its comments) in two fetches.

But where do these two fetches happen?

In server-centric HTML apps (aka websites), you could hit two REST APIs during a single request and still return all the data as a single response. This is because the REST API requests would happen on the server. The REST API served mostly as an explicit boundary for the data layer, but it wasn’t really required (many were happy to use an in-process data layer you could import—as in Rails or Django). Regardless of REST, the data (HTML) arrived at the client (browser) in one piece.
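A minimal sketch of this server-side composition (the function and endpoint names are mine, not from the post): the two “REST” fetches happen server-to-server, and the browser still receives finished HTML in one response.

```javascript
// Fetch the two REST resources in parallel on the server, then render
// them into a single HTML payload. `api` stands in for whatever makes
// the server-to-server request (it returns parsed JSON).
async function renderPostPage(postId, api) {
  const [post, comments] = await Promise.all([
    api(`/api/posts/${postId}`),
    api(`/api/posts/${postId}/comments`),
  ]);
  return [
    '<article>',
    `<h1>${post.title}</h1>`,
    '<ul class="comments">',
    ...comments.map(c => `<li>${c.text}</li>`),
    '</ul>',
    '</article>',
  ].join('\n');
}
```

However many internal requests this handler makes, the client only ever sees one: the HTML it asked for.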

As we started moving UI logic to the client for richer interactivity, it felt natural to keep the existing REST APIs but to fetch them from the client. Isn’t that kind of flexibility exactly what JSON APIs were great at? Everything became a JSON API:

const [post, comments] = await Promise.all([
  fetch(`/api/posts/${postId}`).then(res => res.json()),
  fetch(`/api/posts/${postId}/comments`).then(res => res.json()),
]);

However, as a result, there are now two fetches in the Network tab: one fetch for the Post and another fetch for that Post’s Comments. A single page—a single link click—often needs data from more than one REST “resource”. In the best case, you can hit a couple of endpoints and call it a day. In the worst case, you might have to hit N endpoints for N items, or hit the server repeatedly in a series of client/server waterfalls (get some data, compute stuff from it, use that to get some more data).
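The “get some data, use it to get more data” waterfall can be sketched like this (the author-lookup endpoint is a hypothetical example, not from the post):

```javascript
// A client/server waterfall: the second request cannot even begin
// until the first one has resolved. `fetchJson` stands in for a
// fetch().then(res => res.json()) call.
async function loadPostScreen(postId, fetchJson) {
  // Roundtrip 1: nothing can happen until the post arrives.
  const post = await fetchJson(`/api/posts/${postId}`);
  // Roundtrip 2: only the post tells us which author to fetch,
  // so this request cannot be parallelized from the client.
  const author = await fetchJson(`/api/users/${post.authorId}`);
  return { post, author };
}
```

Two full network roundtrips, strictly in sequence—and the client alone cannot collapse them, because the input to the second request is the output of the first.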

An inefficiency is creeping in. When we were on the server, making a bunch of REST requests was cheap because we had control over how our code was deployed. If those REST endpoints were far away, we could move our server closer to them or even move their code in-process. We could use replication or server-side caching. Even if something got inefficient, on the server we had many levers to fix it. Nothing stops us from improving things on the server side.
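One of those server-side levers, sketched as a tiny in-memory cache in front of a REST call (a toy illustration with names of my choosing, not production code):

```javascript
// Cache JSON responses by URL for a short TTL. On the server, this kind
// of lever is always available; from the browser, you can't wrap the
// backend's endpoints in anything this cheap.
const cache = new Map();

async function cachedFetchJson(url, fetchJson, ttlMs = 5000) {
  const hit = cache.get(url);
  if (hit && Date.now() - hit.time < ttlMs) {
    return hit.data; // served from memory, no roundtrip at all
  }
  const data = await fetchJson(url);
  cache.set(url, { time: Date.now(), data });
  return data;
}
```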

However, if you think of the server as a black box, you can’t improve on the APIs it provides. You can’t optimize a client/server waterfall if the server doesn’t return all the data needed to run requests in parallel. You can’t reduce the number of parallel requests if the server doesn’t provide an API that returns all the data in a batch.

At some point you’re going to hit a wall.


Components

The problem above wouldn’t be so bad if not for the tension between efficiency and encapsulation. As developers, we feel compelled to place the logic to load the data close to where this data is used. Someone might say this leads to “spaghetti code”, but it doesn’t have to! The idea itself is solid. Recall—the UI determines the data. The data you need depends on what you want to display. The data fetching logic and the UI logic are inherently coupled—when one changes, the other needs to be aware of that. You don’t want to break stuff by “underfetching” or bloat it by “overfetching”. But how do you keep the UI logic and the data fetching in sync?

The most direct approach would be to put the data loading logic directly in your UI components. That’s the “$.ajax in a Backbone.View” approach, or “fetch in useEffect” approach. It was incredibly popular with the rise of client-side UI—and still is. The benefit of this approach is colocation: the code that says what data to load is located right next to the code consuming it. Different people can write components that depend on different data sources, and then put them together:

function PostContent({ postId }) {
  const [post, setPost] = useState(null);
  useEffect(() => {
    fetch(`/api/posts/${postId}`)
      .then(res => res.json())
      .then(setPost);
  }, [postId]);
  if (!post) {
    return null;
  }
  return (
    <article>
      <h1>{post.title}</h1>
      <p>{post.content}</p>
      <Comments postId={postId} />
    </article>
  );
}

function Comments({ postId }) {
  const [comments, setComments] = useState([]);
  useEffect(() => {
    fetch(`/api/posts/${postId}/comments`)
      .then(res => res.json())
      .then(setComments);
  }, [postId]);
  return (
    <ul className="comments">
      {comments.map(c => <li key={c.id}>{c.text}</li>)}
    </ul>
  );
}
However, this approach makes the problem from the previous section much more severe. Not only does rendering a single page take a bunch of requests; those requests are now also spread across the codebase. How do you audit for inefficiencies?

Someone might edit a component, add some data loading to it, and thus introduce a new client/server waterfall to a dozen different screens using that component. If our components ran on the server only—like Astro Components—data fetching delays would at best be nonexistent and at worst be predictable. But on the client, smudging the data fetching logic across components cascades the inefficiencies without good levers to fix them—we can’t move the user any closer to our servers. (And inherent waterfalls can’t be fixed from the client at all—even by prefetching.)
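The waterfall hiding in the components above can be made concrete with a toy simulation (this is not real React scheduling, just the timing shape): Comments can’t start its request until PostContent’s request has resolved, because the child doesn’t exist until its parent has data to render—even though the two requests are independent.

```javascript
// Simulate two nested fetch-in-effect components. Each "fetch" is a
// delay of one network roundtrip; the second one only starts after
// the first resolves, so the total is ~2x a roundtrip, not ~1x.
const delay = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

async function simulateWaterfall(roundtripMs) {
  const start = Date.now();
  await delay(roundtripMs); // PostContent's fetch resolves...
  await delay(roundtripMs); // ...and only then does Comments mount and fetch.
  return Date.now() - start;
}
```

Prefetching the post doesn’t help here: the comments request is gated on rendering, not on data it actually needs.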

Let’s see if adding a bit more structure to our data fetching code can help.


Queries

[...]

