Mental model
When I started researching, I found that there are solutions outside of Next.js, but they were either incomplete or tied to specific tools like Vite or esbuild. The more I dug, the more I realized that what we really have is a pattern without a proper implementation.
It reminded me of Flux back in the day—a pattern that introduced new ideas but lacked clear direction on how those ideas should fit into existing applications. So, since it’s up to us (the developers), I decided to design a tool that’s library-agnostic. Instead of being locked into existing toolchains, it works before—or in parallel with—them.
One of the key ideas I kept coming back to was having two versions of my code. The whole point of the pattern is to blur the line between front-end and back-end—but technically, that line still exists and is quite strict. Early on, I assumed there must be a tool somewhere that does exactly that. Unfortunately, that wasn’t the case, so I created it myself.
So I needed something that runs before anything else and produces a server and a client version of my code. After that, my build tool prepares a client bundle based on the source in the client directory, and my Node server spins up the HTTP server using the files in the server directory.
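To give a rough picture, the end result could look something like this (the exact paths depend on your configuration, so treat this layout as purely illustrative):
- src/ holds the code you write, including the "use client" and "use server" directives.
- A server output directory holds what the Node server imports and renders.
- A client output directory holds what your bundler turns into the browser bundle.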
By taking this approach, I gain the freedom to implement the glue code in isolation—without interfering with the internals of other tools.
What exactly is Forket doing
(You don’t really need to know these details. If you want to try the solution, jump to the How to use it section.)
Once it starts, Forket consumes the files one after another. It copies over anything that isn’t JavaScript/TypeScript. For the rest, it runs a set of operations.
1. Building a graph
First, we need a graph that represents the component tree and its dependencies. The library literally reads the file contents, transforms the text into an AST (Abstract Syntax Tree), and analyses the code. Part of that analysis is finding the import statements. Then it assigns a role to each file. I’m especially proud of this part, since I spent time making the graph visible in the console. It looks like this:
Notice how the graph includes both (server) and (client) files. Those marked as (client) are bundled and shipped to the browser, while the ones marked as (server) remain on the backend. Server actions are also highlighted—for example in /src/server-actions/auth.js we have (SAs: login, logout).
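The graph itself lives inside Forket, but conceptually each node carries exactly the information described above. Purely as an illustration (the field names here are made up, not Forket’s actual data structure), a node could look something like this:
// Hypothetical shape of one graph node, for illustration only.
const node = {
  file: "/src/server-actions/auth.js",
  role: "server",                       // "server" or "client"
  imports: ["./db.js"],                 // collected from the import statements in the AST
  serverActions: ["login", "logout"]    // exported server functions, if any
};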
2. Producing a “server” version of the code. Finding client boundaries
The main goal here is to identify client boundaries and prepare them for hydration on the client. That preparation involves:
- Component props serialization — Only primitive values are sent over the wire. Functions are skipped unless they are server actions, in which case they’re replaced with a specific string ID. The same applies when passing a promise (see the sketch after this list).
- Children extraction — Rendered children are placed inside a <template> tag so they can be reused during hydration.
- Glue code — Additional logic triggers the hydration of the client boundary.
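To make the first rule concrete, here is a minimal sketch of the serialization idea: primitives pass through, server actions and promises become string IDs, and everything else is dropped. The helper and the marker names are purely illustrative, not Forket’s actual implementation:
// Illustrative only: reduce client-boundary props to a wire-safe object.
let placeholderId = 0;

function serializePropsSketch(props) {
  const out = {};
  for (const [key, value] of Object.entries(props)) {
    if (value === null || ["string", "number", "boolean"].includes(typeof value)) {
      out[key] = value; // primitives are sent as-is
    } else if (typeof value === "function" && value.$$serverActionId) {
      out[key] = value.$$serverActionId; // server actions are replaced by a string ID (hypothetical marker)
    } else if (value instanceof Promise) {
      out[key] = "$PROMISE_" + placeholderId++; // promises become placeholder strings
    }
    // any other function or non-serializable value is skipped entirely
  }
  return out;
}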
Here’s an example: a server component that loads a note on the server and its comments on the client:
export default async function Page({ example }) {
  const note = await db.notes.get(42);
  const commentsPromise = db.comments.get(note.id);
  return (
    <div className="container">
      <div>
        {note.content}
        <Comments commentsPromise={commentsPromise} />
      </div>
    </div>
  );
}
We await the note (via db.notes.get) and render its content, but for the comments we don’t care that much. Instead, db.comments.get returns a promise that we pass to the client <Comments> component. This creates an interesting mix: some logic runs purely on the backend but flows into the frontend.
Here’s how that component looks after Forket prepares it for the server—meaning this is what our Node server will render and stream to the browser:
export default async function Page({ example }) {
  const note = await db.notes.get(42);
  const commentsPromise = db.comments.get(note.id);
  return (
    <div className="container">
      <div>
        {note.content}
        <CommentsBoundary commentsPromise={commentsPromise} />
      </div>
    </div>
  );
}

function CommentsBoundary(props) {
  const serializedProps = JSON.stringify(forketSerializeProps(props, "Comments", "f_43"));
  const children = props.children;
  return (
    <>
      {children && (
        <template type="forket/children" id="f_43" data-c="Comments">
          {children}
        </template>
      )}
      <template type="forket/start/f_43" data-c="Comments"></template>
      <Comments {...props} children={children} />
      <template type="forket/end/f_43" data-c="Comments"></template>
      <script id="forket/init/f_43" dangerouslySetInnerHTML={{
        __html: `$F_booter(document.currentScript, "f_43", "Comments", ${JSON.stringify(serializedProps)});`
      }}></script>
    </>
  );
}
After rendering the components, the Node server sends the following to the browser:
<div>
  Note 42
  <template type="forket/start/f_43" data-c="Comments"></template>
  <p>Loading comments...</p>
  <template type="forket/end/f_43" data-c="Comments"></template>
  <script id="forket/init/f_43">
    $F_booter(
      document.currentScript,
      "f_43",
      "Comments",
      "{\"commentsPromise\":\"$FLP_f_0\"}"
    );
  </script>
</div>
Note 42 is the note’s content. <p>Loading comments...</p> is what the <Comments> component returns by default, before the promise is resolved. Notice the last argument of the $F_booter function: it’s the serialized version of the client boundary props. The promise is just a string placeholder which Forket will parse and transform into an actual promise on the client.
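How the library turns that string back into a promise is Forket’s own business, but the general idea can be sketched like this (the mechanism and helper names below are hypothetical, purely to illustrate the concept; only the $FLP_ prefix comes from the output above):
// Illustrative only: recreate real promises for the $FLP_ placeholders.
const pendingPromises = new Map();

function revivePropsSketch(serialized) {
  const props = {};
  for (const [key, value] of Object.entries(serialized)) {
    if (typeof value === "string" && value.startsWith("$FLP_")) {
      // Hand the component a real promise; it resolves later, once the
      // server streams the value that belongs to this placeholder ID.
      props[key] = new Promise((resolve) => pendingPromises.set(value, resolve));
    } else {
      props[key] = value;
    }
  }
  return props;
}

// Something injected later in the stream would eventually call this:
function resolvePlaceholderSketch(id, value) {
  const resolve = pendingPromises.get(id);
  if (resolve) resolve(value);
}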
3. Producing a “client” version of the code. Finding server actions.
The main challenge here is detecting where server actions (server functions) are used and replacing them with something that omits their actual implementation. The goal is to ensure the code lives only on the server—so it doesn’t get bundled or sent to the browser.
Here is an example that uses a server action:
"use client";
import { createNote } from "./actions.js";
export default function EmptyNote() {
  return (
    <button onClick={() => createNote()}>
      Create note
    </button>
  );
}
The content of actions.js is as follows:
"use server";
import db from './db.js';
export async function createNote() {
return await db.notes.create();
}
After Forket processes the file, we’ll get:
"use client";
const createNote = function(...args) {
  return window.FSA_call("$FSA_createNote", "createNote")(...args);
};
export default function EmptyNote() {
  return <button onClick={() => createNote()}>Create note</button>;
}
So, instead of the real createNote, our client code triggers a globally available function called FSA_call. This makes a request to our Node server, which executes the right piece of code.
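Forket ships that helper as part of its runtime, but to illustrate the idea, a function like FSA_call could boil down to something along these lines. The request shape below is an assumption made for this sketch, not Forket’s actual wire format; only the /@forket endpoint is real and shows up in the server setup later on:
// Illustrative only: a proxy that forwards server-action calls to the server.
window.FSA_call = function (actionId, name) {
  return async function (...args) {
    const response = await fetch("/@forket", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      // the payload shape is an assumption made for this sketch
      body: JSON.stringify({ id: actionId, name, args })
    });
    return response.json(); // whatever the server function returned
  };
};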
4. Annotating client entry points
If you plan to use server components, you’ll need to adjust how you think about single-page applications. In this model, the server takes the lead, while the browser hydrates so-called islands.
Forket includes a step that exports these island components to the global scope, allowing other utilities to locate and hydrate them in the right place. The only requirement is to have at least one file in the root directory with "use client" at the top.
Here’s an example of a client entry point:
"use client";
import ReactDomClient from "react-dom/client";
import React from "react";
import f_6 from "./components/Feed.tsx";
window.$f_6 = f_6;
/* FORKET CLIENT */
// @ts-ignore
(()=>{(function(){let y=new Map,w=window.$F_renderers={},...
Forket makes sure that both React and the <Feed> component are available in the browser.
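With the island components exported on window and the start/end templates sitting in the server-rendered HTML, the hydration step becomes conceptually simple. The snippet below is only a rough sketch of what a booter like $F_booter has to do; it is not Forket’s code, the DOM handling is simplified, and the function name is made up:
// Conceptual sketch only: hydrate a client boundary between its two markers.
import React from "react";
import { hydrateRoot } from "react-dom/client";

function bootBoundarySketch(startTemplate, endTemplate, Component, props) {
  // Wrap the server-rendered nodes between the two markers in a container...
  const container = document.createElement("div");
  startTemplate.parentNode.insertBefore(container, startTemplate);
  let node = startTemplate.nextSibling;
  while (node && node !== endTemplate) {
    const next = node.nextSibling;
    container.appendChild(node);
    node = next;
  }
  // ...and hydrate the client component over that markup.
  hydrateRoot(container, React.createElement(Component, props));
}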
How to use it
Now that we understand how it works, let’s look at how much effort it takes to use Forket in a real setup. First, it’s important to note that there are two perspectives when working with Forket:
- At build time — The library splits your code into two parts (client and server).
- At runtime — Glue code streams the components and hydrates them on the client.
Both use a shared configuration, which you can find here. Essentially, it’s a forket.config.js file with two mandatory options:
// forket.config.js
import path from "path";
import { fileURLToPath } from "url";

const __filename = fileURLToPath(import.meta.url);
const __dirname = path.dirname(__filename);

const config = {
  sourceDir: path.normalize(path.join(__dirname, "src")),
  buildDir: path.normalize(path.join(__dirname, "build")),
};

export default config;
We basically say where our source code is and where to store the files after the transformation.
At build time
After installing the library (via npm install forket) you may run:
> npx forket
This command will search for a forket.config.js nearby and start doing its magic.
The other way of doing it is via Forket’s JavaScript API:
import Forket from 'forket';

const forket = await Forket({
  watch: true, // listening for changes in your src dir
  printGraph: true,
});

await forket.process();
At runtime
As we said above, a bit of glue code is needed to make the server and the client work together.
Instrument your HTTP server
Let’s say that you have some sort of HTTP server library like express:
import express from "express";
import http from "http";
import Forket from "forket";

const port = 8087;
const app = express();
const server = http.createServer(app);

Forket().then((forket) => {
  app.use("/@forket", forket.forketServerActions());
  app.get("/", forket.serveApp({
    factory: (req) => <App request={req} />
  }));
});

server.listen(port, () => {
  console.log(`App listening on port ${port}.`);
});
We are defining an endpoint for our server functions and making sure that Forket is serving our main page component. Internally, the library uses renderToPipeableStream to stream the React components. Notice that the /@forket path is configurable if you want something different.
At least one client entry point
And we should not forget to create at least one file in the root directory with "use client".
"use client"
And with this we are ready to go. At build time we’ll have our source code transformed into the build directory. After that, our usual pipeline will create a client bundle and spin up our HTTP server.
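That last part stays entirely in your hands. As one example, a bundling step with esbuild could look roughly like the snippet below; the entry path and the output file are assumptions about your project layout, not something Forket dictates:
// build-client.mjs: a hypothetical bundling step pointed at Forket's client output.
import * as esbuild from "esbuild";

await esbuild.build({
  entryPoints: ["build/client.tsx"], // assumed location of your client entry point
  bundle: true,
  outfile: "public/bundle.js",
  jsx: "automatic",                  // compile JSX without manual React imports
  format: "esm"
});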
Here’s how one of the examples looks in the terminal:
Final words
It’s been three wonderful months building Forket, and I’ve had a lot of fun along the way. I definitely plan to keep using it and continue improving it.
I’m excited for others to try it out and share feedback—whether it works smoothly in your setup or not. I’d also be happy to explore bringing it into projects that use Vite, Webpack, or other tooling.