At the beginning of 2025, I finally decided to build myself a new portfolio. I still pretty much liked the one I made back in 2021, but I felt the need to put to good use all the cool stuff I’d learned over the past couple of years working with WebGPU. And besides, half of the projects featured in my case studies had been taken offline anyway, so it was about time.
I didn’t really know where I was going at this point, except that:
- It would, of course, feature multiple procedurally generated WebGPU scenes. I already had a few concepts in mind to explore, like particle or boids simulations.
- I wanted to take care of the design myself. It may seem weird, especially since I was very happy with what Gilles came up with for my last portfolio, and also because I do suck at design. But this would give me more freedom, and I’ve always liked building things from scratch on my own.
- Last but not least, it had to be fun!
1. The journey
The (tough) design and content process
Don’t do this!
At first, I had no idea what to do design-wise. Fonts, colors: there are so many things that could go wrong.
I started with simple light and dark colors, kept the fonts Gilles had chosen for my previous portfolio and copied over its old text content. It didn’t feel that great, and it sure wasn’t fun.

I definitely needed colors. I could have wasted a few hours (or days) choosing the right pairing, but instead I decided this could be the right opportunity to use a random color palette generator utility I’d coded a few years ago. I cleaned up the code a bit, created a repo, published it to npm and added it to my project. I also slightly changed the tone of the copywriting, which led me to something still not that great, but a bit more fun.

I let it sit for a while and started working on other parts of the site, such as integrating the CMS or experimenting with the WebGPU scenes. It was only after a long iteration process that I finally settled on this kind of old-school retro video game vibe mixed with a more cheerful, cartoonish aesthetic, almost Candy Crush-esque. Impactful headings, popping animations, banded gradients… you name it.
Of course, I never went as far as creating a Figma project (I did select a few reference images as a moodboard though) and just tested a ton of stuff directly in code until I felt it wasn’t that bad anymore. All in all, it was a very long and painful process, and I guess every designer would agree at this point: don’t do this!

Do you actually read portfolio content?
Another painful point was settling on the actual content and overall structure of the site. Do I need detailed case study pages? Do I need pages at all? Will users even read all those long blocks of text I will struggle to write?
In the end, I chose to drop the case study pages. I had a couple of reasons to do so:
- Oftentimes the project ends up being taken offline for various reasons, and you end up showcasing something the user cannot visit anymore. This is exactly what happened on my previous portfolio.
- Most of the client work I’ve been doing these past few years has been for agencies, and I’m not always allowed to share it publicly. I have no problem with that, but it slightly reduced the number of projects I could highlight.
From there on, it was a quick decision to just go with a single landing page. I’d put direct links to the projects I could highlight and small videos of all the other projects or personal works I could feature. On top of that, I’d add a few “about” sections mixed with my WebGPU scenes, and that’d be the gist of it.
Speaking of the WebGPU scenes, I really wanted them to be meaningful, not just a technical demonstration of what I could do. But we’ll get to that later.
The final UX twist
After a few months, I felt like I was entering the final stage of development. The page structure was mostly done, all my various sections were there and I was working on the final animations and micro-interaction tweaks.
So I took a step back and looked at my initial expectations again. I had my WebGPU scenes showcasing my various technical skills. I had handled the design myself, and it wasn’t that bad. But were the flashy colors and animations enough to make it a really fun experience overall?
I think you already know the answer. Something was missing.
Except for the random color palette switcher, the UX basically consisted of scroll-driven animations. Most of the 3D scenes’ interactions were rudimentary. I needed an idea.
The design already had this video game cheerful look. So… What if I turned my whole portfolio into a game?
Once again, I started writing down my ideas:
- The user would need to interact with the different UI elements to unlock the theme switcher and color palette generator buttons.
- Each WebGPU scene could serve as a way to unlock the following content, acting as a very basic “puzzle” game.
- Keep track of the user’s overall progress.
- Allow the user to skip the whole game process if they want to.
This means most users would never make it to the footer, or use the random palette generator tool I’d struggled to implement. This might very well be the riskiest, stupidest decision I’ve made so far. But it would give my portfolio the unique and fun touch I was looking for in the first place, so I went all in.
Of course, it goes without saying that this implied a major refactoring of the whole codebase, and I needed to come up with original interaction ideas for the WebGPU scenes, but I like to think it was worth it.


2. Technical study
Now that you know all the whys, let’s have a look at the hows!
Tech stack
I decided to try Sanity Studio as I’d never worked with it before, and since I knew this would be a relatively small project, it felt like a perfect fit to start using it. Even though I felt like I’d only scratched its surface, I liked the overall developer experience it provided. On the other hand, I already had good experience working with Nuxt 3, so that was an easy choice.
No need to mention why I chose GSAP and Lenis — everyone knows those are great tools to deliver smooth animated websites.
Of course, the WebGPU scenes had to be done with gpu-curtains, the 3D engine I spent so much time working on these past two years. It was a great way to test it in a real-life scenario and gave me the opportunity to fix a few bugs or add a couple features along the way.
And since I wanted the whole process to be as transparent as possible, I’ve published the whole source code as a monorepo on GitHub.
Animations
I won’t go too deep into how I handled the various animations, simply because I essentially used CSS and a bit of GSAP here and there, mostly for canvas animations, SplitText effects or the videos carousel using ScrollTrigger’s observer.
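For instance, a typical SplitText heading animation boils down to something like this (a simplified sketch, not the exact code used on the site):
import gsap from "gsap";
import { SplitText } from "gsap/SplitText";

gsap.registerPlugin(SplitText);

// split a heading into characters and stagger them in
const split = new SplitText(".heading", { type: "chars" });

gsap.from(split.chars, {
  yPercent: 100,
  opacity: 0,
  stagger: 0.02,
  duration: 0.6,
  ease: "back.out(1.7)",
});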
The basic scenes
There are a lot of components on the website that needed to draw something onto a <canvas> and react to the theme and/or color palette changes.
To handle that, I created a Scene.ts class:
import type {
  ColorPalette,
  ColorModelBase,
} from "@martinlaxenaire/color-palette-generator";

export interface SceneParams {
  container: HTMLElement;
  progress?: number;
  palette?: ColorPalette;
  colors?: ColorModelBase[];
}

export class Scene {
  #progress: number;
  container: HTMLElement;
  colors: ColorModelBase[];
  isVisible: boolean;

  constructor({ container, progress = 0, colors = [] }: SceneParams) {
    this.container = container;
    this.colors = colors;
    this.#progress = progress;
    this.isVisible = true;
  }

  onResize() {}
  onRender() {}

  setSceneVisibility(isVisible: boolean = true) {
    this.isVisible = isVisible;
  }

  setColors(colors: ColorModelBase[]) {
    this.colors = colors;
  }

  get progress(): number {
    return this.#progress;
  }

  set progress(value: number) {
    this.#progress = isNaN(value) ? 0 : value;
    this.onProgress();
  }

  forceProgressUpdate(progress: number = 0) {
    this.progress = progress;
  }

  lerp(start = 0, end = 1, amount = 0.1) {
    return (1 - amount) * start + amount * end;
  }

  onProgress() {}

  destroy() {}
}
Since switching the theme from light to dark (or vice versa) also updates the color palette by tweaking the HSV value component of the colors a bit, I just put a setColors() method in there to handle these changes.
The progress handling here is actually a remnant of when the WebGPU scene animations were mostly scroll-driven (before I introduced the game mechanics), but since a few scenes still used it, I kept it in there.
All the 2D canvas scenes extend that class, including the WebGPU fallback scenes, the theme switcher button and the dynamic favicon generator (did you notice that?).
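To make this a bit more concrete, here is what a minimal scene extending it could look like (a simplified, hypothetical example rather than actual code from the repo; it assumes the color model exposes a hex value):
import { Scene } from "./Scene";
import type { SceneParams } from "./Scene";

// hypothetical 2D canvas scene extending the base Scene class
export class DotsScene extends Scene {
  canvas: HTMLCanvasElement;
  ctx: CanvasRenderingContext2D | null;

  constructor({ container, progress = 0, colors = [] }: SceneParams) {
    super({ container, progress, colors });

    this.canvas = document.createElement("canvas");
    this.container.appendChild(this.canvas);
    this.ctx = this.canvas.getContext("2d");

    this.onResize();
  }

  override onResize() {
    this.canvas.width = this.container.clientWidth;
    this.canvas.height = this.container.clientHeight;
  }

  override onRender() {
    if (!this.isVisible || !this.ctx) return;

    // draw one dot per palette color
    this.ctx.clearRect(0, 0, this.canvas.width, this.canvas.height);

    this.colors.forEach((color, i) => {
      this.ctx!.fillStyle = color.hex; // assumes the color model exposes a hex string
      this.ctx!.beginPath();
      this.ctx!.arc(40 + i * 40, this.canvas.height * 0.5, 10, 0, Math.PI * 2);
      this.ctx!.fill();
    });
  }

  override destroy() {
    super.destroy();
    this.canvas.remove();
  }
}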
The WebGPU scenes
One of the very cool features introduced by WebGPU is that you can render to multiple <canvas> elements using only one WebGPU device. I used this to build 4 different scenes (we’ll take a closer look at each of them below), which all extend a WebGPUScene.ts class:
import { GPUCurtains } from "gpu-curtains";
import type { ComputeMaterial, RenderMaterial } from "gpu-curtains";
import { Scene } from "./Scene";
import type { SceneParams } from "./Scene";
import {
  QualityManager,
  type QualityManagerParams,
} from "./utils/QualityManager";

export interface WebGPUSceneParams extends SceneParams {
  gpuCurtains: GPUCurtains;
  targetFPS?: QualityManagerParams["targetFPS"];
}

export class WebGPUScene extends Scene {
  gpuCurtains: GPUCurtains;
  qualityManager: QualityManager;
  quality: number;
  _onVisibilityChangeHandler: () => void;

  constructor({
    gpuCurtains,
    container,
    progress = 0,
    colors = [],
    targetFPS = 55,
  }: WebGPUSceneParams) {
    super({ container, progress, colors });

    this.gpuCurtains = gpuCurtains;

    this._onVisibilityChangeHandler =
      this.onDocumentVisibilityChange.bind(this);

    this.qualityManager = new QualityManager({
      label: `${this.constructor.name} quality manager`,
      updateDelay: 2000,
      targetFPS,
      onQualityChange: (newQuality) => this.onQualityChange(newQuality),
    });

    this.quality = this.qualityManager.quality.current;

    document.addEventListener(
      "visibilitychange",
      this._onVisibilityChangeHandler
    );
  }

  override setSceneVisibility(isVisible: boolean = true) {
    super.setSceneVisibility(isVisible);
    this.qualityManager.active = isVisible;
  }

  onDocumentVisibilityChange() {
    this.qualityManager.active = this.isVisible && !document.hidden;
  }

  compileMaterialOnIdle(material: ComputeMaterial | RenderMaterial) {
    if (!this.isVisible && "requestIdleCallback" in window) {
      window.requestIdleCallback(() => {
        material.compileMaterial();
      });
    }
  }

  override onRender(): void {
    super.onRender();
    this.qualityManager.update();
  }

  onQualityChange(newQuality: number) {
    this.quality = newQuality;
  }

  override destroy(): void {
    super.destroy();
    document.removeEventListener(
      "visibilitychange",
      this._onVisibilityChangeHandler
    );
  }
}
In the real version, this class also handles the creation of a Tweakpane GUI folder (useful for debugging or tweaking values), but for the sake of clarity I removed the related code here.
As you can see, each of these scenes closely monitors its own performance using a custom QualityManager class. We’ll talk about that later, in the performance section.
Okay, now that we have the basic architecture in mind, let’s break down each of the WebGPU scenes!
Since WebGPU is not fully supported yet, I’ve created fallback versions using the 2D canvas API and the Scene class we’ve seen above for each of the following scenes.
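As an aside, if you wonder what this “one device, multiple canvases” setup looks like in raw WebGPU, here is a minimal sketch (gpu-curtains handles all of this internally, so this is purely illustrative):
// one GPUDevice driving several <canvas> elements
const adapter = await navigator.gpu?.requestAdapter();
const device = await adapter?.requestDevice();
if (!device) throw new Error("WebGPU is not supported");

const format = navigator.gpu.getPreferredCanvasFormat();

const contexts = [...document.querySelectorAll("canvas")].map((canvas) => {
  const context = canvas.getContext("webgpu") as GPUCanvasContext;
  // every canvas is configured with the same device
  context.configure({ device, format, alphaMode: "premultiplied" });
  return context;
});

// in the render loop, each scene encodes its own render pass
// targeting its canvas's current texture
const encoder = device.createCommandEncoder();

contexts.forEach((context) => {
  const pass = encoder.beginRenderPass({
    colorAttachments: [
      {
        view: context.getCurrentTexture().createView(),
        clearValue: { r: 0, g: 0, b: 0, a: 0 },
        loadOp: "clear",
        storeOp: "store",
      },
    ],
  });
  // ...draw calls for that specific scene would go here
  pass.end();
});

device.queue.submit([encoder.finish()]);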
Hero scene
The scenes featured in the portfolio roughly follow an order of increasing complexity: the further you advance through the portfolio, the more technically involved the scenes become.
In that way, the hero scene is by far the simplest technically speaking, but it had to look particularly striking and engaging to immediately capture the user’s attention. I thought of it as some sort of mobile puzzle game splash screen.
It’s made of a single, basic fullscreen quad. The idea here is to first rotate its UV components each frame, map them to polar coordinates and use that to create colored triangle segments.
// Center UVs at (0.5, 0.5)
var centeredUV = uv - vec2f(0.5);
// Apply rotation using a 2D rotation matrix
let angleOffset = params.time * params.speed; // Rotation angle in radians
let cosA = cos(angleOffset);
let sinA = sin(angleOffset);
// Rotate the centered UVs
centeredUV = vec2<f32>(
  cosA * centeredUV.x - sinA * centeredUV.y,
  sinA * centeredUV.x + cosA * centeredUV.y
);
// Convert to polar coordinates
let angle = atan2(centeredUV.y, centeredUV.x); // Angle in radians
let radius = length(centeredUV);
// Map angle to triangle index
let totalSegments = params.numTriangles * f32(params.nbColors) * params.fillColorRatio;
let normalizedAngle = (angle + PI) / (2.0 * PI); // Normalize to [0,1]
let triIndex = floor(normalizedAngle * totalSegments); // Get triangle index
// Compute fractional part for blending
let segmentFraction = fract(normalizedAngle * totalSegments); // Value in [0,1] within segment
let isEmpty = (i32(triIndex) % i32(params.fillColorRatio)) == i32(params.fillColorRatio - 1.0);
let colorIndex = i32(triIndex / params.fillColorRatio) % params.nbColors; // Use half as many color indices
let color = select(vec4(params.colors[colorIndex], 1.0), vec4f(0.0), isEmpty);
There’s actually a wavy noise applied to the UV beforehand using concentric circles, but you get the idea.
Interestingly enough, the most difficult part was achieving the rounded rectangle entering animation while preserving the correct aspect ratio. This was done using this function:
fn roundedRectSDF(uv: vec2f, resolution: vec2f, radiusPx: f32) -> f32 {
  let aspect = resolution.x / resolution.y;

  // Convert pixel values to normalized UV space
  let marginUV = vec2f(radiusPx) / resolution;
  let radiusUV = vec2f(radiusPx) / resolution;

  // Adjust radius X for aspect ratio
  let radius = vec2f(radiusUV.x * aspect, radiusUV.y);

  // Center UV around (0,0) and apply scale (progress)
  var p = uv * 2.0 - 1.0; // [0,1] → [-1,1]
  p.x *= aspect; // fix aspect
  p /= max(0.0001, params.showProgress); // apply scaling
  p = abs(p);

  // Half size of the rounded rect
  let halfSize = vec2f(1.0) - marginUV * 2.0 - radiusUV * 2.0;
  let halfSizeScaled = vec2f(halfSize.x * aspect, halfSize.y);

  let d = p - halfSizeScaled;
  let outside = max(d, vec2f(0.0));
  let dist = length(outside) + min(max(d.x, d.y), 0.0) - radius.x * 2.0;

  return dist;
}
Highlighted videos slider scene
Next up is the highlighted videos slider. The original idea came from an old WebGL prototype I had built a few years ago and never used.
The idea is to displace the plane’s vertices to wrap them around a cylinder.
var position: vec3f = attributes.position;
// curve
let angle: f32 = 1.0 / curve.nbItems;
let cosAngle = cos(position.x * PI * angle);
let sinAngle = sin(position.x * PI * angle);
position.z = cosAngle * curve.itemWidth;
position.x = sinAngle;
I obviously used this for the year titles, whereas the videos and the trail effect behind them are distorted using a post-processing pass.
While this was originally tied to the vertical scroll values (and I really liked the feel it produced), I had to update its behavior when I switched to the whole gamification idea, turning it into a horizontal carousel.
Thanks to gpu-curtains’ DOM to WebGPU syncing capabilities, it was relatively easy to set up the videos grid prototype using the Plane class.
The trail effect is done using a compute shader writing to a storage texture. The compute shader only runs when necessary, i.e. when the slider is moving. I’m sure it could have been done in a thousand different ways, but it was a good excuse to play with compute shaders and storage textures. Here’s the compute shader involved:
struct Rectangles {
  sizes: vec2f,
  positions: vec2f,
  colors: vec4f
};

struct Params {
  progress: f32,
  intensity: f32
};

@group(0) @binding(0) var backgroundStorageTexture: texture_storage_2d<rgba8unorm, write>;
@group(1) @binding(0) var<uniform> params: Params;
@group(1) @binding(1) var<storage, read> rectangles: array<Rectangles>;

fn sdfRectangle(center: vec2f, size: vec2f) -> f32 {
  let dxy = abs(center) - size;
  return length(max(dxy, vec2(0.0))) + max(min(dxy.x, 0.0), min(dxy.y, 0.0));
}

@compute @workgroup_size(16, 16) fn main(
  @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
  let bgTextureDimensions = vec2f(textureDimensions(backgroundStorageTexture));

  if(f32(GlobalInvocationID.x) <= bgTextureDimensions.x && f32(GlobalInvocationID.y) <= bgTextureDimensions.y) {
    let uv = vec2f(f32(GlobalInvocationID.x) / bgTextureDimensions.x - params.progress,
      f32(GlobalInvocationID.y) / bgTextureDimensions.y);

    var color = vec4f(0.0, 0.0, 0.0, 0.0); // Default to black

    let nbRectangles: u32 = arrayLength(&rectangles);

    for (var i: u32 = 0; i < nbRectangles; i++) {
      let rectangle = rectangles[i];
      let rectDist = sdfRectangle(uv - rectangle.positions, vec2(rectangle.sizes.x * params.intensity, rectangle.sizes.y));
      color = select(color, rectangle.colors * params.intensity, rectDist < 0.0);
    }

    textureStore(backgroundStorageTexture, vec2<i32>(GlobalInvocationID.xy), color);
  }
}
I thought I was done here, but while running production build tests I stumbled upon an issue. Unfortunately, preloading all those videos to use as WebGPU textures resulted in a huge initial payload and also significantly increased the CPU load. To mitigate that, I implemented sequential video preloading, waiting for each video to have enough data before loading the next one. This was a huge improvement in terms of both initial load time and CPU overhead.
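Stripped down, the sequential preloading logic looks something like this (a simplified sketch relying on the standard HTMLMediaElement events, not the exact code from the repo):
// load the videos one after the other instead of all at once
const preloadVideo = (video: HTMLVideoElement): Promise<HTMLVideoElement> => {
  return new Promise((resolve) => {
    // HAVE_ENOUGH_DATA: the browser thinks it can play through without stalling
    if (video.readyState >= HTMLMediaElement.HAVE_ENOUGH_DATA) {
      resolve(video);
      return;
    }

    video.addEventListener("canplaythrough", () => resolve(video), { once: true });
    video.load();
  });
};

const preloadVideosSequentially = async (videos: HTMLVideoElement[]) => {
  for (const video of videos) {
    // wait for the current video to have enough data before starting the next one
    await preloadVideo(video);
    // at this point, the video can safely be used as a WebGPU texture source
  }
};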

Invoices scene
The third WebGPU scene was initially supposed to be my own take on 3D boids simulation, using instancing and a compute shader. After a bit of work, I had a bunch of instances following my mouse, but the end result was not living up to my expectations. The spheres were sometimes overlapping each other, or disappearing behind the edges of the screen. I kept improving it, adding self-collision, edge detection and attraction/repulsion mechanisms until I was happy enough with the result.
I like to call it the “invoices” scene, because the sphere instances actually represent all the invoices I’ve issued during my freelance career, scaled based on their amounts. Since I use Google Sheets to handle most of my accounting, I made a little script that gathers all my invoice amounts in a single, separate private sheet each time I update my accounting sheets. I then fetch and parse that sheet to create the instances. It was a fun little side exercise, and it turns this scene into an ironically meaningful experiment: each time you click and hold, you kind of help me collect my money.
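To give an idea, fetching and parsing that summary sheet could look roughly like this (a sketch: the published CSV URL and the column layout are placeholders, not my actual setup):
// hypothetical endpoint: a Google Sheet published as CSV, one invoice amount per row
const INVOICES_CSV_URL =
  "https://docs.google.com/spreadsheets/d/<sheet-id>/export?format=csv";

const fetchInvoiceAmounts = async (): Promise<number[]> => {
  const response = await fetch(INVOICES_CSV_URL);
  const csv = await response.text();

  return csv
    .split("\n")
    .map((row) => parseFloat(row.split(",")[0])) // assumes the amount sits in the first column
    .filter((amount) => !isNaN(amount));
};

// each amount then drives the scale of one sphere instance
const amounts = await fetchInvoiceAmounts();
const maxAmount = Math.max(...amounts);
const instanceScales = amounts.map((amount) => 0.1 + 0.9 * (amount / maxAmount));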
The compute shader uses a buffer ping-pong technique: you start with two identically filled buffers (e.g. packed raw data) then at each compute dispatch call, you read the data from the first buffer and update the second one accordingly. Once done, you swap the two buffers before the next call and repeat the process.
If you’re familiar with WebGL, this is often done with textures. WebGPU and compute shaders allow us to do so with buffers, which is way more powerful. Here is the complete compute shader code:
struct ParticleB {
  position: vec4f,
  velocity: vec4f,
  rotation: vec4f,
  angularVelocity: vec4f,
  data: vec4f
};

struct ParticleA {
  position: vec4f,
  velocity: vec4f,
  rotation: vec4f,
  angularVelocity: vec4f,
  data: vec4f
};

struct SimParams {
  deltaT: f32,
  mousePosition: vec3f,
  mouseAttraction: f32,
  spheresRepulsion: f32,
  boxReboundFactor: f32,
  boxPlanes: array<vec4f, 6>
};

@group(0) @binding(0) var<uniform> params: SimParams;
@group(0) @binding(1) var<storage, read> particlesA: array<ParticleA>;
@group(0) @binding(2) var<storage, read_write> particlesB: array<ParticleB>;

fn constrainToFrustum(pos: vec3<f32>, ptr_velocity: ptr<function, vec3<f32>>, radius: f32) -> vec3<f32> {
  var correctedPos = pos;

  for (var i = 0u; i < 6u; i++) { // Loop through 6 frustum planes
    let plane = params.boxPlanes[i];
    let dist = dot(plane.xyz, correctedPos) + plane.w;

    if (dist < radius) { // If inside the plane boundary (radius = 1)
      // Move the point inside the frustum
      let correction = plane.xyz * (-dist + radius); // Push inside the frustum

      // Apply the position correction
      correctedPos += correction;

      // Reflect velocity with damping
      let normal = plane.xyz;
      let velocityAlongNormal = dot(*(ptr_velocity), normal);

      if (velocityAlongNormal < 0.0) { // Ensure we only reflect if moving towards the plane
        *(ptr_velocity) -= (1.0 + params.boxReboundFactor) * velocityAlongNormal * normal;
      }
    }
  }

  return correctedPos;
}

fn quaternionFromAngularVelocity(omega: vec3f, dt: f32) -> vec4f {
  let theta = length(omega) * dt;
  if (theta < 1e-5) {
    return vec4(0.0, 0.0, 0.0, 1.0);
  }
  let axis = normalize(omega);
  let halfTheta = 0.5 * theta;
  let sinHalf = sin(halfTheta);
  return vec4(axis * sinHalf, cos(halfTheta));
}

fn quaternionMul(a: vec4f, b: vec4f) -> vec4f {
  return vec4(
    a.w * b.xyz + b.w * a.xyz + cross(a.xyz, b.xyz),
    a.w * b.w - dot(a.xyz, b.xyz)
  );
}

fn integrateQuaternion(q: vec4f, angularVel: vec3f, dt: f32) -> vec4f {
  let omega = vec4(angularVel, 0.0);
  let dq = 0.5 * quaternionMul(q, omega);
  return normalize(q + dq * dt);
}

@compute @workgroup_size(64) fn main(
  @builtin(global_invocation_id) GlobalInvocationID: vec3<u32>
) {
  var index = GlobalInvocationID.x;

  var vPos = particlesA[index].position.xyz;
  var vVel = particlesA[index].velocity.xyz;
  var collision = particlesA[index].velocity.w;
  var vQuat = particlesA[index].rotation;
  var angularVelocity = particlesA[index].angularVelocity.xyz;
  var vData = particlesA[index].data;

  let sphereRadius = vData.x;
  var newCollision = vData.y;

  collision += (newCollision - collision) * 0.2;
  collision = smoothstep(0.0, 1.0, collision);
  newCollision = max(0.0, newCollision - 0.0325);

  let mousePosition: vec3f = params.mousePosition;
  let minDistance: f32 = sphereRadius; // Minimum allowed distance between spheres

  // Compute attraction towards sphere 0
  var directionToCenter = mousePosition - vPos;
  let distanceToCenter = length(directionToCenter);

  // Slow down when close to the attractor
  var dampingFactor = smoothstep(0.0, minDistance, distanceToCenter);

  if (distanceToCenter > minDistance && params.mouseAttraction > 0.0) { // Only attract if outside the minimum distance
    vVel += normalize(directionToCenter) * params.mouseAttraction * dampingFactor;
    vVel *= 0.95;
  }

  // Collision Handling: Packing spheres instead of pushing them away
  var particlesArrayLength = arrayLength(&particlesA);

  for (var i = 0u; i < particlesArrayLength; i++) {
    if (i == index) {
      continue;
    }

    let otherPos = particlesA[i].position.xyz;
    let otherRadius = particlesA[i].data.x;
    let collisionMinDist = sphereRadius + otherRadius;

    let toOther = otherPos - vPos;
    let dist = length(toOther);

    if (dist < collisionMinDist) {
      let pushDir = normalize(toOther);
      let overlap = collisionMinDist - dist;
      let pushStrength = otherRadius / sphereRadius; // radius

      // Push away proportionally to overlap
      vVel -= pushDir * (overlap * params.spheresRepulsion) * pushStrength;

      newCollision = min(1.0, pushStrength * 1.5);

      let r = normalize(cross(pushDir, vVel));
      angularVelocity += r * length(vVel) * 0.1 * pushStrength;
    }
  }

  let projectedVelocity = dot(vVel, directionToCenter); // Velocity component towards mouse
  let mainSphereRadius = 1.0;

  if(distanceToCenter <= (mainSphereRadius + minDistance)) {
    let pushDir = normalize(directionToCenter);
    let overlap = (mainSphereRadius + minDistance) - distanceToCenter;

    // Push away proportionally to overlap
    vVel -= pushDir * (overlap * params.spheresRepulsion) * (2.0 + params.mouseAttraction);
    newCollision = 1.0;

    if(params.mouseAttraction > 0.0) {
      vPos -= pushDir * overlap;
    }

    let r = normalize(cross(pushDir, vVel));
    angularVelocity += r * length(vVel) * 0.05;
  }

  vPos = constrainToFrustum(vPos, &vVel, sphereRadius);

  // Apply velocity update
  vPos += vVel * params.deltaT;

  angularVelocity *= 0.98;
  let updatedQuat = integrateQuaternion(vQuat, angularVelocity, params.deltaT);

  // Write back
  particlesB[index].position = vec4(vPos, 0.0);
  particlesB[index].velocity = vec4(vVel, collision);
  particlesB[index].data = vec4(vData.x, newCollision, vData.z, vData.w);
  particlesB[index].rotation = updatedQuat;
  particlesB[index].angularVelocity = vec4(angularVelocity, 1.0);
}
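For completeness, here is roughly what the ping-pong swap looks like on the JavaScript side with the raw WebGPU API (gpu-curtains abstracts this away; the compute pipeline, uniform buffer and buffer sizes below are assumed to already exist):
// two storage buffers holding the same initial particle data
const particleBuffers = [0, 1].map(() =>
  device.createBuffer({
    size: particlesByteLength,
    usage: GPUBufferUsage.STORAGE | GPUBufferUsage.COPY_DST,
  })
);

// two bind groups: one reads A and writes B, the other reads B and writes A
const bindGroups = [0, 1].map((i) =>
  device.createBindGroup({
    layout: computePipeline.getBindGroupLayout(0),
    entries: [
      { binding: 0, resource: { buffer: uniformBuffer } },
      { binding: 1, resource: { buffer: particleBuffers[i] } }, // read
      { binding: 2, resource: { buffer: particleBuffers[(i + 1) % 2] } }, // read_write
    ],
  })
);

let pingPong = 0;

const computeParticles = (encoder: GPUCommandEncoder, particlesCount: number) => {
  const pass = encoder.beginComputePass();
  pass.setPipeline(computePipeline);
  pass.setBindGroup(0, bindGroups[pingPong]);
  pass.dispatchWorkgroups(Math.ceil(particlesCount / 64));
  pass.end();

  // swap the read and write buffers for the next dispatch
  pingPong = (pingPong + 1) % 2;
};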
One of my main inspirations for this scene was this awesome demo by Patrick Schroen. I spent a lot of time looking for the right rendering tricks to use and finally settled on volumetric lighting. The implementation is quite similar to what Maxime Heckel explained in this excellent breakdown article. Funnily enough, I was already deep into my own implementation when he released that piece, and I owe him the idea of using a blue noise texture.
As a side note, during the development phase this was the first scene that required an actual user interaction, and it played a pivotal role in my decision to turn my portfolio into a game.
Open source scene
For the last scene, I wanted to experiment a bit more with particles and curl noise because I’ve always liked how organic and beautiful it can get. I had already published an article using these concepts, so I had to come up with something different. Jaume Sanchez’ Polygon Shredder definitely was a major inspiration here.
Since this experiment was part of my open source commitment section, I had the idea to use my GitHub statistics as a data source for the particles. Each statistic (number of commits, followers, issues closed and so on) is assigned to a color and turned into a bunch of particles. You can even toggle them on and off using the filters in the information pop-up. Once again, this changed a rather technical demo into something more meaningful.
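As an illustration, part of those statistics can be fetched directly from the GitHub REST API (a simplified sketch; the stats actually displayed on the site require a few more endpoints, and the colors come from the current palette rather than hardcoded values):
interface ParticlesStat {
  label: string;
  value: number;
  color: string; // one palette color per statistic
}

const fetchGitHubStats = async (username: string): Promise<ParticlesStat[]> => {
  const response = await fetch(`https://api.github.com/users/${username}`);
  const user = await response.json();

  return [
    { label: "Followers", value: user.followers, color: "#ff595e" },
    { label: "Public repos", value: user.public_repos, color: "#1982c4" },
  ];
};

// each stat is then expanded into a group of particles of the matching color
const stats = await fetchGitHubStats("martinlaxenaire");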
While working on the portfolio, I was also exploring new rendering techniques with gpu-curtains, such as planar reflections. Traditionally used for mirror effects or floor reflections, the technique consists of rendering part of your scene a second time from a different camera angle and projecting it onto a plane. Having nailed this, I thought it would be a perfect match here and added it to the scene.
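Conceptually, it boils down to mirroring the camera across the reflection plane, rendering the scene a second time from that mirrored point of view into a texture, and then sampling that texture on the plane. Here is a rough, library-agnostic sketch of the mirroring math (the plane and camera values are placeholders):
// reflect a point across a plane defined by a point on the plane and its (normalized) normal
const reflectAcrossPlane = (
  point: [number, number, number],
  planePoint: [number, number, number],
  planeNormal: [number, number, number]
): [number, number, number] => {
  // signed distance from the point to the plane
  const dist =
    (point[0] - planePoint[0]) * planeNormal[0] +
    (point[1] - planePoint[1]) * planeNormal[1] +
    (point[2] - planePoint[2]) * planeNormal[2];

  // mirror the point across the plane
  return [
    point[0] - 2 * dist * planeNormal[0],
    point[1] - 2 * dist * planeNormal[1],
    point[2] - 2 * dist * planeNormal[2],
  ];
};

// placeholder values: a floor plane facing up and an arbitrary camera position
const planeOrigin: [number, number, number] = [0, 0, 0];
const planeNormal: [number, number, number] = [0, 1, 0];
const cameraPosition: [number, number, number] = [0, 2, 5];

// the mirrored camera renders the scene into a texture
// that the reflective plane then samples in screen space
const reflectedCameraPosition = reflectAcrossPlane(cameraPosition, planeOrigin, planeNormal);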
Last but not least, and as a reminder of the retro video games vibe, I wanted to add a pixelated mouse trail post-processing effect. I soon realized it would be way too much though, and ended up showing it only when the user is actually drawing a line, making it more subtle.

Performance and accessibility
On such highly interactive and immersive pages, performance is key. Here are a few tricks I’ve used to try to maintain the most fluid experience across all devices.
Dynamic imports
I’ve used Nuxt dynamically imported components and lazy hydration for almost every non-critical component of the page. In the same way, all the WebGPU scenes are dynamically loaded only if WebGPU is supported. This significantly decreased the initial page load time.
// pseudo code
import type { WebGPUHeroScene } from "~/scenes/hero/WebGPUHeroScene";
import { CanvasHeroScene } from "~/scenes/hero/CanvasHeroScene";

let scene: WebGPUHeroScene | CanvasHeroScene | null;

const canvas = useTemplateRef("canvas");
const { colors } = usePaletteGenerator();

onMounted(async () => {
  const { $gpuCurtains, $hasWebGPU, $isReducedMotion } = useNuxtApp();

  if ($hasWebGPU && canvas.value) {
    const { WebGPUHeroScene } = await import("~/scenes/hero/WebGPUHeroScene");

    scene = new WebGPUHeroScene({
      gpuCurtains: $gpuCurtains,
      container: canvas.value,
      colors: colors.value,
    });
  } else if (canvas.value) {
    scene = new CanvasHeroScene({
      container: canvas.value,
      isReducedMotion: $isReducedMotion,
      colors: colors.value,
    });
  }
});
I’m not particularly fond of Lighthouse reports, but as you can see the test result is quite good (note that it’s running without WebGPU, though).

Monitoring WebGPU performance in real time
I’ve briefly mentioned it earlier, but each WebGPU scene actually monitors its own performance by keeping track of its FPS rate in real time. To do so, I’ve written 2 separate classes: FPSWatcher, which records the average FPS over a given period of time, and QualityManager, which uses a FPSWatcher to set a current quality rating on a 0 to 10 scale based on the average FPS.
This is what they look like:
export interface FPSWatcherParams {
  updateDelay?: number;
  onWatch?: (averageFPS: number) => void;
}

export default class FPSWatcher {
  updateDelay: number;
  onWatch: (averageFPS: number) => void;
  frames: number[];
  lastTs: number;
  elapsedTime: number;
  average: number;

  constructor({
    updateDelay = 1000, // ms
    onWatch = () => {}, // callback called every ${updateDelay}ms
  }: FPSWatcherParams = {}) {
    this.updateDelay = updateDelay;
    this.onWatch = onWatch;

    this.frames = [];

    this.lastTs = performance.now();
    this.elapsedTime = 0;

    this.average = 0;
  }

  restart() {
    this.frames = [];
    this.elapsedTime = 0;
    this.lastTs = performance.now();
  }

  update() {
    const delta = performance.now() - this.lastTs;
    this.lastTs = performance.now();
    this.elapsedTime += delta;

    this.frames.push(delta);

    if (this.elapsedTime > this.updateDelay) {
      const framesTotal = this.frames.reduce((a, b) => a + b, 0);

      this.average = (this.frames.length * 1000) / framesTotal;

      this.frames = [];
      this.elapsedTime = 0;

      this.onWatch(this.average);
    }
  }
}
It’s very basic: I just record the elapsed time between two render calls, put that into an array and run a callback every updateDelay milliseconds with the latest FPS average value.
It is then used by the QualityManager class, that does all the heavy lifting to assign an accurate current quality score:
import type { FPSWatcherParams } from "./FPSWatcher";
import FPSWatcher from "./FPSWatcher";

export interface QualityManagerParams {
  label?: string;
  updateDelay?: FPSWatcherParams["updateDelay"];
  targetFPS?: number;
  onQualityChange?: (newQuality: number) => void;
}

export class QualityManager {
  label: string;
  fpsWatcher: FPSWatcher;
  targetFPS: number;
  #lastFPS: number | null;
  #active: boolean;

  onQualityChange: (newQuality: number) => void;

  quality: {
    current: number;
    min: number;
    max: number;
  };

  constructor({
    label = "Quality manager",
    updateDelay = 1000,
    targetFPS = 60,
    onQualityChange = (newQuality) => {},
  }: QualityManagerParams = {}) {
    this.label = label;
    this.onQualityChange = onQualityChange;

    this.quality = {
      min: 0,
      max: 10,
      current: 7,
    };

    this.#active = true;

    this.targetFPS = targetFPS;
    this.#lastFPS = null;

    this.fpsWatcher = new FPSWatcher({
      updateDelay,
      onWatch: (averageFPS) => this.onFPSWatcherUpdate(averageFPS),
    });
  }

  get active() {
    return this.#active;
  }

  set active(value: boolean) {
    if (!this.active && value) {
      this.fpsWatcher.restart();
    }

    this.#active = value;
  }

  onFPSWatcherUpdate(averageFPS = 0) {
    const lastFpsRatio = this.#lastFPS
      ? Math.round(averageFPS / this.#lastFPS)
      : 1;
    const fpsRatio = (averageFPS + lastFpsRatio) / this.targetFPS;

    // if fps ratio is over 0.95, we should increase
    // else we decrease
    const boostedFpsRatio = fpsRatio / 0.95;

    // smooth change multiplier avoid huge changes in quality
    // except if we've seen a big change from last FPS values
    const smoothChangeMultiplier = 0.5 * lastFpsRatio;

    // quality difference that should be applied (number with 2 decimals)
    const qualityDiff =
      Math.round((boostedFpsRatio - 1) * 100) * 0.1 * smoothChangeMultiplier;

    if (Math.abs(qualityDiff) > 0.25) {
      const newQuality = Math.min(
        Math.max(
          this.quality.current + Math.round(qualityDiff),
          this.quality.min
        ),
        this.quality.max
      );

      this.setCurrentQuality(newQuality);
    }

    this.#lastFPS = averageFPS;
  }

  setCurrentQuality(newQuality: number) {
    this.quality.current = newQuality;
    this.onQualityChange(this.quality.current);
  }

  update() {
    if (this.active) {
      this.fpsWatcher.update();
    }
  }
}
The most difficult part here is handling the quality changes smoothly, to avoid huge drops or gains in quality. You also don’t want to fall into a loop where, for example:
- The average FPS is poor, so you degrade your current quality.
- You detect a quality loss and therefore decide to switch off an important feature, such as shadow mapping.
- Removing the shadow mapping gives you an FPS boost, and after the expected delay the current quality is upgraded.
- You detect a quality gain, decide to re-enable shadow mapping and soon enough, you’re back to step 1.
Typically, the quality rating is used to update things such as the current pixel ratio of the scene, frame buffer resolutions, the number of shadow map PCF samples, volumetric raymarching steps and so on. In worst-case scenarios, it can even disable shadow mapping or post-processing effects entirely.
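To make this more concrete, here is a simplified sketch of how a scene could map the 0 to 10 score to actual settings. The settings object and thresholds are made up for the example; using two different thresholds to disable and re-enable a heavy feature is one common way to avoid the loop described above:
// hypothetical settings object, not the actual scene implementation
const settings = {
  pixelRatio: 1,
  raymarchingSteps: 40,
  useShadowMapping: true,
};

const applyQuality = (newQuality: number) => {
  // continuous settings simply scale with the quality score
  settings.pixelRatio = Math.min(window.devicePixelRatio, 0.5 + newQuality * 0.1);
  settings.raymarchingSteps = Math.round(16 + newQuality * 4);

  // binary features use two different thresholds so a small oscillation
  // around a single value cannot keep toggling them on and off
  if (settings.useShadowMapping && newQuality <= 3) {
    settings.useShadowMapping = false;
  } else if (!settings.useShadowMapping && newQuality >= 6) {
    settings.useShadowMapping = true;
  }
};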
Accessibility
Finally, the site had to respect at least the basic accessibility standards. I’m not an accessibility expert and I may have made a few mistakes here and there, but the key points are that the HTML is semantically correct, it is possible to navigate using the keyboard, and the prefers-reduced-motion preference is respected. I achieved the latter by disabling the gamification concept entirely for these users, removing every CSS and JavaScript animation, and making the scenes fall back to their 2D canvas versions, without any animation at all.
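For reference, the $isReducedMotion flag used in the component example above presumably comes from a small Nuxt plugin; a sketch of it based on the standard matchMedia check (not the actual plugin code) could look like this:
// plugins/reduced-motion.client.ts (sketch)
export default defineNuxtPlugin(() => {
  const isReducedMotion = window.matchMedia(
    "(prefers-reduced-motion: reduce)"
  ).matches;

  return {
    provide: {
      isReducedMotion,
    },
  };
});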
Conclusion
Well, it was a long journey, wasn’t it?
Working on my portfolio these past 6 months has been a truly demanding task, both technically and emotionally. I still have a lot of self-doubt about the overall design, the key UX choices or the level of creativity. But I also honestly think it kind of sums up who I am, as a developer but also as a person. In the end, that’s probably what matters most.
I hope you’ve learned a few things reading this case study, whether about the technical stuff or my own creative process. Thank you all, and remember: stay fun!