Tue, 6 Jan 2026

Designing my portfolio website to load fast

Web development Portfolio

I hadn't changed the design of my portfolio website in a few years, so I decided to redesign it. My main goal was to reduce jank and overall page load time, since the kind of clients I attract would most likely be put off by a slow site.

Backstory

I had used my portfolio to offer my services, and while it mostly did its job of showing my work, I felt the need to change it - going against the ethos of "if it ain't broke, don't fix it". One core thing I wanted from the new design was a good load time, because I realised a good number of previous clients had listed this as a non-functional requirement. So if a lead landed on my site, I wanted the site itself to showcase it.

Research

The resources I found on fast-loading websites narrowed my options down to the following:

1. Minimalism - the "less is more" route. Two websites which, in my view, do this perfectly:

  1. https://fdocpa.com/
  2. https://www.berkshirehathaway.com/

When you visit these sites, there isn't a point where you feel "OK, what should I do next?" - you just take the contact information and get in touch with the business, or click the link that makes the most sense at the moment. The downside, I'd say, is the lack of "pizzazz".

However, I saw this as extreme since it implies almost no CSS or JS - which, for the people I work with, wouldn't make sense from a branding standpoint.


2. The TCP slow start mechanism - I got wind of this from a Primeagen YouTube video and dived in further via a linked article on why your website should be under 14 kB.

I considered this the better option since it approaches the problem from a technical standpoint, which is where my interests lie.


From the literature, I figured this could be achieved by cutting down on JS and CSS - that is, reducing effects and any excessive "wow"-style animations.


Literature Review: How TCP Slow Start Works

When a browser connects to a server, it doesn’t know how much data the network path can handle. If the server sent a massive file all at once, it might overwhelm a weak router and cause packet loss. To prevent this, TCP uses a congestion control strategy.


The Step-by-Step Handshake

Before a single byte of the website's content (HTML) is sent, the client and server must agree to "talk". This happens in the following sequence:

  1. SYN - The client sends a synchronization request.
  2. SYN-ACK - The server acknowledges the request.
  3. ACK - The client acknowledges the server.

The total time elapsed in the sequence is: 1.5 Round Trip Times (RTT).


How the "14kb" magic size comes about

After the handshake, the server begins the Slow Start phase. It uses a variable called the Initial Congestion Window (initcwnd).

In modern systems, the default initcwnd is typically set to 10 segments. Since a standard TCP segment (MSS) is about 1,460 bytes, the math works out like this:

10 * 1460 = 14600

So the 10 segments total 14,600 bytes, which is roughly 14.6 kB.
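The arithmetic above can be spelled out in a few lines of JavaScript. The values are typical defaults, not guarantees: an initial window of 10 segments (per RFC 6928) and an MSS derived from a 1500-byte Ethernet MTU minus 40 bytes of IPv4 + TCP headers.

```javascript
// Typical defaults - real values vary by OS and network path.
const mtu = 1500;                 // common Ethernet MTU, in bytes
const ipTcpHeaders = 40;          // IPv4 header (20) + TCP header (20)
const mss = mtu - ipTcpHeaders;   // 1460 bytes of payload per segment
const initcwnd = 10;              // segments in the initial congestion window

const firstWindowBytes = initcwnd * mss;
console.log(firstWindowBytes);    // 14600 bytes ≈ 14.6 kB
```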


The Round-Trip Progression

If your webpage is larger than 14 KB, the browser has to wait for more "round trips" to get the rest of the data:

  1. Round Trip 1: The server sends ~14 KB. It then stops and waits for the client to say, "I got it!"
  2. Round Trip 2: Once the server gets that acknowledgement, it doubles the window and sends ~28 KB.
  3. Round Trip 3: The window doubles again to ~56 KB.
  4. And so on....
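The progression above can be sketched as a small helper that estimates how many round trips a transfer needs after the handshake. It's an idealised model (no packet loss, no delayed ACKs, the window always doubles), which is enough to see why crossing the first-window boundary costs a whole extra trip.

```javascript
// Idealised slow-start model: data delivered per round trip is
// ~14.6 kB, then ~29.2 kB, then ~58.4 kB, and so on.
function roundTripsFor(bytes, firstWindow = 10 * 1460) {
  let sent = 0;
  let window = firstWindow;
  let trips = 0;
  while (sent < bytes) {
    sent += window;   // this trip delivers the current window
    window *= 2;      // the window doubles for the next trip
    trips += 1;
  }
  return trips;
}

console.log(roundTripsFor(14000));  // 1 - fits in the first window
console.log(roundTripsFor(15000));  // 2 - one extra round trip
console.log(roundTripsFor(100000)); // 3
```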


Putting it in context

If critical HTML and CSS fit within that first 14 KB, the browser can begin rendering the page almost immediately after the first data packet arrives.

If your HTML is 15 KB, the browser has to wait for an entire extra round trip (which could be 50ms to 200ms depending on the user's connection) just to get that last 1 KB of data.

Performance experts obsess over the following:

  1. Inlining critical CSS.
  2. Minifying HTML.
  3. Prioritizing "above-the-fold" content - i.e. the first part visible on load, without scrolling.

Before I began, I also realised that the 14 KB budget for the first round trip includes the HTTP response headers. That overhead needs accounting for too, so my goal became a 13 KB HTML file.


Actually Doing It

Step 1: The raw files

I already had a design, so I built it out - giving every element the inline styles it needed to fit the design, and placing the JS in a `<script></script>` block just before the closing `</body>` tag.

The design included logos linking to client work, which had already been sized down.

Next, I had to analyse what was loaded over the network in development, which was:


Now, I already knew this was to be taken with a grain of salt, since the server I'd be hosting on uses Brotli for compression rather than the more conventional gzip. A short Brotli vs gzip comparison on the two metrics I considered:

  1. Size: Brotli can achieve smaller file sizes for JavaScript, HTML, and CSS compared to gzip.
  2. Speed: Brotli decompresses quickly in the client's browser; compressing on the server can be slower than gzip at the highest quality levels, though for static files compressed once ahead of time this matters little.


To see the compressed file sizes, I looked for a VS Code extension that could show the Brotli-compressed size, but had no luck (as of December 2025, that is). I did, however, find one that shows the gzipped size - and if I can get to 14 KB with gzip, the Brotli output will be even smaller.

I used this VS Code extension: https://marketplace.visualstudio.com/items?itemName=mkxml.vscode-filesize, which showed me the following:


Step 2: Calculating the amount to cut down

I assumed the gzipped size scales roughly linearly with the raw HTML size, so with my goal of 14 KB the maths worked out as:

(69.11 * 14) / 18.12 = 53.40

Meaning, to get to 14 KB gzipped, I had to reduce the raw size to circa 53.40 KB:

69.11 - 53.40 = 15.71

So I had to shave around 16 KB off the raw HTML.
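The estimate above can be re-derived in a few lines. The linearity assumption is the author's; the numbers are the ones from the post (69.11 KB raw compressing to 18.12 KB).

```javascript
// Assumption: gzipped size scales linearly with raw size, i.e.
// targetRaw / raw = targetGzipped / gzipped.
const rawKB = 69.11;          // raw HTML size
const gzippedKB = 18.12;      // its gzipped size
const targetGzippedKB = 14;   // first-window budget

const targetRawKB = (rawKB * targetGzippedKB) / gzippedKB;
const toCutKB = rawKB - targetRawKB;

console.log(targetRawKB.toFixed(2)); // "53.40"
console.log(toCutKB.toFixed(2));     // "15.71"
```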


Step 3 (a): Slashing the content to bare essentials

I started removing bytes from the HTML in the following areas:

  1. Comments placed by auto-generated boilerplate from my text editor
  2. Unused HTML element attributes inserted by the text editor
  3. Unifying repeated inline styles into a set of CSS classes with short names
  4. Moving the styling and JavaScript into their own style.css and app.js files

This brought the file size down, but still missed my target, as seen below:



Step 3 (b) : Minifying - removing the “whitespace”

I minified the stylesheet and JS file, which reduced the size of both the loaded stylesheet and the page JS.


Step 3 (c): Further Refinements; slashing the slashed stuff

I checked for elements with inline JS handlers (e.g. onclick), unified similar ones under a single class or custom attribute, then attached event listeners from the JS file instead.
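A sketch of that cleanup: instead of repeating `onclick="..."` on many elements, mark them with a shared attribute and walk up from the click target to the nearest marked element. The attribute name `data-action` and the helper are illustrative, not from the original markup; the core is written as a plain function so it can be tested without a DOM.

```javascript
// Walk up the tree from `el` until an element carrying `attr` is
// found; return null if none is. This is the framework-free core of
// event delegation.
function findActionElement(el, attr = 'data-action') {
  while (el) {
    if (typeof el.getAttribute === 'function' && el.getAttribute(attr) !== null) {
      return el;
    }
    el = el.parentElement || null;
  }
  return null;
}

// Wiring in the page would then be a single delegated listener:
// document.addEventListener('click', (e) => {
//   const el = findActionElement(e.target);
//   if (el) { /* dispatch on el.getAttribute('data-action') */ }
// });
```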

I removed extra lines and indentation added by text-formatting tools in VS Code - of course this makes development harder, but that's a trade-off I'd take for the outcome.

Then I checked the stats once more: the size was down - even below the target I had set earlier.



I ran a Lighthouse test on the webpage from the live server; these were the results:

1. For mobile:


2. For desktop:


Green always means good; pair it with a "100" and we're in heaven :)


Alas, not everything goes perfectly - the accessibility score suffered because of a design choice I don't really aim to change, since I see it as a trade-off for the outcome.


When hosted on the server, these were the sizes and relative load times:


You can check out the portfolio site at https://elvisben.me.ke

Any comments or feedback? Please share below!
