The anatomy of a performant and sustainable webpage
For years now, Google, other web giants, and SEO auditors have told us that it’s important to have a fast website. In more recent years, the idea of a sustainable website has been discussed and established, and the impact that the web has on the environment can finally be measured and managed.
There are countless tools for measuring and monitoring the performance of a website; PageSpeed Insights and WebPageTest are two we use. In more recent years, tools have been developed to monitor the emissions of websites as well, the first being Website Carbon. EcoPing, EcoGrader, and our own The Index are a few more.
As part of the report provided by these tools, or through analysis of the information provided, there are often insights that suggest ways to improve either the performance or sustainability of the page tested. The recommendations most commonly include “resize your images”, “serve your images in a next gen format”, or “reduce unused CSS”.
But once you have done that, is the site really as performant or as sustainable as it could be?
Unfortunately, the answer is most likely no! Not because the optimisations in the report haven’t been addressed, but because, without considering the performance and sustainability impact of each element during the design, it’s hard to achieve a lightning-fast, sustainable, low-emissions website.
What we outline in this post is an analysis of the beleaf.au home page, the choices made during the design, and the optimisations in place to achieve a desktop page size of 106 kilobytes (kB), a minuscule 0.02 grams of CO2 per page load, 100% in all PageSpeed Insights categories on mobile and desktop, and a pass on all Core Web Vitals (CWV).
This is a long one, so let’s get started!
The design is optimised for performance and sustainability
Above any of the technical aspects in this article, design has the biggest impact on performance and sustainability. That’s because the design determines which optimisations are applied, be that reducing the transfer size of subresources, or preloading of subresources to reduce the impact of the delay whilst waiting for them to be retrieved.
A misconception is that a website design has to be minimalist to achieve a low-impact site. In practice, a website simply needs to communicate what it needs to in the most direct, efficient manner it can. What specifically about the design, though, allows it to be performant and sustainable?
For starters, from a sustainability standpoint, a bird’s eye view shows there are only three rasterised images (normally the largest subresources in terms of transfer size), and they are displayed at a maximum width of 380px even on the largest screen. This means a 760px-wide image on a high-DPI screen, but that’s a drop in the ocean compared to the 2000+ pixel wide image required if the image spanned the full width and height of the screen.
From a performance standpoint, all of the required subresources for rendering the above-the-fold view are visible to the preload scanner, are preloaded where necessary, and in total are less than 42 kB in transfer size (and that includes the entire HTML of the page, which is 17 kB).
The takeaway here is that the design phase, as it is for so many parts of a project, determines how performant and sustainable a page can be.
The home page transfer size is small, really really small.
In February 2023, according to the HTTP Archive, the average page size was 2,340.6 kB. The beleaf home page is 106 kB. That’s ~95% smaller! This is not only small when compared against the average of the internet, but small compared with other sustainable web agencies.
Here’s how we used the 106kB and generated the 0.02 grams of CO2:
| Type | Quantity | Percentage | Size | HTTP Archive Median (Feb 2023) |
| --- | --- | --- | --- | --- |
| HTML | 1 | 17% | 18.05 kB | 29.8 kB |
| CSS | 1 | 9% | 9.26 kB | 79.2 kB |
| JavaScript | 2 | 8% | 8.75 kB | 530.0 kB |
| Fonts | 3 | 19% | 19.81 kB | 147.8 kB |
| Images | 5 | 47% | 49.83 kB | 995.4 kB |
| Other | 1 | ~1% | 667.00 B | 1.1 kB |
The page transfer size being small isn’t the full picture. Even a small site can be slow, and that leads us nicely into the next section.
The head tags are optimised for performance
Right at the start of the HTML, before the logo, site menu, search box, and first call to action, is the invisible-to-the-eye (unless you look at the source) head block of code. The following is a direct excerpt from MDN:
“The head of an HTML document is the part that is not displayed in the web browser when the page is loaded. It contains information such as the page <title>, links to CSS (if you choose to style your HTML content with CSS), links to custom favicons, and other metadata (data about the HTML, such as the author, and important keywords that describe the document). Web browsers use information contained in the head to render the HTML document correctly.”
What MDN doesn’t mention is that the head tags can, and most often do, have the most impact on performance, and not by a small margin. By introducing one incorrectly ordered head tag, it’s possible to delay or block the discovery, download, and execution of any of the other resources needed to load the page.
For the beleaf home page, the head tags are optimised to ensure there is only one blocking file: the CSS for the site. It’s no coincidence that this is also the only file required to lay out the page. The argument could be made to inline the CSS file and avoid the blocking nature altogether, but in our testing the performance improvement was negligible to impossible to notice outside of lab testing. Inlining the CSS file would also increase the size of every page by almost 10 kB after compression, resulting in a non-negligible increase in emissions.
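As a rough sketch (the file names below are illustrative, not the actual beleaf markup), a head optimised this way boils down to a handful of metadata tags and a single render-blocking stylesheet:

```html
<head>
  <meta charset="utf-8">
  <meta name="viewport" content="width=device-width, initial-scale=1">
  <title>beleaf</title>
  <!-- The only render-blocking subresource: the stylesheet required for layout -->
  <link rel="stylesheet" href="/assets/site.css">
  <!-- Metadata and icons that do not block rendering -->
  <meta name="description" content="…">
  <link rel="icon" href="/favicon.svg" type="image/svg+xml">
</head>
```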
Hard-to-discover subresources are preloaded
Modern browsers have a feature called the preload scanner. The preload scanner (presuming it isn’t blocked by anything) will look ahead at the HTML before anything is rendered to find any subresources that it should load when it has spare network availability.
Some subresources are not referenced directly in the HTML and therefore can’t be found by the preload scanner. As these subresources are needed to render the above-the-fold view, we include a preload hint for each of them in the HTML, so the browser can start fetching them early instead of waiting to discover them.
The following subresources are referenced in the CSS file, and therefore have preloads so the browser doesn’t have to download and parse the CSS before requesting them (a sketch of the markup follows the list):
- The body text font file
- The body text bold font file
- The heading text font file
- The leaf shape image
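A minimal sketch of those preload hints, assuming placeholder file names (font preloads need the crossorigin attribute even when served from the same origin):

```html
<!-- Fonts referenced from the CSS, so invisible to the preload scanner without hints -->
<link rel="preload" href="/fonts/body.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/fonts/body-bold.woff2" as="font" type="font/woff2" crossorigin>
<link rel="preload" href="/fonts/heading.woff2" as="font" type="font/woff2" crossorigin>
<!-- The leaf shape, referenced as a background image in the CSS -->
<link rel="preload" href="/images/leaf.svg" as="image">
```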
Optimising the head tags is not an easy task, as subtle changes in the content the tags reference, the order they are loaded in, and even the content of the page influence what should be there. These few paragraphs scratch the surface and don’t begin to communicate the possibilities for optimisation. For a deeper understanding (and visual examples), it’s worth watching Get your head straight by Harry Roberts.
As few subresources as possible are required above the fold
A subresource is any file that is requested by the HTML and any of the other files loaded on the page. Subresources, as indicated by their name, are not included as part of the main HTML, and must be requested by the browser after it has received and processed some or all of the HTML.
When looking at the above-the-fold view of the beleaf home page, in a less optimised build you could count as many as eleven subresources. They are:
- Primary stylesheet
- Beleaf logo
- Human icon
- Be icon
- The leaf shape
- Sustainable card icon
- Accessibility card icon
- Performance card icon
- Body copy font file
- Heading font file
- Body font file in bold (only seen immediately on screens tall enough, such as a vertical iPad)
While eleven subresources is not a large quantity to load above the fold, any reduction we can make will have a direct impact on the time it takes to load the above-the-fold view. The Core Web Vitals measure this by way of the Largest Contentful Paint (which item is the largest above the fold and how long it took to become visible) and the Cumulative Layout Shift (how much the above-the-fold section moved while subresources were loading in).
In practice, the beleaf site only requires four subresources for the above-the-fold view. These are (and we discussed most of them in the head tag section):
- Primary stylesheet
- Body copy font file
- Heading font file
- The leaf shape
The rest of the subresources are SVG images that have been embedded/inlined in the HTML. While this does increase the size of the HTML file itself, the overhead is marginal in our case as the SVG icons have been hand optimised to be as small as possible.
By inlining the subresources we also avoid the network request which, in a page tuned for performance, will always be the biggest delay in page rendering. WebPageTest and Google Chrome estimate the overhead of an outbound request to be ~2 kB. For subresources that are under ~2 kB in size, we actually save transfer size and emissions by avoiding the outbound network request.
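As an illustration (a generic placeholder icon, not one of the actual beleaf icons), an inlined SVG sits directly in the markup and needs no request at all:

```html
<!-- A small, hand-optimised icon embedded directly in the HTML -->
<svg viewBox="0 0 24 24" width="24" height="24" aria-hidden="true">
  <path d="M12 2 2 22h20Z" fill="currentColor"/>
</svg>
```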
The subresources required above the fold are hosted on the same domain as the site
Any subresource required for rendering the above-the-fold view will have a negative impact on performance. The time it takes for a subresource to be ready to use is made up of two parts: the outbound request and the inbound response.
When making an outbound request, the browser first needs to open a connection to the server hosting the file. To do this, the browser needs to resolve a domain name to an IP address. On an optimised website this is, more often than not, the longest part of loading a subresource.
Subresources hosted on the same domain name as the website avoid the overhead of establishing a connection because the browser has already completed this when it requested the HTML.
Suffice it to say, the beleaf site hosts all its above-the-fold subresources on the primary domain.
Fonts are served as WOFF2, and subset to reduce the file size
Custom web fonts have been supported in browsers since Internet Explorer 4 (released in 1997), but it wasn’t until 2011 that custom web fonts were truly supported in a cross browser manner. Fortunately, things have come a long way since then and we can now load custom fonts without dramatically impacting performance, emissions, and the accessibility of sites. With that being said, there are a few things that can, and should, be done to ensure the impact is as small as possible.
They are:
- Subsetting the font to be as small as possible.
  - Subsetting a font is the practice of removing characters from the font file so it contains only the required characters. This reduces the file size, resulting in a faster render time.
- Loading a WOFF2 format.
  - WOFF2 is a modern, efficient format; converting from a legacy format results in a smaller file size and a faster render time.
By doing those two things, we were able to reduce the heading font file transfer size from 132 kB to 8.7 kB, a reduction of over 93%!
| Format | Characters | Size | Reduction in size |
| --- | --- | --- | --- |
| Original heading TTF | 690 | 132 kB | N/A |
| Subset heading TTF | 97 | 15 kB | 89% |
| Subset heading WOFF2 | 97 | 8.7 kB | 93.5% |
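For reference, loading a subset WOFF2 file from the stylesheet is a standard @font-face rule; the family name, path, and unicode-range below are placeholders rather than the exact values used on the site:

```css
@font-face {
  font-family: "Heading";
  /* The subset WOFF2 file produced above */
  src: url("/fonts/heading-subset.woff2") format("woff2");
  font-weight: 700;
  /* Keep text visible with a fallback font while the file loads */
  font-display: swap;
  /* Only download the font when characters in the subset range are used */
  unicode-range: U+0020-007E;
}
```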
For a history of custom web fonts and a longer discussion on optimising them for performance and emissions, you can read an article I wrote in 2019 whilst working at Wholegrain Digital titled The performance cost of custom web fonts and how to solve it.
Images are optimised and served efficiently
Images, unless a page has autoplaying videos, are by far the largest subresource in terms of file size. Most of the time, a single image is larger than the rest of the subresources put together. Let’s take a look at how the images on beleaf are optimised.
File sizes are as small as they can be
Whilst the images on the home page collectively represent 47% of the transfer size, in total they are only ~49 kB. This is a drop in the ocean compared to some pages that load several multi-megabyte images.
Image optimisation doesn’t have to be a complicated process though. Let’s take a look at two different image types, rasterised images and vector images.
Rasterised Images
Rasterised images are generally what contribute the most to file size, and are generally served to websites as JPGs or PNGs. Whilst both of these formats have served the internet well for a long time, they are not the most efficient format for serving images to evergreen browsers (browsers that are upgraded with the latest features often without interaction from the user). Rasterised images are better served now in the WebP or AVIF format.
When the beleaf site launched, it served images in WebP format. The portfolio section on the home page shows three different projects and an image associated with each. By serving the images as AVIF to modern browsers, WebP to almost-modern browsers, and JPG to the rest, a single image was reduced from 42 kB (JPG) to 17 kB (AVIF) with no loss in quality.
Across three pages, serving AVIF images instead of WebP reduced our image transfer size by a minimum of 12% and a maximum of 27%, as shown in the table below.
| Page | WebP size | AVIF size | Reduction in size by serving AVIF |
| --- | --- | --- | --- |
| Home | 123 kB | 109 kB | 12% |
| Work Archive | 200 kB | 167 kB | 17% |
| Work Single | 864 kB | 623 kB | 27% |
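Format negotiation like this is commonly handled with the picture element; a sketch with placeholder file names:

```html
<picture>
  <!-- The browser requests the first format it supports, newest first -->
  <source srcset="/images/project.avif" type="image/avif">
  <source srcset="/images/project.webp" type="image/webp">
  <!-- JPG fallback for everything else -->
  <img src="/images/project.jpg" alt="Project screenshot" width="380" height="253">
</picture>
```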
Vector Images
Vector images on the web are served in the Scalable Vector Graphics (SVG) format. The beleaf site uses SVGs for logos, icons, and the leaf symbol. The file size of an SVG is predominantly determined by the number of points it contains. Therefore, one of the most effective ways to optimise an SVG is to remove as many points as possible.
Each of the SVG images has been run through the following process to ensure it is as small as possible:
- Decide whether the image can be a path with a stroke, or whether it needs to be a compound path with a fill (see the sketch after this list)
  - A path with a stroke is preferred, as the number of points required to represent the shape is roughly halved compared to a compound path
  - A compound path with a fill requires two paths, each with their own points, hence the increase in the number of points
- Remove as many points from the paths as possible whilst retaining the fidelity of the shape. This process is part automated, part manual
  - We use Adobe Illustrator and the Simplify Path command to perform a first-pass optimisation
  - We then manually optimise the paths and reduce the number of points where the automated pass could not
- Minify the SVG with a tool like ImageOptim or SVGO
  - We automate this in the build process using imagemin and SVGO
  - This can also be done with ImageOptim
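To illustrate the stroke-versus-fill decision with a trivial shape (not one of the actual beleaf icons): a ring drawn as a single stroked circle needs far less path data than the same ring described as a compound path with an outer and inner outline.

```html
<!-- One stroked circle: the stroke width gives the ring its thickness -->
<svg viewBox="0 0 24 24"><circle cx="12" cy="12" r="9" fill="none" stroke="currentColor" stroke-width="2"/></svg>

<!-- The same ring as a compound path: two subpaths, one for each edge -->
<svg viewBox="0 0 24 24">
  <path fill="currentColor" fill-rule="evenodd"
        d="M12 2a10 10 0 1 0 0 20 10 10 0 0 0 0-20Zm0 2a8 8 0 1 1 0 16 8 8 0 0 1 0-16Z"/>
</svg>
```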
By optimising the SVGs to contain as few points as possible, we achieve two things:
- We create a file that is smaller in size, which is good for performance and emissions.
- We reduce the processing effort it takes to render the SVG, reducing the energy used and therefore the emissions.
Images are lazy loaded, unless they are required above the fold
With lazy loading, the browser defers loading each image on the page until it is needed, instead of loading them all up front.
The portfolio images are a long way down the page, so they have absolutely no reason to not be lazy loaded.
It’s worth mentioning that the lazy loading is implemented natively within the browser. There are other methods that rely on JavaScript, but these have several negative side effects (a sketch of the native approach follows the list):
- It increases the size of the JavaScript loaded, as custom code is required
- It increases the processing the browser has to perform, because the functionality is implemented in JavaScript rather than natively within the browser
- The images can no longer be found by the preload scanner, making it a challenge to optimise their loading if required
- If JavaScript is disabled, or the JavaScript file that performs the lazy loading has an error, the images will not be shown
- Most of the time, images above the fold are not excluded from the JavaScript-based lazy loading, which means the JavaScript must be loaded (most often very late in the page load) before any of the images are displayed
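The native approach, by contrast, is a single attribute on the image tag, no script required (the file name is a placeholder):

```html
<!-- The browser defers this request until the image approaches the viewport -->
<img src="/images/portfolio-project.avif" alt="Portfolio project screenshot"
     width="380" height="253" loading="lazy">
```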
Image tags are optimised to serve the right size image to each screen size
The img tag in HTML has for quite some time had support for defining differently sized versions of an image, via the srcset attribute, and serving the right one to the user.
Specifying multiple image sizes isn’t enough for the browser to know which image file to load, though. To ensure the browser loads the right image for the right screen size, the sizes attribute needs to be defined. The value of the sizes attribute pairs media queries with display widths: each media query specifies what size the image will be presented at for a given screen size.
By utilising the two attributes, we can ensure that the images served to the user are the right size for the screen and device they are using.
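A sketch of the two attributes working together, with placeholder file names and breakpoints: the browser reads sizes, works out the displayed width for the current viewport, and picks the smallest candidate from srcset that covers it.

```html
<img src="/images/portfolio-380.avif"
     srcset="/images/portfolio-380.avif 380w,
             /images/portfolio-560.avif 560w,
             /images/portfolio-760.avif 760w"
     sizes="(min-width: 800px) 380px, 100vw"
     alt="Portfolio project screenshot" width="380" height="253">
```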
You can learn more about serving responsive images from MDN.
Text-based (sub)resources are minified, optimised with a lossy algorithm, and losslessly compressed
If you take a look at the source code for any of the HTML, CSS, or JavaScript on the site, you will see that it doesn’t resemble the way humans write code. This is the result of minification, a process that makes the code as small as it can be.
Before the whitespace is stripped out and before compression with GZIP for transfer from the server to the user’s device, the HTML of the page is over 66 kB. After stripping the whitespace, the size is 57 kB. Once GZIP is applied, the HTML file size comes down to a little over 18 kB.
The following table shows the raw size, minified size, and gzipped size of the (sub)resources on the page:
| Resource | Raw | Minified | GZIP |
| --- | --- | --- | --- |
| HTML | 66 kB | 57 kB | 18 kB |
| CSS | 63.4 kB | 48.4 kB | 9.6 kB |
| JavaScript | >90 kB (first-party JavaScript only) | 23.1 kB | 8.8 kB |
Images (and other binary formats) are not gzipped because they do not compress effectively, and doing so often results in a larger file than the original.
The site is hosted on a performant and sustainable server
Not all website servers are created equal, and it is important to ensure that a server is optimised for the website it is serving. We host our website on a managed Google Cloud server in Sydney that has been configured with frontend caching to ensure that server response times are kept to a minimum.
WordPress (once plugins and custom functionality are added) is not known for a good time to first byte (TTFB). In fact, of the nearly 5.5 million domains monitored by the HTTP Archive in February 2023, only 48.88% achieve a good TTFB, “good” being a TTFB under 800 ms (it’s not possible to link directly to that view of the report).
When running artificial benchmarking from Melbourne to Sydney using WebPageTest, the beleaf home page median-run TTFB is 230 ms. When running the same test from London to Sydney, the median-run TTFB is a less healthy 1.228 s.
It is possible to achieve a low time to first byte in locations that aren’t in close proximity to the web server, but the compromises and the lack of consistency in results, coupled with our target audience being Australian-based organisations and businesses, have deterred us from implementing such a solution.
Optimisations that are harder to quantify the impact of
CSS utilisation is above 65% for a single page view
The way to high utilisation is to load only the CSS required to render the elements on the page. The beleaf site does this by bundling the core elements of the site into an always-loaded stylesheet (which is cached by the browser after the first page view) and conditionally loading additional CSS files in the head of the page when those components are used. This part is critical, as stylesheets loaded later result in a high chance of a flash of unstyled content.
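A sketch of the pattern with placeholder file names: the core stylesheet is always present, and component stylesheets are only printed into the head on pages that actually use those components.

```html
<head>
  <!-- Always loaded; cached by the browser after the first page view -->
  <link rel="stylesheet" href="/assets/core.css">
  <!-- Printed into the head only on pages that contain these components -->
  <link rel="stylesheet" href="/assets/components/portfolio-card.css">
  <link rel="stylesheet" href="/assets/components/testimonial.css">
</head>
```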
The 65% utilisation is in practice higher, due to a bug in Chrome that prevents custom font-face definitions from being considered used.
The inverse of high utilisation is a large amount of wasted CSS, which most often goes hand in hand with a large CSS file. Both are bad for performance and sustainability because:
- The page load time is increased due to the need to download a larger file
  - More emissions are generated due to the larger number of bytes transferred
  - A larger file simply takes longer to download
- The page load time is increased due to the need to process more code before the browser can display the page
  - The browser will use more energy than if it was provided a more optimised, better-utilised file
JavaScript is used sparingly and its utilisation is above 68% for a single page view
There’s a tongue-in-cheek saying in the web development community that states “the fastest way to a slow website is by using javascript”.
While the truth is far more nuanced than that simple statement (and not a subject this article will delve into), the beleaf home page (and site) uses JavaScript sparingly to enhance the experience, usability, and accessibility. The exception to this is a privacy-friendly, less-than-2 kB analytics tag from Plausible.
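For the curious, a lightweight analytics tag like Plausible’s is typically a single small deferred script along these lines (check the vendor’s documentation for the exact current snippet):

```html
<script defer data-domain="beleaf.au" src="https://plausible.io/js/script.js"></script>
```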
For context, the HTTP Archive states that the median amount of JavaScript loaded on a page in February 2023 was 530 kB. The beleaf site loads less than 9 kB!
Things the beleaf home page doesn’t do
While the optimisations above are important to take notice of, equally important is what the beleaf site does not do. This section could be a long list of all the anti-best-practices, but that would result in a section longer than the rest of the article. What we list below are high impact design and technical implementations to avoid when considering performance and sustainability.
We don’t use a Content Delivery Network (CDN)
As mentioned earlier, all our subresources are hosted alongside the site. We also mentioned that the reason is that the majority of our subresources (when reviewing the whole site) are required for the above-the-fold view. Introducing a CDN for serving the subresources would introduce a further delay in retrieving them for our primary audience. For subresources required below the fold, the browser has more time before they are displayed, so a CDN would not help performance there either.
The blanket statement for a long time has been “use a CDN to improve performance”, but the truth is much more complex than that. As can be seen in the waterfall view of our Melbourne-to-Sydney WebPageTest, the request-response loop of our self-hosted subresources is between 49 ms and 89 ms. Introducing a CDN here would increase the time it takes for the subresources to be retrieved due to the need to connect to a third-party domain, slowing down the above-the-fold view and resulting in poorer CWV scores and a poorer user experience.
That’s not to say that a CDN shouldn’t be used at all. Web.dev has an in-depth article on CDNs: what they are, how they work, specific features, and potential benefits.
Where a CDN could have a positive impact on performance is the London-to-Sydney WebPageTest. In theory, the subresources would be hosted in several data centres around the world, allowing the request-response loop to be much shorter. Here’s the thing though: without hosting the primary resource (the HTML) in a location closer to the end user, the request-response loop is still going to be too slow. Yes, the subresources will come in quicker, but not quick enough to make a major improvement to the user experience.
When we first launched the site, we had our DNS records proxied by Cloudflare. This resulted in an invisible CDN for our subresources, as the Cloudflare network caches some subresources (images, CSS, JavaScript). What we noticed, though, was that our staging site was faster than our production site. When comparing the TTFB of our production and staging sites, the staging site’s primary resource (the HTML) was on average 100 ms quicker. By disabling the proxying of DNS records on our production site, the difference was gone. With the proxy enabled, visitors in proximity to the server received a poorer user experience.
Cloudflare does offer a product called APO, which results in the entire website being cached on their CDN. This should result in much better CWV scores and a better user experience. What we saw in practice was a slower website for users in proximity to the origin server, and inconsistent results for users further away. For a high-traffic site, the inconsistent results are more than likely to average out, resulting in a net positive, but as mentioned earlier, the trade-offs were not worth it to us.
Other things the beleaf home page doesn’t do
- Use autoplaying videos
- Use full screen width images
- Use JavaScript that modifies the layout of the page above the fold
- Use JavaScript in the browser to render the above-the-fold view
  - This could be anything from animations that start with the above-the-fold content hidden, to fully client-side rendered views
- Load critical resources from a third party
- Load full font icon libraries
- Use sprite sheets
- Load numerous tracking scripts
- Inline subresources like fonts and images in a CSS file
- Use the CSS @import statement
- Load font files from a third party server
Where to get started with performance and sustainability
Fortunately, the approach to optimising for performance and sustainability is not all or nothing.
In fact, getting started is as simple as gaining an understanding of how well your site performs or what the emissions are. To see how your site measures up against some other sites, you can check out our own tool called The Index.
In aiming for the top echelon of performance and sustainability, targets must be set and a considered effort made from the beginning of the project, not dissimilar to accessibility.
If you have an existing website you would like to improve the performance and sustainability of, we can complete an initial audit to identify the current state, as well as recommend changes for improvement.
If you are looking to achieve the highest level of performance and sustainability, we can work with you on a more comprehensive project that starts with defining targets and finishes with a performant and sustainable site.
Get in touch or call us on (03) 4050 7773