
Core Web Vitals failing even after using RabbitLoader

Some website owners see the Core Web Vitals report failing in Google Search Console (GSC) even after using RabbitLoader. This guide discusses several factors that affect Core Web Vitals but are beyond the scope of the RabbitLoader service.

What is the Core Web Vitals report?

The Core Web Vitals report shows how your pages perform and groups all the unique URLs of a website by a status such as Poor, Needs improvement, or Good, based on field data, also called real-world usage data.

Here are the performance ranges for each status:

  • LCP: Good <= 2.5s, Needs improvement <= 4s, Poor > 4s
  • FID: Good <= 100ms, Needs improvement <= 300ms, Poor > 300ms
  • INP: Good <= 200ms, Needs improvement <= 500ms, Poor > 500ms
  • CLS: Good <= 0.1, Needs improvement <= 0.25, Poor > 0.25
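
If it helps to see these thresholds as code, here is a minimal Python sketch (purely illustrative, not part of RabbitLoader or GSC) that classifies a field measurement into one of the three statuses:

```python
# Upper bounds taken from the list above.
THRESHOLDS = {
    # metric: (Good upper bound, Needs-improvement upper bound)
    "LCP": (2.5, 4.0),    # seconds
    "FID": (100, 300),    # milliseconds
    "INP": (200, 500),    # milliseconds
    "CLS": (0.1, 0.25),   # unitless layout-shift score
}

def classify(metric: str, value: float) -> str:
    good_max, ni_max = THRESHOLDS[metric]
    if value <= good_max:
        return "Good"
    if value <= ni_max:
        return "Needs improvement"
    return "Poor"

print(classify("LCP", 3.1))   # Needs improvement
print(classify("CLS", 0.05))  # Good
```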

Factors affecting the CWV report

This section focuses on the reasons that may affect the CWV report, which are beyond the scope of RabbitLoader and may need attention from website owners.

Hosting server storage

Many hosting packages only indicate the storage size in Gigabytes (GB) or Terabytes (TB); the type of storage is not disclosed. Our users should ensure the storage type is SSD (Solid State Drive) and not a magnetic HDD (Hard Disk Drive). The storage affects how fast data can be read from the disk when a visitor asks for a page. An HDD or other slower storage can increase the Time to First Byte (TTFB). TTFB is the delay between the moment a visitor requests a webpage from the hosting server and the moment the server starts returning the first byte of the response.

RabbitLoader stores the cached copy of the web pages on the disk. If disk performance is good, all reads and writes will be faster, reducing the server response time for the main document.
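
To get a rough idea of your server's TTFB, a small script like the Python sketch below can be used. The URL is a placeholder, and the timing includes DNS lookup and connection setup, so treat the result as an approximation rather than a lab-grade measurement:

```python
import time
import urllib.request

def approximate_ttfb(url: str) -> float:
    """Rough TTFB: time from issuing the request until the response
    headers arrive. Includes DNS and connection setup, so it is an
    upper bound on the server's own processing delay."""
    start = time.perf_counter()
    with urllib.request.urlopen(url):  # returns once headers are received
        return time.perf_counter() - start

# Placeholder URL; replace it with a page on your own site.
print(f"approx. TTFB: {approximate_ttfb('https://example.com/'):.3f}s")
```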

Excessive DOM size

For practical, non-technical purposes, you can think of DOM size as how big your page’s HTML is. The longer the HTML, the more time is required to parse it, apply styling, and so on. Unfortunately, RabbitLoader cannot shorten the page or trim its contents. This is something the owner of the website should take care of.

In most cases, a large DOM size is caused by “drag and drop” page builder applications. Though these page builders are easy to use, they often create multiple wrappers around page elements due to the limitations of the approach, which results in an excessive DOM size.

Lighthouse shows a warning or error status for a page based on how many nodes, or HTML elements, it contains. These nodes are not the visible items on the page, but the markup used to render those elements (a rough way to count them yourself is sketched after this list):

  • Warns when the body element has more than ~800 nodes.
  • Errors when the body element has more than ~1,400 nodes.
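
As an illustration only (this is not how Lighthouse itself counts nodes), the following Python sketch counts start tags in a saved copy of a page as a rough proxy for element nodes; the file name page.html is a placeholder:

```python
from html.parser import HTMLParser

class NodeCounter(HTMLParser):
    """Counts start tags as a rough proxy for DOM element nodes."""
    def __init__(self):
        super().__init__()
        self.count = 0

    def handle_starttag(self, tag, attrs):
        self.count += 1

# Placeholder file: save your page's HTML locally first.
with open("page.html", encoding="utf-8") as f:
    counter = NodeCounter()
    counter.feed(f.read())

print(f"~{counter.count} element nodes")
if counter.count > 1400:
    print("Lighthouse would report an error for DOM size")
elif counter.count > 800:
    print("Lighthouse would warn about DOM size")
```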

Broken images

If any image referenced on the website does not exist, it can delay page rendering. The lookup for a missing image usually takes significant time before the server decides to return a 404 Not Found response code.
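
One way to spot broken images is to fetch a page and check the HTTP status of every image it references. The Python sketch below is a minimal example of that idea; the URL is a placeholder, and the parser only looks at plain <img src> attributes, so it will miss lazy-loaded or CSS-background images:

```python
import urllib.error
import urllib.request
from html.parser import HTMLParser
from urllib.parse import urljoin

class ImgSrcCollector(HTMLParser):
    """Collects the src attribute of every <img> tag."""
    def __init__(self):
        super().__init__()
        self.sources = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:
                self.sources.append(src)

def report_broken_images(page_url: str) -> None:
    html = urllib.request.urlopen(page_url).read().decode("utf-8", "replace")
    collector = ImgSrcCollector()
    collector.feed(html)
    for src in collector.sources:
        image_url = urljoin(page_url, src)
        try:
            with urllib.request.urlopen(image_url) as response:
                status = response.status
        except urllib.error.HTTPError as err:
            status = err.code  # e.g. 404 for a missing image
        if status != 200:
            print(f"{image_url} -> HTTP {status}")

report_broken_images("https://example.com/")  # placeholder URL
```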

28-day average

Google updates the CrUX dataset based on a rolling average of 28 days. CrUX, the Chrome User Experience Report, is collected from real devices and powers the Core Web Vitals metrics of the website. So, any change made to the website may take up to 28 days to be reflected in the Core Web Vitals report.
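
To see why improvements appear gradually, here is a purely illustrative Python sketch of a 28-day rolling average: even after a fix lands, the older, slower days keep weighing on the average until they roll out of the window.

```python
from collections import deque

def rolling_average(daily_values, window=28):
    """Yields the mean over the most recent `window` daily values."""
    buf = deque(maxlen=window)
    for value in daily_values:
        buf.append(value)
        yield sum(buf) / len(buf)

# Made-up data: 28 days at 4.0s LCP, then a fix brings daily LCP to 2.0s.
daily_lcp = [4.0] * 28 + [2.0] * 28
for day, avg in enumerate(rolling_average(daily_lcp)):
    if day in (27, 34, 41, 55):
        print(f"day {day}: 28-day average LCP = {avg:.2f}s")
# day 27: 4.00s, day 34: 3.50s, day 41: 3.00s, day 55: 2.00s
```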