Assessing loading performance in the field with Navigation Timing and Resource Timing

Learn the basics of using the Navigation and Resource Timing APIs to assess loading performance in the field.


Jeremy Wagner

If you’ve used connection throttling in the network panel in a browser’s developer tools (or Lighthouse in Chrome) to assess loading performance, you know how convenient those tools are for performance tuning. You can quickly measure the impact of performance optimizations with a consistent and stable baseline connection speed. The only problem is that this is synthetic testing, which yields lab data, not field data.

Synthetic testing isn’t inherently bad, but it’s not representative of how fast your website is loading for real users. That requires field data, which you can collect from the Navigation Timing and Resource Timing APIs.

APIs to help you assess loading performance in the field


Navigation Timing and Resource Timing are two similar APIs with significant overlap that measure two distinct things:

Navigation Timing measures the speed of requests for HTML documents (that is, navigation requests).

Resource Timing measures the speed of requests for document-dependent resources such as CSS, JavaScript, images, and so on.

These APIs expose their data in a performance entry buffer, which can be accessed in the browser with JavaScript. There are multiple ways to query a performance buffer, but a common way is by using performance.getEntriesByType:

// Get Navigation Timing entries:
performance.getEntriesByType('navigation');

// Get Resource Timing entries:
performance.getEntriesByType('resource');

performance.getEntriesByType accepts a string describing the type of entries you want to retrieve from the performance entry buffer. 'navigation' and 'resource' retrieve timings for the Navigation Timing and Resource Timing APIs, respectively.

Note: Try loading a website and then enter either of the commands in the above code snippet in your browser’s console to see actual timings captured by your browser.

The amount of information these APIs provide can be overwhelming, but they’re your key to measuring loading performance in the field, as you can gather these timings from users as they visit your website.

The life and timings of a network request

Gathering and analyzing navigation and resource timings is sort of like archeology in that you’re reconstructing the fleeting life of a network request after the fact. Sometimes it helps to visualize concepts, and where network requests are concerned, your browser’s developer tools can help.

A visualization of a network request in the network panel of Chrome’s DevTools.

The life of a network request has distinct phases, such as DNS lookup, connection establishment, TLS negotiation, and so on. These timings are represented as a DOMHighResTimeStamp. Depending on your browser, the granularity of timings may be down to the microsecond, or be rounded up to milliseconds. Let’s examine these phases in detail, and how they relate to Navigation Timing and Resource Timing.

Note: As you read this guide, this diagram for both Navigation Timing and Resource Timing may help you to visualize the order of the timings they provide.

DNS lookup

When a user goes to a URL, the Domain Name System (DNS) is queried to translate a domain to an IP address. This process may take significant time, and that’s time you’ll want to measure in the field. Navigation Timing and Resource Timing expose two DNS-related timings:

domainLookupStart is when DNS lookup begins.

domainLookupEnd is when DNS lookup ends.

Calculating total DNS lookup time can be done by subtracting the start metric from the end metric:

// Measuring DNS lookup time
const [pageNav] = performance.getEntriesByType('navigation');
const totalLookupTime = pageNav.domainLookupEnd - pageNav.domainLookupStart;

Caution: You can’t always rely on timings to be populated. Timings provided in both APIs will have a value of 0 in some cases. For example, a DNS lookup may be served by a local cache. Additionally, any timings for cross-origin requests may be unavailable if those origins don’t set a Timing-Allow-Origin response header.
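One way to handle this in practice (a minimal sketch, not part of the original examples) is to check for zeroed-out values before computing a duration, for instance when measuring DNS lookup time across resource entries:

// Compute DNS lookup time per resource, skipping entries whose
// detailed timings are unavailable (for example, cross-origin
// resources served without a Timing-Allow-Origin header).
const resources = performance.getEntriesByType('resource');

for (const entry of resources) {
  if (entry.domainLookupStart === 0 && entry.domainLookupEnd === 0) {
    // Detailed timings are unavailable for this entry; skip it.
    continue;
  }

  const dnsTime = entry.domainLookupEnd - entry.domainLookupStart;
  console.log(`${entry.name}: DNS lookup took ${dnsTime}ms`);
}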

Connection negotiation

Another contributing factor to loading performance is connection negotiation, which is latency incurred when connecting to a web server. If HTTPS is involved, this process will also include TLS negotiation time. The connection phase consists of three timings:


connectStart is when the browser starts to open a connection to a web server.

secureConnectionStart marks when the client begins TLS negotiation.

connectEnd is when the connection to the web server has been established.

Measuring total connection time is similar to measuring total DNS lookup time: you subtract the start timing from the end timing. However, there’s an additional secureConnectionStart property that may be 0 if HTTPS isn’t used or if the connection is persistent. If you want to measure TLS negotiation time, you’ll need to keep that in mind:

// Quantifying total connection time
const [pageNav] = performance.getEntriesByType('navigation');
const connectionTime = pageNav.connectEnd - pageNav.connectStart;
let tlsTime = 0; // <-- Assume 0 to start with

// Was there TLS negotiation?
if (pageNav.secureConnectionStart > 0) {
  // Awesome! Calculate it!
  tlsTime = pageNav.connectEnd - pageNav.secureConnectionStart;
}

Once DNS lookup and connection negotiation end, timings related to fetching documents and their dependent resources come into play.

Requests and responses

Loading performance is affected by two types of factors:

Extrinsic factors: These are things like latency and bandwidth. Beyond choosing a hosting company and a CDN, they’re (mostly) out of our control, as users can access the web from anywhere.

Intrinsic factors: These are things like server and client-side architectures, as well as resource size and our ability to optimize for those things, which are within our control.

Both types of factors affect loading performance. Timings related to these factors are vital, as they describe how long resources take to download. Both Navigation Timing and Resource Timing describe loading performance with the following metrics:

fetchStart marks when the browser begins to fetch a resource (Resource Timing) or a document for a navigation request (Navigation Timing). This precedes the actual request, and is the point at which the browser is checking caches (for example, HTTP and Cache instances).

workerStart marks when a request starts being handled within a service worker’s fetch event handler. This will be 0 when no service worker is controlling the current page.

requestStart is when the browser makes the request.

responseStart is when the first byte of the response arrives.

responseEnd is when the last byte of the response arrives.

These timings allow you to measure multiple aspects of loading performance, such as cache lookup within a service worker and download time:

// Cache seek plus response time of the current document
const [pageNav] = performance.getEntriesByType('navigation');
const fetchTime = pageNav.responseEnd - pageNav.fetchStart;

// Service worker time plus response time
let workerTime = 0;

if (pageNav.workerStart > 0) {
  workerTime = pageNav.responseEnd - pageNav.workerStart;
}

You can also measure other aspects of request/response latency:

const [pageNav] = performance.getEntriesByType('navigation');

// Request time only (excluding redirects, DNS, and connection/TLS time)
const requestTime = pageNav.responseStart - pageNav.requestStart;

// Response time only (download)
const responseTime = pageNav.responseEnd - pageNav.responseStart;

// Request + response time
const requestResponseTime = pageNav.responseEnd - pageNav.requestStart;

Other measurements you can make


Navigation Timing and Resource Timing are useful for more than what the examples above outline. Here are some other situations with relevant timings that may be worth exploring (a short sketch putting them to use follows the list):

Page redirects: Redirects are an overlooked source of added latency, especially redirect chains. Latency gets added in a number of ways, such as HTTP-to-HTTPS hops, as well as 302/uncached 301 redirects. The redirectStart, redirectEnd, and redirectCount timings are helpful in assessing redirect latency.

Document unloading: In pages that run code in an unload event handler, the browser must execute that code before it can navigate to the next page. unloadEventStart and unloadEventEnd measure document unloading.

Document processing: Document processing time may not be consequential unless your website sends very large HTML payloads. If this describes your situation, the domInteractive, domContentLoadedEventStart, domContentLoadedEventEnd, and domComplete timings may be of interest.
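Here is a brief sketch of those measurements, following the same subtraction pattern as the earlier examples (the variable names are illustrative):

// Redirect, unload, and document processing timings from the
// navigation entry.
const [pageNav] = performance.getEntriesByType('navigation');

// Redirect latency (0 if there were no redirects, or if cross-origin
// redirects withheld their timings):
const redirectTime = pageNav.redirectEnd - pageNav.redirectStart;
const redirectHops = pageNav.redirectCount;

// Time spent running the previous document's unload handler:
const unloadTime = pageNav.unloadEventEnd - pageNav.unloadEventStart;

// Document processing, from DOM interactive to fully complete:
const domProcessingTime = pageNav.domComplete - pageNav.domInteractive;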

Warning: Timings related to document unloading and processing are available only in Navigation Timing, as they only apply to navigation requests.

Acquiring timings in application code

All of the examples shown so far use performance.getEntriesByType, but there are other ways to query the performance entry buffer, such as performance.getEntriesByName and performance.getEntries. These methods are fine when only light analysis is needed. In other situations, though, they can introduce excessive main thread work by iterating over a large number of entries, or even repeatedly polling the performance buffer to find new entries.
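For completeness, here’s a brief sketch of performance.getEntriesByName, which looks entries up by URL (the URL below is hypothetical; substitute a resource your page actually loads):

// Look up Resource Timing entries for a single, known URL.
// 'https://example.com/styles.css' is a placeholder.
const [stylesheet] = performance.getEntriesByName('https://example.com/styles.css');

if (stylesheet) {
  console.log(`Stylesheet fetch took ${stylesheet.duration}ms`);
}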

The recommended approach for collecting entries from the performance entry buffer is to use a PerformanceObserver . PerformanceObserver listens for performance entries, and provides them as they’re added to the buffer:

// Create the performance observer:
const perfObserver = new PerformanceObserver((observedEntries) => {
  // Get all resource entries collected so far:
  const entries = observedEntries.getEntries();

  // Iterate over entries:
  for (let i = 0; i < entries.length; i++) {
    // Do the work!
  }
});

// Run the observer for Navigation Timing entries:
perfObserver.observe({ type: 'navigation', buffered: true });

// Run the observer for Resource Timing entries:
perfObserver.observe({ type: 'resource', buffered: true });

Note: Adding the buffered option to a performance observer’s observe() call ensures that performance entries added to the buffer prior to the instantiation of the performance observer are observable.

This method of collecting timings may feel awkward when compared to directly accessing the performance entry buffer, but it’s preferable to tying up the main thread with work that doesn’t serve a critical and user-facing purpose.

Phoning home

Once you’ve collected all the timings you need, you can send them to an endpoint for further analysis. Two ways to do this are with either navigator.sendBeacon or a fetch with the keepalive option set. Both methods will send a request to a specified endpoint in a non-blocking way, and the request will be queued in a way that outlives the current page session if need be:

// Caution: If you have lots of performance entries, don't
// do this. This is an example for illustrative purposes.
const data = JSON.stringify(performance.getEntries());

// The endpoint to transmit the encoded data to
const endpoint = '/analytics';

// Check for fetch keepalive support
if ('keepalive' in Request.prototype) {
  fetch(endpoint, {
    method: 'POST',
    body: data,
    keepalive: true,
    headers: {
      'Content-Type': 'application/json'
    }
  });
} else if ('sendBeacon' in navigator) {
  // Use sendBeacon as a fallback
  navigator.sendBeacon(endpoint, data);
}

In this example, the JSON string will arrive in a POST payload that you can decode and process/store in an application backend as needed.
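What that backend looks like is up to you. As a rough illustration only, here’s a minimal Node.js handler using Express (the framework, port, and route are assumptions, not something this guide prescribes). Note that navigator.sendBeacon sends string payloads as text/plain, so the handler reads the raw body and parses the JSON itself:

// A hypothetical Express endpoint for receiving the timings.
const express = require('express');
const app = express();

// Read the body as raw text regardless of Content-Type, because
// navigator.sendBeacon sends strings as text/plain while the
// fetch() path above sends application/json.
app.use(express.text({ type: '*/*' }));

app.post('/analytics', (req, res) => {
  let entries;

  try {
    entries = JSON.parse(req.body);
  } catch (err) {
    return res.sendStatus(400); // Malformed payload
  }

  // Process/store the entries here (database, logging pipeline, etc.).
  console.log(`Received ${entries.length} performance entries`);

  res.sendStatus(204); // No response body needed
});

app.listen(3000);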

Wrapping up

Once you have metrics collected, it’s up to you to figure out how to analyze that field data. When analyzing field data, there are a few general rules to follow to ensure you’re drawing meaningful conclusions:

Avoid averages, as they’re not representative of any one user’s experience, and may be skewed by outliers.

Rely on percentiles. In datasets of time-based performance metrics, lower is better. This means that when you prioritize low percentiles, you’re only paying attention to the fastest experiences.

Prioritize the long tail of values. When you prioritize experiences at the 75th percentile or higher, you’re putting your focus where it belongs: on the slowest experiences.
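If you’re aggregating timings yourself rather than relying on an analytics product, percentiles are straightforward to compute. Here’s a minimal sketch using the nearest-rank method (collectedFetchTimes is a hypothetical array standing in for timings you’ve gathered in the field):

// Compute the p-th percentile of an array of timings using the
// nearest-rank method. Analytics tools may use other interpolations.
function percentile(values, p) {
  if (values.length === 0) {
    return 0;
  }

  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.max(0, Math.ceil((p / 100) * sorted.length) - 1);

  return sorted[rank];
}

// Example: the 75th percentile of collected fetch times.
// 'collectedFetchTimes' is a hypothetical array of field timings.
const p75 = percentile(collectedFetchTimes, 75);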

This guide isn’t meant to be an exhaustive resource on Navigation or Resource Timing, but a starting point.

With these APIs and the data they provide, you’ll be better equipped to understand how loading performance is experienced by real users, which will give you more confidence in diagnosing and addressing loading performance problems in the field.