Solving 3 problems
Despite the evolution in #WebPerf with plenty of new timers and tools, there is still no definitive answer to some basic questions.
Without a simple answer to those questions, it remains challenging to convince others why #PerfMatters.
We should not change how we measure and tune! We should look at the existing data from a different angle.
The root cause for our problems is the assumption that making a metric faster always results in a better experience.
We should stop following the mantra that faster is always better.
FRUSTRATIONindex follows a different mantra. Rather than looking at metrics individually, it looks at the gaps between them. The bigger the gap, the bigger the chance a user gets frustrated.
This is fundamentally different from the past: FRUSTRATIONindex shows how improving one metric can actually widen a gap, thereby impacting the end-user experience in a negative way.
FRUSTRATIONindex looks at 4 key milestones perceived by the end-user while loading a webpage:

- Time To First Byte (TTFB)
- First Contentful Paint (FCP)
- Visually Ready (LCP, falling back to Time To Visually Ready or SpeedIndex)
- The largest of LCP, Time To Interactive (TTI) and pageload
The index ranges from 0 (best) to 100 (worst) and uses 4 color codes to indicate the likelihood of frustration.
```php
function calculateFrustrationIndex($ttfb, $fcp, $visuallyReady, $lcp, $tti, $onload) {
    /*
     * $totalFrustrationPoints = A + B + C + D
     *   A = frustration points between navigation start and TTFB
     *   B = frustration points between TTFB and FCP
     *   C = frustration points between FCP and Visually Ready
     *   D = frustration points between Visually Ready and the largest of LCP, TTI and pageload
     * (Visually Ready = LCP, falling back to Time To Visually Ready or SpeedIndex)
     */
    $totalFrustrationPoints =
        calculateFrustrationPoints($ttfb, 0, 100) +
        calculateFrustrationPoints($fcp, $ttfb, 100) +
        calculateFrustrationPoints($visuallyReady, $fcp, 100) +
        calculateFrustrationPoints(max($lcp, $tti, $onload), $visuallyReady, 250);

    // The index can't be higher than 100; 10000ms is the break point for guaranteed frustration.
    return min((sqrt($totalFrustrationPoints) / 10000) * 100, 100);
}
```
```php
// Frustration only kicks in after a $threshold; beyond it, it grows quadratically.
// The default $threshold of 100ms is based on Jakob Nielsen's response-time limits:
// https://www.nngroup.com/articles/response-times-3-important-limits/
function calculateFrustrationPoints($timer, $reference, $threshold = 100) {
    return pow(max($timer - $reference - $threshold, 0), 2);
}
```
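To make this concrete, here is a quick usage sketch; the timings below are hypothetical values in milliseconds, not taken from any real test:

```php
$frustrationIndex = calculateFrustrationIndex(
    300,   // TTFB
    900,   // FCP
    1600,  // Visually Ready (here: LCP, falling back to SpeedIndex)
    1600,  // LCP
    2400,  // TTI
    2200   // onload
);
echo round($frustrationIndex, 1); // 9.8
```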
Below, the thresholds for steps A, B, C and D are marked in green; frustration only kicks in once a gap grows larger than its threshold.
The initial version uses metrics available on typical WebPageTest (WPT) result pages. There are, however, other elements that can indicate user frustration, so Real User Monitoring (RUM) tools can extend the index with additional elements contributing to frustration. For example:
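As a purely hypothetical sketch of the mechanism (First Input Delay is a placeholder signal here, not one named by the index), a RUM tool could reuse calculateFrustrationPoints() for any extra field measurement:

```php
// Hypothetical extension: score an extra RUM signal with the same
// threshold-then-quadratic-growth rule as the four gaps.
$totalFrustrationPoints += calculateFrustrationPoints($firstInputDelay, 0, 100);
```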
Suppose you have performance measurements for 3 versions of a page:
| Version | TTFB | FCP | SpeedIndex | Pageload | TTI |
|---|---|---|---|---|---|
| 1 | 300ms | 2100ms | 2200ms | 2600ms | 2700ms |
| 2 | 300ms | 750ms | 1600ms | 2600ms | 2700ms |
| 3 | 300ms | 400ms | 1600ms | 2600ms | 2700ms |
Looking at the individual metrics in the traditional way, Version 1 is the slowest and Version 3 is clearly the winner: the 3 tests share the same TTFB, Pageload and TTI, so we would focus on FCP and SpeedIndex. Version 3 has the fastest values for both FCP and SpeedIndex and, following the mantra that faster is better, must therefore have the better user experience.
But is it? FRUSTRATIONindex claims Version 2 as the winner.
The reason? Although Version 3 further improves FCP, the increased gap between FCP and SpeedIndex leads to a bit more frustration.
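You can verify this by scoring all 3 versions with the functions above. One assumption in this sketch: the table reports no LCP, so Visually Ready falls back to SpeedIndex, and SpeedIndex also stands in for the LCP argument (the shared TTI of 2700ms dominates the max() for the last milestone anyway):

```php
$versions = [
    1 => ['ttfb' => 300, 'fcp' => 2100, 'speedIndex' => 2200, 'onload' => 2600, 'tti' => 2700],
    2 => ['ttfb' => 300, 'fcp' => 750,  'speedIndex' => 1600, 'onload' => 2600, 'tti' => 2700],
    3 => ['ttfb' => 300, 'fcp' => 400,  'speedIndex' => 1600, 'onload' => 2600, 'tti' => 2700],
];

foreach ($versions as $version => $t) {
    $index = calculateFrustrationIndex(
        $t['ttfb'], $t['fcp'], $t['speedIndex'],
        $t['speedIndex'], $t['tti'], $t['onload']
    );
    echo "Version $version: " . round($index, 1) . PHP_EOL;
}
// Version 1: 17.3
// Version 2: 12
// Version 3: 14
```

Version 2 scores best (≈12), confirming the ranking above.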
FRUSTRATIONindex is NOT a timer; it is a score that takes the gaps between key timers into account. The longer a transition takes, the bigger the frustration level for the user.