Oxidizing Source Maps with Rust and WebAssembly

A detailed look at how we replaced the most performance-sensitive portions of the source-map JavaScript library's parser with Rust code compiled to WebAssembly. The results: the WebAssembly is up to 5.89 times faster than the JavaScript implementation on realistic benchmarks operating on real-world source maps! Performance is also more consistent: relative standard deviations decreased.
We hope that, by sharing our experience, we inspire others to rewrite performance-sensitive JavaScript in Rust via WebAssembly.

Link: https://hacks.mozilla.org/2018/01/oxidizing-source-maps-with-rust-and-webassembly/

Third-Party Scripts

Trent Walton:
My latest realization is that delivering a performant, accessible, responsive, scalable website isn’t enough: I also need to consider the impact of third-party scripts. No matter how solid I think my prototype is, it doesn’t absolve me from paying attention to what happens during implementation, specifically when it comes to the addition of these third-party scripts.
I recently had a conversation with a friend working on quite a high profile e-commerce site. They were hired to develop …

Third-Party Scripts is a post from CSS-Tricks

Link: https://css-tricks.com/third-party-scripts/

Monitoring unused CSS by unleashing the raw power of the DevTools Protocol

From Johnny’s dev blog:
The challenge: Calculate the real percentage of unused CSS
Our goal is to create a script that will measure the percentage of unused CSS of this page. Notice that the user can interact with the page and navigate using the different tabs.
DevTools can be used to measure the amount of unused CSS in the page using the Coverage tab. Notice that the percentage of unused CSS after the page loads is ~55%, but after clicking …

Monitoring unused CSS by unleashing the raw power of the DevTools Protocol is a post from CSS-Tricks

Link: http://blog.cowchimp.com/monitoring-unused-css-by-unleashing-the-devtools-protocol/

Front-End Performance Checklist

Vitaly Friedman swings wide with a massive list of performance considerations. It’s a well-considered mix of old tactics (cutting the mustard, progressive enhancement, etc.) and newer considerations (tree shaking, prefetching, etc.). I like the inclusion of a quick wins section since so much can be done for little effort; it’s important to do those things before getting buried in more difficult performance tasks.
Speaking of considering performance, Philip Walton recently dug into what interactive actually means, in a world …

Front-End Performance Checklist is a post from CSS-Tricks

Link: https://www.smashingmagazine.com/2018/01/front-end-performance-checklist-2018-pdf-pages/

Breaking Down the Performance API

JavaScript’s Performance API is a welcome addition: it hands us tools to accurately measure how web pages perform, something we have long attempted with workarounds that were never easy or precise enough.
That said, getting started with the API is harder than actually using it. Although I’ve seen parts of it covered here and there in other posts, the big picture that ties everything together is hard to find.
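For a first taste, the User Timing part of the API lets you drop named marks and measure the span between them; a minimal sketch (the mark and measure names here are made up):

```javascript
// Mark two points in time, then measure the span between them with
// the User Timing API (performance.mark / performance.measure).
performance.mark('work-start');
for (let i = 0, sum = 0; i < 1e5; i++) sum += i; // some work to time
performance.mark('work-end');
performance.measure('work', 'work-start', 'work-end');

const [measure] = performance.getEntriesByName('work');
console.log(measure.duration); // elapsed time in milliseconds
```

The same calls work in browsers and in Node, which makes it handy for instrumenting both ends.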

One look at …

Breaking Down the Performance API is a post from CSS-Tricks

Link: https://css-tricks.com/breaking-performance-api/

Comparing Novel vs. Tried and True Image Formats

Popular image file formats such as JPG, PNG, and GIF have been around for a long time. They are relatively efficient and web developers have introduced many optimization solutions to further compress their size. However, the era of JPGs, PNGs, and GIFs may be coming to an end as newer, more efficient image file formats aim to take their place.
We’re going to explore these newer file formats in this post along with an analysis of how they stack up …

Comparing Novel vs. Tried and True Image Formats is a post from CSS-Tricks

Link: https://css-tricks.com/comparing-novel-vs-tried-true-image-formats/

Actual Input Latency: cross-browser measurement and the Hasal testing framework

Editor’s Note: This post is also featured on the 2017 Performance Calendar. This is a story about an engineering team at Mozilla, based in Taipei, that was tasked with measuring performance and solving some specific performance bottlenecks in Firefox. It is also a story about user-reported performance issues that were turned into actionable insights. It […]

Link: https://hacks.mozilla.org/2017/12/actual-input-latency-and-the-hasal-testing-framework/

JavaScript Performance Optimization Tips: An Overview

In this post, there’s lots of stuff to cover across a wide and wildly changing landscape. It’s also a topic that covers everyone’s favorite: The JS Framework of the Month™.
We’ll try to stick to the “Tools, not rules” mantra and keep the JS buzzwords to a minimum. Since we won’t be able to cover everything related to JS performance in a 2,000-word article, make sure you read the references and do your own research afterwards.
But before we dive into specifics, let’s get a broader understanding of the issue by answering the following: what is considered as performant JavaScript, and how does it fit into the broader scope of web performance metrics?
Setting the Stage
First of all, let’s get the following out of the way: if you’re testing exclusively on your desktop device, you’re excluding more than 50% of your users.

This trend will only continue to grow, as emerging markets’ preferred gateway to the web is a sub-$100 Android device. The era of the desktop as the main device to access the Internet is over, and the next billion internet users will visit your sites primarily through a mobile device.
Testing in Chrome DevTools’ device mode isn’t a valid substitute for testing on a real device. Using CPU and network throttling helps, but it’s a fundamentally different beast. Test on real devices.
Even if you are testing on real mobile devices, you’re probably doing so on your brand spanking new $600 flagship phone. The thing is, that’s not the device your users have. The median device is something along the lines of a Moto G1 — a device with under 1GB of RAM, and a very weak CPU and GPU.
Let’s see how it stacks up when parsing an average JS bundle.

Addy Osmani: Time spent in JS parse & eval for average JS.

Ouch. While this image only covers the parse and compile time of the JS (more on that later) and not general performance, it’s strongly correlated and can be treated as an indicator of general JS performance.
To quote Bruce Lawson, “it’s the World-Wide Web, not the Wealthy Western Web”. So, your target for web performance is a device that’s ~25x slower than your MacBook or iPhone. Let that sink in for a bit. But it gets worse. Let’s see what we’re actually aiming for.
What Exactly is Performant JS Code?
Now that we know what our target platform is, we can answer the next question: what is performant JS code?
While there’s no absolute classification of what defines performant code, we do have a user-centric performance model we can use as a reference: The RAIL model.

Sam Saccone: Planning for Performance: PRPL

If your app responds to a user action in under 100ms, the user perceives the response as immediate. This applies to tappable elements, but not when scrolling or dragging.
On a 60Hz monitor, we want to target a constant 60 frames per second when animating and scrolling. That results in around 16ms per frame. Out of that 16ms budget, you realistically have 8–10ms to do all the work, the rest taken up by the browser internals and other variances.
Idle work
If you have an expensive, continuously running task, make sure to slice it into smaller chunks to allow the main thread to react to user inputs. You shouldn’t have a task that delays user input for more than 50ms.
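One way to do that slicing, as a sketch (processChunked and its arguments are illustrative names, not from the article): do a bounded amount of work, then yield to the event loop with setTimeout so queued input events get a chance to run before the next chunk.

```javascript
// Process `items` in chunks of `chunkSize`, yielding to the event
// loop between chunks so the main thread stays responsive.
function processChunked(items, workFn, chunkSize, done) {
  let index = 0;
  function doChunk() {
    const end = Math.min(index + chunkSize, items.length);
    // Do at most chunkSize units of work in this turn.
    for (; index < end; index++) {
      workFn(items[index]);
    }
    if (index < items.length) {
      setTimeout(doChunk, 0); // let pending input events run first
    } else if (done) {
      done();
    }
  }
  doChunk();
}
```

In a browser you might prefer requestIdleCallback over setTimeout for the scheduling, falling back to setTimeout where it isn’t supported.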
You should target a page load in under 1000ms. Anything over, and your users start getting twitchy. This is a pretty difficult goal to reach on mobile devices as it relates to the page being interactive, not just having it painted on screen and scrollable. In practice, it’s even less:

Fast By Default: Modern Loading Best Practices (Chrome Dev Summit 2017)

In practice, aim for the 5s time-to-interactive mark. It’s what Chrome uses in their Lighthouse audit.
Now that we know the metrics, let’s have a look at some of the statistics:

53% of visits are abandoned if a mobile site takes more than three seconds to load
1 out of 2 people expect a page to load in less than 2 seconds
77% of mobile sites take longer than 10 seconds to load on 3G networks
19 seconds is the average load time for mobile sites on 3G networks.

And a bit more, courtesy of Addy Osmani:

apps became interactive in 8 seconds on desktop (using cable) and 16 seconds on mobile (Moto G4 over 3G)
at the median, developers shipped 410KB of gzipped JS for their pages.

Feeling sufficiently frustrated? Good. Let’s get to work and fix the web. ✊
Context is Everything
You might have noticed that the main bottleneck is the time it takes to load up your website. Specifically, the JavaScript download, parse, compile and execution time. There’s no way around it but to load less JavaScript and load smarter.
But what about the actual work that your code does aside from just booting up the website? There has to be some performance gains there, right?
Before you dive into optimizing your code, consider what you’re building. Are you building a framework or a VDOM library? Does your code need to do thousands of operations per second? Are you doing a time-critical library for handling user input and/or animations? If not, you may want to shift your time and energy somewhere more impactful.
It’s not that writing performant code doesn’t matter, but it usually makes little to no impact in the grand scheme of things, especially when talking about microoptimizations. So, before you get into a Stack Overflow argument about .map vs .forEach vs for loops by comparing results from JSperf.com, make sure to see the forest and not just the trees. 50k ops/s might sound 50× better than 1k ops/s on paper, but it won’t make a difference in most cases.
Continue reading JavaScript Performance Optimization Tips: An Overview on SitePoint.

Link: https://www.sitepoint.com/javascript-performance-optimization-tips-an-overview/

Progressive Web Apps: A Crash Course

Progressive Web Apps (PWAs) try to overlap the worlds of mobile web apps and native mobile apps by offering the best features of each to mobile users.
They offer an app-like user experience (splash screens and home screen icons), they’re served from HTTPS-secured servers, they can load quickly (thanks to page load performance best practices) even in low quality or slow network conditions, and they have offline support, instant loading and push notifications. The concept of PWAs was first introduced by Google, and is still supported by many Chrome features and great tools, such as Lighthouse, an open-source tool for accessibility, performance and progressiveness auditing which we’ll look into a bit later.
Throughout this crash course, we’ll build a PWA from scratch with ES6 and React and optimize it step by step with Lighthouse until we achieve the best results in terms of UX and performance.
The term progressive simply means that PWAs are designed in such a way that they can be progressively enhanced in modern browsers, where many new features and technologies are already supported, but should also work fine in old browsers with no cutting-edge features.
Native vs Mobile = Progressive
A native app is distributable and downloadable from the mobile OS’s respective app store. Mobile web apps, on the other hand, are accessible from within a web browser by simply entering their address or URL. From the user’s point of view, launching a browser and navigating to an address is much more convenient than going to the app store and downloading, installing, then launching the app. From the developer/owner’s point of view, paying a one-time fee for getting an app store account and then uploading their apps to become accessible to users worldwide is better than having to deal with the complexities of web hosting.
A native app can be used offline. In the case of remote data that needs to be retrieved from some API server, the app can be easily conceived to support some sort of SQLite caching of the latest accessed data.
A mobile web app is indexable by search engines like Google, and through search engine optimization you can reach more users. This is also true for native apps, as the app stores have their own search engines where developers can apply different techniques — commonly known as App Store Optimization — to reach more users.
A native app loads instantly, at least with a splash screen, until all resources are ready for the app to execute.
These are the most important perceived differences. Each approach to app distribution has advantages for the end user (regarding user experience, availability etc.) and app owner (regarding costs, reach of customers etc.). Taking that into consideration, Google introduced PWAs to bring the best features of each side into one concept. These aspects are summarized in this list introduced by Alex Russell, a Google Chrome engineer. (Source: Infrequently Noted.)

Responsive: to fit any form factor.
Connectivity independent: progressively-enhanced with service workers to let them work offline.
App-like interactions: adopt a Shell + Content application model to create appy navigations & interactions.
Fresh: transparently always up-to-date thanks to the service worker update process.
Safe: served via TLS (a service worker requirement) to prevent snooping.
Discoverable: are identifiable as “applications” thanks to W3C Manifests and service worker registration scope allowing search engines to find them.
Re-engageable: can access the re-engagement UIs of the OS; e.g. push notifications.
Installable: to the home screen through browser-provided prompts, allowing users to “keep” apps they find most useful without the hassle of an app store.
Linkable: meaning they’re zero-friction, zero-install, and easy to share. The social power of URLs matters.
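Several of these traits (connectivity independence, freshness, safety) hinge on a service worker, which the page has to register. A minimal registration sketch — '/sw.js' is an assumed file name, and the feature check keeps old browsers on the plain, still-functional site:

```javascript
// Register a service worker when the browser supports it; otherwise
// resolve to null so the app simply runs without offline support.
function registerServiceWorker() {
  if (typeof navigator !== 'undefined' && 'serviceWorker' in navigator) {
    return navigator.serviceWorker.register('/sw.js');
  }
  return Promise.resolve(null);
}
```

Because registration is a no-op where unsupported, this is progressive enhancement in miniature: nothing breaks, capable browsers just get more.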

Lighthouse is a web app auditing tool created by Google. It’s integrated into Chrome DevTools and can be triggered from the Audits panel.
You can also use Lighthouse as a NodeJS CLI tool:
npm install -g lighthouse

You can then run it with:
lighthouse https://sitepoint.com/

Lighthouse can also be installed as a Chrome extension, but Google recommends using the version integrated with DevTools and resorting to the extension only if you somehow can’t use the DevTools.
Please note that you need to have Chrome installed on your system to be able to use Lighthouse, even if you’re using the CLI-based version.
Building your First PWA from Scratch
In this section, we’ll be creating a progressive web app from scratch. First, we’ll create a simple web application using React and Reddit’s API. Next, we’ll be adding PWA features by following the instructions provided by the Lighthouse report.
Please note that the public no-authentication Reddit API has CORS headers enabled so you can consume it from your client-side app without an intermediary server.
Before we start, this course will assume you have a development environment setup with NodeJS and NPM installed. If you don’t, start with the awesome Homestead Improved, which is running the latest versions of each and is ready for development and testing out of the box.
We start by installing Create React App, a project boilerplate created by the React team that saves you the hassle of webpack configuration.
npm install -g create-react-app
create-react-app react-pwa
cd react-pwa/

The application shell architecture
The application shell is an essential concept of progressive web apps. It’s simply the minimal HTML, CSS and JavaScript code responsible for rendering the user interface.

This app shell has many benefits for performance. You can cache the application shell so when users visit your app next time, it will be loaded instantly because the browser doesn’t need to fetch assets from a remote server.
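Inside the service worker, the shell is typically served cache-first from the fetch handler. Here’s a sketch of just the decision logic, with the cache and the network call injected as arguments so it stays easy to test (the names are ours, not from the course):

```javascript
// Cache-first lookup: answer from the cache when the response is
// already there (the app shell), otherwise fall through to the network.
function cacheFirst(request, cache, fetchFn) {
  return cache.match(request).then(cached => cached || fetchFn(request));
}
```

In a real worker you would wire this up inside a 'fetch' event listener, passing the opened shell cache and the global fetch, and respond with the resulting promise via event.respondWith().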
For building a simple UI we’ll use Material UI, an implementation of Google Material design in React.
Let’s install the package from NPM:
npm install material-ui --save

Next open src/App.js then add:
import React, { Component } from 'react';
import MuiThemeProvider from 'material-ui/styles/MuiThemeProvider';
import AppBar from 'material-ui/AppBar';
import { Card, CardActions, CardHeader, CardTitle, CardText } from 'material-ui/Card';
import FlatButton from 'material-ui/FlatButton';
import IconButton from 'material-ui/IconButton';
import NavigationClose from 'material-ui/svg-icons/navigation/close';

import logo from './logo.svg';
import './App.css';

class App extends Component {

  constructor(props) {
    super(props);
    this.state = {
      posts: []
    };
  }

  render() {
    return (
      <MuiThemeProvider>
        <div>
          <AppBar
            title={<span>React PWA</span>}
            iconElementLeft={<IconButton><NavigationClose /></IconButton>}
            iconElementRight={<FlatButton onClick={() => this.fetchNext('reactjs', this.state.lastPostName)} label="next" />}
          />
          {this.state.posts.map(function (el, index) {
            return (
              <Card key={index}>
                <CardHeader
                  title={el.data.title}
                  actAsExpander={el.data.is_self === true}
                />
                <CardText expandable={el.data.is_self === true}>
                  {el.data.selftext}
                  <FlatButton label="View" onClick={() => {
                  }} />
                </CardText>
              </Card>
            );
          })}
          <FlatButton onClick={() => this.fetchNext('reactjs', this.state.lastPostName)} label="next" />
        </div>
      </MuiThemeProvider>
    );
  }
}

export default App;

Next we need to fetch the Reddit posts using two methods, fetchFirst() and fetchNext(), calling the first of them from componentWillMount():
fetchFirst(url) {
  var that = this;
  if (url) {
    fetch('https://www.reddit.com/r/' + url + '.json').then(function (response) {
      return response.json();
    }).then(function (result) {
      that.setState({ posts: result.data.children, lastPostName: result.data.children[result.data.children.length - 1].data.name });
    });
  }
}

fetchNext(url, lastPostName) {
  var that = this;
  if (url) {
    fetch('https://www.reddit.com/r/' + url + '.json' + '?count=' + 25 + '&after=' + lastPostName).then(function (response) {
      return response.json();
    }).then(function (result) {
      that.setState({ posts: result.data.children, lastPostName: result.data.children[result.data.children.length - 1].data.name });
    });
  }
}

componentWillMount() {
  this.fetchFirst('reactjs');
}

You can find the source code in this GitHub Repository.
Before you can run audits against your app you’ll need to make a build and serve your app locally using a local server:
npm run build

This command invokes the build script in package.json and produces a build in the react-pwa/build folder.
Now you can use any local server to serve your app. On Homestead Improved you can simply point the nginx virtual host to the build folder and open homestead.app in the browser, or you can use the serve package via NodeJS:
npm install -g serve
cd build
serve

With serve, your app will be served locally from http://localhost:5000/.

You can audit your app without any problems, but in case you want to test it on a mobile device you can also use services like surge.sh to deploy it with one command!
npm install --global surge

Next, run surge from within any directory to publish that directory onto the web.
You can find the hosted version of this app from this link.
Now let’s open Chrome DevTools, go to the Audits panel and click Perform an audit.

From the report we can see we already have a score of 45/100 for Progressive Web App and 68/100 for Performance.
Under Progressive Web App we have 6 failed audits and 5 passed audits. That’s because the generated project already has some PWA features added by default, such as a web manifest, a viewport meta tag and a <noscript> tag.
Under Performance we have diagnostics and different calculated metrics, such as First meaningful paint, First Interactive, Consistently Interactive, Perceptual Speed Index and Estimated Input Latency. We’ll look into these later on.
Lighthouse suggests improving page load performance by reducing the length of Critical Render Chains either by reducing the download size or deferring the download of unnecessary resources.
Please note that the Performance score and metrics values can change between different auditing sessions on the same machine, because they’re affected by many varying conditions such as your current network state and also your current machine state.
Continue reading Progressive Web Apps: A Crash Course on SitePoint.

Link: https://www.sitepoint.com/progressive-web-apps-a-crash-course/

Comparing Browser Page Load Time: An Introduction to Methodology

On blog.mozilla.org, we shared results of a speed comparison study to show how fast Firefox Quantum with Tracking Protection enabled is compared to other browsers. In this companion post, we share some insights into the methodology behind these page load time comparison studies and benchmarks. Our study focused on news web sites, which tend to come with an abundance of trackers, and uses the Navigation Timing API as a data source.
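As a sketch of the kind of measurement involved: the Navigation Timing API exposes millisecond timestamps such as navigationStart and loadEventEnd (attribute names from the Level 1 performance.timing interface), and a page load time is simply their difference. The helper name here is ours:

```javascript
// Derive total page load time from a Navigation Timing record.
// Attribute names follow the performance.timing interface.
function pageLoadTime(timing) {
  return timing.loadEventEnd - timing.navigationStart;
}

// In a browser, after the load event: pageLoadTime(performance.timing)
```

Collecting this number across many page visits is what makes cross-browser comparisons like the one in the study possible.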

Link: https://hacks.mozilla.org/2017/11/comparing-browser-page-load-time-an-introduction-to-methodology/