What makes WebAssembly fast?

This is the fifth part in a series on WebAssembly and what makes it fast. If you haven’t read the others, we recommend starting from the beginning.

In the last article, I explained that programming with WebAssembly or JavaScript is not an either/or choice. We don’t expect that too many developers will be writing full WebAssembly code bases.

So developers don’t need to choose between WebAssembly and JavaScript for their applications. However, we do expect that developers will swap out parts of their JavaScript code for WebAssembly.

For example, the team working on React could replace their reconciler code (aka the virtual DOM) with a WebAssembly version. People who use React wouldn’t have to do anything… their apps would work exactly as before, except they’d get the benefits of WebAssembly.
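
To make the shape of such a swap concrete, here's a minimal sketch of loading a WebAssembly module and calling one of its exports from JavaScript. The file name and the exported add function are hypothetical; everything around the call stays ordinary JavaScript.

    // Hypothetical module: fetch the .wasm bytes, compile and instantiate them,
    // then call an exported function just like any other JavaScript function.
    const response = await fetch("module.wasm");
    const bytes = await response.arrayBuffer();
    const { instance } = await WebAssembly.instantiate(bytes);

    console.log(instance.exports.add(2, 3)); // the rest of the app is unchanged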

The reason developers like those on the React team would make this swap is because WebAssembly is faster. But what makes it faster?

What does JavaScript performance look like today?

Before we can understand the differences in performance between JavaScript and WebAssembly, we need to understand the work that the JS engine does.

This diagram gives a rough picture of what the start-up performance of an application might look like today.

The time that the JS engine spends doing any one of these tasks depends on the JavaScript the page uses. This diagram isn’t meant to represent precise performance numbers. Instead, it’s meant to provide a high-level model of how performance for the same functionality would be different in JS vs WebAssembly.

Diagram showing 5 categories of work in current JS engines

Each bar shows the time spent doing a particular task.

  • Parsing—the time it takes to process the source code into something that the interpreter can run.
  • Compiling + optimizing—the time that is spent in the baseline compiler and optimizing compiler. Some of the optimizing compiler’s work is not on the main thread, so it is not included here.
  • Re-optimizing—the time the JIT spends readjusting when its assumptions have failed, both re-optimizing code and bailing out of optimized code back to the baseline code.
  • Execution—the time it takes to run the code.
  • Garbage collection—the time spent cleaning up memory.

One important thing to note: these tasks don’t happen in discrete chunks or in a particular sequence. Instead, they will be interleaved. A little bit of parsing will happen, then some execution, then some compiling, then some more parsing, then some more execution, etc.

The performance that this breakdown brings is a big improvement over the early days of JavaScript, which would have looked more like this:

Diagram showing 3 categories of work in past JS engines (parse, execute, and garbage collection) with times being much longer than previous diagram

In the beginning, when it was just an interpreter running the JavaScript, execution was pretty slow. The introduction of JITs drastically sped up execution time.

The tradeoff is the overhead of monitoring and compiling the code. If JavaScript developers kept writing JavaScript in the same way that they did then, the parse and compile times would be tiny. But the improved performance led developers to create larger JavaScript applications.

This means there’s still room for improvement.

How does WebAssembly compare?

Here’s an approximation of how WebAssembly would compare for a typical web application.

Diagram showing 3 categories of work in WebAssembly (decode, compile + optimize, and execute) with times being much shorter than either of the previous diagrams

There are slight variations between browsers in how they handle all of these phases. I’m using SpiderMonkey as my model here.

Fetching

This isn’t shown in the diagram, but one thing that takes up time is simply fetching the file from the server.

Because WebAssembly is more compact than JavaScript, fetching it is faster. Even though compression algorithms can significantly reduce the size of a JavaScript bundle, the compressed binary representation of WebAssembly is still smaller.

This means it takes less time to transfer it between the server and the client. This is especially true over slow networks.
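
In current browsers, the standard WebAssembly.instantiateStreaming API also lets compilation start while the file is still coming over the network, so the fetch time overlaps with the compile time. A minimal sketch (file name hypothetical):

    // Compilation can begin as the bytes arrive, instead of waiting
    // for the whole download to finish.
    // (The server needs to serve the file with the application/wasm MIME type.)
    const { instance } = await WebAssembly.instantiateStreaming(
      fetch("module.wasm")
    );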

Parsing

Once it reaches the browser, JavaScript source gets parsed into an Abstract Syntax Tree.

Browsers often do this lazily, only parsing what they really need to at first and just creating stubs for functions which haven’t been called yet.

From there, the AST is converted to an intermediate representation (called bytecode) that is specific to that JS engine.

In contrast, WebAssembly doesn’t need to go through this transformation because it is already an intermediate representation. It just needs to be decoded and validated to make sure there aren’t any errors in it.

Diagram comparing parsing in current JS engine with decoding in WebAssembly, which is shorter
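
This decode-and-validate step is even exposed to JavaScript: WebAssembly.validate checks a binary without running it, and WebAssembly.compile turns it into a module. A small sketch (file name hypothetical):

    // Validation checks the structure and types of the binary; nothing is executed.
    const bytes = await (await fetch("module.wasm")).arrayBuffer();

    if (WebAssembly.validate(bytes)) {
      const module = await WebAssembly.compile(bytes);
      // module is now ready to be instantiated and run
    }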

Compiling + optimizing

As I explained in the article about the JIT, JavaScript is compiled during the execution of the code. Depending on what types are used at runtime, multiple versions of the same code may need to be compiled.

Different browsers handle compiling WebAssembly differently. Some browsers do a baseline compilation of WebAssembly before starting to execute it, and others use a JIT.

Either way, the WebAssembly starts off much closer to machine code. For example, the types are part of the program. This is faster for a few reasons:

  1. The compiler doesn’t have to spend time running the code to observe what types are being used before it starts compiling optimized code.
  2. The compiler doesn’t have to compile different versions of the same code based on those different types it observes.
  3. More optimizations have already been done ahead of time in LLVM. So less work is needed to compile and optimize it.
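
The second point is easy to see from the JavaScript side. In a sketch like the one below, a JIT that has specialized a function for numbers may need another compiled version (or a more generic, slower one) once strings show up; the exact behavior depends on the engine. A WebAssembly function's parameter types are fixed in the binary, so there is only ever one version to compile.

    // One JavaScript function used with two different types.
    function add(a, b) {
      return a + b;
    }

    add(1, 2);      // the JIT may specialize this for numbers...
    add("a", "b");  // ...and then need another version for strings.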

Diagram comparing compiling + optimizing, with WebAssembly being shorter

Reoptimizing

Sometimes the JIT has to throw out an optimized version of the code and retry it.

This happens when assumptions that the JIT makes based on running code turn out to be incorrect. For example, deoptimization happens when the variables coming into a loop are different than they were in previous iterations, or when a new function is inserted in the prototype chain.
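
As a rough illustration (whether a deoptimization actually happens, and when, is up to the engine), a loop that has only ever seen numbers can have its assumptions invalidated by a single value of a different type:

    // Mostly numbers, with one string near the end.
    // (In real code the loop would need to be hot before the JIT optimizes it at all.)
    const values = [1, 2, 3, 4, "5"];

    let total = 0;
    for (const v of values) {
      // Optimized code may assume v is always a number; when the string
      // shows up, the engine may have to bail out to the baseline version.
      total += v;
    }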

There are two costs to deoptimization. First, it takes some time to bail out of the optimized code and go back to the baseline version. Second, if that function is still being called a lot, the JIT may decide to send it through the optimizing compiler again, so there’s the cost of compiling it a second time.

In WebAssembly, things like types are explicit, so the JIT doesn’t need to make assumptions about types based on data it gathers during runtime. This means it doesn’t have to go through reoptimization cycles.

Diagram showing that reoptimization happens in JS, but is not required for WebAssembly

Executing

It is possible to write JavaScript that executes performantly. To do it, you need to know about the optimizations that the JIT makes. For example, you need to know how to write code so that the compiler can type specialize it, as explained in the article on the JIT.

However, most developers don’t know about JIT internals. Even for those developers who do know about JIT internals, it can be hard to hit the sweet spot. Many coding patterns that people use to make their code more readable (such as abstracting common tasks into functions that work across types) get in the way of the compiler when it’s trying to optimize the code.

Plus, the optimizations a JIT uses are different between browsers, so coding to the internals of one browser can make your code less performant in another.

Because of this, executing code in WebAssembly is generally faster. Many of the optimizations that JITs make to JavaScript (such as type specialization) just aren’t necessary with WebAssembly.

In addition, WebAssembly was designed as a compiler target. This means it was designed for compilers to generate, and not for human programmers to write.

Since human programmers don’t need to program it directly, WebAssembly can provide a set of instructions that are more ideal for machines. Depending on what kind of work your code is doing, these instructions run anywhere from 10% to 800% faster.

Diagram comparing execution, with WebAssembly being shorter

Garbage collection

In JavaScript, the developer doesn’t have to worry about clearing out old variables from memory when they aren’t needed anymore. Instead, the JS engine does that automatically using something called a garbage collector.

This can be a problem if you want predictable performance, though. You don’t control when the garbage collector does its work, so it may come at an inconvenient time. Most browsers have gotten pretty good at scheduling it, but it’s still overhead that can get in the way of your code’s execution.

At least for now, WebAssembly does not support garbage collection at all. Memory is managed manually (as it is in languages like C and C++). While this can make programming more difficult for the developer, it also makes performance more consistent.
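
From JavaScript's point of view, a module's memory is just a flat block of bytes (a WebAssembly.Memory backed by an ArrayBuffer) that the compiled code allocates into and frees from itself; no garbage collector ever walks it. A minimal sketch using only the standard API (sizes and indices are arbitrary):

    // Linear memory: a resizable block of raw bytes, managed by the module.
    const memory = new WebAssembly.Memory({ initial: 1 }); // 1 page = 64 KiB

    // JavaScript (or the compiled code) reads and writes bytes directly;
    // nothing is collected automatically.
    const bytes = new Uint8Array(memory.buffer);
    bytes[0] = 42;
    console.log(bytes[0], memory.buffer.byteLength); // 42 65536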

Diagram showing that garbage collection happens in JS, but is not required for WebAssembly

Conclusion

WebAssembly is faster than JavaScript in many cases because:

  • fetching WebAssembly takes less time because it is more compact than JavaScript, even when compressed.
  • decoding WebAssembly takes less time than parsing JavaScript.
  • compiling and optimizing takes less time because WebAssembly is closer to machine code than JavaScript and already has gone through optimization on the server side.
  • reoptimizing doesn’t need to happen because WebAssembly has types and other information built in, so the JS engine doesn’t need to speculate when it optimizes the way it does with JavaScript.
  • executing often takes less time because there are fewer compiler tricks and gotchas that the developer needs to know to write consistently performant code, plus WebAssembly’s set of instructions are more ideal for machines.
  • garbage collection is not required since the memory is managed manually.

This is why, in many cases, WebAssembly will outperform JavaScript when doing the same task.

There are some cases where WebAssembly doesn’t perform as well as expected, and there are also some changes on the horizon that will make it faster. I’ll cover those in the next article.

About Lin Clark

Lin works in Advanced Development at Mozilla, with a focus on Rust and WebAssembly.



20 comments

  1. Art Scott

    And WA is more Energy-Efficient

    February 28th, 2017 at 10:17

  2. Daniel Earwicker

    Taking the example of React reconciliation, that task takes a JS object tree as input and manipulates the DOM as its “output”.

    As Web Assembly can’t yet work with either of those, would it really speed up that case?

    February 28th, 2017 at 11:49

    1. Lin Clark

      For one, direct access to the DOM is on the roadmap, so that shouldn’t be a limitation once it’s implemented. But the new Fiber reconciler has an effect list as output, not direct DOM manipulation, so you could potentially return an effect list from a WASM-based Fiber reconciler. It’s hard to say what the boundaries would be until we actually start playing with a WASM implementation.

      February 28th, 2017 at 14:32

      1. Mike

        Why bother with the DOM? Why do we still need the DOM / HTML / CSS etc.? Can’t Webassembly be designed to directly take control of the browser rendering, therefore skipping the whole HTML / DOM / CSS mess? Isn’t it time to move forward and use frameworks similar to Microsoft WPF for user interfaces?

        Sorry if I am missing the point here, I know very little about web programming so I may be totally off base with my comments.

        March 7th, 2017 at 09:47

        1. Lin Clark

          Some people have talked about doing this. There are some issues with it, though. For example, there’s an accessibility tree that gets generated from the DOM. This accessibility tree is used by assistive technologies such as screen readers. If the DOM is skipped, then a whole lot of users suddenly can’t access the content of the web. We may see things move in this direction, but there’s a lot of standardization and implementation work that will need to happen first.

          March 7th, 2017 at 09:57

          1. Mike

            I hope that all these issues get resolved soon (this should be top priority if you ask me). As long as we keep relying on the DOM / HTML / CSS etc for web programming this thing will continue to be a complete and utter mess (this is my opinion of course).

            Performance is nice, but having frameworks that are designed and optimized from the ground up to be used for user interface creation is paramount. The DOM / HTML / CSS etc technologies are the epitome of unnecessary complexity; it is my understanding that they were never designed to do what they do today, and it shows just by looking at what appears to be a never-ending stream of different frameworks out there that aim to help ease the creation of web user interfaces. I want better ways to create, maintain and enhance the code that I produce and not have to rely on total hacks like we do today when doing web development.

            Again, my experience with web development is not much (I try to avoid it as much as I can) so if I say something that does not make sense I apologize.

            March 7th, 2017 at 11:26

          2. Mike

            Hi Lin,

            Sorry to keep harping on this but I was wondering if you wouldn’t mind answering one more question related to the subject.

            Would it be possible using the current release of Webassembly for someone to create a compiler that could take the code from a *desktop* application (a rich user interface such as one developed using Microsoft WPF or Delphi) and make Webassembly code that could mimic the behavior of the application in the browser?

            I realize that this may not be optimal and maybe even discouraged but I am curious if this would at least be theoretically possible (without it being an enormous effort).

            My guess is that the answer is yes since I have seen browser games based on Webassembly, so if you can draw fancy graphics in a browser like that it should be relatively easy to draw your typical textboxes, comboboxes, etc. The key here is the idea of being able to skip the whole browser technologies mess (JavaScript, DOM / HTML / CSS etc) or, at the very least, their usage should be absolutely minimal and transparent to the developer.

            Thanks.

            March 8th, 2017 at 09:13

  3. Skatox

    This is so awesome! I can even use this to explain it to my students. Good work!

    February 28th, 2017 at 17:28

  4. Dave

    Very interesting article! Quick question, what software do you use to do your diagrams? I really like the hand-drawn style :)

    February 28th, 2017 at 21:26

    1. Lin Clark

      Thank you! I use Pixelmator with a Wacom Cintiq tablet.

      March 1st, 2017 at 06:20

  5. Chris

    Very nicely written set of articles! Super approachable, solidly factual.

    I think you’ve missed pointing out one trick though: while wasm doesn’t have a runtime that supports luxuries like a garbage collector, as far as I can see there’s nothing stopping anyone from writing one. The real underlying trick of wasm turns the programming model of the web on its head: instead of opting-out (optimizing out!) of high level functionality, wasm is about opting-in to what you really want. By reducing the scope of execution all the way back down to a simple machine, there’s room for different approaches to build on top of that.

    Will Rust’s static analysis model beat the garbage collectors? Will someone come up with something different? Who knows? Hopefully whatever it is, it’ll be able to compile to wasm and be delivered in the same on demand manner as today’s web apps.

    I’m looking forward to seeing how this shakes out.

    March 1st, 2017 at 12:09

  6. anonymous

    tl;dr WebAssembly is a super minifier dressed as new tech

    March 1st, 2017 at 12:10

    1. Lin Clark

      I’d disagree that this is the TL;DR. Minifying only reduces the time spent in fetching and parsing. It doesn’t reduce time spent in the other categories. Hopefully the diagrams make clear that WASM reduces time spent across all 6 categories.

      March 1st, 2017 at 13:20

  7. alexander

    Great articles!
    “execution was pretty slowly” should probably be “execution was pretty slow” ;) .

    March 1st, 2017 at 12:16

    1. Lin Clark

      Thanks, both for the compliment and the catch :)

      March 1st, 2017 at 13:14

  8. Phuong Nguyen

    This is a really great and exciting new technology, I can’t wait until it’s stable so I can try to use it on my web application :)

    March 2nd, 2017 at 06:46

  9. Reddy

    There is a typo on the second paragraph under ‘Fetching’, it should be ‘more compact than…’

    Overall great set of articles, definitely helped me wrap my head around Web Assembly, and what it means for the future of Web.

    March 5th, 2017 at 08:04

    1. Lin Clark

      Thanks! and thanks for the typo catch

      March 5th, 2017 at 08:15

  10. Zonr

    What a charming post! I’d like to add a comment on the following point in the conclusion:

    “reoptimizing doesn’t need to happen …”

    Besides “deoptimization”, there’s actually another kind of “reoptimization” in JITs called “adaptive optimization” [1] which “promotes” the current compiled code to a better one. For example, after running for a while, we’re confident that a function is hot. We then could invest more time to perform some eager optimizations on it (e.g., vectorization) and expect the effort to pay off. This can happen regardless of WebAssembly adoption.

    [1]: https://en.wikipedia.org/wiki/Adaptive_optimization

    March 5th, 2017 at 19:38

  11. John Lehew

    Nice article. WebAssembly will be the next big thing and is great to see. Time spent parsing and compiling JS especially large frameworks and third party libraries adds up and is a huge issue. I’m glad this has been addressed.

    Are WebAssembly modules and allocated memory always cleared when redirecting to a new page? If so, this would address many garbage collection concerns, as memory is cleared before the next page is loaded. Can SPA apps call a function to clear all memory and libraries as a way to perform GC?

    Are there plans to include and install standard WebAssembly libraries in the browser? For instance if Bootstrap compiled their library to WebAssembly, could it be installed as an extension library in the browser and accessed as needed from a webpage? Or would the browser simply cache wasm libraries from a CDN.

    March 9th, 2017 at 01:04
