Rust Wasm on Fastly Compute@Edge

Recently I've been playing with several different flavors of WebAssembly in Rust: first Leptos and Dioxus, and most recently Fastly's Compute@Edge. In this blog, I'm going to build a simple API and show how to deploy it to Fastly using their Compute CLI.

To get started, just install the Fastly CLI by following the documentation. After creating an account and an API token, you'll have to run something to the effect of:

fastly profile create <NAME> --token=<FASTLY_API_TOKEN>

Then, create a new Rust Compute service using:

fastly compute init

When prompted, simply choose Rust as the language of choice. This will create a typical Rust binary project with the fastly crate included as a dependency, as well as a fastly.toml file in the root for configuring the Compute service.
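For reference, the starter's src/main.rs looks roughly like this: a minimal sketch of a Compute handler built on the fastly crate (the response body here is my own placeholder, not the template's exact contents):

```rust
use fastly::http::StatusCode;
use fastly::{Error, Request, Response};

// The #[fastly::main] macro marks this function as the entry point
// that runs once per incoming request.
#[fastly::main]
fn main(_req: Request) -> Result<Response, Error> {
    // Build a plain 200 response; the body is just a placeholder.
    Ok(Response::from_status(StatusCode::OK)
        .with_content_type(fastly::mime::TEXT_PLAIN_UTF_8)
        .with_body("Hello from Compute@Edge!\n"))
}
```

Every request gets handed to this function as a Request, and whatever Response you return is what the edge serves.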

Before we continue, let's do some simple optimizations. If you run:

fastly compute build

you will see Fastly creates a bin/main.wasm. If you check the file size using ls -lh bin/main.wasm, you will see that this file is around 2.5 MB. Let's improve that at the Rust release-profile level. I'm just going to add the following to Cargo.toml:

[profile.release]
opt-level = 3     # 0-3
strip = true      # strip symbols from the binary
lto = true        # enable link-time optimization
codegen-units = 1 # maximize size-reduction optimizations

NOTE: codegen-units = 1 has some trade-offs, most notably increased compile time; it's up to you whether to include it. Now, if we re-run fastly compute build and then ls -lh bin/main.wasm, we can see the binary has shrunk to around 400 KB (exact numbers will vary by toolchain version). Awesome!

Debugging the application is as simple as running:

fastly compute serve

and hitting the URL printed in your terminal. As you would expect, Fastly simply binds to the handler in our src/main.rs via their crate (which, as far as I can tell, just provides some convenience wrappers around the http crate).

Easy! To publish our service to Fastly, all we have to do is run:

fastly compute deploy

For a more involved example, where I use Fastly's backends to serve a proxy of JSONPlaceholder, check out this repository, where I added the Tokio async runtime and whipped up some custom routing, tracing, and serialization.
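To give a taste of what that routing looks like, here's a hedged sketch of matching on method and path and forwarding one route to a named backend. The routes and the backend name jsonplaceholder are my own assumptions for illustration:

```rust
use fastly::http::{Method, StatusCode};
use fastly::{Error, Request, Response};

#[fastly::main]
fn main(req: Request) -> Result<Response, Error> {
    // Clone the method and path up front so `req` can later be
    // moved into `send()` inside a match arm.
    let method = req.get_method().clone();
    let path = req.get_path().to_string();

    match (method, path.as_str()) {
        (Method::GET, "/health") => {
            Ok(Response::from_status(StatusCode::OK).with_body_text_plain("ok\n"))
        }
        // Proxy this route to a backend declared on the service;
        // the backend name here is hypothetical.
        (Method::GET, "/todos") => Ok(req.send("jsonplaceholder")?),
        _ => Ok(Response::from_status(StatusCode::NOT_FOUND)
            .with_body_text_plain("not found\n")),
    }
}
```

For local testing with fastly compute serve, the backend also needs an entry in fastly.toml, something to the effect of:

```toml
[local_server.backends.jsonplaceholder]
url = "https://jsonplaceholder.typicode.com"
```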

Keep on rusting!