WebAssembly (WASM) was designed as a binary instruction set that natively-compiled languages like C/C++ and Rust could use as a compilation target to be executed in a web browser. However, running WASM natively on the host outside the browser has unique characteristics that make it compelling for many applications. Using a WASM runtime compiled for the hardware architecture and OS platform of your choice provides some essential benefits:
- Portability: Because WASM is both CPU and OS agnostic, you can compile code from many programming languages to WASM, share the resulting module with others, and run it anywhere a WASM host runtime is available.
- Speed: You can run at near-native speed using ahead-of-time (AOT) compilation.
- Security: WASM provides a secure sandbox environment. The WASM runtime on the host is by definition a virtual machine, and it enforces a deny-by-default model: you must explicitly grant the WASM module permission to access system resources on the host. For that access, the WebAssembly System Interface (WASI) provides POSIX-like syscall APIs to your WASM modules.
- Low resource footprint: WASM modules are typically small in size and have very low startup costs compared to VMs or containers.
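The deny-by-default sandbox described above can be made concrete with a short sketch. The Rust below is ordinary file I/O with nothing WASM-specific; compiled to the wasm32-wasi target, the read succeeds only if the host runtime preopens the directory (the wasmtime flag shown in the comment is one example of granting that permission; the file name is a placeholder):

```rust
// Ordinary Rust file I/O -- nothing WASM-specific here.
// Compiled for wasm32-wasi, this read is denied unless the host
// grants access to the directory, e.g.: wasmtime run --dir=. app.wasm
use std::fs;

fn main() {
    match fs::read_to_string("config.txt") {
        Ok(text) => println!("read {} bytes", text.len()),
        Err(err) => eprintln!("denied or missing: {err}"),
    }
}
```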
In WASM's initial use case, these qualities allowed high-performance applications that traditionally run on the host to be compiled to WASM and run efficiently within the browser. Adobe Photoshop, Autodesk AutoCAD, Figma, Google SketchUp, and even Doom 3 can now run in a standard browser, thanks to WASM. The same characteristics make WASM outside the browser an ideal solution for edge and serverless applications in particular.
Note: Red Hat’s Emerging Technologies blog includes posts that discuss technologies that are under active development in upstream open source communities and at Red Hat. We believe in sharing early and often the things we’re working on, but we want to note that unless otherwise stated the technologies and how-tos shared here aren’t part of supported products, nor promised to be in the future.
How to run WebAssembly alongside containers
Many organizations today have infrastructure set up to run containers, whether on bare metal or on top of virtual machines. If that’s the case in your organization, you may be using Kubernetes or Red Hat OpenShift to orchestrate containers. The good news is that you can run WASM workloads right alongside containers using any combination of the technology stack described in this article.
Using crun
At a low level, containers rely on a technology called a container runtime. The container runtime handles the nitty-gritty details of creating an actual container, such as setting up namespaces and cgroups and creating the process. You may be familiar with the runc container runtime written in Go, which has traditionally been the default container runtime for Kubernetes clusters. Now there's a more performant alternative to runc called crun. Crun is written in C and starts containers faster with a lower memory footprint than runc. The crun project has been quietly adding support for running WASM workloads directly via the C shared library APIs provided by various WASM runtimes such as wasmtime.
To use crun to execute a WASM module via wasmtime, compile code written in any language with WASM support (e.g., C/C++, Rust, Go) to the WASM/WASI target. Then create a scratch Open Containers Initiative (OCI) image that contains only your binary WASM module. Here's an example of a Containerfile to build such an OCI image:
```
FROM scratch
COPY ./target/wasm32-wasi/debug/wasm-demo-app.wasm /
CMD ["/wasm-demo-app.wasm"]
```
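For reference, the `wasm-demo-app.wasm` module above could come from a minimal Rust program like the following sketch; no WASM-specific code is needed, only the wasm32-wasi build target (`rustup target add wasm32-wasi`, then `cargo build --target wasm32-wasi`):

```rust
// src/main.rs of a hypothetical wasm-demo-app crate.
fn greeting() -> String {
    String::from("Hello from wasm-demo-app!")
}

fn main() {
    // Prints to stdout via WASI when run under a WASM runtime.
    println!("{}", greeting());
}
```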
To build the image, specify the annotation `module.wasm.image/variant=compat` using a tool like buildah:

```
buildah build --annotation "module.wasm.image/variant=compat" -t <registry>/<repo>/wasm-demo-app .
```
Once built, you can push it to any OCI registry, as it's just another OCI image:

```
buildah login <registry>
buildah push <registry>/<repo>/wasm-demo-app
```
Once you have an OCI image hosted where crun can retrieve it, you can tell crun to execute the WASM workload. The crun container runtime detects that the workload is a WASM module packaged into a scratch OCI image. This detection is done through the JSON configuration (`config.json`) you provide, which specifies the same `module.wasm.image/variant=compat` annotation used to build the image. The crun executable, built with wasmtime enabled via the `--with-wasmtime` configure flag, then invokes its custom WASM handler to execute your WASM module. This handler stands up a wasmtime execution environment using the wasmtime C API provided by the `libwasmtime.so` shared library.
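For illustration, the relevant portion of such a `config.json` might look like the following sketch (a minimal, incomplete fragment; a real OCI runtime spec has many more fields, and only the annotation is significant for WASM detection):

```json
{
  "ociVersion": "1.0.2",
  "annotations": {
    "module.wasm.image/variant": "compat"
  },
  "process": {
    "args": ["/wasm-demo-app.wasm"]
  }
}
```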
Using podman
The next level up the container stack is the container engine. The container engine provides many UX improvements over the container runtime. It creates the `config.json` configuration file and ultimately uses the crun container runtime to execute the container. The container engine can be a daemon, such as cri-o or containerd, or it can be daemonless, such as podman. (Aditya R and Giuseppe Scrivano, who wrote some of the code that enables the content of this blog post, discuss this in their article, “Use OCI containers to run WebAssembly workloads.”)
Using podman is a great way to execute WASM workloads via containers. It's even easier than using crun directly, because podman handles creating the `config.json` file passed to crun. Simply `podman run` the container like any other container using the OCI image path, as long as you've built the image with the annotation mentioned above. Note that this still requires crun built with wasmtime enabled, as mentioned previously. One of the neat features of using podman to execute WASM workloads is that you can port WASM workload containers to systemd as long-lived processes using tools like quadlet. The systemd containers executing WASM modules can then be automatically started and managed by systemd as part of your solutions.
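As a sketch of the quadlet approach, a `.container` unit like the one below could describe the workload (the file name and registry path are placeholders; quadlet reads such units from directories like `/etc/containers/systemd/` and generates a systemd service from them):

```ini
# /etc/containers/systemd/wasm-demo-app.container
[Unit]
Description=WASM demo workload

[Container]
Image=<registry>/<repo>/wasm-demo-app

[Service]
Restart=always

[Install]
WantedBy=multi-user.target
```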
Using MicroShift
For edge use cases that involve running an enterprise-grade Kubernetes cluster to orchestrate containers, look to MicroShift, a small form-factor Kubernetes optimized for edge computing. MicroShift delivers an all-in-one container running Kubernetes purpose-built for the edge. MicroShift relies on the cri-o container engine, which implements the Kubernetes Container Runtime Interface (CRI) and can be configured to use the crun container runtime for executing your WASM workloads. The YAML manifest used to deploy your WASM workload needs to specify the annotation described previously. The rest works the same way, except that cri-o, rather than podman, interfaces directly with crun. You can still create and run Kubernetes YAML manifests to launch pods directly with podman using its `generate kube` and `play kube` commands as well.
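A minimal pod manifest carrying that annotation might look like this sketch (the pod name and image path are placeholders):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: wasm-demo-app
  annotations:
    module.wasm.image/variant: compat
spec:
  containers:
  - name: wasm-demo-app
    image: <registry>/<repo>/wasm-demo-app
```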
Conclusion
If you’re thinking about running some WASM workloads, consider running them alongside your containers using any combination of the technologies discussed here. Whether that’s directly with crun, podman, systemd with podman, or MicroShift, you have a variety of options at your disposal!
Want more? Check out the demo on YouTube or have a look at the tutorial on GitHub!