$ cat post/a-patch-long-applied-/-what-the-stack-trace-never-showed-/-it-was-in-the-logs.md

a patch long applied / what the stack trace never showed / it was in the logs


Title: WebAssembly on the Server: A New Frontier in Our Ops Jungle


May 13, 2024 feels like a strange date to me, but hey, why not. I woke up this morning thinking about how much the ops landscape has changed since last year, and I decided to jot down some thoughts.

WebAssembly on Server Side: A Wild Ride

Lately, there’s been a lot of buzz around WebAssembly (Wasm) moving from the browser into server-side workloads. This isn’t just a passing trend; it’s something we’re diving deep into at work. We’re trying to leverage Wasm for some backend logic, and I must say, it’s both exhilarating and daunting.

The Setup

We’re using a combination of Rust and Go with Wasm, which is quite a mix. The idea is simple: take some compute-intensive tasks that previously ran in their own VMs or containers and move them into lightweight Wasm modules, making our stack leaner and potentially faster. We’re hosting everything on Kubernetes, so we keep the benefits of container orchestration without having to stand up a full-blown runtime environment for each of these workloads.

First Steps

The first challenge was setting up the build process. We had to compile Rust code into Wasm modules using wasm-pack and then figure out how to run these modules alongside our existing Go services. This required some tweaking in our Dockerfile and Kubernetes manifests, but once we got it working, it was smooth sailing.
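To make the build step concrete, here’s a minimal sketch of the kind of compute kernel we push into a Wasm module. The function name and the math inside are made up for illustration; the real point is the shape of the export. With wasm-pack you’d typically use `#[wasm_bindgen]`, but a plain `extern "C"` export also works when compiling straight to the `wasm32` target:

```rust
// Hypothetical compute kernel offloaded to Wasm. Built with
// `wasm-pack build` (or `cargo build --target wasm32-unknown-unknown`),
// the exported symbol becomes callable from a host like our Go services.
// `#[no_mangle]` + `extern "C"` keep the export name and ABI stable.

#[no_mangle]
pub extern "C" fn checksum(seed: u64, rounds: u32) -> u64 {
    // Simple stand-in for a compute-heavy task: iterated integer mixing.
    let mut acc = seed;
    for i in 0..rounds as u64 {
        acc = acc.wrapping_mul(6364136223846793005).wrapping_add(i);
        acc ^= acc >> 29;
    }
    acc
}

fn main() {
    // Native smoke test; in production this runs inside the Wasm runtime.
    println!("{}", checksum(42, 3));
}
```

The nice part of keeping the export surface this plain is that the same Rust code compiles natively for tests and to `wasm32` for deployment without changes.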

Debugging Joy

Oh boy, the debugging journey so far has been… adventurous. Wasm doesn’t have much support for traditional debugging workflows like breakpoints or stepping through code, which means sprinkling log statements everywhere instead. At one point, I spent an hour trying to figure out why some data wasn’t being passed correctly between Go and Rust. It eventually turned out to be a mismatch in how the data types were defined on each side of the boundary.
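To show what that kind of mismatch looks like (the function and types here are illustrative, not our actual code): core Wasm only has i32/i64/f32/f64 at the boundary, and under the wasm32 target a Rust `usize` is 32 bits, so a host passing a 64-bit value into a pointer-sized parameter silently disagrees with the guest. Pinning the boundary to explicitly sized types avoids the whole class of bug:

```rust
// Illustrative boundary function: the guest uses explicitly sized types
// so the host (Go, in our case) and the module agree on the ABI.
// Under wasm32, `usize` is 32 bits, so avoid it in exported signatures.

#[no_mangle]
pub extern "C" fn sum_range(start: i64, len: i64) -> i64 {
    // Explicit i64 on both sides; no platform-dependent widths.
    (start..start + len).sum()
}

fn main() {
    println!("{}", sum_range(1, 4)); // prints 10
}
```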

Performance Wins

But hey, the wins are real. Once we got past the initial hiccups, the performance improvements were noticeable: the compute-heavy paths we moved into Wasm got measurably faster. The ops team is now looking at scaling this further by running multiple instances of these Wasm modules.
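I don’t have shareable numbers, but the way we compare is simple: run the same kernel and time it with wall-clock measurements, once natively and once under the Wasm runtime. A minimal sketch of that harness, with a made-up workload standing in for the real one:

```rust
use std::time::Instant;

// Stand-in workload; the real comparison runs the same kernel natively
// and inside the Wasm runtime, then compares elapsed wall-clock time.
fn mix(seed: u64, rounds: u32) -> u64 {
    let mut acc = seed;
    for i in 0..rounds as u64 {
        acc = acc.wrapping_mul(2862933555777941757).wrapping_add(i);
    }
    acc
}

fn main() {
    let start = Instant::now();
    let result = mix(1, 10_000_000);
    let elapsed = start.elapsed();
    // Print the result too, so the compiler can't optimize the work away.
    println!("result={result} elapsed={elapsed:?}");
}
```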

The Hype and Reality

While everyone is excited about WebAssembly for servers, it’s not a silver bullet. We’ve run into real limitations: Wasm modules are sandboxed, so anything that needs broad system access has to go through the host or an interface like WASI, and workloads that are very complex or carry a lot of state are often better off in a traditional runtime. Plus, there’s still a bit of a learning curve with Rust and Wasm.

Industry Events

Speaking of which, I just read about the new M4 chip from Apple. It’s interesting to see how hardware trends are intertwined with software innovations. At work, we’re discussing whether hardware like this could eventually give some of our projects more powerful instances without compromising on energy efficiency.

The FinOps Reality Check

And let’s not forget about the ongoing pressure from FinOps. Every line of code and every service added to our infrastructure needs to be justified. With DORA metrics widely adopted, we’re constantly pushing ourselves to improve deployment frequency and reduce lead times. It’s a balancing act between innovation and efficiency.

Wrap Up: The Path Ahead

In conclusion, WebAssembly on the server is an exciting area that’s going to shape how we build and operate our systems in the future. While there are challenges, the potential benefits are huge. I’m looking forward to seeing where this journey takes us over the next few months.


Feel free to send me your thoughts or questions if you’ve got any! Let’s chat about ops and tech.

Stay tuned for more updates as we continue to navigate this fascinating landscape together.