$ cat post/uptime-of-nine-years-/-we-merged-without-a-review-/-we-kept-the-old-flag.md

uptime of nine years / we merged without a review / we kept the old flag


Title: March 7, 2022 - The Year We Woke Up to LLMs and WebAssembly


March 7th, 2022. I woke up to a world that had shifted significantly since my last blog post. The dawn of 2022 was marked by the explosive growth of large language models (LLMs), with GPT-3 and its newly released instruction-tuned variants leading the charge. AI was no longer just a buzzword; it had arrived, and in a big way.

As platform engineers, we found ourselves in the thick of this new era. Our systems were being put to the test like never before. We were debugging not just server errors or database issues, but also handling edge cases where our LLM integrations needed fine-tuning. It wasn’t uncommon for us to spend late nights refining the user experience so that when people interacted with our services, they had a seamless and intuitive experience.

One of the more exciting developments was WebAssembly on the server side. I remember the initial skepticism from some team members—after all, Wasm is great for running languages like C++ and Rust at near-native speed in the browser, but how does it fit into an engineering stack that has traditionally been dominated by languages like Python or Go? As we started experimenting with projects that used WebAssembly, the benefits became clear: near-native performance, fast startup, and more efficient use of resources. We were able to leverage existing WebAssembly runtimes and libraries to enhance our services without rewriting everything from scratch.

But with this excitement came new challenges. Debugging WebAssembly code was a different ballgame: we could no longer just sprinkle in print statements or step through the code line by line in an IDE. Instead, we leaned on Emscripten's source maps, DWARF debug info, and profiling tools that still felt somewhat clunky compared to their browser counterparts.

On a personal front, I found myself wrestling with FinOps and cloud cost pressure. Our company was not immune to the financial pressures of running large-scale services, and it became clear that every line of code we wrote had to be optimized for performance and efficiency. This meant making tough decisions about what features to keep or cut, and often, it came down to a balance between user experience and cost.
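To make those keep-or-cut conversations concrete, we'd often reduce a feature to its unit economics. Here's a minimal sketch of that kind of back-of-the-envelope math; every price and traffic number below is made up for illustration, not taken from our actual bills.

```python
def monthly_cost_per_request(instances, hourly_rate, requests_per_month):
    """Rough unit economics: what does one request cost us to serve?"""
    hours_per_month = 730  # average hours in a calendar month
    compute = instances * hourly_rate * hours_per_month
    return compute / requests_per_month

# Hypothetical scenario: keeping a heavyweight feature forces us to run
# 12 instances instead of 8 at $0.10/hour, serving 50M requests/month.
baseline = monthly_cost_per_request(8, 0.10, 50_000_000)
with_feature = monthly_cost_per_request(12, 0.10, 50_000_000)
print(f"baseline: ${baseline:.6f}/req, with feature: ${with_feature:.6f}/req")
```

Crude as it is, putting a per-request dollar figure next to a feature made the "is this worth it?" argument much less abstract.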

DORA (DevOps Research and Assessment) metrics were becoming more mainstream, and I found myself arguing with some developers who wanted to push out changes as quickly as possible without proper testing. The reality was that speed should always be balanced with quality—too many premature releases can lead to major issues down the line.
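What helped settle those speed-versus-quality arguments was computing the four DORA metrics from our own deployment history instead of debating in the abstract. A minimal sketch, assuming a simple in-memory record format of my own invention (real pipelines would pull this from CI/CD and incident data):

```python
from datetime import datetime

# Illustrative deployment log: (deployed_at, commit_authored_at,
# caused_incident, minutes_to_restore). Values are hypothetical.
deployments = [
    (datetime(2022, 3, 1, 10), datetime(2022, 2, 28, 16), False, 0),
    (datetime(2022, 3, 2, 14), datetime(2022, 3, 1, 9), True, 45),
    (datetime(2022, 3, 4, 11), datetime(2022, 3, 3, 17), False, 0),
    (datetime(2022, 3, 7, 15), datetime(2022, 3, 7, 9), False, 0),
]

def dora_metrics(deploys, window_days=7):
    """Compute the four DORA metrics over a window of deployment records."""
    lead_times = [(d - c).total_seconds() / 3600 for d, c, _, _ in deploys]
    restore_times = [m for _, _, failed, m in deploys if failed]
    return {
        "deploys_per_week": len(deploys) / (window_days / 7),
        "median_lead_time_hours": sorted(lead_times)[len(lead_times) // 2],
        "change_failure_rate": len(restore_times) / len(deploys),
        "mean_time_to_restore_min": (
            sum(restore_times) / len(restore_times) if restore_times else 0.0
        ),
    }

print(dora_metrics(deployments))
```

Numbers like a 25% change failure rate are hard to argue with; they shifted the conversation from "ship faster" to "ship faster without breaking things."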

During this month, Hacker News featured a plethora of interesting stories. One that stuck out was about Pockit, the tiny, modular computer. It got me thinking about how we could apply similar principles in our own infrastructure to make it more flexible and adaptable. Another story about hacking Okta really made us think about security on a deeper level—how we needed to be vigilant not just at the application layer but also in the underlying systems.

Reflecting back, March 2022 was a month of contrasts. We were waking up to an AI revolution while simultaneously grappling with the practical realities of scaling our services. The tools and technologies were exciting, but so too were the challenges they presented. As we moved forward, I felt both optimistic about what lay ahead and cautiously aware that the path would be filled with new problems to solve.

And that’s where I find myself today—still debugging code, still arguing over best practices, and still learning from every experience. It’s a journey, but one that keeps me engaged and passionate about the work we do.


This post captures my thoughts as they were in March 2022, reflecting on the technological landscape and personal experiences during that time.