$ cat post/a-shell-i-once-loved-/-what-the-stack-trace-never-showed-/-we-were-on-call-then.md
a shell I once loved / what the stack trace never showed / we were on call then
Title: Reflections on March 2024: When LLMs Were Everywhere
Late March, 2024. Another sunny day in the Valley, with the sun reflecting off the glassy towers that stretch to the sky. It’s been a whirlwind of an era since ChatGPT made its debut back in late 2022, and I’ve found myself neck-deep in the AI infrastructure explosion ever since. Today, I’m taking a moment to reflect on what it means for platform engineering.
The past few months have seen an overwhelming influx of LLM (Large Language Model) projects and discussions. Almost every conversation around tech nowadays seems to gravitate towards these models—whether they’re being used to automate documentation generation, customer support, or even just for fun generative art. I’ve been on both ends: shipping a project that uses LLMs for internal task automation and debugging one where a misconfigured model led to some unexpected behavior.
The Backdoor Incident
One of the biggest stories this month was the backdoor discovered in the upstream xz/liblzma library (CVE-2024-3094). While the vulnerability is technically narrower than some headlines suggest (it’s a remote-code-execution hook gated on the attacker’s private key rather than a general authentication bypass, and it isn’t replayable), it still highlights a critical issue with how we handle open-source libraries. For my team at work, this was a wake-up call to tighten our vetting processes for third-party dependencies.
We had been using xz/liblzma in some of our compression utilities, so I rolled up my sleeves and dug through our dependency tree to confirm we weren’t shipping a compromised version. Keeping your dependencies as lean as possible is always good advice, but sometimes you just can’t avoid them; in this case, the audit was worth the effort.
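Since only upstream xz releases 5.6.0 and 5.6.1 shipped the backdoor, the first step of an audit like this is simply confirming which liblzma version you are actually linked against. Here’s a minimal sketch of that check in Python; the `ctypes` lookup is best-effort and will find nothing on systems without a shared liblzma, and distro version suffixes are handled naively:

```python
import ctypes
import ctypes.util

# The only upstream releases known to contain the backdoor (CVE-2024-3094).
COMPROMISED = {"5.6.0", "5.6.1"}

def is_compromised(version: str) -> bool:
    """Return True if this liblzma version string matches a known-bad release."""
    # Strip distro suffixes like "5.6.1-1ubuntu1" down to the upstream version.
    upstream = version.split("-")[0].split("+")[0]
    return upstream in COMPROMISED

def linked_liblzma_version():
    """Best-effort: ask the shared liblzma for its own version string, or None."""
    path = ctypes.util.find_library("lzma")
    if path is None:
        return None
    lib = ctypes.CDLL(path)
    lib.lzma_version_string.restype = ctypes.c_char_p
    return lib.lzma_version_string().decode()

if __name__ == "__main__":
    v = linked_liblzma_version()
    if v is None:
        print("no shared liblzma found")
    else:
        print(f"liblzma {v}: {'COMPROMISED' if is_compromised(v) else 'ok'}")
```

Of course, a one-off version check is no substitute for proper dependency vetting, but it was a quick way to triage which hosts needed attention first.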
Platform Engineering Musings
Platform engineering has become mainstream, with every tech company touting their platform services as the next big thing. We’ve seen a surge in CNCF (Cloud Native Computing Foundation) projects and frameworks that aim to standardize these platforms. However, the landscape is still overwhelming, with so many options vying for attention.
One of our recent projects involved integrating a WebAssembly runtime into one of our server-side applications. It was fascinating to see how the technology could reshape our application’s performance characteristics and memory footprint. We spent countless hours experimenting with different WebAssembly runtimes and optimization techniques, but it wasn’t all smooth sailing. Debugging issues that only appear once complex binaries are deployed to the edge is a whole new level of technical challenge.
Developer Experience and FinOps
Developer experience (DevEx) has become a discipline in its own right. We’re not just focused on writing code anymore; we’re looking at how every aspect of the development lifecycle impacts our engineers’ productivity. This includes everything from tooling to deployment workflows, and even things like documentation quality.
On the finance side, FinOps is gaining traction as teams look for ways to rein in ever-growing cloud bills. Our team has had its fair share of DORA (DevOps Research and Assessment) metrics discussions, trying to balance delivery performance against cost control. It’s a constant juggling act that requires careful planning and execution.
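For concreteness, two of the four DORA metrics (deployment frequency and change failure rate) fall straight out of a deployment log. Here’s a toy sketch; the `Deployment` record and its fields are illustrative assumptions, not our actual schema:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Deployment:
    day: date     # when the deploy went out
    failed: bool  # did it cause a change failure (rollback/hotfix)?

def deployment_frequency(deploys, days_in_period):
    """Average deployments per day over the period."""
    return len(deploys) / days_in_period

def change_failure_rate(deploys):
    """Fraction of deployments that caused a failure in production."""
    if not deploys:
        return 0.0
    return sum(d.failed for d in deploys) / len(deploys)

# A hypothetical month of deploys.
log = [
    Deployment(date(2024, 3, 1), failed=False),
    Deployment(date(2024, 3, 4), failed=True),
    Deployment(date(2024, 3, 8), failed=False),
    Deployment(date(2024, 3, 15), failed=False),
]
print(deployment_frequency(log, days_in_period=31))  # deploys per day in March
print(change_failure_rate(log))                      # 0.25
```

The other two metrics (lead time for changes and time to restore service) need timestamps from your VCS and incident tracker respectively, which is usually where the real data-plumbing work lives.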
Open-Source Alternatives
One of the most interesting stories this month was about Bruno, an open-source API client that aims to be a Git-friendly alternative to Postman. As someone who has used both tools extensively over the years, I’ve seen first-hand how important it is for developers to have flexible and powerful tools at their disposal.
Bruno’s approach is intriguing because, instead of syncing collections to a proprietary cloud workspace, it stores them as plain-text files in a folder on disk — so they can live in the same Git repository as the code and go through normal branching and review. This could simplify workflows where multiple developers are working on the same project and need to coordinate API changes. It’s a neat idea, but time will tell if it gains widespread adoption.
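Because each request is just a file, a collection diffs and reviews like any other code. A request definition in Bruno’s Bru markup looks roughly like this — I’m reconstructing the shape from memory, so treat the exact field names as approximate, and the URL is a made-up example:

```
meta {
  name: List users
  type: http
  seq: 1
}

get {
  url: https://api.example.com/v1/users
}
```

A teammate changing an endpoint then shows up as a one-line diff in code review rather than a silent edit in a shared cloud workspace.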
Personal Learning Curve
Reflecting on these events, I’m struck by how much the tech landscape has changed since last year. The AI infrastructure explosion is reshaping everything we do, from internal tooling to external services. I’ve found myself constantly learning new technologies and methodologies, sometimes at a dizzying pace.
But amidst all this change, one thing remains constant: the importance of robust engineering practices. Whether it’s securing open-source libraries or optimizing cloud costs, the fundamentals are still key. As the tech world continues to evolve, I’m excited to see where these trends take us next.
Until then, let’s keep pushing boundaries and learning from each other.
That’s my reflection for March 2024. Thanks for reading!