$ cat post/memory-leak-found-/-the-rollout-was-never-finished-/-a-ghost-in-the-pipe.md

memory leak found / the rollout was never finished / a ghost in the pipe


Title: Kubernetes Conundrums and the Rise of Serverless


July 30, 2018 was just another day in tech hell. I was deep into a project that involved managing our Kubernetes cluster at work, and as the serverless hype continued to grow, I couldn’t help but feel like I was missing out on some key developments.

The Kubernetes Cluster Dilemma

We had been using Kubernetes for about a year, and it had been a mixed bag. On one hand, it's incredibly powerful, letting us manage containerized applications at scale. But oh boy, did we run into our fair share of pain points along the way.

One morning, I logged in to see that half of our services were down. After some digging, I found a nasty bug: the kube-scheduler was crashing due to some obscure configuration issue. It took me hours to track it down and deploy a fix. That’s the kind of thing that can really stress you out when your production cluster is involved.
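
I don't remember the exact commands from that morning, but the triage boiled down to listing the unhealthy control-plane pods and checking their restart counts. Here's a rough sketch of that kind of check using the official Python client; the namespace and output are illustrative, not a record of our actual setup:

```python
# Rough triage sketch: list unhealthy pods in kube-system and their restart counts.
# Assumes a working kubeconfig on the machine running it; output format is illustrative.
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()

for pod in v1.list_namespaced_pod("kube-system").items:
    statuses = pod.status.container_statuses or []
    restarts = sum(s.restart_count for s in statuses)
    ready = all(s.ready for s in statuses) if statuses else False
    if not ready or restarts > 0:
        print(f"{pod.metadata.name}: phase={pod.status.phase}, "
              f"ready={ready}, restarts={restarts}")
```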

Helm vs. Configuration as Code

As we were trying to stabilize our cluster, I started exploring Helm for managing Kubernetes resources. The promise was enticing: declarative configuration management that seemed like it could solve some of our pain points. In practice, though, the results were mixed.

I argued with the team about whether Helm charts should be version-controlled alongside our application code or kept in a separate repo. In the end, we went with the former, figuring that keeping everything together in one repo would simplify our development pipeline and reduce merge conflicts.
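
One practical upside of keeping the charts next to the application code is that you can render them in CI and catch templating mistakes before anything reaches the cluster. A minimal sketch of that idea, shelling out to `helm template`; the chart path and values file here are hypothetical, not our actual layout:

```python
# Minimal CI-style check: render a Helm chart and fail the build if templating breaks.
# Chart name, path, and values file are hypothetical placeholders.
import subprocess
import sys

result = subprocess.run(
    ["helm", "template", "my-service", "deploy/charts/my-service",
     "-f", "deploy/values/staging.yaml"],
    capture_output=True,
    text=True,
)

if result.returncode != 0:
    print("helm template failed:", result.stderr, file=sys.stderr)
    sys.exit(1)

print(f"rendered {len(result.stdout.splitlines())} lines of manifests")
```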

The Serverless Siren Call

Meanwhile, everyone was talking about serverless. “Learn how to design large-scale systems” on Hacker News was a constant reminder of what I wasn’t doing. My colleagues were all excited about Lambda, but for us it just felt like too much, too soon. Our infrastructure was complex enough without adding another layer of abstraction.

I spent some time looking at the serverless frameworks available back then—serverless.com, AWS SAM, and others. But they felt clunky and didn’t integrate well with our existing stack. Plus, we had real work to do in making Kubernetes more stable first.

The GitOps Experiment

One day, I decided to try out GitOps. It turned out to be a bit of a wild goose chase, but the idea appealed to me: push your configuration changes into version control and let a tool like Flux sync them to the cluster automatically. I set up a demo with some fake applications and watched it work its magic.
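
At its core the loop is simple: whatever lands in the config repo gets applied to the cluster. My demo was essentially Flux doing a much more robust version of the sketch below; the repo path and interval are made up, and a real operator adds drift detection, garbage collection, and health checks on top:

```python
# Toy GitOps loop: pull the config repo and apply whatever manifests it contains.
# REPO_DIR and INTERVAL are illustrative; this is a sketch of the idea, not Flux.
import subprocess
import time

REPO_DIR = "/opt/cluster-config"   # hypothetical local clone of the config repo
INTERVAL = 60                      # seconds between sync attempts

def sync_once():
    # Fast-forward to whatever is at the head of the config repo...
    subprocess.run(["git", "-C", REPO_DIR, "pull", "--ff-only"], check=True)
    # ...then apply every manifest in it to the cluster.
    subprocess.run(["kubectl", "apply", "-R", "-f", f"{REPO_DIR}/manifests"], check=True)

if __name__ == "__main__":
    while True:
        try:
            sync_once()
        except subprocess.CalledProcessError as err:
            print(f"sync failed: {err}")
        time.sleep(INTERVAL)
```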

But once I brought in real-world complexity, things got messy fast. Flux itself felt fragile, and we hit issues reconciling our state. We ended up backing off on GitOps for now, but it was an interesting experiment that made me appreciate how much work is left before these kinds of tools are fully mature.

Reflections

Looking back at 2018, I can see the trends starting to coalesce. Kubernetes was clearly winning in the container orchestration space, and we were just trying to keep up. Helm and Istio were emerging as essential tools for managing our clusters more effectively. And serverless? It was still a bit of a pipe dream, but one that seemed inevitable.

In the end, it’s all about making small improvements day by day. Debugging the kube-scheduler, arguing over Helm vs. configuration management, and experimenting with GitOps—all these little things add up to real progress in our tech stack. And who knows? Maybe someday serverless will be a reality for us too.

For now, I’m content knowing that we have a solid foundation with Kubernetes, even if it’s not without its quirks. The journey is long and full of ups and downs, but that’s what makes it interesting.