$ cat post/a-diff-i-once-wrote-/-i-pivoted-the-table-wrong-/-the-merge-was-final.md

a diff I once wrote / I pivoted the table wrong / the merge was final


Title: Kubernetes Wars, Helm, and the Great Platform Engineering Debate


March 2017 was a pivotal month in my career. It was the beginning of an era where containers were really coming into their own, and Kubernetes was emerging as the de facto standard for orchestration. I found myself neck-deep in discussions about how best to leverage this technology at work.

We had just migrated our development environment from Docker Swarm to Kubernetes, and it felt like everyone was on a war footing. Every conversation turned into a debate between “Kubernetes is awesome” and “Kubernetes is overrated.” Helm came along, promising easier package management for Kubernetes clusters, and it seemed like every weekend I was wrestling with charts, trying to figure out how to make them work.
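To give a sense of what that wrestling looked like, here is a minimal sketch of the kind of chart I was fighting with; the service name, image, and values are made up for illustration, not anything we actually shipped:

```yaml
# values.yaml -- hypothetical defaults for an illustrative "orders" service
replicaCount: 2
image:
  repository: registry.example.com/orders
  tag: "1.4.2"
service:
  port: 8080

# templates/deployment.yaml -- the templating that ate the weekends
apiVersion: extensions/v1beta1   # the beta API group Deployments lived in back then
kind: Deployment
metadata:
  name: {{ .Release.Name }}-orders
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ .Release.Name }}-orders
    spec:
      containers:
        - name: orders
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```

Most of the frustration lived in exactly that gap: the values file looks harmless, but one wrong indent or an unquoted value in the template and the install fails in ways that take a while to decode.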

Meanwhile, serverless was all the rage. Lambda functions were being talked about as the future, and some folks were even pushing the idea that containers would be obsolete within a few years, replaced by this new wave of ephemeral execution environments. At my workplace, though, we were still grappling with how to build and maintain reliable, scalable applications on Kubernetes.

Platform engineering was gaining momentum, too. The term itself was just starting to catch on, but the concepts were becoming clearer. My team started pushing back against the idea that infrastructure should be left to operations while development teams focused solely on coding. We believed in a more holistic approach in which platform engineers would own an application’s full lifecycle, building and maintaining both the code and the environment it ran in.

One specific project I was working on at the time involved deploying a microservice with Kubernetes and Helm. The cluster had multiple namespaces, each serving a different feature or component, and we were trying to figure out how to manage secrets securely without hardcoding them into our manifests. After hours of frustration, we settled on a combination of HashiCorp Vault and the Kubernetes Secrets API. The solution felt clunky, but it worked.
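The shape of a setup like that, roughly: Vault holds the source of truth, some piece of glue writes the values into an ordinary Kubernetes Secret, and the workload only ever references that Secret. The manifests below are a simplified sketch with invented names (`payments-secrets`, `DB_PASSWORD`), not a record of our actual configuration:

```yaml
# A Secret whose data gets populated from Vault by whatever glue does the
# syncing; the value here is just the word "placeholder", base64-encoded.
apiVersion: v1
kind: Secret
metadata:
  name: payments-secrets
  namespace: payments
type: Opaque
data:
  db-password: cGxhY2Vob2xkZXI=
---
# The workload never talks to Vault directly; it only reads the Secret.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: payments
  namespace: payments
spec:
  replicas: 2
  template:
    metadata:
      labels:
        app: payments
    spec:
      containers:
        - name: payments
          image: registry.example.com/payments:1.0.0
          env:
            - name: DB_PASSWORD
              valueFrom:
                secretKeyRef:
                  name: payments-secrets
                  key: db-password
```

The clunkiness was all in the syncing; the nice part was that nothing secret ever had to live next to the manifests.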

This period was also marked by the rise of GitOps, with early tools like Flux starting to take center stage. I remember debates about whether to use GitOps for our infrastructure as code. Some argued it was overkill and unnecessary complexity; others insisted it was the only way to achieve true reproducibility and automation. At the time, my gut told me we should lean into GitOps more aggressively, but adopting such a radical change carried real risk.
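For a sense of what the “lean in” camp was actually proposing, here is the gist as a sketch; the repo layout and names are hypothetical, and a real Flux setup had more moving parts than this:

```yaml
# Hypothetical layout of a config repo: every namespace and workload lives as
# a file in Git, and an in-cluster agent (Flux, in our debates) keeps the
# cluster converged on whatever is on the tracked branch.
#
#   k8s-config/
#     namespaces/payments.yaml     <- the file below
#     payments/deployment.yaml
#     payments/secrets-sync.yaml
#
# Rolling back becomes a git revert, and rebuilding a cluster becomes pointing
# the agent at the same commit. That was the reproducibility argument.
apiVersion: v1
kind: Namespace
metadata:
  name: payments
  labels:
    team: payments
```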

On a personal level, I found myself arguing with colleagues about whether we should be using Terraform 0.x or sticking with CloudFormation for our AWS deployments. There was a sense of urgency to learn and adapt quickly, as new tools emerged every month. Sometimes the learning curve was steep; other times it felt like we were just rehashing old problems in a new context.

One day, I found myself buried under a mountain of debug logs from a failing Kubernetes deployment. It turned out that our custom resource definitions had some subtle edge cases that caused pods to restart in an endless loop. After hours of tracing back and forth, I finally isolated the issue and fixed it. That night, as I lay in bed, I couldn’t help but reflect on how much these technologies were shaping my work, and my life outside of it.

In retrospect, March 2017 was a time of rapid change and constant learning. It was full of excitement and frustration, with new tools and concepts emerging every week. Looking back now, I can see that those days laid the groundwork for the platform engineering practices we continue to refine today.