$ cat post/august-2017-reflections:-kubernetes-and-the-chaos-of-platform-engineering.md

August 2017 Reflections: Kubernetes and the Chaos of Platform Engineering


August 2017 was a whirlwind. I remember staring at my screen, trying to keep up with the rapid pace of change in our platform engineering world. Kubernetes was winning the container wars, Helm and Istio were emerging, serverless was all the rage, Terraform was still in its 0.x releases, and GitOps was just starting to catch on. Prometheus and Grafana were displacing Nagios as the go-to monitoring stack. I found myself constantly thinking about how we could adapt our platform to these new technologies.

One day, I found myself deep in a debugging session with Kubernetes. We were running into some issues with pod scheduling and network latency. The stack traces and logs were a mess—there was so much going on, it felt like trying to untangle spaghetti. After hours of digging, I finally narrowed down the problem to an issue with the kube-scheduler. It seemed that the scheduler wasn’t properly handling our custom resource definitions (CRDs). I had to write some custom code to work around this until a proper fix could be upstreamed.
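The part of that incident that stuck with me was the scheduler's filter step: before a pod lands anywhere, each node is checked against the pod's resource requests, and only the survivors are considered. Here is a minimal, hypothetical Python sketch of that filtering idea. It is not the real kube-scheduler (which is Go code inside Kubernetes), and all the names and numbers are invented for illustration:

```python
# Hypothetical sketch of a scheduler's "filter" phase: keep only nodes
# whose free capacity covers the pod's resource requests.
# Not the real kube-scheduler; names, fields, and values are invented.

def fits(node, pod):
    """True if the node has enough spare CPU (millicores) and memory (MiB)."""
    free_cpu = node["cpu"] - node["cpu_used"]
    free_mem = node["mem"] - node["mem_used"]
    return free_cpu >= pod["cpu"] and free_mem >= pod["mem"]

def filter_nodes(nodes, pod):
    """Return the names of nodes the pod could be scheduled onto."""
    return [n["name"] for n in nodes if fits(n, pod)]

nodes = [
    {"name": "node-a", "cpu": 4000, "cpu_used": 3800, "mem": 8192, "mem_used": 1024},
    {"name": "node-b", "cpu": 4000, "cpu_used": 1000, "mem": 8192, "mem_used": 4096},
]
pod = {"cpu": 500, "mem": 512}

print(filter_nodes(nodes, pod))  # node-a lacks CPU headroom, so only node-b survives
```

When every node gets filtered out, the pod sits in Pending, which is roughly the symptom we were chasing that day.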

Around the same time, we were debating whether to adopt Helm for our cluster deployments. The team was split—some thought it would be too complicated and unnecessary, while others argued that it would provide better abstraction and ease of management. I found myself in the latter camp, but I also understood the pushback. We had already invested significant effort into setting up a working CI/CD pipeline without Helm, and changing now could mean starting over.
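The "better abstraction" argument was really about templating: one parameterized manifest instead of near-duplicate YAML files per environment. Helm does this with Go templates and a values file; the following Python stand-in is only a rough sketch of the idea, with an invented manifest and invented values:

```python
from string import Template

# Rough sketch of what Helm-style templating buys you: one parameterized
# manifest rendered for many environments. Helm itself uses Go templates
# plus values.yaml; this stand-in just substitutes values into a string.

MANIFEST = Template("""\
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: $name
spec:
  replicas: $replicas
""")

def render(values):
    """Render one environment's manifest from a dict of values."""
    return MANIFEST.substitute(values)

staging = render({"name": "api-staging", "replicas": 1})
production = render({"name": "api-prod", "replicas": 5})

print(production)
```

The skeptics' point was that we already had rendered manifests wired into CI/CD, so the abstraction would cost us a migration before it paid anything back.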

The buzz around serverless was palpable. We had been using Lambda for some backend services, but it felt like the hype was getting out of hand. Was it really worth it to offload all our workloads to cloud functions? The downsides—cold starts, vendor lock-in, and limited control—seemed significant enough that we were hesitant to commit fully.
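Cold starts, concretely, are the price of doing initialization on a container's first invocation; the usual mitigation is to hoist expensive setup out of the handler so warm invocations reuse it. This is a hypothetical sketch of that pattern, not the actual Lambda runtime interface, and the handler shape and timings are simulated:

```python
import time

# Hypothetical illustration of cold starts: expensive setup runs once per
# container (the "cold" invocation), and warm invocations reuse the cached
# client. The init cost here is simulated with a sleep.

def expensive_init():
    """Stand-in for loading SDKs, opening connections, reading config."""
    time.sleep(0.05)  # simulated one-time setup cost
    return {"connection": "ready"}

_client = None  # module-level state survives across warm invocations

def handler(event):
    global _client
    if _client is None:          # only the cold start pays for init
        _client = expensive_init()
    return {"status": 200, "client": _client["connection"], "event": event}

t0 = time.perf_counter()
handler({"n": 1})                # cold: pays the init cost
cold = time.perf_counter() - t0

t0 = time.perf_counter()
handler({"n": 2})                # warm: reuses the cached client
warm = time.perf_counter() - t0

print(f"cold ~{cold * 1000:.0f}ms, warm ~{warm * 1000:.0f}ms")
```

The pattern helps, but it doesn't eliminate the cold path, and it does nothing for the lock-in and limited-control concerns, which is why we stayed hesitant.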

In a moment of self-deprecation, I jotted down in my journal: “Why am I even still doing this? Kubernetes is such a mess. Every time I think I understand it, something else changes.” It was true. The sheer complexity and constant churn made me feel like I was constantly playing catch-up.

The tech world around us was also changing rapidly. Hacker News had some interesting stories that reflected the zeitgeist of 2017:

  • 70MillionJobs launched with a bang, aiming to help people with criminal records find employment.
  • Google fired an employee who wrote a controversial diversity memo, sparking a major debate in tech circles.
  • A camera that snapped a GIF and ejected a cartridge—it seemed like the future was full of quirky inventions.
  • Ad-blocking was becoming more prevalent, forcing websites to rethink their monetization strategies.

These stories were not just random articles I found interesting; they were part of the broader narrative of a rapidly changing tech landscape. The diversity memo controversy highlighted the ongoing struggles with inclusivity and bias in tech companies. Meanwhile, the rise of ad blocking mirrored the frustration many users felt towards intrusive advertising practices.

Reflecting on this period, I realize how much it shaped my approach to platform engineering. It taught me the importance of staying flexible and embracing change, even when it’s messy. Kubernetes continues to evolve, Helm has become a standard, and serverless is here to stay, albeit not in the way we initially thought. As for the rest of the tech world—well, that’s another story.

In the end, I found my way through these challenges by focusing on fundamentals: understanding the underlying systems, being proactive about learning new tools, and staying open to debate and discussion within my team. Those were the key ingredients in navigating the chaos of platform engineering back then, and they remain just as relevant today.


That was my reflection on August 2017—full of technical challenges, personal growth, and the constant evolution of technology.