
Kubernetes vs. My Home Network: A Tale of 2018


It’s been a while since I last sat down to write, but this month feels like as good a time as any. It’s November 5th, 2018: the term “serverless” is being hyped everywhere and the “Kubernetes wars” are in full swing. As I reflect on the past year, I can’t help but chuckle at how much we now take for granted.

The Kubernetes Conundrum

A few months ago, our team decided to jump into Kubernetes with both feet. We wanted to modernize our application deployments and move away from traditional VM-based solutions. After extensive research and a few proof-of-concept deployments, we dove in. But like any big change, there were growing pains.

One particular night, I found myself wrestling with some odd network behavior that was driving me crazy. We had deployed the application to Kubernetes using Helm charts, but for some reason, our internal APIs weren’t communicating properly between services. It was like having a bunch of people trying to talk in a room where everyone is wearing noise-canceling headphones.

After days of debugging and tearing my hair out, I realized we were running into a network policy issue. One of the services had a misconfigured policy that was effectively blocking traffic from other pods. Once I fixed it, everything started working as expected. It was a weight off my shoulders, but also humbling to realize how much I still had to learn about Kubernetes networking.
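To make the fix concrete, here is a minimal sketch of the kind of NetworkPolicy that unblocks this sort of pod-to-pod traffic. All the names here (`allow-frontend-to-api`, the `demo` namespace, the `app` labels, port 8080) are hypothetical, not the actual configuration from our cluster:

```yaml
# Hypothetical policy: allow pods labeled app=frontend to reach
# pods labeled app=api on TCP 8080 in the demo namespace.
# Without an allow rule like this, a default-deny ingress policy
# silently drops the traffic between services.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend-to-api
  namespace: demo
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - protocol: TCP
          port: 8080
```

Applying something like this with `kubectl apply -f` and re-testing the service-to-service call is usually enough to confirm whether a policy was the culprit.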

The Helm Hurdles

While we were getting our services up and running in Kubernetes, we faced another challenge: managing the complexity of our deployments. Helm came along at just the right time, offering a way to encapsulate our deployment processes into charts that we could manage with ease. But there was a steep learning curve.

One weekend, I spent hours trying to figure out why one of my Helm templates wasn’t rendering correctly. After countless iterations and frustrating debugging sessions, it turned out that a simple typo in the YAML had caused all the problems. It’s moments like these that remind me how important attention to detail is—especially when dealing with templating languages.
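Here is a sketch of the kind of typo that bites in Helm templates (the chart fragment and value names are made up for illustration): a misspelled values key doesn’t fail loudly, it just renders as an empty string.

```yaml
# templates/deployment.yaml (fragment, hypothetical chart)
# Bug: .Values.image.respository is a typo for .Values.image.repository,
# so the image field silently renders as ":1.0.0".
image: "{{ .Values.image.respository }}:{{ .Values.image.tag }}"

# Safer: Helm's `required` function makes a missing value fail at render time.
image: "{{ required "image.repository is required" .Values.image.repository }}:{{ .Values.image.tag }}"
```

Running `helm lint` and `helm template` locally catches many of these mistakes before anything ever reaches the cluster.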

Platform Engineering and the Road Ahead

As I look back at 2018, it feels like we were on the cusp of something big. The rise of platform engineering was beginning to take shape, with tools like Terraform and GitOps gaining traction. But for us, the focus was still very much on getting our Kubernetes cluster up and running smoothly.

The serverless hype was everywhere, but to us it still felt like a distant dream. Our team was too busy dealing with the realities of containerizing applications and managing network policies to worry about writing less code that someone else would run.

Reflections

As I reflect on those days, I’m reminded of how much progress we’ve made since then. Kubernetes is no longer seen as a novelty; it’s a staple in most modern infrastructure strategies. Helm charts are still around, but they’ve matured significantly. And while serverless remains an interesting area, the focus has shifted to more practical concerns like observability and security.

Looking back at that time, I’m grateful for the challenges we faced because they helped us grow as engineers and professionals. Whether it was fighting through network policies or mastering Helm templates, each obstacle pushed us to be better.

So here’s to 2018—the year when Kubernetes became a reality and platform engineering started to take hold. May next year bring even more growth and learning!