the firewall dropped it / the monorepo grew too wide / it ran in the dark


Kubernetes Matures, Helm Steadies Our Helmets

September 4, 2017. Kubernetes is king now and everyone's using it; huzzah! But the journey to a stable state has had its twists and turns. Today I'm reflecting on one of those turns: our switch from plain old kubectl to Helm.

The Setup

A year ago, we were a small startup with a growing Kubernetes cluster. Our team was excited about Kubernetes but quickly realized it could be a pain point without the right abstractions. We had a few custom deployments and some tightly coupled services, making it hard to scale or update our apps. Enter Helm.

Helm promised to make managing Kubernetes applications easier by packaging them into charts: versioned bundles of templated manifests plus default values. It felt like a no-brainer: just slap on Helm and we'd have first-class dependency management, templating, and easy upgrades.
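For anyone who never saw one, a chart in that era (Helm 2) was just a small directory of metadata and templates. The layout below is a sketch with hypothetical names, not our actual chart:

```yaml
# Sketch of a minimal Helm 2 chart layout (names are hypothetical):
#
#   data-api/
#     Chart.yaml          # chart metadata
#     values.yaml         # default configuration, overridable at install time
#     requirements.yaml   # chart dependencies (Helm 2)
#     templates/          # Kubernetes manifests as Go templates
#       deployment.yaml
#       service.yaml
#
# Chart.yaml:
name: data-api
version: 0.1.0
description: A hypothetical REST API packaged as a chart
```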

The Migration

Initial Pangs of Helm

We started by converting one of our services, a simple REST API that fetched data from various sources and served it up to the front-end. We had a values.yaml file for configuration overrides and a few templates in our repository. Easy peasy lemon squeezy. Or so we thought.
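Roughly what that first conversion looked like, with all names and values invented for illustration: a handful of defaults in values.yaml, and a deployment template that reads them.

```yaml
# values.yaml (illustrative defaults, not our real config)
replicaCount: 2
image:
  repository: registry.example.com/data-api
  tag: "1.4.2"
service:
  port: 8080

# templates/deployment.yaml (excerpt; extensions/v1beta1 was the
# Deployment API group in common use in 2017)
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ .Release.Name }}-data-api
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: data-api
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          ports:
            - containerPort: {{ .Values.service.port }}
```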

Configuration Hell

As soon as we ran our first helm install, things got messy. Our templates were riddled with hardcoded paths, secret references, and environment variables that just didn't fit Helm's values-driven model. We found ourselves wrestling with a mix of raw YAML files, values.yaml overrides, and charts that had grown organically over time without much structure.
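The fix, once we finally saw it, was mechanical: lift each hardcoded value out of the template and into values.yaml. A before/after sketch, with the key name and URL made up for illustration:

```yaml
# Before: environment-specific value baked into the template
env:
  - name: DATA_SOURCE_URL
    value: "http://data-source.prod.svc.cluster.local:9000"

# After: the same setting read from values.yaml, so each environment
# can override it with --set or a per-environment values file.
# values.yaml would carry:
#   dataSourceUrl: "http://data-source.prod.svc.cluster.local:9000"
env:
  - name: DATA_SOURCE_URL
    value: {{ .Values.dataSourceUrl | quote }}
```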

Dependency Drama

Dependency management was another sticking point. We had services that depended on each other for configuration or data, making it hard to modularize our charts properly. For example, one service needed a specific version of another, but Helm’s dependency resolution was flaky at best. It felt like we were back in the good old days of managing multiple Docker images without proper orchestration.
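In Helm 2, those cross-chart version constraints live in requirements.yaml, and pinning them explicitly was the only thing that made resolution predictable for us. A sketch, with hypothetical chart names:

```yaml
# requirements.yaml (Helm 2): declare and pin chart dependencies,
# then vendor them locally with `helm dependency update`.
dependencies:
  - name: data-source                         # hypothetical sibling chart
    version: "0.3.1"                          # exact pin beats a loose range here
    repository: "https://charts.example.com"  # wherever the chart is published
```

Running `helm dependency update` fetches the pinned versions into the chart's charts/ directory, so a given chart version always resolves the same way.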

The Debugging

Debugging Helm was a nightmare. Errors were often cryptic and hard to trace; if something went wrong in one of our charts, we'd spend hours figuring out exactly what was breaking. Logs didn't always help, since errors surfaced in unexpected places or just not at all.
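What eventually saved us hours was pushing failures earlier: rendering templates locally before anything hit the cluster. These are standard Helm 2 commands; the chart path and values file are hypothetical.

```shell
# Render templates locally; most templating and YAML errors surface
# here instead of at install time
helm template ./charts/data-api --values values-prod.yaml

# Server-side dry run with verbose output, without creating resources
helm install ./charts/data-api --dry-run --debug --values values-prod.yaml

# Catch structural problems in the chart itself
helm lint ./charts/data-api
```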

The GitOps Conundrum

We tried integrating with GitOps practices, but it felt like a forced fit. We wanted our infrastructure and deployments to be version-controlled and reproducible, but Helm’s templating and dependency resolution made that tricky. We ended up with a hybrid approach where we manually kept track of changes in both our Helm charts and the Kubernetes manifests.
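One workable compromise, sketched below, is to render charts to plain manifests and commit the output, so Git records exactly what the cluster runs. Paths and names are illustrative, not what we actually settled on.

```shell
# Render the chart to a static manifest and commit the result;
# the rendered file, not the chart, becomes the source of truth
helm template ./charts/data-api --values values-prod.yaml > manifests/data-api.yaml
git add manifests/data-api.yaml
git commit -m "Render data-api chart for prod"

# Deploys then become a plain, reviewable apply
kubectl apply -f manifests/data-api.yaml
```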

The Light at the End of the Tunnel

Despite the initial challenges, Helm did eventually start to make sense for us. We realized that the key was not just using it but understanding its design principles. We started breaking down our services into smaller, more modular charts with clear dependencies. This helped reduce the complexity and made upgrades and scaling much easier.
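With an umbrella chart, each subchart's settings live under its own key in the parent's values.yaml, which is what finally made per-service overrides manageable. A sketch with hypothetical service names:

```yaml
# values.yaml of a hypothetical umbrella chart: each top-level key
# scopes overrides to the subchart of the same name
data-api:
  replicaCount: 3
  image:
    tag: "1.5.0"
data-source:
  persistence:
    size: 20Gi
global:
  imageRegistry: registry.example.com  # visible to every subchart as .Values.global.imageRegistry
```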

Learning from the Storm

The journey wasn’t easy, but it taught us a lot about Kubernetes and container orchestration in general. We learned that while Helm can be powerful, it’s essential to approach it with a clear understanding of your deployment needs. We also discovered that GitOps practices complement Helm well, providing a more structured way to manage our deployments.

The Aftermath

Looking back, I realize we were lucky. With the maturity Kubernetes has gained since we started, and practices like GitOps becoming more mainstream, it's much easier now than it was a year ago. But those early days with Helm were rough, and I wouldn't wish that struggle on anyone starting out today.

In the end, Kubernetes has proven to be a solid foundation for our platform. Helm, while not perfect, helped us manage the complexity of our growing services. And through all the pain points, we learned valuable lessons about infrastructure management in a modern cloud-native world.