$ cat post/kubernetes-vs.-our-legacy-setup:-a-year-later.md

Kubernetes vs. Our Legacy Setup: A Year Later


October 3, 2016. I can still remember the day vividly. It was early in my tenure as an engineering manager, and we were about to embark on a journey that would change our development landscape forever. We had just deployed our first Kubernetes cluster at work, replacing our monolithic legacy setup with a containerized microservices architecture.

The Setup Before

Back then, our application stack was a sprawling mess of custom-built scripts, Jenkins jobs, and a monolithic Ruby app running on a single server. Monitoring was done via Nagios, and alerts often fired for reasons that were hard to diagnose. Configuration management was a nightmare: every developer had their own way of doing things, which made it difficult to keep anything consistent across the team.

The Kubernetes Journey

Kubernetes was winning the container orchestration wars in 2016, but the path to adoption wasn’t smooth. We faced challenges like understanding the intricacies of pod networking, managing stateful applications, and ensuring that our legacy services could gracefully transition into containers.

One of my biggest lessons came when I wrestled with a stateful service called database-sync. This service was crucial for syncing data between our primary and backup databases. When Kubernetes restarted pods, the service would sometimes lose its state, causing data corruption. After days of debugging, we realized that persistent volumes alone weren’t enough; we also needed explicit state handling, which we coordinated through custom Kubernetes annotations and our own tooling.
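To make that concrete, here is a minimal sketch of the shape such a workload eventually took: a StatefulSet gives each database-sync pod a stable identity and its own persistent volume, so a restart no longer wipes local state. The names, image, storage size, and the annotation are illustrative, and the API version shown is the modern apps/v1 rather than the beta APIs that existed in 2016.

```yaml
# Illustrative StatefulSet for database-sync: stable pod identity plus a
# per-replica persistent volume that survives restarts and rescheduling.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: database-sync
spec:
  serviceName: database-sync
  replicas: 1
  selector:
    matchLabels:
      app: database-sync
  template:
    metadata:
      labels:
        app: database-sync
      annotations:
        # Hypothetical annotation our custom tooling read to decide whether
        # to restore a sync checkpoint before the container started work.
        sync.example.com/state-checkpoint: "enabled"
    spec:
      containers:
        - name: database-sync
          image: registry.example.com/database-sync:1.4   # illustrative image
          volumeMounts:
            - name: state
              mountPath: /var/lib/database-sync
  volumeClaimTemplates:
    - metadata:
        name: state
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 10Gi
```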

Helm and Configuration Management

As we moved forward, we embraced Helm for managing our application manifests. The ability to package applications with their dependencies and configurations made it much easier to roll out new services or updates. However, managing secrets became a headache. We initially stored sensitive information like database passwords in plaintext in our values.yaml files, which meant anyone with access to the repository could read them.
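For illustration, the insecure pattern looked roughly like this. Every value here is made up, but the shape is accurate: credentials sat in plaintext in a file committed to Git, and a Helm template rendered them straight into a Kubernetes Secret.

```yaml
# values.yaml -- the anti-pattern: plaintext credentials checked into Git.
# A templates/secret.yaml in the chart rendered these into a Secret object.
database:
  host: db.internal.example.com    # illustrative hostname
  username: app
  password: "s3cr3t-in-plaintext"  # readable by anyone with repo access
```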

To address this, we started using Vault for secret management. Integrating with Vault required us to write custom scripts and adapt our Helm charts. The transition wasn’t seamless, but it was a step in the right direction. We also began exploring GitOps principles, where changes were made through Git commits rather than manual commands on servers.
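The glue scripts followed a simple pattern: read a credential out of Vault, render it into a Kubernetes Secret manifest, and apply the manifest with kubectl. Here is a hedged Python sketch of that idea; the Vault path, secret name, and the hvac client call shown in the comment are assumptions for illustration, not our exact code.

```python
# Sketch: render a Vault-sourced credential into a Kubernetes Secret manifest.
# Secret values in a manifest's "data" field must be base64-encoded.
import base64
import json


def render_secret(name: str, data: dict) -> str:
    """Render a Kubernetes Secret manifest with base64-encoded values."""
    manifest = {
        "apiVersion": "v1",
        "kind": "Secret",
        "metadata": {"name": name},
        "type": "Opaque",
        "data": {
            key: base64.b64encode(value.encode()).decode()
            for key, value in data.items()
        },
    }
    return json.dumps(manifest, indent=2)


if __name__ == "__main__":
    # In the real script the value came from Vault, roughly like:
    #   import hvac
    #   client = hvac.Client(url="https://vault.internal:8200")
    #   password = client.read("secret/db")["data"]["password"]
    password = "example-only"  # placeholder for the Vault-sourced value
    print(render_secret("app-db", {"password": password}))
```

Keeping the rendering step pure made these scripts easy to test without a live Vault server; only the thin fetch-and-apply wrapper needed real credentials.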

The Serverless Hype

Amidst all this, serverless became the buzzword of the year. Everyone was talking about AWS Lambda and the promise of never managing a server again. While it seemed appealing, we weren’t ready to embrace it yet. Our applications were still too tightly coupled to the monolith for a smooth transition, and the complexity of managing multiple services in Kubernetes gave us enough challenges without adding serverless into the mix.

Prometheus + Grafana

Monitoring was another area where we saw significant improvements. We replaced Nagios with Prometheus and Grafana, which offered more granular insights into our applications’ performance. Setting up dashboards for each service became a bit of an art form, but it paid off in terms of better observability.
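One reason Prometheus felt like such a step up from Nagios was service discovery: instead of hand-maintaining host lists, pods opt in to scraping through annotations. An illustrative scrape config is below; the job name and annotation convention shown are the common community defaults, not necessarily our exact setup.

```yaml
# Illustrative Prometheus scrape config using Kubernetes service discovery.
# Pods are discovered automatically; only those carrying the standard
# prometheus.io/scrape: "true" annotation are kept as scrape targets.
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```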

A Year Later: Reflections

As I look back on that initial Kubernetes deployment, I see a lot of progress and some missed opportunities. We had a long way to go in terms of automation, security, and best practices. But we made strides towards a more modern, scalable architecture.

The journey wasn’t just about adopting new technologies; it was also about fostering a culture of collaboration and continuous improvement within the team. Debugging those pesky stateful services taught us valuable lessons that I carry forward in my current role as an engineering manager.

Kubernetes has proven its worth over time, but the path to fully leveraging its potential is a marathon, not a sprint. Looking ahead, I’m excited about the future of platform engineering and how it will continue to evolve with new tools and technologies.


In conclusion, October 3, 2016, marked the beginning of a significant shift in our development landscape. While we faced numerous challenges along the way, embracing Kubernetes laid the foundation for more efficient and scalable infrastructure. The journey continues, but I’m grateful for the lessons learned and the team that made it possible.