$ cat post/port-eighty-was-free-/-the-monorepo-grew-too-wide-/-it-boots-from-the-past.md
port eighty was free / the monorepo grew too wide / it boots from the past
Title: Kubernetes vs. OpenStack in the Cloud Wars
It’s February 2017 and I’m knee-deep in a Kubernetes cluster rollout for a large enterprise client. The buzz around this container orchestration tool has been growing steadily since last year, but today it feels like everyone is trying to get their foot in the door with some kind of Kubernetes-based service or management tool.
I remember when we first started talking about moving our existing OpenStack infrastructure into a Kubernetes environment. At the time, I was excited by the promise of having a more lightweight and scalable approach. But now, as I sit here in the middle of this migration, my excitement has turned to frustration and some gnawing doubts.
We’ve got a monolithic Django application that powers our core business logic, sitting alongside several microservices written in various languages (Go, Python, Ruby). The team is divided: some are staunch Kubernetes advocates, others still hold onto the belief that OpenStack offers more flexibility. It’s like we’re back to my first job out of college—except this time I’m managing a much bigger budget and timeline.
The biggest challenge so far has been keeping our configuration in sync across multiple environments (dev, staging, prod). Kubernetes promises a simple declarative model, but in practice its YAML-heavy syntax is tricky to keep consistent. We’re using Helm charts to try to standardize our deployment process, but the learning curve is steep and we’re finding that a lot of the defaults don’t quite fit how our app needs to run.
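To make the layering concrete, here’s a sketch of the kind of per-environment override scheme we’re attempting. The chart, image, and values below are illustrative placeholders, not our actual setup:

```yaml
# values.yaml: shared chart defaults (all names here are illustrative)
replicaCount: 1
image:
  repository: registry.internal/core-app
  tag: "1.4.2"
resources:
  requests:
    cpu: 250m
    memory: 512Mi

# values-prod.yaml: production overrides layered on top of the defaults
replicaCount: 6
resources:
  requests:
    cpu: "1"
    memory: 2Gi
```

You then pick the layer at install time with something like `helm install ./core-app -f values-prod.yaml`, and Helm merges the override file over the chart defaults. Simple on paper; the pain starts when three environments each accumulate overrides that drift in different directions.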
One particularly painful experience came when I had to deal with the infamous “node taints” feature. It’s supposed to make cluster management easier by preventing pods from running where they shouldn’t, but in practice it just adds another layer of complexity that I have to explain to the team every time we need to tweak something.
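For anyone who hasn’t hit this yet, the mechanics are short even if the mental model isn’t. The node and key names below are made up for illustration:

```shell
# Taint a node: pods without a matching toleration won't be scheduled onto it
kubectl taint nodes worker-3 dedicated=legacy-db:NoSchedule

# Removing the taint means repeating the whole thing with a trailing dash,
# a bit of syntax that trips someone on our team up about once a week
kubectl taint nodes worker-3 dedicated=legacy-db:NoSchedule-
```

And of course any pod that *should* land on that node then needs a matching toleration in its spec, which is one more fragment buried in the YAML we’re already struggling to keep in sync.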
On the other hand, our OpenStack infrastructure has been rock solid for years now. Sure, it’s not as shiny or cutting-edge as Kubernetes, but its maturity is what lets us sleep at night. We’ve had essentially zero downtime and predictable performance since day one. Plus, the ability to deploy custom VMs with specific hardware configurations gives us peace of mind when dealing with legacy systems.
I’m also grappling with the question of whether we should be using a managed Kubernetes service like Google Container Engine (GKE), or whatever equivalent the other clouds eventually ship. The promise is there: less operational overhead, easier upgrades. But I’m worried about vendor lock-in and the potential for hidden costs. It’s a tough call.
As I type this, I hear a colleague at the next desk asking if I’ve seen the latest articles touting how Kubernetes has won the container wars. “Won?” I ask myself. In what sense? The reality is that both OpenStack and Kubernetes have their strengths and weaknesses. Maybe we need to take a step back and rethink our strategy.
For now, we’re sticking with the plan but keeping an eye on new developments like Istio for service mesh and Envoy as a sidecar proxy. We might even consider integrating some of these tools into our stack, depending on how they evolve.
Reflecting on this, I realize that the journey ahead will be bumpy, and there are no easy answers. But one thing is certain: we can’t afford to ignore Kubernetes any longer. The market demands it, and our users expect us to be at the forefront of modern infrastructure practices.
In the end, whether we win or lose in this cloud war, I know that this experience will shape my understanding of platform engineering for years to come. And that’s not a bad thing.