nmap on the lan / the pipeline hung on step three / the cron still fires
Title: Kubernetes Complexity Fatigue: A Case Study in Pain
Today, June 28, 2021, has been a whirlwind of a day. I finally managed to get the last of our legacy services migrated to our new Kubernetes cluster, after what felt like weeks of tweaking and wrestling with pod configuration hell.
The Setup
We’ve had some success with Kubernetes over the past few years, but as we scaled out, it became clear that managing everything by hand wasn’t going to work. Each service needed its own YAML manifests, and each environment (dev, staging, prod) required a different set of tweaks. The complexity was spiraling out of control.
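To make the sprawl concrete, here is a sketch of what that per-environment drift looked like; the service name, image, and numbers are all made up:

```yaml
# dev/checkout-service.yaml (illustrative)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
  namespace: dev
spec:
  replicas: 1
  template:
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:latest
---
# prod/checkout-service.yaml: the same file, hand-edited over time
# with more replicas and a pinned image tag
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout-service
  namespace: prod
spec:
  replicas: 6
  template:
    spec:
      containers:
        - name: checkout
          image: registry.example.com/checkout:1.14.2
```

Multiply that by every service and every field that differed, and keeping the copies in sync by hand becomes hopeless.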
Then one day, I stumbled upon GitOps, in the form of Flux and ArgoCD. They promised to take the manual labor out of managing our clusters: no more hand-applying manifests. I sold it as the solution to our pain points, and with the blessing of my boss, we dove in headfirst.
The Migration
Migrating was a nightmare. We started by backing up all existing YAML files and committing them to Git. Simple enough in theory, but in practice it meant dealing with thousands of lines of crufty YAML. Each service had its own quirks, from custom resource definitions to complex networking setups.
We used ArgoCD for the actual deployment, but quickly hit a wall. The GitOps approach requires Git to hold the desired state of the cluster, the state-of-record that everything converges to. In practice, this meant we first had to clean up and standardize our manifests. Every service was a different beast, with varying levels of complexity.
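For reference, the hookup between Git and the cluster is just an ArgoCD Application object pointing at a path in a repo. A minimal sketch, with a made-up repo URL and service name:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: checkout-service
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-configs.git
    targetRevision: main
    path: services/checkout
  destination:
    server: https://kubernetes.default.svc
    namespace: checkout
  syncPolicy:
    automated:
      prune: true      # delete cluster resources removed from Git
      selfHeal: true   # revert manual drift back to the Git state
```

Simple on its own; the hard part was getting the contents of `services/checkout` into a state worth converging to.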
The first few days were brutal. I spent most evenings in front of my laptop, wrestling with Kubernetes namespaces, CRDs (Custom Resource Definitions), and RBAC (Role-Based Access Control) policies. Each service had its own set of secrets that needed to be managed securely, and each environment had different configurations.
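The per-service RBAC boilerplate alone added up quickly. A typical read-only Role and its binding looked roughly like this (all names are illustrative):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: checkout-service
  namespace: prod
rules:
  - apiGroups: [""]
    resources: ["secrets", "configmaps"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: checkout-service
  namespace: prod
subjects:
  - kind: ServiceAccount
    name: checkout-service
    namespace: prod
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: checkout-service
```

Now repeat that, with slight variations, for every service in every environment.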
The Breakthrough
After a few sleepless nights, we hit our first big breakthrough. We realized that we could use Helm charts as a way to standardize the configuration across multiple environments. This allowed us to create common templates for services while still maintaining unique settings per service or environment.
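The idea, roughly: one Helm template, with the per-environment differences pushed down into small values files. A sketch with made-up names:

```yaml
# templates/deployment.yaml (excerpt): one template for every environment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Release.Name }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    spec:
      containers:
        - name: app
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
---
# values-dev.yaml
replicaCount: 1
image:
  repository: registry.example.com/checkout
  tag: latest
---
# values-prod.yaml
replicaCount: 6
image:
  repository: registry.example.com/checkout
  tag: "1.14.2"
```

The diff between environments shrinks from a full duplicated manifest to a handful of values.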
Once we got the basic structure in place, Flux and ArgoCD began to shine. Suddenly, rolling out changes became much simpler. Our state-of-record lived in version control, so a single commit to Git was enough for ArgoCD to pick up and sync the change, or we could trigger a sync by hand:
argocd app sync <app-name>
The Pain
Despite the progress, the pain was real. I spent countless hours debugging manifest issues, figuring out why some services were failing to start up properly. Kubernetes secrets management is still a nightmare: secrets accidentally checked into Git, or not propagated correctly, were recurring issues.
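One guard we have talked about adding (a sketch only; a real setup would encrypt secrets with something like Sealed Secrets or SOPS, this just shows the check) is a pre-commit step that refuses any manifest carrying inline Secret data. The file names and contents below are made up for the demo:

```shell
set -eu
demo=$(mktemp -d)
# A manifest that should be blocked: a Secret with inline credentials.
cat > "$demo/db-secret.yaml" <<'EOF'
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
stringData:
  password: hunter2
EOF
# A harmless manifest that should pass.
cat > "$demo/deploy.yaml" <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: checkout
EOF
# Collect any manifest that declares a Secret with inline data/stringData.
violations=""
for f in "$demo"/*.yaml; do
  if grep -q '^kind: Secret' "$f" && grep -Eq '^(data|stringData):' "$f"; then
    violations="${violations:+$violations }$(basename "$f")"
  fi
done
echo "would block: $violations"   # prints: would block: db-secret.yaml
```

Crude, but it catches the most embarrassing failure mode before it reaches Git history.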
The most frustrating part? Every time we needed to make a change, it required digging through code, testing the new changes in our staging environment, and then promoting them to production. The manual steps felt like they never ended.
The Future
Now that I’ve got this migration done (mostly), I’m looking forward to what’s next. We’re planning to integrate Prometheus for metrics and Grafana for dashboards on top of it. I also want to set up a proper pipeline using Tekton, so we can automate even more of our deployment process.
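Nothing is built yet, but the rough shape of a Tekton pipeline for this would be something like the following; both referenced Task names are hypothetical placeholders:

```yaml
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: deploy-checkout
spec:
  params:
    - name: git-revision
      type: string
  tasks:
    - name: build-image
      taskRef:
        name: kaniko-build          # hypothetical image-build Task
      params:
        - name: revision
          value: $(params.git-revision)
    - name: bump-manifest
      runAfter: ["build-image"]
      taskRef:
        name: update-gitops-repo    # hypothetical Task that commits the
                                    # new image tag to the GitOps repo
```

The nice part of pairing this with GitOps: the pipeline never touches the cluster directly, it only pushes a commit, and ArgoCD does the rest.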
Kubernetes is a powerful tool, but it’s far from perfect. The complexity introduced by managing multiple services manually was not worth the headache. Moving to GitOps has been a significant step in the right direction, and I’m excited to see how we can streamline this further.
Conclusion
This journey taught me that sometimes, embracing new tools isn’t just about solving immediate problems—it’s also about cleaning up old messes. Kubernetes is complex, but with the right approach, it can be managed effectively. For now, at least, my hope is that I’ve turned the corner on this long and painful process.
That’s where we stand today. The battle is over for a while, but there’s always more to learn and improve. Stay tuned as I navigate the next phase of our Kubernetes journey.