$ cat post/kubernetes-complexity-fatigue:-a-day-in-the-life.md

Kubernetes Complexity Fatigue: A Day in the Life


May 13, 2021. I woke up with a feeling of déjà vu. Every day seems to start at 8 AM and end around 5 PM, but this morning I found myself reflecting on how much has changed, and how much hasn't, in Kubernetes over the past few years.

The Complexity Conundrum

Kubernetes has become a ubiquitous tool in our tech stack, but with that ubiquity comes complexity. We’ve gone from “deployments” to “cluster autoscaling,” “horizontal pod autoscaling,” and now “multi-cluster deployments.” Each additional feature or tweak adds layers to the already complex architecture.
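
To make the layering concrete, here is a minimal sketch of just one of those features, a horizontal pod autoscaler; the deployment name and thresholds are placeholders, not anything from our actual setup:

```yaml
# Hypothetical HPA: scales a "web" Deployment between 2 and 10 replicas,
# targeting 70% average CPU utilization. All names and numbers are
# illustrative placeholders.
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```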

Today, I spent most of my morning wrestling with ArgoCD for a new project. I had high hopes for a smooth process, but I quickly found myself buried in configuration files and subtle differences between our staging and production environments.

A Day Full of Debugging

I started by setting up a fresh ArgoCD instance on Minikube. It was supposed to be simple: just run minikube start and then follow the quickstart guide from the official documentation. But, as always, there were nuances I hadn’t accounted for.
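
For reference, the happy path from the official getting started guide really is just a handful of commands; this sketch assumes the stable install manifest URL is still current:

```sh
# Start a local cluster, then install Argo CD from the official manifest.
minikube start
kubectl create namespace argocd
kubectl apply -n argocd \
  -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml

# Expose the Argo CD API server/UI on localhost:8080.
kubectl port-forward svc/argocd-server -n argocd 8080:443
```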

After 30 minutes of fiddling with manifests, I realized that my network setup was the real problem: the Minikube VM couldn't reach our internal Git repository, which ArgoCD needed in order to sync application configurations. A quick minikube ip confirmed that the cluster lives in its own network, and once I fixed routing from the VM to the repository host, syncs went through. It took me a moment to remember why IP addresses even mattered in this context.
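
For context, the repository connection lives in the Application resource itself. A minimal sketch, with a hypothetical internal repo URL and paths standing in for our real ones:

```yaml
# Hypothetical Argo CD Application. The repo URL, path, and namespaces
# are placeholders for our internal setup.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: demo-app
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://git.internal.example.com/platform/demo-app.git
    targetRevision: main
    path: manifests/staging
  destination:
    server: https://kubernetes.default.svc
    namespace: demo-app
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
```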

As the day progressed, I found myself arguing with colleagues about the best practices for using ArgoCD versus Flux. The conversation went round and round on whether we should stick to a single GitOps tool or use both. Both sides had valid points—Flux’s native Kustomize support is hard to beat, while ArgoCD offers more out-of-the-box features.
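
To illustrate what we were actually arguing about, here's roughly what the Flux v2 side of the comparison looks like: a GitRepository source plus a Kustomization that applies an overlay. The URL, paths, and intervals are placeholders:

```yaml
# Hypothetical Flux v2 setup: a Git source and a Kustomization that
# applies the kustomize overlay at ./overlays/staging.
apiVersion: source.toolkit.fluxcd.io/v1beta1
kind: GitRepository
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 1m
  url: https://git.internal.example.com/platform/demo-app.git
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1beta1
kind: Kustomization
metadata:
  name: demo-app
  namespace: flux-system
spec:
  interval: 5m
  sourceRef:
    kind: GitRepository
    name: demo-app
  path: ./overlays/staging
  prune: true
```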

Scaling Remote Infrastructure

On top of the day-to-day challenges with Kubernetes, I also spent some time thinking about our remote infrastructure setup. The COVID-19 pandemic has forced many teams to scale their remote environments quickly. We’ve been working on improving our VPCs and network segmentation to ensure that sensitive data stays secure.
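
Inside the clusters, a lot of that segmentation work reduces to NetworkPolicy objects. The default-deny baseline we started from looks roughly like this sketch (the namespace name is a placeholder):

```yaml
# Hypothetical default-deny policy: blocks all ingress and egress for
# pods in the team-a namespace until more specific policies allow it.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-all
  namespace: team-a
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```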

One of the biggest changes is moving away from shared clusters for development workloads. Each team now runs its own mini-cluster, which helps with isolation but adds a layer of complexity in terms of networking and load balancing. It’s a trade-off we’re willing to make for better security and flexibility.
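
For local development, the per-team model maps neatly onto Minikube profiles; a rough sketch of how a throwaway cluster per team can work (the profile names are made up):

```sh
# Hypothetical per-team local clusters via Minikube profiles.
minikube start -p team-a
minikube start -p team-b

# Each profile gets its own kubectl context.
kubectl config use-context team-a
```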

A Personal Reflection

As I reflect on this day, it’s clear that Kubernetes has matured significantly since its early days. The tools are getting better, the community is growing, and the ecosystem around it is expanding at an incredible rate. But with that growth comes complexity, and we need to be mindful of managing that complexity as our teams continue to scale.

Looking Ahead

In the coming weeks, I plan to dive deeper into eBPF and explore how it can help us optimize our network stacks. It’s exciting technology that promises to make a significant impact on performance, but like Kubernetes itself, it has its own learning curve.
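
My plan is to start with bpftrace one-liners before going lower-level; for example, something like this sketch, which histograms TCP send sizes (assuming bpftrace is installed and the tcp_sendmsg kprobe is available on the kernel in question):

```sh
# Hypothetical starting point: histogram of bytes passed to tcp_sendmsg.
# arg2 is the size argument of tcp_sendmsg(sk, msg, size).
sudo bpftrace -e 'kprobe:tcp_sendmsg { @send_bytes = hist(arg2); }'
```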

For now, though, I just want to take a moment to appreciate the progress we've made. We've come a long way from those early days of trying to understand what kubectl apply does, and for all the complexity that remains, I'm grateful for every step forward in making Kubernetes easier to work with.


This was a day filled with both challenges and progress. In the world of tech, we’re always moving forward—sometimes too fast, sometimes too slow. But that’s part of the journey.