$ cat post/the-build-finally-passed-/-we-merged-without-a-review-/-the-deploy-receipt.md

the build finally passed / we merged without a review / the deploy receipt


Title: Kubernetes Complexity Fatigue and the Unraveling of a Side Project


October 11, 2021, was just another day in the life of an engineering manager dealing with the complexities of Kubernetes. I remember sitting at my desk late that afternoon, staring at yet another cluster that had decided to misbehave.

The past few months had been challenging. As platform engineering formalized and internal developer portals like Backstage gained traction, we saw a rise in SRE roles and remote-first infrastructure teams scaling up. eBPF was gaining mindshare, but for me, Kubernetes remained the beast I needed to tame every day.

I was working on a side project—a personal cloud storage solution using MinIO, an object storage system that’s great for small-scale projects. The idea was simple: create a platform where I could manage my files in the cloud and access them from anywhere. It seemed like a perfect fit for a Kubernetes deployment.
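For a sense of scale, the core of that deployment looked roughly like this. The names and the PVC are illustrative, not my actual manifests, and MinIO also ships an operator and Helm chart that handle most of this for you:

```yaml
# Minimal single-node MinIO Deployment - a sketch, not production-ready.
# Real setups want a StatefulSet (or the MinIO operator), TLS, and
# credentials pulled from a Secret rather than defaults.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: minio
  labels:
    app: minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: minio
  template:
    metadata:
      labels:
        app: minio
    spec:
      containers:
        - name: minio
          image: minio/minio
          args: ["server", "/data", "--console-address", ":9001"]
          ports:
            - containerPort: 9000   # S3 API
            - containerPort: 9001   # web console
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: minio-data   # assumes a PVC of this name exists
```

Even this "simple" manifest is forty lines before you add a Service, an Ingress, or storage classes, which is where the complexity creep started.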

But here’s where the reality bit me hard. The more features I added to this project, the more I felt like I was losing control. Each new service introduced complexity into the mix—caching layers, backup strategies, and network configurations. I found myself spending more time debugging than coding, which is never fun when you’re working on something for yourself.

One evening, as I tried to deploy a new feature using ArgoCD, things started to go awry. The application failed to start, and the logs were cryptic at best: the error message amounted to "something's wrong with the config map," with no hint of which key or which file. After hours of digging through YAML files, the culprit turned out to be something small: a missing space in a command.
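To give a flavor of the failure mode (a reconstructed example, not my actual manifest): when a container's command is written as a YAML list, a space in the wrong place silently changes what the process receives as arguments, and nothing flags it at apply time.

```yaml
# Broken: "server /data" is passed as ONE argument, so the binary
# never sees a valid subcommand and exits at startup.
args: ["server /data", "--console-address", ":9001"]

# Fixed: each token is its own list element.
args: ["server", "/data", "--console-address", ":9001"]
```

Both versions are syntactically valid YAML and pass schema validation, which is exactly why this class of bug takes hours instead of seconds to find.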

It’s these moments that highlight why Kubernetes can be so frustrating. The configuration format is unforgiving, and a single typo or misplaced character can cause everything to fall apart. But the challenges aren't only technical. As I wrestled with ArgoCD, I couldn’t help but feel that the whole ecosystem was designed for seasoned operators rather than someone trying to learn on the side.

This isn’t the first time I’ve faced Kubernetes complexity fatigue. Last year, we had a similar situation in production, where an application went down due to misconfigured network policies. It took us hours to track down and fix, leaving me with that sinking feeling of "I should have done this differently."
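The classic trap with NetworkPolicies, and roughly the class of mistake that bit us (simplified from memory, labels are illustrative): as soon as any policy selects a pod, that pod becomes default-deny for the direction the policy covers, so a rule that is one label off silently drops all traffic.

```yaml
# Once this policy selects the api pods, ALL other ingress to them
# is denied. If the label below doesn't match what the callers
# actually carry (say they're labeled "role: web", not
# "app: frontend"), every request is dropped, and nothing useful
# shows up in the application's own logs.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-frontend
spec:
  podSelector:
    matchLabels:
      app: api
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend   # a typo or stale label here = outage
```

The policy itself applies cleanly either way, so the only symptom is timeouts, which is why it took hours to trace.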

As the day wound down, I found myself reflecting on the state of my side project. Was it worth the effort? Could I simplify things enough to make maintenance easier without sacrificing features?

In the end, I decided to break down the project into smaller components. Instead of trying to build everything at once, I’d focus on one piece at a time, ensuring that each part was rock solid before moving on to the next.

This approach not only made my work more manageable but also helped me regain control over what had been turning into an overwhelming mess. By focusing on small, achievable goals, I could still make progress without feeling like I was drowning in Kubernetes complexity.

As for the broader tech world, October 2021 was dominated by news stories that felt both exciting and concerning, like Facebook's global outage and the Pandora Papers leak. But when you're dealing with day-to-day technical challenges, these global events can feel distant from your immediate work.

In the end, it’s about finding balance between pushing forward and stepping back to reassess. Maybe I’ll return to this project another day, but for now, I’m content to take things one step at a time. After all, that’s what platform engineering is all about—building something that works, even if it means starting small.

And who knows? Maybe next year, when eBPF becomes mainstream, I’ll have a new challenge to tackle. For now, I’ll stick with the basics and see where this side project takes me.