$ cat post/vi-on-a-dumb-term-/-we-merged-without-a-review-/-config-never-lies.md

vi on a dumb term / we merged without a review / config never lies


Title: Why I Left My Gigabyte Tower in the Closet


August 23, 2021. Just another day in tech, but somehow it felt like a turning point for me and my approach to infrastructure.

I still remember the day vividly. I was sitting at my desk with my trusty Gigabyte tower from 2015, the kind of machine that needed actual room to breathe. I had been working on this monolithic application for months, tweaking performance here, fixing a bug there. But that day felt different. There was a nagging feeling in the back of my head that something wasn’t right.

The Spark: Internal Developer Portals

A few weeks earlier, our team started toying with Backstage, the internal developer portal tool from Spotify. It’s one of those projects that seems simple enough on the surface but opens up a whole can of worms when you dive into it. We set up a basic instance and began poking around its API and UI. What struck me was how neatly it handled all the disparate parts of our tech stack—repositories, CI/CD pipelines, even services running in Kubernetes.
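What that looks like in practice: each repository carries a small catalog-info.yaml file that Backstage reads to build its catalog. Here is a minimal sketch, with a hypothetical service name and repo slug rather than anything from our actual stack:

```yaml
# Minimal Backstage catalog descriptor; the service name, owner, and
# repo slug below are hypothetical examples.
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: billing-api
  description: Billing endpoints carved out of the monolith
  annotations:
    github.com/project-slug: acme/billing-api  # links the repo and CI status
spec:
  type: service
  lifecycle: production
  owner: platform-team
```

One small file per repo, and the portal stitches repositories, pipelines, and running services into a single view.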

This little experiment planted a seed. Why did I need to run a custom script every time someone wanted to update our infrastructure documentation? Why wasn’t there an easier way to deploy changes without manually logging into servers and issuing commands?

The Realization

Then came the realization: I had spent so much time optimizing my application that I hadn’t given proper attention to making our operations more efficient. I was still tethered to this clunky, monolithic setup, not because it served us well, but because its complexity and fragility made changing it feel too risky.

That’s when I decided to take a deep dive into eBPF (extended Berkeley Packet Filter). It seemed like an exciting area with real potential to change how we handle networking and monitoring. But as much as I wanted to jump in headfirst, I knew better than to abandon my responsibilities just yet. So I set up a small proof of concept on a spare server and began writing notes on the side.

The Complexity

Kubernetes had become this giant ball of complexity. We were using Helm charts for our deployments, but even that felt like too much boilerplate. Every change carried the risk of introducing subtle bugs. ArgoCD looked promising as a solution, but integrating it across all our services was proving harder than expected.
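To give a feel for the integration work: each service needed roughly an Application manifest like the sketch below, plus repo credentials and project wiring on top. The repo URL, chart path, and names here are hypothetical stand-ins, not our real setup:

```yaml
# Hypothetical ArgoCD Application wiring one service's Helm chart
# into the cluster; every service needed its own.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: billing-api
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/acme/deploy-configs  # made-up repo
    targetRevision: main
    path: charts/billing-api                         # the service's chart
  destination:
    server: https://kubernetes.default.svc
    namespace: billing
  syncPolicy:
    automated:
      prune: true     # remove resources deleted from Git
      selfHeal: true  # revert manual drift in the cluster
```

Multiply that by every service and every environment, and the effort added up quickly.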

I found myself spending more and more time wrestling with these issues instead of focusing on building new features. The Kubernetes complexity fatigue was setting in, and I knew I had to do something about it.

The Decision

In the end, it wasn’t just a matter of choosing tools or technologies. It was about rethinking my approach to infrastructure. I decided that we needed to centralize our configuration management more effectively. That meant investing time in understanding Flux, the GitOps toolkit, and setting up best practices for using it across our entire stack.
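The core idea is compact. As a rough sketch using Flux’s current API (the repository URL and paths are hypothetical, not our real layout), Flux watches a Git repository and continuously reconciles a path from it into the cluster:

```yaml
# Hypothetical Flux setup: a watched Git source plus a Kustomization
# that applies ./clusters/dev from that repo to the cluster.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: infra-configs
  namespace: flux-system
spec:
  interval: 1m                                # how often to poll Git
  url: https://github.com/acme/infra-configs  # made-up repo
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: dev-cluster
  namespace: flux-system
spec:
  interval: 10m                               # reconcile cadence
  sourceRef:
    kind: GitRepository
    name: infra-configs
  path: ./clusters/dev                        # dev environment manifests
  prune: true                                 # drop resources removed from Git
```

The appeal over our old workflow is that the cluster converges on whatever is in Git; nobody has to log into servers and issue commands by hand.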

So, I packed away the Gigabyte tower: literally, it went into a box and retired to the back of my closet. It was a small victory, but a significant step towards modernizing our operations.

Looking Forward

The next few weeks were spent laying the foundations for Flux in our dev environment. It’s not perfect yet, but I’m excited about where we’re headed. We’re starting to see the benefits: easier deployments, better version control of our infrastructure, and a clearer separation between development and production environments.

As for eBPF? I’ll keep an eye on it, but right now my focus is on making sure we can scale our internal tools effectively while keeping the agility we need. The path forward isn’t always linear, but every step counts towards building something that works better for everyone involved.


And so, as August 23rd ticked by, I found myself looking forward to what was next—less tied to outdated hardware and more focused on the future of our platform engineering practices.