$ cat post/sudo-bang-bang-run-/-we-merged-without-a-review-/-config-never-lies.md

sudo bang bang run / we merged without a review / config never lies


Title: April 25, 2022 - When Twitter’s Shiny New Toys Start Scratching


April 25, 2022. The date felt like a milestone, or at least a new phase of the daily grind. Back then, the tech world was abuzz with AI and LLMs, platform engineering was going mainstream, and every day seemed to bring more complexity in the form of CNCF landscape updates. WebAssembly on the server side was still something of a novelty, and DevOps was evolving into Developer Experience as a discipline. FinOps and cloud cost pressure were real, and DORA metrics had firmly taken hold.

That morning, I woke up to Twitter’s breaking news: the board had accepted Elon Musk’s roughly $44B takeover offer, the unsolicited bid he’d lobbed in earlier that month. The headline was so absurd that it felt like the tech world’s equivalent of a Hollywood blockbuster. Within hours, the stock market went into a frenzy. It wasn’t just about Twitter; it was the first domino in what would become an epic saga.

Meanwhile, on my desk at work, I had some real ops to deal with. We were facing an influx of user requests for our platform that we hadn’t anticipated. The team and I spent most of the morning on the post-mortem for a recent outage, trying to understand where our infrastructure was holding us back. One of the main culprits was our load balancers: whenever they saturated, failures cascaded through everything behind them.

As I dug into the logs, I noticed something peculiar: patterns in the request data that didn’t match our usual traffic profiles. It looked like bots were hammering our endpoints, as if someone had pointed a stress-testing rig at production. This was not ideal; we needed to handle legitimate load without being swamped by these artificial bursts.
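The log digging boiled down to counting requests per client IP and seeing which ones stood out from the normal profile. A minimal sketch of that triage, assuming simplified access-log lines where the client IP is the first whitespace-separated field (the format, IPs, and `top_talkers` name are all illustrative, not our actual tooling):

```python
from collections import Counter

def top_talkers(log_lines, threshold):
    """Count requests per client IP and flag any IP whose volume
    meets the threshold -- the kind of burst that doesn't match
    a normal traffic profile."""
    counts = Counter(line.split()[0] for line in log_lines if line.strip())
    return {ip: n for ip, n in counts.items() if n >= threshold}

# Illustrative log lines (timestamps and other fields trimmed away):
sample = [
    "198.51.100.9 GET /api/search 200",
    "198.51.100.9 GET /api/search 200",
    "198.51.100.9 GET /api/search 200",
    "203.0.113.4 GET /login 200",
]
print(top_talkers(sample, threshold=3))  # {'198.51.100.9': 3}
```

In practice the threshold would be tuned against a baseline window rather than hard-coded, but even this crude count is enough to separate a scripted burst from organic traffic.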

I decided to implement rate limiting with Redis as a shared counter store. The plan was simple: if the number of requests from an IP exceeded a threshold within a window, block further requests until the cooldown expired. After some coding and testing, it worked like a charm, sparing us the cascading failures that could have brought the whole system down.
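The scheme above is a fixed-window counter. Here is a self-contained sketch of the logic; in production the counters would live in Redis (`INCR` plus `EXPIRE` on the per-window key) so every instance shares state, but a plain dict stands in here so the example runs on its own. The class name and limits are illustrative, not our actual code:

```python
import time

class FixedWindowRateLimiter:
    """Fixed-window rate limiter: allow up to `limit` requests per IP
    per window, reject the rest until the next window starts."""

    def __init__(self, limit, window_seconds):
        self.limit = limit
        self.window = window_seconds
        self.counters = {}  # (ip, window_start) -> request count

    def allow(self, ip, now=None):
        now = time.time() if now is None else now
        # Bucket the timestamp into the start of its window.
        window_start = int(now) // self.window * self.window
        key = (ip, window_start)
        count = self.counters.get(key, 0) + 1
        self.counters[key] = count
        return count <= self.limit

limiter = FixedWindowRateLimiter(limit=3, window_seconds=60)
results = [limiter.allow("203.0.113.7", now=1000.0) for _ in range(5)]
print(results)  # [True, True, True, False, False]
```

Fixed windows are the simplest variant; they admit up to 2x the limit across a window boundary, which a sliding-window or token-bucket scheme avoids at the cost of a bit more bookkeeping.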

But then came the day when I had to argue with my peers about a new framework we were considering for our platform. The buzz was around Serverless frameworks, but many of them lacked stability and support for enterprise-grade needs. We needed something robust, scalable, and maintainable in the long term. I argued for Kubernetes as a better fit, despite its complexity, because it offered more control over resources and allowed us to leverage existing CI/CD pipelines.

The discussion was heated, with some team members leaning towards Serverless due to the promises of automatic scaling and reduced operational overhead. But I insisted on the benefits of having full visibility and control over our deployment processes. We ultimately decided to run a small proof-of-concept using both approaches before making any major decisions.

That night, as I was wrapping up my work for the day, I found myself reflecting on all these challenges. The industry was in flux, with every tech conference or blog post bringing new ideas and tools. But at the end of the day, it came down to practical solutions that addressed real-world issues.

And then there was the GitHub star count: watching a project shed 54k stars felt like a punch in the gut for open source. I couldn’t help but wonder whether our project had something to do with it, or whether it was just part of the broader tide of tech fatigue and consolidation.

Despite all these challenges, April 2022 taught me that resilience is key. Whether dealing with unexpected load spikes, arguing about frameworks, or coping with the ever-changing landscape of technology, each day brought new lessons. I ended up feeling more grounded and determined to keep pushing forward, one bug at a time.


That’s where I was in 2022, navigating the tech storm while trying to keep our platform running smoothly.