$ cat post/sudo-bang-bang-run-/-the-flag-was-set-in-production-/-config-never-lies.md

sudo bang bang run / the flag was set in production / config never lies


Title: Docker Fever: A Year in Review


April 7, 2014 marked a significant moment for me and many of my colleagues. It was about a year after I first heard the term “Docker” tossed around at a conference, where some guy showed off a container tool that sounded way too good to be true: lightweight, portable, reusable, with minimal overhead.

At the time, we were working on a large-scale e-commerce platform for a major retailer. The app was monolithic and tightly coupled, making it a nightmare to scale or update in any meaningful way. We were already using VMs liberally across our infrastructure, but Docker promised something different—containers that could be easily spun up and torn down without the overhead of full virtualization.

The Initial Hype

Docker seemed like the silver bullet we needed. It was simple, easy to use, and promised a cleaner way to package and run applications. We began playing with it on our dev machines, trying out different scenarios and building small proofs of concept. The community was growing rapidly; every week brought new tools and integrations.

One of my favorite aspects of Docker early on was the docker run command. It allowed us to quickly spin up a container from a base image, run our app, and then tear it down without leaving any mess behind. It felt like magic compared to setting up VMs with all their complexity.
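A minimal sketch of what that looked like for us at the time (the image name and port are illustrative, not our real setup):

```
# Pull a base image and start a throwaway container from it;
# --rm cleans the container up as soon as the process exits.
docker pull ubuntu:14.04
docker run --rm -it ubuntu:14.04 /bin/bash

# Run an app image in the background, mapping a host port into it.
docker run -d --name web -p 8080:8080 example/webapp

# Tear it down when we're done -- nothing left behind on the host.
docker stop web
docker rm web
```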

The Real Work Begins

When we started using Docker in production, things got messy real quick. We ran into several issues that were far from glamorous. Our application stack was complex; there were multiple services and a database involved. Migrating an existing monolith to containers required a complete rearchitecting of our codebase.
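To give a flavor of what "multiple services and a database" meant in Docker terms back then, here is a rough sketch using container links; the images and credentials are placeholders, not our actual stack:

```
# Start the database in its own container.
docker run -d --name db -e POSTGRES_PASSWORD=secret postgres:9.3

# Start the application container and link it to the database;
# the link injects DB_* environment variables the app can read.
docker run -d --name web --link db:db -p 80:8080 example/shop-frontend

# Every extra service became another docker run invocation to manage.
docker run -d --name worker --link db:db example/shop-worker
```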

One big problem was resource management. Containers can be lightweight, but they still need resources, especially memory. We quickly hit limits on the number of containers we could run due to RAM constraints. Kubernetes was announced just as we were hitting these walls, and it promised a solution for orchestration and scaling.
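We ended up capping container memory explicitly rather than letting containers fight over RAM. A hedged sketch of what that looked like (the limit and image name are illustrative):

```
# Cap a container at 512 MB so one runaway process
# can't starve everything else on the host.
docker run -d --name api -m 512m example/api

# Eyeball the applied limit in the container's metadata.
docker inspect api | grep -i memory
```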

The SRE Lens

As someone who has always been an ops person at heart, I found myself thinking about how this would impact our operations team. With more moving parts in the form of containers, the traditional roles of sysadmins and developers were starting to blur. Our existing tools for monitoring and logging weren’t well-suited to containerized applications.
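The container-native tooling covered the basics and not much more; a sketch of the kind of ad-hoc poking we fell back on (the container name is illustrative):

```
# Tail the stdout/stderr of a running container.
docker logs -f web

# See the processes inside it, from the host's point of view.
docker top web

# Dump everything Docker knows about the container as JSON.
docker inspect web
```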

We began exploring new tools like etcd and fleet, which seemed promising for managing a fleet of Docker containers. But integrating them with our existing monitoring stack was tricky, and we had to spend countless hours figuring out the best practices.
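With fleet, the workflow was writing systemd-style unit files and pushing them to the cluster with fleetctl. A minimal sketch, assuming a CoreOS cluster with fleet already running (the service and image names are made up for illustration):

```
# webapp.service -- a fleet unit that runs a Docker container.
cat > webapp.service <<'EOF'
[Unit]
Description=Example web app in a container

[Service]
ExecStartPre=-/usr/bin/docker rm -f webapp
ExecStart=/usr/bin/docker run --rm --name webapp -p 8080:8080 example/webapp
ExecStop=/usr/bin/docker stop webapp

[X-Fleet]
# Don't co-schedule with other webapp units on the same machine.
Conflicts=webapp*
EOF

# Submit the unit to the cluster and start it somewhere.
fleetctl submit webapp.service
fleetctl start webapp.service
fleetctl list-units
```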

The Heartbleed Moment

And then there was the infamous Heartbleed bug. It hit right as we were ramping up Docker in production. Suddenly, everyone was scrambling to secure their systems, including our containerized applications. We spent a week applying patches and hardening containers, which was both frustrating and eye-opening.
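On the container side, patching mostly came down to rebuilding images against updated base layers and replacing the running containers. A rough sketch, assuming an Ubuntu-based image with openssl installed (names and tags are illustrative):

```
# Pull the repaired base image and rebuild without cached layers,
# so the patched OpenSSL actually ends up in the image.
docker pull ubuntu:14.04
docker build --no-cache -t example/webapp:patched .

# Confirm the image carries a freshly built OpenSSL.
docker run --rm example/webapp:patched openssl version -a | grep built

# Roll the fix out by replacing the running container.
docker stop web && docker rm web
docker run -d --name web -p 8080:8080 example/webapp:patched
```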

It made me realize that security is not just about code, but also about the infrastructure supporting it. Containers brought new challenges we hadn’t anticipated, and we had to adapt quickly.

The Write Code Every Day Philosophy

During this period of chaos, I found myself thinking a lot about what drives success in tech. Reading through Hacker News, I kept coming across pieces like “Write Code Every Day,” and that one resonated with me. In the midst of all the container hype, it was easy to get lost in tools and buzzwords. But the core truth remains: writing code, debugging, and solving real problems is what matters.

Looking Back

A year later, Docker has definitely changed our landscape. We’ve moved away from VMs for many services and now rely on containers for most of our applications. Kubernetes has become a crucial part of our platform, helping us manage and scale containers more effectively.

Looking back, the excitement around Docker was warranted. But it also taught me to be wary of tools that promise too much too soon. The journey from monolith to microservices is long and filled with challenges, but I’m glad we took the leap.


This post reflects my thoughts and experiences during this exciting yet challenging time in tech. It’s a reminder that while new technologies can be transformative, they also bring their own set of problems to solve.