
The Docker Dilemma: A Tale of Two Containers


June 9, 2014. I wake up to the sound of a small ding from my laptop: another update in my Hacker News RSS feed. Today’s story is Docker 1.0 hitting the streets. It’s hard to ignore the news when it arrives with 709 points and 130 comments attached. I can’t help but think back to the early days of containerization, and to how we at my company have been wrestling with the concept.

Last year, our ops team started looking into Docker as part of our quest for better application portability. We had a mix of VMs and containers running on both private and public clouds, and everyone was trying to figure out the right way forward. The promise of lightweight, isolated processes sharing a single host kernel seemed too good to be true, but then again, we were all tired of dealing with hypervisors.

My role as an engineer involved not just adopting new technology, but also understanding its implications for our current infrastructure. We had a monolithic application that was slowly evolving into microservices, and Docker seemed like the perfect tool to help us manage these services. But there were still plenty of questions—how do we integrate it with our existing monitoring? Can we make it as reliable as VMs? And most importantly, can we actually reduce costs while increasing agility?

The decision came down to a mix of trial and error, coupled with some healthy debate. I remember long discussions with the team about whether Docker was mature enough for production use. Some argued that CoreOS with fleet and etcd was still the way to go. Others were excited by Mesos and Marathon as an orchestration layer.

One day, we decided to go all-in on Docker 1.0. We set up a few test clusters and started migrating some of our services over. The initial results were promising: faster startup times, reduced resource usage—everything pointed towards a clear win. But then came the bugs. We had issues with service discovery, network isolation, and even some security vulnerabilities that made us stop and rethink.
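For a sense of scale, a first-pass migration was barely more than this. The sketch below uses hypothetical image and service names, not our actual deploy scripts:

```sh
# A minimal sketch of moving one service onto a Docker 1.0 test host.
# The image name, tag, and port are hypothetical stand-ins.

# Build the service image from its Dockerfile.
docker build -t internal/orders:0.1 .

# Run it detached, mapping the service port onto the host.
docker run -d --name orders -p 8080:8080 internal/orders:0.1

# First sanity check: did the process come up cleanly?
docker logs orders
```

Getting a single service running really was that easy, which is what made the early numbers look so good. Keeping a fleet of them discoverable, isolated, and patched was where the trouble started.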

One particular night stands out. I was working late trying to debug an issue where our containerized service wasn’t starting properly on one of our servers. It turned out to be a DNS resolution problem: a simple typo in the Docker command-line arguments had caused a cascade of failures. I remember spending hours tracing logs, checking configurations, and finally figuring it out. It was those moments that made me appreciate the complexity of container orchestration.
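The exact command is long gone, but the mistake was of this flavor. This is a hypothetical reconstruction; the service name, image, and addresses are all invented:

```sh
# Hypothetical reconstruction of the failure; names and addresses are
# invented. The --dns flag overrides the resolver used inside the container.

# What the deploy script was supposed to run:
docker run -d --name billing --dns 10.0.0.2 internal/billing:0.3

# What it actually ran. 10.0.0.20 is still a well-formed address, just one
# with no DNS server behind it, so every lookup inside the container hung
# and the service died waiting on its dependencies.
docker run -d --name billing --dns 10.0.0.20 internal/billing:0.3
```

Because the bad value was still a valid IP, Docker accepted it without complaint; the failure only surfaced minutes later, as timeouts buried deep in the application logs.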

As June moved on, so did our efforts. We continued to experiment with Docker, learning as we went. When Google announced Kubernetes later that same month, it felt like a natural progression: a proposed answer to many of the issues we had encountered.

Looking back at those early days, I realize that technology adoption is rarely straightforward. It’s a blend of excitement and frustration, hope and skepticism. But Docker taught us something invaluable: patience and persistence are key when trying to embrace new technologies. And even as we move forward with Kubernetes and beyond, the lessons learned from our initial foray into containerization remain deeply ingrained in my approach to DevOps.


That’s where I stand today, reflecting on the journey that brought me here. The Docker Dilemma was just one chapter in a longer story of learning, adapting, and pushing the boundaries of what’s possible with technology.