
Containers vs. VMs: A Case Study in DevOps Transformation


February 16th, 2015 - I remember the day like it was yesterday. Docker had hit the scene not long before, promising lightweight, portable containers that could change the game for developers and operations teams alike. At work, we were still grappling with our old-school VM (virtual machine) setup, but the buzz around Docker was undeniable.

Our team had been using VMs for years, each application running in its own isolated environment on a dedicated server. It worked fine, but there were growing pains. Server sprawl was an issue, and provisioning new servers took time. Plus, we often found ourselves reinventing the wheel with setup scripts for every new project.

Docker offered something different. Instead of whole machines, you got lightweight containers that shared the kernel of the host machine. That meant more efficient use of hardware and quicker start times. But would it work in our environment? We were about to find out.
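You can see that kernel sharing directly: ask a container for its kernel version and it reports the host's, because there is no guest OS underneath. (A sketch assuming a machine with Docker installed; the busybox image is just a convenient small example.)

```shell
# No guest OS boots here -- the container is just a process on the host
# kernel, so it starts in about a second and reports the host's kernel.
docker run --rm busybox uname -r
```

Compare that with waiting for a full VM to boot, and the appeal for quick, disposable environments is obvious.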

The Setup

We decided to pilot Docker on one of our smaller projects to see if we could migrate from VMs. My colleague, Sarah, was in charge of the initial setup, while I focused on the deployment pipeline and orchestration.

Sarah came back with a working container stack built on CoreOS as the host OS. Each application ran in its own container, etcd served as the distributed key-value store for shared state, and fleet handled cluster management, making sure our containers were scheduled across machines and restarted if they died.
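To give a concrete picture of that stack, here is a minimal sketch of the kind of fleet unit we ran. The service name, image, and port are hypothetical, not our actual configuration; a fleet unit is an ordinary systemd unit file, optionally with an [X-Fleet] section for scheduling hints:

```ini
# webapp.service -- illustrative fleet unit (names and image are made up)
[Unit]
Description=Example web application container
After=docker.service
Requires=docker.service

[Service]
TimeoutStartSec=0
ExecStartPre=-/usr/bin/docker kill webapp
ExecStartPre=-/usr/bin/docker rm webapp
ExecStartPre=/usr/bin/docker pull example/webapp:latest
ExecStart=/usr/bin/docker run --name webapp -p 8080:8080 example/webapp:latest
ExecStop=/usr/bin/docker stop webapp

[X-Fleet]
MachineMetadata=role=worker
```

Starting it was a matter of `fleetctl start webapp.service`, and `fleetctl list-units` showed where in the cluster it had landed.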

The Debugging Begins

But then we hit a snag. One of our services failed to start reliably, and it wasn’t immediately clear why, so I dove into the logs with Sarah.

“Looks like a network issue,” she said, pointing at the error message that kept repeating. “Maybe it’s not starting because it can’t reach its dependencies?”

I nodded and ran a few commands to check connectivity. Everything looked fine from where we were sitting. Then I remembered that one of our containers was trying to reach a dependency over an internal network address.

“Could be a DNS issue,” I suggested, changing the configuration file slightly. “Let’s try adding an entry for this service in etcd.”

Sure enough, after a few tweaks, the container started up just fine. It turned out we needed more thorough testing of our internal networking setup. That little hiccup taught us valuable lessons about container dependencies and network configuration.
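For the curious, the fix amounted to registering the service under a well-known key so its dependents could find it. A sketch using etcd's v2-era CLI, with a hypothetical key and address (this needs a running etcd cluster, and in practice a sidekick unit would write the key on startup rather than a human):

```shell
# Publish the service's address under an agreed-upon key.
etcdctl set /services/webapp/endpoint 10.1.2.3:8080

# Dependents (or an etcd-backed DNS bridge such as SkyDNS) look it up:
etcdctl get /services/webapp/endpoint
```

The broader lesson was that with containers, service discovery stops being an afterthought: addresses change every time something is rescheduled, so they have to live somewhere the whole cluster can read.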

The Future is Here

As we continued to refine our setup, I found myself thinking about the broader implications of Docker. With Kubernetes on the horizon from Google, it seemed like containers were going mainstream. But would they replace VMs completely?

I still remember the skepticism some of my peers had. “Isn’t this just a fad?” one asked. “We invested so much in our current setup.”

And yet, as we saw the benefits of more efficient resource usage and easier deployment, the doubts began to fade. We started to see Docker not as an alternative but as a complementary technology that could work alongside VMs.

Looking Back

That initial migration was just the beginning. Over time, we integrated more services into containers, and our development processes became faster and leaner. The transition wasn’t always smooth, but it was worth it.

Today, when I think back to those early days of Docker, I’m reminded that change in tech can be slow and challenging. But sometimes, even small steps can lead to big transformations. And who knows? Maybe in a few years, we’ll look back on this era as the start of something truly revolutionary.


Feel free to reach out if you have any questions or want to chat about your experiences with containers!