$ cat post/a-docker-day-in-the-life:-when-containers-were-still-new.md

A Docker Day in the Life: When Containers Were Still New


June 1, 2015. I wake up to the sound of my MacBook Pro’s fans whirring like a jet engine. Today is another long day of coding and debugging, but with the addition of containerization to my toolkit, things are looking up.

Two years ago, Docker burst onto the scene like a supernova, and we’re still in the early days here at [Company Name]. We’ve started using Docker on our dev machines for local development, but getting it production-ready has been an adventure. I spent last night chasing an issue that’s been dogging us for days.

The problem? One of our services, a Java application, was timing out in Kubernetes when we deployed it to a new cluster as Docker containers instead of the plain old VMs that had been running the same code. We’ve got a bunch of microservices, and they’re starting to take off. Every time I think one of them is behaving normally, something else pops up.

I start by pulling the pod logs with kubectl to see what’s going on. The app is failing with an IOException, but the stack trace isn’t particularly helpful. It looks like a file permission issue, but that doesn’t make sense, because we’ve checked and double-checked everything. I decide to take it one step further.
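The log-digging looked roughly like this (the pod and namespace names here are made up for illustration, not our real setup):

```shell
# List pods in the namespace to find the failing one (names are hypothetical)
kubectl get pods --namespace=prod

# Tail the logs of the suspect pod to see the IOException
kubectl logs payments-java-4f2k1 --namespace=prod

# If the container already crashed and restarted, grab the previous run's logs
kubectl logs payments-java-4f2k1 --namespace=prod --previous
```

These are CLI fragments that need a live cluster to run; the point is just that `kubectl logs` (with `--previous` for crashed containers) is where the stack trace first surfaced.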

I build a small Docker image of my own to isolate the problem. My first attempt uses a simple Java app with minimal dependencies. The container runs fine on my dev machine, so something must be different in production. I dig deeper into the deployment configuration and notice that we’re using some custom volume mounts for persistence.
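The throwaway image was about as simple as it gets; something like the sketch below, where the jar name and base image are placeholders rather than our real artifacts:

```dockerfile
# Minimal Java image for isolating the file I/O issue (names are illustrative)
FROM java:8-jre
WORKDIR /app
COPY hellofiles.jar /app/hellofiles.jar
# The test app just reads and writes a file, nothing else
CMD ["java", "-jar", "hellofiles.jar"]
```

Keeping the image this bare means any IOException that shows up in the cluster but not locally has to come from the environment, not the code.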

With a fresh cup of coffee (and a bit more frustration), I start debugging those volume mounts. It turns out there’s an issue with how the volumes are being mounted into the containers running in Kubernetes: the mount path is set to /var/lib/docker/volumes instead of the path our application expects.

After some trial and error, I change the mount path back to what it should be, and voilà—no more timeouts! It’s a small victory, but a necessary one for getting our services running smoothly in production with Docker.
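In pod-spec terms, the fix amounted to pointing the volume mount back at the directory the app actually reads. A sketch of the relevant fragment, with hypothetical names and paths standing in for our real manifest:

```yaml
# Fragment of a pod spec; names and paths are illustrative
containers:
  - name: payments-java
    image: registry.internal/payments:1.4.2
    volumeMounts:
      - name: app-data
        # Was mistakenly /var/lib/docker/volumes; the app expects its data here
        mountPath: /app/data
volumes:
  - name: app-data
    hostPath:
      path: /srv/app-data
```

The lesson that stuck: the container sees whatever `mountPath` says, regardless of where the data lives on the host, so one wrong line here is invisible until the app tries to touch its files.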

This morning, I’m thinking about how far we’ve come. Just two years ago, everyone was still trying to figure out the best way to manage containers and microservices. Now, it feels like every new engineer who joins the team has to spend a few days figuring out Kubernetes basics before they can even start coding.

As I reflect on this day in the life of Dockerization, I’m reminded that while tools come and go, the core principles remain: simplicity, reliability, and just enough automation. And in our case, it’s all about making sure our containers behave like a well-oiled machine, no matter where they’re running.

That’s how we roll around here at [Company Name]. Not perfect, but always learning and improving. Stay tuned for more adventures in the world of containerization and microservices!

