
The Docker Diaries: A Day in the Life of a Newbie


September 8, 2014 was a day that felt like the dawn of something big. I had just started working with Docker in my current role at a company that was seriously considering moving to containers for our development environment. It’s not every day you dive into a technology that promises to change how we package and deploy applications.

The morning started off like any other Monday, but as soon as the first coffee hit my system, I remembered what I had been up to the night before. I had spent some time setting up Docker on my laptop, getting familiar with the basics: pulling images from Docker Hub and running a few simple containers. It felt like magic: I could spin up an instance of a web server or database in minutes, all without worrying about dependencies.

But then came the first real challenge. Our application was a monolith that had been built over several years, with a complex set of dependencies. We wanted to split it into microservices and Dockerize everything. Easy enough, right? Not so fast.

I tried running my app inside a container for the first time. The command looked something like `docker run -p 80:8080 -v /path/to/code:/app <image-name>`. It seemed straightforward, but I hit an error almost immediately: “Could not bind to port 80 on [host IP address] as it is already in use.” Huh? I hadn’t even checked whether the port was occupied.

After a few false starts and some debugging (which involved running `lsof -i :80`), I realized that one of our other services was already listening on port 80. Ugh, how could I have missed this?
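In case it helps anyone in the same spot, this is roughly the sequence I went through (the paths and image name are placeholders):

```shell
# Find out which process already owns port 80 (ss or netstat work too; may need sudo):
lsof -i :80

# Option 1: stop or reconfigure the conflicting service.
# Option 2: publish the container on a free host port instead:
docker run -p 8081:8080 -v /path/to/code:/app <image-name>
```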

Fixing it wasn’t too hard—I just needed to change the service configuration—but it taught me a valuable lesson: containers are great for isolation, but you still need to manage dependencies and conflicts between them.
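That lesson can be turned into a habit: check whether the host port is free before publishing onto it. A minimal sketch using bash’s built-in `/dev/tcp` (`lsof` or `ss` work just as well):

```shell
#!/usr/bin/env bash
# Check whether a host port is already taken before running `docker run -p`.
# Opening /dev/tcp/HOST/PORT succeeds only if something is listening there.
port=80
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
    echo "port $port is already in use"
else
    echo "port $port looks free"
fi
```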

Later in the day, we had a meeting where everyone was brainstorming ideas on how to integrate Docker into our existing CI/CD pipeline. One of the team members suggested using Jenkins with Docker plugins. It sounded promising, so I spent some time researching it after the meeting.

I started by setting up Jenkins on one of our servers and installing the Docker plugin. But there were a few gotchas right away. The plugin was still in beta and wasn’t as feature-complete as we hoped. It took some trial and error to get it running smoothly, but eventually I had a working setup where Jenkins could build our Docker images.
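The job itself didn’t need to be fancy. A typical setup ends up with a shell build step along these lines (the registry host and image name here are made up):

```shell
# Shell build step in the Jenkins job: build, tag, and push the image.
docker build -t myapp:$BUILD_NUMBER .
docker tag myapp:$BUILD_NUMBER registry.example.com/myapp:latest
docker push registry.example.com/myapp:latest
```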

As the day went on, more challenges arose. One of them involved mounting volumes properly so that our application data wouldn’t be lost when containers were recreated or redeployed. We ended up using named volumes with Docker Compose to make sure everything was set up correctly.
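Roughly, a named volume in a compose file looks like this (service and volume names are placeholders); the top-level `volumes:` key is what makes the volume named, so it survives the container being recreated:

```yaml
# docker-compose.yml sketch
version: "2"
services:
  db:
    image: postgres
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```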

By the end of the day, I felt a mix of excitement and frustration. Excitement because Docker opened up so many possibilities for us, but frustration because setting it all up required a lot more work than I had anticipated. It made me realize that while containers are powerful, they’re also complex—especially when dealing with legacy applications.

As the sun set on another day, I sat back and took stock of my progress. Docker was going mainstream, and we were right at the beginning of this journey. I knew there would be more bumps ahead, but for now, I was ready to face them head-on.

