
the prod deploy froze / the segfault taught me the most / the repo holds it all


Title: Dockerizing Our Rails App: A Day in the Life of a DevOps Mess


April 6th, 2015. The sun was just starting to peek over the horizon as I sat down at my desk, coffee already brewing on the counter. Today’s task? Dockerize our beloved Ruby on Rails app.

You see, we’ve been running this app for years now, and while it works, it’s not exactly a model of efficiency or portability. Our office is full of engineers who’ve spent too many late nights wrestling with hand-edited configuration files, VMs, and manual deployment scripts. It was time to modernize our infrastructure with Docker.

The Setup

I’ve always been a fan of simplicity, so I decided to start small: let’s containerize the app itself. Our Rails app is straightforward: a single codebase backed by a Postgres database. We have a simple Nginx setup for serving static content and reverse proxying requests to the app server, Puma.

My first step was to create a Dockerfile in the root of our project directory. It wasn’t long before I realized that this might be more complex than expected. The Rails app depends on several gems, and we needed to ensure all these dependencies were properly installed within the container. After some trial and error, here’s what my initial Dockerfile looked like:

```dockerfile
# Use an official Ruby runtime as a parent image
FROM ruby:2.2

# Set the working directory in the container
WORKDIR /app

# Copy the Gemfile and Gemfile.lock first, so the installed-gems layer
# is cached and only rebuilt when dependencies change
COPY Gemfile* ./
RUN bundle install --jobs 20 --retry 5

# Copy in the rest of the app source
COPY . .

# Expose port 80, where the app server listens, for Nginx to proxy to
EXPOSE 80

# Run the Rails app under Puma
CMD ["bundle", "exec", "puma", "-C", "/app/config/puma.rb"]
```
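One gotcha worth flagging from that trial-and-error phase: COPY . . drags the whole project directory into the image, .git directory, logs, and all. A .dockerignore file next to the Dockerfile keeps the build context lean. A minimal sketch (the entries here are illustrative, not our exact file):

```
# .dockerignore (illustrative entries)
.git
log/
tmp/
*.swp
```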

The Challenges

As I was testing my container, I encountered a few issues. One major problem was getting our Nginx and Puma setup to work within Docker. We needed to ensure that the Rails app could reach the database, which meant wiring up environment variables for the connection string.
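For what it’s worth, the shape of the compose file ended up roughly like this. Service names, credentials, and the DATABASE_URL are placeholders rather than our real values, and this is the 2015-era v1 compose format, which used links instead of named networks:

```yaml
# docker-compose.yml sketch (names and credentials are placeholders)
web:
  build: .
  ports:
    - "80:80"
  links:
    - db
  environment:
    # Rails picks up DATABASE_URL, which takes precedence over database.yml
    DATABASE_URL: postgres://postgres:secret@db:5432/app_production
db:
  image: postgres:9.4
  environment:
    POSTGRES_PASSWORD: secret
```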

But wait, there’s more! The first time I tried running docker-compose up, it failed with permission errors on the host machine. Darnit! It turns out we had to tweak our Docker volumes and file permissions to get everything working correctly.

The Fix

After some head-scratching and debugging (a lot of sudo chown commands), I finally got a stable container running locally. Now came the fun part: deploying it to production.
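For anyone hitting the same wall, the underlying cause: container processes run as root by default, so anything they write to a bind-mounted volume shows up root-owned on the host. A sketch of the longer-term fix is adding an unprivileged user to the image; the UID of 1000 here is an assumption that it matches the host user:

```dockerfile
# Create an unprivileged user whose UID matches the host user
# (1000 is an assumption; check with `id -u` on the host)
RUN groupadd --gid 1000 app \
 && useradd --uid 1000 --gid app --create-home app
USER app
```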

We use a combination of Chef and Ansible for our infrastructure, so integrating Docker into this workflow required some changes in our deployment scripts. We had to ensure that our CI/CD pipeline could build and push Docker images to our private registry before deploying them to staging and, from there, to production.
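The registry push itself is only a couple of commands; the part worth deciding deliberately is the tag scheme. One common approach, sketched here rather than taken from our actual scripts (the registry host is a placeholder), is tagging images with the short commit SHA so every deploy is pinned to an exact commit:

```shell
# Build a commit-pinned image tag (registry host is a placeholder)
REGISTRY="registry.example.com/ourapp"
SHA="abc1234"              # in CI: SHA="$(git rev-parse --short HEAD)"
TAG="$REGISTRY:$SHA"
echo "$TAG"                # registry.example.com/ourapp:abc1234

# The pipeline steps then reduce to:
#   docker build -t "$TAG" .
#   docker push "$TAG"
```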

The Reflection

This little project took longer than I thought it would. It was a learning curve, not just for the tech involved but also for how we integrate new tools into our existing workflows. But hey, that’s part of the job—embracing change and making sure our systems stay lean and mean.

Looking back, the HN front page at the time was full of stories about development tools and methodologies. But what really stood out was the push toward containerization and microservices. It felt like we were on the cusp of a new era in infrastructure.

Onward to the next challenge!


This day in DevOps land was just one piece of the puzzle, but it felt like a significant step forward. The journey from monolithic apps to Docker containers is still evolving, and I’m excited to see where this takes us.