$ cat post/dial-up-tones-at-night-/-we-ran-it-until-it-melted-/-the-wire-holds-the-past.md

dial-up tones at night / we ran it until it melted / the wire holds the past


Title: Docker Diaries: A Week in the Life of a Noob


September 1, 2014. I woke up to the news that Docker was gaining serious traction. I’d read about it, seen some demos, but hadn’t really taken the plunge yet. Today, I decided to start my journey into containerization.

First step: install Docker on a brand new Ubuntu VM. After wrestling with the installation (why oh why do you need sudo for everything), I finally got it running and tried out the classic “hello world” container. Success! A quick docker run hello-world confirmed that I was indeed living in the future.

Next, I dove into some tutorials to get a better feel for Dockerfiles and how containers work under the hood. As a platform engineer, I found it a bit of a mind-bender at first: you package your entire application environment inside a lightweight container instead of shipping a full virtual machine. But the simplicity of it was starting to grow on me.

By mid-afternoon, I felt confident enough to start a small project: using Docker to package up our internal REST API service for deployment across various environments. I began by creating a basic Dockerfile:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y python-pip

COPY . /app

CMD ["python", "/app/api.py"]

I then built the image and ran it, but things quickly went south when I tried to interact with the container:

docker run --name api -p 8000:5000 my-api

I hit http://localhost:8000 in my browser, only to have the connection refused outright. A quick docker logs api revealed that the API service wasn’t starting at all because of a missing configuration file. Ugh.

After some debugging, I realized the working directory inside the container wasn’t what I expected. I hadn’t set one, so the CMD was running from the root of the filesystem, and the API loads its configuration from a path relative to the current directory. So back to the drawing board—this time with a more robust setup:

FROM ubuntu:14.04

RUN apt-get update && \
    apt-get install -y python-pip

COPY . /app
WORKDIR /app

CMD ["python", "api.py"]

I had to make sure the process started in the right directory inside the container; with WORKDIR /app in place it found its configuration, and this time it worked like a charm.
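One takeaway that outlives this particular bug: a relative path like config/api.conf resolves against whatever directory the process happens to start in, not against the script’s location. WORKDIR papers over that; a more defensive fix in the application itself (just a sketch, with a hypothetical config layout) is to anchor the path to the module:

```python
import os

# Resolve the config path relative to this source file rather than the
# current working directory, so the service finds its config no matter
# where the process is launched from.
BASE_DIR = os.path.dirname(os.path.abspath(__file__))
CONFIG_PATH = os.path.join(BASE_DIR, "config", "api.conf")
```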

As I wrapped up my day, I couldn’t help but feel a bit nostalgic. Docker felt like a return to basics: deployment boiled down to a short file of instructions that reproduces the whole environment. But it also introduced new challenges that demand attention to detail. Tomorrow, I plan to explore how we can use this technology to streamline our development and operations workflows.

In the meantime, I’ll keep an eye on other container technologies in play—CoreOS, Mesos, Kubernetes—and see where they fit into our stack. The future of cloud-native apps is here, and Docker is leading the way. Time to get my hands dirty!


That’s a quick snapshot of what it was like for me getting started with Docker back then. It’s always fun to look back at your own tech journey!