
Docker Fever in 2015


It was May 25, 2015, about a month before DockerCon, and I was feeling the fever of containers and microservices. I’d been dabbling with Docker for about six months, but it had yet to really break into my day-to-day work at my company.

In the evenings leading up to DockerCon, I spent time on a project that was starting to take shape: a simple web service written in Go, packaged in a Docker container. It was just a small proof of concept, but the excitement around containers and orchestration tools like Kubernetes was palpable. I had visions of deploying this containerized service with minimal effort.
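The service itself was nothing fancy. A sketch of the kind of thing I was building (the greeting text and route are illustrative, not the actual code):

```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

// greeting returns the response body for the root route. It is kept
// as a separate function so it can be exercised without a running server.
func greeting() string {
	return "Hello from inside the container!"
}

func main() {
	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		fmt.Fprintln(w, greeting())
	})
	// Listen on all interfaces so the port can be published by Docker
	// with something like: docker run -p 8080:8080 <image>
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```

Because Go compiles to a single static-ish binary, it felt like a natural fit for a small container image.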

But as is often the case when you’re experimenting with new technologies, there were unexpected bumps along the way.

The Container Conundrum

The first issue came when trying to deploy my Go service on a different machine. I thought everything was set up right, but running docker run kept spitting out errors about missing libraries or incorrect environment variables. Debugging these kinds of issues can be particularly frustrating because it feels like you’re missing something obvious but crucial.

After spending hours going through the container logs and configuration files (a mix of the Dockerfile and docker-compose.yml), I realized that my environment setup was slightly off. Specifically, the service’s dependencies needed to be fetched and installed during the image build itself, rather than assumed to exist on the host machine. Once I got that right, things started working smoothly.
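For what it’s worth, the fix amounted to doing the dependency fetch and build inside the image. A minimal Dockerfile in the style I ended up with (the import path and tag are illustrative; in 2015 this meant building on the golang base image, well before multi-stage builds existed):

```dockerfile
# Base image with the Go toolchain preinstalled.
FROM golang:1.4

# Copy the source into the image and fetch dependencies there,
# so the build does not depend on anything installed on the host.
COPY . /go/src/example.com/hello
WORKDIR /go/src/example.com/hello
RUN go get -d ./... && go install ./...

# The installed binary lands on $PATH via /go/bin.
EXPOSE 8080
CMD ["hello"]
```

With the dependencies baked into the image, the same docker run command behaved identically on my laptop and on the other machine.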

The Orchestrator Odyssey

With my web service running correctly inside a container, the next step was to think about how this would fit into our existing infrastructure. We had been using Mesos and Marathon for some time, but Kubernetes still seemed worth evaluating alongside them.

Setting up Kubernetes meant spending more time on learning its intricacies. There were several things that tripped me up initially: understanding the role of kubectl, managing pods and services, and figuring out how to expose my web service externally.

The biggest challenge was getting Docker and Kubernetes to fit together in my head. I kept wondering why Kubernetes couldn’t see the containers I had started by hand with docker run — until I realized that this is by design: Kubernetes only manages containers it schedules itself, wrapped in pods, and services find those pods through label selectors. Once I defined my service as a pod with labels a selector could match, things got much closer, though there were still a few edge cases to iron out.
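The shape of that fix looks roughly like this: a pod carrying labels, and a service whose selector matches them. (The manifests below use the modern v1 API and made-up names for clarity; the API was still in beta in mid-2015.)

```yaml
# A pod with labels that a service selector can match.
apiVersion: v1
kind: Pod
metadata:
  name: hello-web
  labels:
    app: hello
spec:
  containers:
  - name: hello
    image: example.com/hello:latest
    ports:
    - containerPort: 8080
---
# A service that routes traffic to any pod labeled app=hello and
# exposes it outside the cluster on a node port.
apiVersion: v1
kind: Service
metadata:
  name: hello-svc
spec:
  type: NodePort
  selector:
    app: hello
  ports:
  - port: 80
    targetPort: 8080
```

Creating these with kubectl create -f makes the point concrete: kubectl get pods -l app=hello lists only pods Kubernetes scheduled itself — containers started by hand with docker run never show up.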

The Microservices Mess

As I delved deeper into microservices and container orchestration, I found myself questioning some of our existing architectural decisions. We had been using monolithic applications for years, so jumping to a fully containerized, microservice architecture was quite a shift.

One of the things that stuck with me was trying to convince my team of the benefits of splitting services into smaller, more manageable pieces. The argument was not just about scalability and resilience but also about reducing deployment complexity. I remember a roundtable discussion where we hashed out which services could potentially be containerized and how they would interact with each other.

In the end, it came down to trade-offs: while microservices promised better modularity and easier scaling, there was also overhead in managing more services and ensuring they worked together seamlessly. We decided to start small, migrating one service at a time, rather than diving headfirst into full-blown microservices architecture.

The Rust Revival

While I was wrestling with Docker and Kubernetes, I couldn’t help but notice the buzz around Rust. It seemed like every Hacker News thread was talking about this systems programming language, which had just hit 1.0. I took some time to play with it, experimenting with a small project that could serve as an API endpoint for our containerized services.

The initial setup was a bit rough: I had to install the Rust toolchain, get comfortable with Cargo, and figure out how to fit Rust into my existing development workflow. But once I got past those hurdles, working in Rust felt quite natural. The language’s focus on safety and performance resonated with me, especially compared to some of the older languages we were using.

Conclusion

By the time DockerCon rolled around, I had a much better understanding of how containers and orchestration could fit into our infrastructure. It wasn’t all smooth sailing—there were plenty of late nights spent debugging and troubleshooting—but it was exciting to be part of this evolution in software delivery.

Looking back, 2015 marked the beginning of my journey with Docker and Kubernetes, which would continue to shape how we built and deployed applications for years to come. The tech landscape is constantly changing, but the lessons I learned then still resonate today.