
Containerizing Our Java Monolith: A Lesson in Simplicity


November 9, 2015. I remember it like it was yesterday. On that Monday, my team and I were wrestling with how to containerize a monolithic Java application, something we had been avoiding for far too long.

The Setup

We were running our monolith in an on-premises data center, on virtual machines (VMs) managed with Puppet. VMs worked fine for us until they didn't. Scaling was becoming problematic, and the complexity of managing a growing fleet of VMs was reaching a breaking point. Enter Docker containers: a simpler model that promised to change how we deploy and manage applications.

The Journey Begins

We started small, with a few containerized services. Each service had its own Docker image, and we also experimented with rkt, CoreOS's alternative container runtime, alongside the Docker engine itself. It felt like the future of DevOps had arrived, but there were still kinks to iron out. Networking between containers, for example, wasn't intuitive at first; we spent hours debugging DNS resolution issues that seem trivial in hindsight.
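
The images themselves were nothing exotic. Here is a sketch of the kind of Dockerfile we used for a service; the base image tag, jar path, and port are illustrative rather than our actual setup:

```dockerfile
# The official Java 8 image of the era; we pinned a tag rather than using :latest
FROM java:8-jre

# Copy the fat jar produced by the CI build into the image
COPY build/libs/orders-service.jar /app/orders-service.jar

# The service listens on 8080 inside the container
EXPOSE 8080

ENTRYPOINT ["java", "-jar", "/app/orders-service.jar"]
```

One image per service, built once in CI and promoted unchanged through environments, was the discipline that made everything downstream simpler.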

One of the biggest hurdles was converting our monolithic application into a suite of microservices. The initial thought was to carve off parts of the codebase that could function independently. However, we soon realized that this was easier said than done. The codebase was tightly coupled, and refactoring it to follow microservice principles required significant effort.

Learning the Hard Way

We faced several challenges along the way:

  • Database Interdependencies: Our application interacted with multiple databases, making it tricky to isolate services from each other.
  • Configuration Management: We needed a way to manage configuration that would work across different environments (dev, staging, prod).
  • Service Discovery: How could we dynamically discover and communicate between microservices without relying on static IP addresses?
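
One common pattern for the configuration problem, shown here as a sketch rather than our actual code (the class and variable names are made up for illustration), is to read environment-specific values from environment variables so the same image runs unchanged in dev, staging, and prod:

```java
import java.util.Map;

// Sketch: resolve configuration from environment variables with sensible
// defaults, so one container image serves every environment.
class AppConfig {
    private final String dbUrl;
    private final int httpPort;

    AppConfig(Map<String, String> env) {
        // Each environment (dev/staging/prod) injects different values;
        // the image itself never changes.
        this.dbUrl = env.getOrDefault("DB_URL", "jdbc:postgresql://localhost:5432/app");
        this.httpPort = Integer.parseInt(env.getOrDefault("HTTP_PORT", "8080"));
    }

    String dbUrl() { return dbUrl; }
    int httpPort() { return httpPort; }

    public static void main(String[] args) {
        AppConfig cfg = new AppConfig(System.getenv());
        System.out.println("db=" + cfg.dbUrl() + " port=" + cfg.httpPort());
    }
}
```

The same idea extends to service discovery: instead of hard-coding peer addresses, a service reads them from its environment, and whatever orchestrator you choose fills them in at deploy time.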

We spent months arguing about the best approach. Should we use Kubernetes? Docker Swarm? CoreOS Fleet? Each option had its pros and cons, but none seemed perfect for our situation. We ended up building a custom solution for managing services, but it was far from ideal.

A Turning Point

One day, during a team standup, a colleague mentioned Mesos and its Marathon framework. It hit me like a lightning bolt: here was a tool designed specifically to handle these issues. After some initial hesitation, we decided to give Marathon a shot. The transition wasn't smooth, and we had to rework the entire CI/CD pipeline, but the results were worth it.

Marathon allowed us to manage our services with ease. We could define service dependencies and configurations in JSON files, making life significantly easier. Scaling became trivial, and we finally had a robust way to handle failover and redundancy.
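
To give a flavor of what those definitions looked like, here is a Marathon application definition of the sort you would POST to its REST API. The app id, registry, image name, and ports are illustrative, not our real services:

```json
{
  "id": "/orders-service",
  "cpus": 0.5,
  "mem": 512,
  "instances": 3,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "registry.example.com/orders-service:1.4.2",
      "network": "BRIDGE",
      "portMappings": [
        { "containerPort": 8080, "hostPort": 0 }
      ]
    }
  },
  "healthChecks": [
    { "protocol": "HTTP", "path": "/health", "gracePeriodSeconds": 30 }
  ]
}
```

Scaling out was then a matter of bumping `instances`, and the health check let Marathon restart or reschedule unhealthy tasks automatically, which is what gave us failover essentially for free.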

Reflections

Looking back, containerizing our Java monolith was more than just a technical challenge; it was a cultural shift. The transition required us to rethink how we develop, test, and deploy applications. While Docker provided the foundation, Marathon (and later Kubernetes) helped us scale and manage services effectively.

This experience taught me that while new tools are exciting, they often come with their own set of challenges. The real value lies in adapting them to fit your needs rather than trying to force-fit a solution.

In the end, our efforts paid off. We achieved better scaling, easier maintenance, and a more flexible deployment process. And when you think about it, isn’t that what DevOps is all about?


That’s my take on containerizing our Java monolith in 2015. A journey filled with trials but ultimately rewarding.