$ cat post/chmod-seven-seven-seven-/-the-segfault-taught-me-the-most-/-it-ran-in-the-dark.md

chmod seven seven seven / the segfault taught me the most / it ran in the dark


Title: December 29, 2014 - A New Year, A New Stack


December 29, 2014. The year is winding down, and the sun has just set on another long day in engineering land. I can still feel the buzz of excitement from the announcement of Kubernetes back in June. It was like a beacon in the dark—a promise that container orchestration wasn’t going to be just another flavor-of-the-month but could actually stick around and change how we think about deploying software.

On my team, we were starting to dip our toes into Docker containers, which had been all the rage for almost a year now. We had a few services running in production with Docker, but integrating it into our deployment pipelines was proving tricky. Every day felt like a battle against the clock and our own internal bureaucracy. It wasn’t just about setting up Docker; we were trying to figure out how to integrate it seamlessly with our CI/CD processes, monitoring tools, and logging infrastructure.

One of the big issues we faced was getting consistent builds from our CI servers to production without breaking anything. We had to ensure that every service version deployed in a container matched exactly what was built by Jenkins. This required a lot of manual verification steps, which was error-prone and time-consuming. To address this, I started working on a custom script that would automatically compare the Docker images tagged in our registry with their corresponding git tags. It wasn’t pretty—lots of shell scripting and grep calls—but it worked well enough to give us some peace of mind.
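The script itself is long gone, but the core of it looked something like this. The registry URL, repo name, and tags below are made up for illustration; the real version pulled the two lists from our registry API and `git tag`, then diffed them:

```shell
#!/bin/sh
# Sketch of the tag-drift check (hypothetical names throughout).
# The two list functions are stubbed here so the comparison logic is visible.
registry_tags() {
  # real version: query the registry API, e.g.
  # curl -s https://registry.internal/v1/repositories/myapp/tags | ...
  printf 'v1.0.0\nv1.1.0\nv1.2.0\n'
}
git_tags() {
  # real version: git tag --list 'v*'
  printf 'v1.0.0\nv1.1.0\nv1.3.0\n'
}

# comm needs sorted input
registry_tags | sort > /tmp/registry_tags.txt
git_tags | sort > /tmp/git_tags.txt

# comm -3 suppresses lines common to both files, leaving only the drift:
# column 1 = tags only in the registry, column 2 = tags only in git
comm -3 /tmp/registry_tags.txt /tmp/git_tags.txt
```

Any output at all meant a mismatch worth investigating, which made it easy to bolt onto the end of a Jenkins job as a failing check.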

Another challenge we ran into was managing dependencies between services. We had multiple microservices, each running inside its own container, and they needed to communicate with one another for our application to function correctly. At the time, Docker had no service discovery built in (Swarm had only just been announced as an alpha that month), so we ended up implementing our own simple DNS solution using CoreOS’s etcd and fleet. It wasn’t elegant, but it got the job done. I remember staying up late coding this solution, feeling like a real ops guy finally doing some low-level magic.
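If memory serves, the registration side looked roughly like this fleet unit. The service name, image, and port are invented for the sketch; the idea is that `ExecStartPost` writes the instance’s address into etcd with a TTL, so the key expires on its own if the host dies:

```ini
# myservice.service - illustrative fleet unit, names are hypothetical
[Unit]
Description=myservice (example)
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f myservice
ExecStart=/usr/bin/docker run --name myservice -p 8080:8080 registry.internal/myservice:v1.2.0
# Register this instance in etcd; the TTL lets stale entries expire on their own
ExecStartPost=/usr/bin/etcdctl set /services/myservice/%H '{"host":"%H","port":8080}' --ttl 60
ExecStop=/usr/bin/docker stop myservice

[X-Fleet]
# Never schedule two copies on the same machine
Conflicts=myservice*.service
```

The lookup side was just a small script reading keys back out of `/services/<name>/`; SkyDNS was already doing the same thing against etcd far more robustly, which in hindsight we probably should have used from the start.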

In parallel with all these technical hurdles, there was an ongoing debate about whether we should embrace Kubernetes or stick with our current setup using Marathon on Mesos. The tech world seemed to be divided—some companies jumping into the Kubernetes camp while others were still loyal to their existing platforms. My team was split too; some folks felt Kubernetes offered a more mature ecosystem and better scalability, while others worried about the learning curve and potential vendor lock-in.

One of the most memorable moments that month came from Hacker News. Someone posted CoreOS’s announcement of Rocket, their own container runtime. I remember feeling both intrigued and skeptical. On one hand, it was exciting to see a new player in the game. On the other, there were already several ways to run containers (Docker, LXC, systemd-nspawn), and each had its pros and cons.

As we approached the end of 2014, I found myself reflecting on all the work that went into setting up our infrastructure for Docker containers. It wasn’t easy by any means, but it was necessary. The transition felt like a rite of passage, a way to stay current in an ever-evolving field.

And now, as we enter 2015, I can only hope that the challenges and debates we faced will be less prevalent. Let’s see if Kubernetes truly takes off and whether CoreOS’s Rocket finds its place in the ecosystem or not. Whatever happens, it’s clear that containers are here to stay, and with them comes a whole new world of possibilities—and problems—to explore.


That’s how 2014 ended for me—full of technical hurdles, debates, and some personal coding battles. Here’s to hoping 2015 brings us more clarity and fewer late nights.