$ cat post/a-segfault-at-three-/-memory-i-can-not-free-/-a-segfault-in-time.md

a segfault at three / memory I can not free / a segfault in time


Title: From Docker Containers to Kubernetes: A Year of Learning


November 2, 2015. It’s hard to believe it’s been a full year since I started down the path of containerization and microservices. Looking back, it feels like I’ve been living in a tech version of Groundhog Day: waking up to Docker all over again, every single day.

When Docker first hit the scene in 2013, I was skeptical. “Virtual machines are so much easier,” I thought. But as time went on, I couldn’t ignore the simplicity and efficiency it brought to our ops team. Containers just made sense for small, isolated services that didn’t require full VMs.
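That “small, isolated service” point is easiest to see in a Dockerfile. A minimal 2015-era sketch (the app, port, and filenames here are hypothetical, not from our actual stack):

```dockerfile
# Hypothetical example: a small Python service packaged as a container.
# A few lines replace what used to be a whole VM image build.
FROM python:2.7            # a common base image circa 2015
COPY app.py /app/app.py
WORKDIR /app
EXPOSE 8080                # the service's only port
CMD ["python", "app.py"]
```

Build it once with `docker build`, run it anywhere with `docker run`, and the host no longer needs to know anything about the service’s dependencies.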

Then, 2014 came around and Google threw us a curveball with Kubernetes. “What’s this?” we wondered. Microservices, I get. But orchestration? It sounded complicated. Yet, as the buzz grew louder, so did our curiosity. We stood up a small test cluster to see if it could work for us.

It wasn’t easy. Setting up and managing containers was one thing; orchestrating them on a large scale was another beast entirely. We wrestled with understanding how Kubernetes worked, particularly its scheduling mechanisms and the intricacies of deploying stateful vs. stateless services. Every service felt like an adventure in configuration.
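For stateless services, the 2015-era unit of deployment was the ReplicationController: declare a replica count and a label selector, and the scheduler places the pods. Stateful services were the hard part, since there was no first-class primitive for them yet and persistent storage had to be wired up by hand. A sketch of the stateless case, with all names and the image purely illustrative:

```yaml
# Sketch of a 2015-era stateless service as a v1 ReplicationController.
# Names and image are hypothetical.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web
spec:
  replicas: 3              # the scheduler places each pod on a suitable node
  selector:
    app: web               # pods matching this label are "owned" by this RC
  template:
    metadata:
      labels:
        app: web           # must match the selector above
    spec:
      containers:
      - name: web
        image: example/web:1.0
        ports:
        - containerPort: 8080
```

Kill a pod and the controller replaces it; that self-healing loop was the first thing that made the orchestration overhead feel worth it.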

One day, I found myself in a debate with our team about whether we should stick with Docker Swarm or jump on board with Kubernetes. Arguments flew back and forth: some for the familiarity of Docker’s own tooling, others pushing us towards the promise of Google’s orchestration tool. In the end, pragmatism won out; we started using both, but kept Kubernetes in mind as a future-proofing strategy.

Now, in November 2015, I’m seeing signs that Kubernetes is truly coming into its own. The excitement around it is palpable, and with good reason. We’re just starting to dip our toes into what orchestration can do for us beyond simple load balancing; it opens up new possibilities for auto-scaling, rolling updates, and managing complex service dependencies.
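A sketch of how rolling updates worked at the time: Deployments didn’t exist yet, so `kubectl rolling-update` swapped one ReplicationController for another, pod by pod. Everything here (names, labels, image) is illustrative, assuming a service already managed by an RC called `web`:

```yaml
# Hypothetical rolling update, 2015-style. Applied with:
#
#   kubectl rolling-update web -f web-v2.yaml
#
# kubectl scales the old controller down and this one up, one pod at a time.
apiVersion: v1
kind: ReplicationController
metadata:
  name: web-v2             # must differ from the old controller's name
spec:
  replicas: 3
  selector:
    app: web
    version: v2            # selector must differ from the old RC's by at least one label
  template:
    metadata:
      labels:
        app: web
        version: v2
    spec:
      containers:
      - name: web
        image: example/web:2.0   # the new image being rolled out
```

It was client-side and fragile (close your laptop mid-update and the rollout stalls), but it was our first taste of zero-downtime releases.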

Yet, for all the buzz, there were still growing pains. We hit a wall when trying to set up an HA (High Availability) cluster for one of our critical services. Kubernetes was just too young, with some parts of its API not fully baked yet. Bugs lurked in every corner, and we found ourselves in debugging sessions that felt more like treasure hunts than technical challenges.

And then there were the naysayers. “Why bother?” they asked. “Just stick to what you know.” I couldn’t help but chuckle at some of these comments. It’s funny how quickly technology can shift paradigms, and yet people still cling to their familiar tools out of comfort or fear of change.

But that’s the beauty of it: embracing the new while not completely abandoning the old. We kept learning, experimenting, and iterating. A year in, we have a solid foundation in containerization that lets us be more agile and efficient. Kubernetes has become our go-to for complex deployments, but Docker still handles many day-to-day tasks.

Looking back on this past year, I realize how much it has changed me as an engineer. No longer content with just deploying code, I now think about scaling, load balancing, and resilience at a whole new level. It’s not all smooth sailing; there are still nights when I wake up in a cold sweat thinking about pod failures or network issues. But that’s part of the learning process.

As 2015 draws to a close, I feel both exhilarated by the possibilities Kubernetes offers and humbled by the challenges it presents. In the tech world, change is constant, but sometimes it’s the journey itself that matters most.


This was written from my personal experience with containerization and orchestration in 2015, reflecting on a time when Docker containers were becoming mainstream, while Kubernetes was still finding its footing.