$ cat post/the-kernel-panicked-/-we-scaled-it-past-what-it-knew-/-the-repo-holds-it-all.md

the kernel panicked / we scaled it past what it knew / the repo holds it all


Title: The Month That Changed My Mind About Containers


December 7, 2015 was a day that marked a significant shift in how I viewed containerization. Docker had launched back in 2013 to plenty of fanfare and hype, but it wasn’t until this month, as more and more companies adopted microservices architectures, that containers began to feel like something we couldn’t avoid.

A Year of Container Evolution

By late 2015, I was already using Docker in my personal projects, but at work adoption was still tentative. The buzz around Kubernetes had started, and CoreOS was making waves with its rkt container runtime and the etcd distributed key-value store. But there were still plenty of arguments to be made for and against containers.

A Bug in Our System

One day, our system went down in the way only a misbehaving container can take you down: it simply froze. We had a service running on Docker, and suddenly the whole thing ground to a halt. No logs, no error messages, nothing but a set of supposedly stateless services stuck in place.

This was frustrating because we were already invested in Kubernetes for orchestration, so why were our containers failing? After some digging, I realized it wasn’t just an issue with the container runtime or Kubernetes—it was about how we had structured and managed our applications.

Learning from Swift

While this was going on, a post on Hacker News caught my eye: “Swift is Open Source.” Apple open-sourcing Swift didn’t just change one programming language; it was a broader sign that the whole tech landscape was shifting, and shifting fast.

Instagram’s Million Dollar Bug

Another story that stuck with me was Instagram’s “million dollar bug.” Whatever the bounty was actually worth, it highlighted how critical debugging can be. Bugs like these aren’t just expensive in money and time lost; they can open the door to serious security issues. For someone who had been through multiple production outages, it was a stark reminder that even the most successful companies make costly mistakes.

The First Person to Hack the iPhone

In another article, I read that the first person to hack the iPhone was now building a self-driving car. The story felt like a metaphor for how fast technology is advancing, and it made me wonder: How much of what we’re doing today will still be relevant in five years?

Containers and Microservices

As I wrestled with the container failure, I started to rethink our approach to microservices. We were using Docker to isolate services, but maybe it was causing more problems than it was solving. Kubernetes seemed like overkill for our small team, but then again, what if it could help us avoid exactly these kinds of issues?
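In hindsight, a silent freeze is exactly the failure mode an orchestrator’s liveness check is built to catch: poll the container, and restart it if it stops answering. Here’s a minimal sketch of a Kubernetes liveness probe, assuming a hypothetical service that exposes a /healthz endpoint on port 8080 (the names, image, and path are illustrative, not our actual config):

```yaml
# pod.yaml -- sketch of a liveness probe (names are hypothetical)
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: web
    image: ourco/web:latest      # placeholder image
    livenessProbe:
      httpGet:
        path: /healthz           # assumed health endpoint
        port: 8080
      initialDelaySeconds: 15    # give the app time to boot
      periodSeconds: 10          # poll every 10 seconds
      failureThreshold: 3        # restart after 3 consecutive failures
```

With something like this in place, a frozen-but-alive container fails its health checks and gets restarted automatically instead of silently wedging the whole service.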

The Lessons Learned

By the end of December 2015, I had come to a few conclusions:

  1. Containers aren’t magic: They require proper management and can introduce new complexities.
  2. Kubernetes isn’t only for big teams: a small team might not need everything it offers, but understanding how orchestration works matters at any scale.
  3. Debugging in containers is different: It requires different tools and a different mindset.
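On the first point, even without an orchestrator, a little explicit management in plain Docker Compose goes a long way. A minimal sketch in Compose v1 syntax, roughly what was available at the time (the service name and image are hypothetical):

```yaml
# docker-compose.yml -- v1 syntax sketch (names are placeholders)
web:
  image: ourco/web:latest
  restart: always          # bring the process back if it exits
  log_driver: json-file    # make sure there are logs next time
  log_opt:
    max-size: "10m"        # and that they don't fill the disk
  mem_limit: 512m          # a runaway container can't starve the host
```

Worth noting the limits of this: `restart: always` only fires when the process actually exits. A process that is alive but frozen, like ours was, sails right past it, which is where orchestration-level health checks earn their keep.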

These reflections shaped my approach to containerization going forward. While the hype around Docker was still high, I started to see its limitations more clearly. And that’s what makes this time so memorable—it wasn’t just about the technology; it was also about learning where we needed to go next.


This post is a snapshot of how I approached containers and microservices at the end of 2015, reflecting on the challenges and lessons learned from debugging our system.