$ cat post/the-monolith-ran-/-we-named-the-server-badly-then-/-i-saved-the-core-dump.md
the monolith ran / we named the server badly then / I saved the core dump
Title: Why I Still Have Mixed Feelings About Docker
August 6th, 2018. The Docker Store debacle was still a hot topic in the tech community, and it felt like everyone had an opinion about it. I’m sitting here with my coffee, a Jenkins pipeline script for one of our Kubernetes deployments open on screen, and mixed feelings I can’t quite shake.
Let’s get real—Docker has been a game-changer. It made shipping a containerized application feel as simple as typing `docker run` and walking away. But that simplicity was like being handed a shiny toy with no manual: no hint of how it worked underneath or what could go wrong.
Last week, we had one of those days where something just decided to die. A service in our Kubernetes cluster went down with the usual noise: some error message about resource starvation or maybe a network glitch. I jumped into the container logs hoping for some sort of clue. And there it was, plain as day: `Error response from daemon: Get https://store.docker.com/api/images/...`.
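For anyone who hasn’t had this particular afternoon yet, the first pass at a dead pod usually looks something like this. A rough sketch; the pod name, namespace, and cluster state here are made up for illustration:

```shell
# Recent events for the failing pod: scheduling decisions, image pulls,
# OOM kills, and (as in this case) registry errors all surface here
kubectl describe pod payments-api-7d4f9 -n prod

# Logs from the current container, then from the previous crashed instance
kubectl logs payments-api-7d4f9 -n prod
kubectl logs payments-api-7d4f9 -n prod --previous

# A failed pull from the registry typically shows up in the STATUS column
# as ImagePullBackOff or ErrImagePull
kubectl get pods -n prod -o wide
```

`kubectl describe` is usually the fastest way to spot a pull failure, because the events section names the exact registry URL the kubelet choked on.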
Now, this might seem like an epic fail to Docker fans and detractors alike, but let me explain why it hit close to home. It’s not just about the inconvenience of having a dependency that requires you to log in somewhere else. It’s about trust.
Docker’s ecosystem is vast and complex. From the initial containerization to monitoring, security, and scaling—every step has its own tools and complexities. We’ve invested heavily in setting up our CI/CD pipelines with Jenkins, but now I find myself thinking: “What if one of these services falls over because of a Docker dependency? How much do we really trust this tool?”
The rise of Kubernetes and the subsequent growth of Helm, Istio, Envoy, and serverless functions all point to a move away from monolithic application architectures. Docker’s store issue is just another reminder that containerization isn’t a silver bullet. It introduces new layers of complexity that require more robust monitoring and management.
And let’s not forget about GitOps. With every deployment moving toward this model, the stakes for keeping our tooling reliable keep getting higher. We can’t afford a single point of failure in our CI/CD pipelines, and the store outage drove that point home: we have to keep a close eye on these tools.
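One mitigation we’ve been kicking around is a pull-through registry mirror, so image pulls stop depending directly on Docker’s infrastructure. A minimal sketch of the daemon config (`/etc/docker/daemon.json`), assuming a hypothetical internal mirror hostname:

```json
{
  "registry-mirrors": ["https://registry-mirror.internal.example.com"]
}
```

With `registry-mirrors` set, the daemon tries the mirror first and only falls back to the upstream registry if the mirror misses, which at least turns an upstream outage into a cache problem instead of a deploy blocker.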
But here’s where I find myself being honest: despite all the issues and the trust factor, we’re still heavily invested in Docker. It remains the de facto standard for containerization, and that means dealing with its quirks and limitations. For now, at least, it’s too risky to abandon ship.
As I stare at my screen, trying to figure out how to debug this latest issue, I’m left with a mix of frustration and resignation. Frustration because of the Docker store issue; resignation because we’re stuck using it for the foreseeable future. But hey, that’s just part of the gig in tech. You learn, you adapt, and sometimes you just have to deal with the messes others leave behind.
So here’s to another day spent wrestling with containerization tools—let’s hope they get a little less complicated by the time we get to Kubernetes 2.0!