

Kubernetes Is Not Just for Funerals Anymore


August 2018 was an interesting month to reflect on. The world of infrastructure and container orchestration had undergone a significant shift since the days when Docker was all the rage. Kubernetes had emerged from Google’s shadow, won the container wars, and now Helm, Istio, Envoy, and other tools were adding layers upon layers of complexity. But as someone who’s spent countless nights debugging Kubernetes clusters, I couldn’t help but feel a mix of frustration and pride.

The Night That Will Not Be Forgotten

A couple of weeks ago, our production Kubernetes cluster decided to go on strike. Pods started spewing errors like they were going out of style: FailedMount events, NetworkPluginNotReady, pods stuck forever in ContainerCreating. The chaos was palpable; users were screaming into Slack, and I could feel the weight of the world on my shoulders. After hours of digging through logs and cluster events, I realized that our GlusterFS volume mounts had failed. A simple mistake had led to a complex failure mode.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: glusterfs
  resources:
    requests:
      storage: 10Gi
```

Turns out, the storageClassName we were using didn't match any existing GlusterFS provisioner. The claim sat Pending forever, and every pod that referenced it failed spectacularly at mount time.
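A StorageClass whose name actually matches the claim would have let it bind. Roughly what that looks like for GlusterFS via the in-tree provisioner, as a sketch; the resturl here is a placeholder, not our real Heketi endpoint:

```yaml
# Hypothetical StorageClass whose name matches the PVC's storageClassName.
# kubernetes.io/glusterfs is the in-tree GlusterFS provisioner; resturl is
# a placeholder for wherever the Heketi API actually lives in your cluster.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: glusterfs        # must match spec.storageClassName in the PVC
provisioner: kubernetes.io/glusterfs
parameters:
  resturl: "http://heketi.storage.svc:8080"
```

A quick `kubectl get storageclass` would have surfaced the mismatch in seconds, which stings in hindsight.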

The Helm of Despair

As I struggled with the Kubernetes cluster, my team started arguing about whether we should use Helm for our deployments. Some argued that it would simplify things, while others feared the complexity of managing charts and dependencies. After hours of heated debate, I finally decided to take a break and clear my head.

During this break, I stumbled upon an article on Hacker News about Docker's new requirement to log into the Docker Store to download images. This felt like a step backward; we were supposed to be moving away from tight vendor coupling, not toward it. It made me wonder how many more layers of complexity we'd have to deal with as the ecosystem continued to evolve.

The Silver Lining

Despite the chaos, there was one bright spot: the rise of GitOps. Our team had been discussing the idea for a while now, and I couldn’t help but think that maybe, just maybe, it would be the key to managing our complex infrastructure in a more structured way. After all, if we could version control our infrastructure code, wouldn’t that make debugging and understanding state transitions easier?
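Mechanically, the idea is not complicated: a repo of manifests is the source of truth, and a job reconciles the cluster against whatever is on master. A minimal sketch, assuming a GitLab-style CI runner with cluster credentials already configured; the job name and directory layout are made up for illustration:

```yaml
# Hypothetical CI job: the manifests/ directory in git is the source of
# truth, and every merge to master converges the cluster toward it.
deploy:
  stage: deploy
  script:
    - kubectl apply -f manifests/   # reconcile cluster state with git
  only:
    - master
```

The appeal for debugging is that `git log` becomes a record of every state transition the cluster was asked to make.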

The Future of Monitoring

As I reflected on the month, I couldn’t shake the feeling that monitoring was going through another big shift. Prometheus and Grafana had become ubiquitous, but they still required significant effort to set up and maintain. I started wondering if there wasn’t a better way—something more automated, perhaps?
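A lot of that setup effort ends up in scrape configuration. Even the minimal case, pointing Prometheus at Kubernetes service discovery, looks something like this; a sketch using the conventional scrape annotation, not our production config:

```yaml
# Minimal prometheus.yml fragment: discover pods through the Kubernetes
# API and keep only those annotated with prometheus.io/scrape: "true".
scrape_configs:
  - job_name: kubernetes-pods
    kubernetes_sd_configs:
      - role: pod
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: "true"
```

Multiply that by relabeling, alerting rules, and Grafana dashboards, and the maintenance burden adds up fast.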

Conclusion: The Journey Continues

Looking back on August 2018, it’s clear that the tech landscape was in flux. Kubernetes was no longer just a tool for container orchestration; it had become a cornerstone of modern infrastructure. Helm, Istio, and Envoy were just the beginning—more tools would follow.

For me personally, this month taught me to embrace complexity but not get lost in it. Debugging that Kubernetes cluster was tough, but I emerged from it with a deeper understanding of our infrastructure. The journey continues, and I’m excited to see where we’ll go next.