I wanted to know why people talk about Kubernetes so much. This post is about my journey. I’ll show how I explored Kubernetes, from its basic concepts to having a small cluster up and running. Let’s dig in and uncover the reasons behind Kubernetes’ widespread appeal.

Initial Exploration

When I set out to explore Kubernetes, I had a clear aim: transform my small Digital Ocean droplet, home to my personal projects, into a Kubernetes setup that could deploy anywhere with ease. This goal sparked my journey and gave meaning to each step I took.

So how did I start? I jumped into practical work right away. I set up a small testing environment using Minikube, a tool that let me play with Kubernetes locally.

In this environment, I started tinkering with Minikube, getting hands-on with its tools. I explored “manifests,” the YAML files that describe what Kubernetes should run and how. I learned basic kubectl commands like get, describe, and apply.
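
To give a sense of what that first loop looked like, here is roughly a session from that phase. The manifest file and pod name are placeholders, not my actual files:

```sh
# Spin up a local single-node cluster
minikube start

# Apply a manifest, then inspect what it created
kubectl apply -f deployment.yaml
kubectl get pods
kubectl describe pod <pod-name>
```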

The process itself was straightforward, but I found myself overwhelmed by the tutorials I followed. The multitude of concepts, names, and manifest files in the Kubernetes ecosystem left me feeling swamped. To overcome this, my next step was to tackle the basics head-on. The Kubernetes Roadmap from roadmap.sh proved to be a valuable ally in this endeavor.

After some dedicated reading, I became familiar with key Kubernetes components. I grasped the roles of Pods, Deployments, Services, Ingresses, StatefulSets, and Persistent Volumes. I not only knew what they were for, but also how they interlinked within the Kubernetes landscape.
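
As an illustration of how two of those pieces fit together, here is a minimal sketch of a Deployment and the Service that fronts it. The names and image are made up for the example, not taken from my setup:

```yaml
# A Deployment that runs two replicas of a web container,
# plus a Service that exposes them inside the cluster.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  selector:
    app: hello-web
  ports:
    - port: 80
      targetPort: 80
```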

Building My Own Cluster

In my quest to save money, I embarked on the journey of constructing my own cluster. However, this path was not without frustrations. My first attempt involved manual cluster setup, but I quickly realized that this approach wouldn’t align with my overarching goal—to achieve easy deployment if I ever decided to move VPSs.

This realization hit me hard: I was veering away from my primary aim of gaining hands-on experience and deepening my Kubernetes understanding. In light of this, I opted for a managed cluster, offloading the cluster setup itself and leaving that battle for another day.

During the next few days, I devoted myself to tinkering with manifest files, and I began to see progress. Success arrived when I managed to deploy all my services using Kubernetes manifests. This accomplishment was a major milestone: the end result was a collection of manifest files, including StatefulSets and Deployments, that effectively mirrored the environment I had on my Droplet with Docker Compose.
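
To give an idea of the shape of those files, here is a rough sketch of what one docker-compose service with a named volume turned into. The database image, names, and storage size are placeholders rather than my real configuration:

```yaml
# A StatefulSet roughly equivalent to a docker-compose service with a volume.
# serviceName refers to a headless Service defined elsewhere in the manifests.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: db
spec:
  serviceName: db
  replicas: 1
  selector:
    matchLabels:
      app: db
  template:
    metadata:
      labels:
        app: db
    spec:
      containers:
        - name: postgres
          image: postgres:16
          ports:
            - containerPort: 5432
          volumeMounts:
            - name: data
              mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:
    - metadata:
        name: data
      spec:
        accessModes: ["ReadWriteOnce"]
        resources:
          requests:
            storage: 1Gi
```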

The process was further streamlined by adding cert-manager. By automating the renewal of my Let’s Encrypt certificates, it eliminated a task I had been handling manually for years. This integration was especially welcome, given that I had always been too lazy to set up a cron job for the same purpose.
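
The heart of that setup is an ACME issuer resource. Below is a minimal sketch of a cert-manager ClusterIssuer for Let’s Encrypt; the email address and ingress class are placeholders, not my actual values:

```yaml
# A ClusterIssuer that requests certificates from Let's Encrypt over ACME,
# solving challenges through the cluster's ingress controller.
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-prod
spec:
  acme:
    server: https://acme-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-prod-account-key
    solvers:
      - http01:
          ingress:
            class: nginx
```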

The Final Product

Even as I stood before the culmination of my efforts, a couple of concerns continued to nag at me. Firstly, the inability to SSH into my cluster was a limitation that left me somewhat uneasy. Secondly, the cost of the setup had crept up higher than I had originally anticipated. While I had thoroughly enjoyed the journey of mastering Kubernetes, I realized it was time to assess my options.

I was satisfied with the learning process and had garnered a sound understanding of Kubernetes. I considered reverting to my good old small droplet—my original hosting choice. However, orchestrating services within Kubernetes had become a breeze, and I was reluctant to relinquish that advantage.

Enter K3s, a lightweight Kubernetes distribution. I approached this transition with a hint of skepticism. The shift from Docker to containerd and from Nginx to Traefik seemed like a leap into the unknown. Nevertheless, I decided to give it a shot; nothing tied me to those tools in the first place. Adapting my manifests to K3s took a few hours, but the outcome was a simple, repeatable recipe. With just a few commands, I could have my cluster up and running. This newfound agility was a boon, offering me the freedom to reset whenever things went awry.
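
Those “few commands” boil down to roughly this; the manifests directory is a placeholder for my own repository:

```sh
# Install K3s on the VPS; it bundles containerd and Traefik out of the box
curl -sfL https://get.k3s.io | sh -

# Apply my collection of manifests against the new cluster
sudo k3s kubectl apply -f manifests/
```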

Finally, the puzzle fell into place. Not only did I manage to slash the costs, bringing them in line with my previous Digital Ocean setup, but I also retained the ease of deploying services through Kubernetes manifests. Additionally, I found myself empowered to swiftly set up new clusters should I ever decide to switch hosts again. Mission accomplished!

By thyago
