DigitalOcean Kubernetes Challenge: Deploy an Internal Container Registry
I recently got on board as a Navigator, and that’s how I came across the DigitalOcean Kubernetes Challenge: a challenge to sharpen your DevOps skills, learn more about Cloud Native Computing Foundation (CNCF) projects, and win various vouchers, among other things. In this article, I’m going to document my submission and share how I built it.
About the Challenge
The DigitalOcean Kubernetes Challenge gives developers an opportunity to level-up their K8s skill set.
Kubernetes is an open-source container-orchestration system for automating application deployment, scaling, and management. It was originally designed by Google and is now maintained by the Cloud Native Computing Foundation.
DigitalOcean Kubernetes (DOKS) is a managed Kubernetes service that lets us deploy Kubernetes clusters hassle-free, without needing to manage the control plane and containerised infrastructure ourselves. Clusters are compatible with standard Kubernetes toolchains and integrate natively with DigitalOcean Load Balancers and block storage volumes.
Find more details here: https://www.digitalocean.com/community/pages/kubernetes-challenge
The Prizes
Those who successfully finish the challenge receive the following prizes:
- $150 to donate to your project of choice on Open Collective
- $100 gift card to be used at the DigitalOcean Swag Store
- $50 gift card to be used at the CNCF Swag Store
My Submission
I decided to go ahead with the “Deploy an internal container registry” challenge, housed under the “New to Kubernetes” category, as I still consider myself pretty new to DevOps. To attempt it, I received $60 worth of DigitalOcean credits, which I used to deploy an internal container registry using Harbor.
Creating a Kubernetes Cluster
Over DigitalOcean, Kubernetes clusters can be created in many ways: using the Dashboard, the doctl CLI, and so on. I used the DigitalOcean Dashboard to create mine, specifically the Kubernetes Create Cluster page. Here are the specifications that I used:
- Kubernetes Version: 1.21.5-do.0
- Datacentre Region: I went with the default, though it’s advisable to select the one nearest to you.
- Cluster Capacity (Machine Type, Node Count, Node Plan): I rolled with the lowest possible values for these to keep the monthly rate low, which left me room to experiment!
The process takes a little while; for me, the cluster was up and running in a few minutes.
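For reference, roughly the same cluster can also be created from the doctl CLI. Here is a sketch with example values — the cluster name, region slug, node size, and count below are placeholders, not my exact configuration:
doctl kubernetes cluster create registry-demo \
  --version 1.21.5-do.0 \
  --region nyc1 \
  --node-pool "name=default-pool;size=s-1vcpu-2gb;count=2"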
Connecting to the Cluster
I was greeted with a pretty dashboard, the one below:
Towards the bottom of this dashboard, I found the doctl command to run that automatically saves the Kubernetes configuration on your local machine:
doctl kubernetes cluster kubeconfig save <cluster_details>
After saving the auth config using the aforementioned command, I moved ahead with the installation of the image registry as well as the ingress using kubectl.
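A quick sanity check that kubectl was now pointed at the new cluster:
kubectl config current-context   # should show the DOKS cluster context
kubectl get nodes -o wide        # the worker nodes should be listed as Ready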
Setup
I used Helm, the package manager for Kubernetes, to set up the Traefik ingress controller.
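One step that’s easy to miss: Helm first needs to know where to find the Traefik chart. A minimal sketch, assuming the public Traefik chart repository (the URL may differ for newer chart versions):
helm repo add traefik https://helm.traefik.io/traefik
helm repo update
With the repository in place, the install itself: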
helm install \
--namespace traefik \
--create-namespace \
--values charts/traefik/values.yml \
traefik traefik/traefik
The next step was to set up Harbor.
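As with Traefik, the chart repository has to be added first; this assumes the official Harbor chart repo:
helm repo add harbor https://helm.goharbor.io
helm repo update
Then generate the credentials and install: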
PASS=$(openssl rand -hex 8)   # random admin password for Harbor
KEY=$(openssl rand -hex 8)    # random 16-character secret key
helm install \
--namespace harbor \
--create-namespace \
--values charts/harbor/values.yml \
--set-string "secretKey=$KEY" \
--set-string "harborAdminPassword=$PASS" \
harbor harbor/harbor
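Once Harbor was reachable through the ingress, it behaved like any other registry: log in with the generated admin password and push an image. A quick sketch, assuming a placeholder hostname harbor.example.com and Harbor’s default library project:
echo "$PASS"                                          # the admin password generated above
docker login harbor.example.com -u admin -p "$PASS"
docker tag nginx:alpine harbor.example.com/library/nginx:alpine
docker push harbor.example.com/library/nginx:alpine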
kubectl apply manages applications through files defining Kubernetes resources; it creates and updates resources in a cluster when you run kubectl apply against those files. So I created the following manifests to deploy my submission:
kubectl apply -f yml/nginx-example.yml
kubectl apply -f yml/traefik-dashboard.yml
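The actual manifests live alongside my submission and aren’t reproduced here, but here is a minimal sketch of what a file like yml/nginx-example.yml could roughly contain (placeholder names and hostname, with the Ingress routed through Traefik), piped straight into kubectl for convenience:
cat <<'EOF' | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx-example
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-example
spec:
  selector:
    app: nginx-example
  ports:
  - port: 80
    targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx-example
  annotations:
    kubernetes.io/ingress.class: traefik
spec:
  rules:
  - host: nginx.example.com   # placeholder hostname
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: nginx-example
            port:
              number: 80
EOF
Depending on the Traefik version, spec.ingressClassName or Traefik’s own IngressRoute resource may be preferred over the annotation shown here.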
This looks super easy, but it took me a while to get everything working!
Learnings
- I’ve always wanted to dive deeper into the world of DevOps, and this challenge gave me the push that I truly needed!
- Apart from that, I was able to learn a lot about DOKS. I absolutely hate going through documentation, but it was fun reading up on doctl and kubectl without confusing the two.
- I had to choose between Harbor and Trow, and my decision was influenced by the wider availability of documentation for Harbor. As a registry, it looks amazing.
Why Kubernetes though?
I pitched this question to myself every time I failed to deploy the container, but hey, it offers the following (see the sketch after this list):
- Auto Scaling
- Automated Rollbacks
- Load Balancing
- Self Healing
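Scaling and rollbacks, for instance, boil down to single kubectl commands once a workload is running. A rough sketch, using a placeholder deployment name:
kubectl scale deployment nginx-example --replicas=5                             # manual scaling
kubectl autoscale deployment nginx-example --min=2 --max=5 --cpu-percent=80     # auto scaling via an HPA (needs cluster metrics)
kubectl rollout undo deployment/nginx-example                                   # roll back to the previous revision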
It’s a good bargain after a few hiccups! Once you get the hang of K8s, it’s a tool that helps you be more productive, though it can sometimes be overkill for simple applications! I loved the dashboard offered by DigitalOcean, which highlights cluster insights in an amazing way. Take a look:
Ending Notes
Kubernetes provides an easier way to scale your application compared to virtual machines. It keeps code operational and speeds up the delivery process. The Kubernetes API lets you automate a lot of resource management and provisioning tasks. I would recommend taking up the DigitalOcean Kubernetes Challenge if you’re looking to upskill!