Introduction

Here it is - the mountain I've been excited and nervous to climb. Today I'll describe my experience getting a local, multi-node Kubernetes cluster running. I was excited to play with one of the most interesting, and now rather ubiquitous, technologies in modern software operations.

The nerves came from the amount of difficulty and complexity that people regularly reported online. To reduce the overwhelming task to a digestible portion, I decided to go with a somewhat automated deployment, using a script and tutorial series from an excellent YouTube channel - Jim's Garage.

The process in a nutshell was:

  1. Install Proxmox on a physical machine
  2. Provision virtual machines with Cloud-Init
  3. Run a script to deploy k3s to the VMs
  4. Install Helm
  5. Install Rancher on the k3s cluster

Installing Proxmox

I'd been picking up used Lenovo ThinkCentre micro desktops off eBay (a great way to set up a homelab on a budget), and I had one that I wasn't using. So, I used it to set up a Proxmox node. This was as simple as:

  1. Download the Proxmox VE ISO
  2. Make a bootable USB from that ISO (see the dd sketch after this list)
  3. Boot from the USB
  4. Install using the guided wizard, where the only tricky part is choosing a hostname and static IP
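
One way to handle the bootable USB step is dd from a Linux machine; a minimal sketch, where the ISO filename and /dev/sdX are placeholders (check the device with lsblk before writing):

# Write the Proxmox VE ISO to the USB stick (/dev/sdX is a placeholder for your USB device)
sudo dd if=proxmox-ve.iso of=/dev/sdX bs=1M status=progress conv=fsync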

After installation, it gave me the URL for the web management console, and I was on my way.

VM Provisioning with Cloud-Init

Adding an Ubuntu server image

First, I went to the Ubuntu Cloud Images website and copied the URL for the latest 22.04 KVM image.

In Proxmox, under my server host, I clicked on local storage, then ISO Images, and Download from URL. In the dialog box, I pasted the Ubuntu image URL, clicked Query URL, and then Download.
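
If you'd rather skip the web UI for this, the same image can be pulled straight into Proxmox's ISO storage from the shell. A sketch; the URL follows the Ubuntu cloud images layout for 22.04, so grab the current link from the site:

# Fetch the Ubuntu 22.04 (Jammy) KVM cloud image directly into Proxmox's ISO storage
cd /var/lib/vz/template/iso
wget https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64-disk-kvm.img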

Building a virtual machine template

Once that was done, I went to my server, then Shell, and ran the following commands to set up a VM:

qm create 9000 --memory 4092 --cores 2 --numa 1 --cpu host --balloon 0 --name ubuntu-cloud --net0 virtio,bridge=vmbr0
cd /var/lib/vz/template/iso
qm importdisk 9000 jammy-server-cloudimg-amd64-disk-kvm.img local-lvm
qm set 9000 --scsihw virtio-scsi-pci --scsi0 local-lvm:vm-9000-disk-0
qm disk resize 9000 scsi0 +20480M
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --boot c --bootdisk scsi0
qm set 9000 --serial0 socket --vga serial0

In a nutshell, this creates a VM, adds the Ubuntu cloud image, adds a storage disk and increases its size, adds the Cloud-Init virtual CD-ROM, sets the boot disk, and adds a console for direct access in Proxmox.

Side note: at first, I was confused about how the Cloud-Init line worked, because I hadn't added an image or anything. It turns out this is a Proxmox feature: Proxmox generates the Cloud-Init configuration itself and attaches it to the VM as a virtual CD-ROM.
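
If you're curious what actually ends up on that virtual CD-ROM, Proxmox can dump the rendered configuration from the shell (9000 being the template's VM ID):

# Show the user-data and network config Proxmox renders onto the Cloud-Init drive for VM 9000
qm cloudinit dump 9000 user
qm cloudinit dump 9000 network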

After switching back to the web interface, I clicked on the new VM, then Cloud-Init. There I added my username, password, and SSH public key by pasting the contents of my workstation's public key file, ~/.ssh/id_ed25519.pub.
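
For reference, the same settings can be applied from the shell instead of the web UI; a sketch with placeholder values:

# Roughly the CLI equivalent of the Cloud-Init tab (username and password are placeholders)
qm set 9000 --ciuser rick --cipassword 'changeme'
qm set 9000 --sshkeys ~/.ssh/id_ed25519.pub
qm set 9000 --ipconfig0 ip=dhcp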

The last thing I did to make this VM a template was to right click on it and click Convert to template.

Creating the virtual machines

With the template created, I could now create the five VMs that would host my k3s cluster. To do that, I right-clicked the template and chose Clone, changed the mode to Full Clone, and gave each VM a name from k3s-00 to k3s-04.
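
Cloning can also be done from the shell; a sketch for the first two nodes, with example VM IDs:

# Full clones of template 9000 (VM IDs 101 and 102 are just examples)
qm clone 9000 101 --name k3s-00 --full
qm clone 9000 102 --name k3s-01 --full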

I needed the VMs to have static IPs for the script to work. I prefer to handle that with DHCP reservations whenever possible, so I added them in my DHCP server, Pi-hole. This didn't go completely smoothly, because Pi-hole just isn't made to be a robust DHCP server, but I'll spare you the details of the tire kicking it took to get it working.
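
Once the VMs are up, a quick way to confirm the reservations took effect is a ping sweep from your workstation; the addresses below are placeholders for whatever you reserved:

# Ping each k3s node once (placeholder IPs; substitute the reservations from Pi-hole)
for ip in 192.168.1.60 192.168.1.61 192.168.1.62 192.168.1.63 192.168.1.64; do
  ping -c 1 -W 1 "$ip" > /dev/null && echo "$ip is up" || echo "$ip is DOWN"
done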

After starting the VMs and making sure everything could talk, it was time to deploy k3s.

Deploying k3s

The script I used is from the previously mentioned YouTube channel; it can be found at https://github.com/JamesTurland/JimsGarage/tree/main/Kubernetes/K3S-Deploy

Right away I want to point out a couple of issues that I found in this script:

  1. It will absolutely blow away your ~/.ssh/config if you have one
  2. It pulls down some config files that it never cleans up
  3. It assumes you’re running it from your home directory

That said, it worked well, especially once I realized that last bit.
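
Given those issues, a couple of precautions are worth taking before running it. A sketch; k3s.sh is the script's filename in the repo as I recall, so double-check it:

# Back up the SSH config the script will overwrite, and run it from the home directory it expects
cp ~/.ssh/config ~/.ssh/config.bak 2>/dev/null || true
cd ~
bash k3s.sh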

Before running it, I edited the script: I changed the IP of each node to match the reservations I'd added in Pi-hole, gave it a virtual IP on my network and a range of IPs for the load balancer to use when exposing new services, and set my username.
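
Those values live in variables near the top of the script. Roughly what mine looked like, with illustrative addresses and variable names written from memory (check the script's header for the exact ones):

# Node IPs matching the DHCP reservations in Pi-hole (illustrative addresses)
master1=192.168.1.60
master2=192.168.1.61
master3=192.168.1.62
worker1=192.168.1.63
worker2=192.168.1.64
user=rick                          # the user created via Cloud-Init
vip=192.168.1.70                   # virtual IP for the cluster's API endpoint
lbrange=192.168.1.80-192.168.1.90  # range the load balancer uses to expose services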

After it failed the first time and I realized it wanted to be run from my home directory, I ran it again and was successful!

I confirmed it was working by curling the external IP of the nginx load balancer and seeing the default nginx landing page.
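
To reproduce that check, list the LoadBalancer services and curl the external IP of the nginx one; the IP here is a placeholder:

# Find the EXTERNAL-IP assigned to the test nginx service, then request the page
kubectl get svc --all-namespaces | grep LoadBalancer
curl http://192.168.1.80   # placeholder; use the EXTERNAL-IP from the output above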

Installing Helm and Rancher

The next thing I wanted to check out was Rancher. The tutorial I was following installed it on the k3s cluster. I have since read that it’s probably better to install Rancher separately, using Docker. Since I didn’t yet know that, I followed the cluster install documentation, which requires Helm.

Helm

To install Helm, I simply used their official helper script:

curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh

A helm version command confirmed that it was successfully installed.

Rancher

To install Rancher on the cluster, I first added the Helm chart repo for the stable version of Rancher:

helm repo add rancher-stable https://releases.rancher.com/server-charts/stable

Then, I created a namespace for Rancher:

kubectl create namespace cattle-system

Next, I installed and set up cert-manager to use self-signed certificates:

kubectl apply -f https://github.com/cert-manager/cert-manager/releases/download/v1.16.1/cert-manager.crds.yaml
helm repo add jetstack https://charts.jetstack.io
helm repo update
helm install cert-manager jetstack/cert-manager \
  --namespace cert-manager \
  --create-namespace \
  --set crds.enabled=true

I validated that cert-manager was running with the command kubectl get pods --namespace cert-manager, which showed three pods running.
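
An alternative check that doesn't involve eyeballing pod states is to wait for the deployments to report Available:

# Block until all cert-manager deployments are Available (or the timeout expires)
kubectl -n cert-manager wait --for=condition=Available deployment --all --timeout=300s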

So, I installed Rancher:

helm install rancher rancher-stable/rancher \
  --namespace cattle-system \
  --set hostname=rancher.rickawesome.com \
  --set bootstrapPassword=admin

After a good long wait, I was able to see the deployment was successful with the command:

kubectl -n cattle-system rollout status deploy/rancher

Then I just needed to make the service available to my main network through the command:

kubectl expose deployment rancher --name rancher-lb --port 443 --type=LoadBalancer -n cattle-system

I could see the IP of the new load balancer using kubectl get svc -n cattle-system.
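
If you'd rather reach Rancher by the hostname given to the Helm install instead of the raw IP, a hosts entry on your workstation pointing that name at the load balancer's IP works; the IP below is a placeholder:

# /etc/hosts entry (placeholder IP; use the EXTERNAL-IP from kubectl get svc -n cattle-system)
192.168.1.81  rancher.rickawesome.com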

Finally, I was able to open the web page at that IP, sign in, and see everything in my k3s cluster!

Conclusion

It was great to tick the box of beginning to play with Kubernetes. I'm excited about the possibilities this unlocks, such as moving home services from my Docker host to a k3s cluster, or setting up a GitOps pipeline using local runners. I would also like to get a better understanding of setting this environment up, so I might try tearing it down and building it again without a script.

I’m not sure what direction I’ll go with next, but I look forward to playing more with this technology and sharing it with you.

Cheers!
Rick

Resources