K3S Kubernetes Cluster Setup with K3D

Introduction
In my previous blog post, I wrote about Kind and how we can set up a Kubernetes cluster using Vagrant and Kind (you can read my blog post about Kind here). In this article, I will talk about how we can create a Kubernetes cluster on a Linux-based operating system with K3D, a tool from Rancher.
K3S (Lightweight Kubernetes)
K3S is a lightweight Kubernetes distribution released as open source in 2019 by Rancher. K3S is packaged as a single binary under 100MB, and it is a CNCF (Cloud Native Computing Foundation) certified Kubernetes distribution. Thanks to its light weight, K3S allows us to install a Kubernetes cluster even on low-spec systems. The minimum system requirements of K3S are as follows:
- Linux 3.10+ kernel
- 512MB of RAM (server)
- 75MB of RAM (node)
- 200MB of disk space
K3D
K3D is a tool that lets us set up a Kubernetes cluster quickly. Clusters created with K3D are lightweight because they are based on K3S, which I mentioned above, and run as Docker containers. In other words, using K3D we can set up a K3S cluster quickly, with minimal effort and minimal resource usage. K3D is also cross-platform: it runs on Windows, macOS, and Linux, and it supports creating multiple clusters.
1) Installations
Requirements
- Docker
- Kubectl
- K3D
1.1. Docker Setup
To install Docker quickly, you can use the script prepared by Rancher with the command below, or you can follow the installation steps suitable for your environment from this address.
[Rancher Script] $ curl https://releases.rancher.com/install-docker/19.03.sh | sh
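Once the script finishes, you can do a quick sanity check to confirm that Docker is running (this check is my addition, not part of the Rancher script):
$ docker --version
$ sudo docker run hello-world --> pulls and runs a tiny test container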
1.2. Kubectl Setup
You need to install kubectl to communicate with the Kubernetes API; you can do this easily by following the steps below.
$ curl -LO "https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl"
$ chmod +x ./kubectl
$ sudo mv ./kubectl /usr/local/bin/kubectl
After all these steps, you can check the kubectl version with the command below.
$ kubectl version --client
1.3. K3D Setup
The K3D setup is quite simple; the only thing you need to do is enter the command below in your terminal. If you wish, you can follow the setup steps for your environment from this address.
$ wget -q -O - https://raw.githubusercontent.com/rancher/k3d/main/install.sh | bash

If you got an output like the image above, it means the setup was successful. If you want, you can verify your setup by doing a version check; for this, enter the command below in your terminal.
$ k3d --version
1.4. K3D Commands
If you have read or used my article about Kind, you know how simple it is to set up a cluster with Kind. K3D looks pretty similar to Kind on the CLI side. Before proceeding with the cluster installation, let's take a look at what it offers us by simply typing k3d in our terminal.

With the k3d cluster command:
- We can set up a new K3S cluster.
- We can list the clusters we set up.
- We can stop the cluster we set up.
- We can start the cluster we set up.
- We can delete the cluster we set up.
With the k3d node command:
- We can create a new node
- We can list created nodes
- We can stop the nodes
- We can start the nodes
- We can delete the nodes
Apart from that, you can print the cluster's kubeconfig with the k3d kubeconfig command.
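For example, the basic workflow with these subcommands looks like this (demo is just a hypothetical cluster name):
$ k3d cluster create demo --> create a new cluster named demo
$ k3d cluster list --> list clusters
$ k3d cluster stop demo --> stop the cluster
$ k3d cluster start demo --> start it again
$ k3d node list --> list the nodes k3d created
$ k3d kubeconfig get demo --> print the cluster's kubeconfig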
1.5. Cluster Setup with K3D
Now we can move on to the cluster setup. With the command below, you can create a cluster with a name of your choice.
$ k3d cluster create [NAME]

Since K3D downloads a Docker image of approximately 150–200MB during the first installation, cluster creation may take some time depending on your internet speed, but your next cluster installations will take less time unless you delete these images.
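For example, assuming we name the cluster demo, we can verify it right after creation with kubectl, since k3d adds the new cluster to our kubeconfig automatically:
$ k3d cluster create demo
$ kubectl cluster-info
$ kubectl get nodes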
With the k3d cluster list command, you can see the clusters you have created and their status.

If you follow these steps on a remote server and create the cluster there, any application you install will not be reachable from outside: since we have not published any ports, it will only be accessible from within the cluster.
In the next step, I will talk about how we can access our applications.
$ k3d cluster delete -a --> Deletes all clusters
$ k3d cluster delete [name] --> Deletes the specified cluster
2) Reaching Services
In my Kind article, I said that Kind is very new and has some problems. One of them was that if we installed Kind on a remote server rather than in our local environment, opening a port was a difficult task: we had to open ports manually one by one, and if we needed a new port on a running cluster, we had to delete the cluster, edit the config file, and recreate it. With K3D we still need to open ports beforehand, but it offers us several ways to do this. We can access our applications either through a NodePort or via the load balancer.
2.1. K3D Load Balancer Settings
K3D ships with Traefik by default, and I will go through Traefik here. If you have installed K3D on a remote server, you need to open an external port for access. For this, we expose the load balancer when creating the cluster with the following command.
$ k3d cluster create [NAME] -p 80:80@loadbalancer
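By the way, the host side of the mapping does not have to be 80. If that port is already in use on your machine, you can map another host port to the load balancer's port 80; for example (8080 is just an illustrative choice):
$ k3d cluster create [NAME] -p 8080:80@loadbalancer
In that case, the application would be reachable on port 8080 of the host.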
After the cluster is installed, let’s launch a sample Nginx application. You can run the kubectl command below for this.
$ kubectl create deployment demo --image nginx --port 80
With this command, we created a Deployment named demo running the nginx image with container port 80. Now, let's create a Service to access the Deployment we created.
$ kubectl expose deployment demo
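If you wish, you can confirm that both objects exist before moving on (a quick sanity check of my own):
$ kubectl get deployment demo
$ kubectl get service demo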
After successfully creating our Service, we can now configure the load balancer routing. Let's create a file named demo-ingress.yaml and edit its content as follows.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        backend:
          serviceName: demo
          servicePort: 80
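A note from my side: the extensions/v1beta1 Ingress API is deprecated and was removed in Kubernetes 1.22, so newer clusters may reject the manifest above. A roughly equivalent manifest using the networking.k8s.io/v1 API would look like this:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: nginx
  annotations:
    ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: demo
            port:
              number: 80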
To deploy the file to Kubernetes:
$ kubectl create -f demo-ingress.yaml
In this way, we have configured the load balancer routing for the demo nginx application we just created. We can see the Kubernetes objects we created with the kubectl get pod,svc,ing command.

If you have an output like the one above, we can now test it. If you did the installation on your own computer, you can access the nginx page at localhost; if you did it on a remote server, use the server's IP address to verify the installation.

2.2. K3D NodePort Settings
As I mentioned above, when using Kind the ports have to be entered into the config file one by one, and the cluster has to be recreated when a new port is needed. K3D pulls ahead of Kind at this point, because we can tell k3d to open an entire port range. Let me continue with the example below.
$ k3d cluster create [NAME] -p "30000-30100:30000-30100@server[0]"
We added a new parameter in the above command. Docker users will recognize the "-p" (publish) parameter, which maps host ports to container ports. We did this above and opened the 30000-30100 port range on both the host side and the container side. (Kubernetes default NodePort range: 30000–32767)
NOTE: I do not recommend opening the full 30000-32767 range on low-spec systems, because RAM usage increases significantly when you open many ports. In some of my tests, defining large port ranges caused Docker to fail. You can lower this range and restart the service if you get an error.
Now, let’s install the nginx application above by opening nodeport this time. Create a file named demo-nodeport.yaml and add the following lines and save it.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      containers:
      - name: demo
        image: nginx
---
kind: Service
apiVersion: v1
metadata:
  name: demo
spec:
  selector:
    app: demo
  type: NodePort
  ports:
  - name: node-port
    port: 80
    nodePort: 30050
We create the Deployment and Service with the YAML file above. I will use port 30050 for the NodePort; you can specify any port from the range you opened and access the application through the port you specified. Now let's apply the demo-nodeport.yaml file.
$ kubectl create -f demo-nodeport.yaml

We can see the objects we created with the kubectl get pod,svc command. If your pod and service have been created without any problems, you can access the application from your browser. (IP:NODEPORT)
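You can also test it from the command line with curl; replace the placeholder with your server's IP address:
$ curl http://[SERVER-IP]:30050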

3) Multi-Node Cluster Setup
With K3D, you can set up clusters with more than one node. Let's set up a cluster with 3 nodes, and then add 1 more node to the running cluster with the k3d node command. First of all, we tell K3D to set up a 3-node cluster with the following command.
$ k3d cluster create [NAME] --servers 3
After the nodes are created, we can check their status with the commands below.
[CHECKING WITH K3D] $ k3d cluster list
[CHECKING WITH KUBECTL] $ kubectl get node
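As a side note, the --servers flag creates control-plane (server) nodes; k3d can also create worker (agent) nodes at setup time. A hypothetical example mixing both:
$ k3d cluster create [NAME] --servers 3 --agents 2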

3.1. Adding Node
Our nodes have been successfully created. So, what should we do if we want to add a new node as our needs grow? We can easily add a node to the cluster we just created with the command below.
$ k3d node create new-node --cluster [CLUSTER-NAME] --role server

4) Activating Traefik Dashboard
The Traefik panel is disabled by default. I want to briefly cover how to activate it. For this, we need to edit the Traefik ConfigMap in Kubernetes. We open the ConfigMap for editing with the following command.
$ kubectl edit configmap traefik -n kube-system
Then we add the following lines to the ConfigMap we opened, and save and exit. You can see the screenshot below for your help.
[api]
  dashboard = true
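These lines belong inside the traefik.toml key of the ConfigMap; after editing, the relevant part should look roughly like this (a sketch, the rest of the file stays untouched):
data:
  traefik.toml: |
    [api]
      dashboard = true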

Now we need to restart the pod for the settings to take effect. For this, you can either delete the pod or scale the Deployment. I'll go with the second option: first I will scale the replica count to 0, and then back to 1.
$ kubectl scale deployment traefik --replicas 0 -n kube-system
$ kubectl scale deployment traefik --replicas 1 -n kube-system
$ kubectl get pod,svc -n kube-system
In the last step, we check the pods and services. You should see the Traefik service with type LoadBalancer. After this step, if you did the installation on your own computer, you can access the dashboard directly on the specified port. If you installed it on a remote server, you can use port-forward.
$ kubectl port-forward --address 192.168.1.74 deployment/traefik 8080:8080 -n kube-system
There is a point you should pay attention to: you must write the IP address of your own remote server in the --address parameter; I wrote my own IP address above as an example. The 8080 here is Traefik 1.x's default dashboard port.
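With the port-forward running, the dashboard should open in your browser; you can also do a quick reachability check with curl, using the same example IP:
$ curl http://192.168.1.74:8080/dashboard/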

4.1. Running an App with Traefik
Finally, let’s re-install the nginx application using the loadbalancer with the steps in the 2.1 header and view it from the traefik panel.

The service we created appears in the Traefik panel.
In short, I have touched on how you can set up K3S clusters with K3D on your local machine or a remote server. As I said in my Kind article, you can set up clusters very quickly for both test and production environments with tools like these. Since everything runs inside Docker containers, opening services to the outside is a bit tedious, but if you already have Docker installed on your computer, it will be quite easy to do these steps locally.
Author: Doğukan Turan
Date Published: Jul 13, 2021
