
Deploying a Kubernetes cluster on OpenStack with Kubespray

This blog post walks you through deploying a Kubernetes cluster on OpenStack with Kubespray. In this example, we use the SkyAtlas project, which is built on OpenStack, as the target infrastructure. Kubespray uses Terraform to provision the environment and Ansible to deploy Kubernetes automatically.


Firstly, we provision our private infrastructure with Terraform, then deploy the Kubernetes cluster with Kubespray. Kubespray has its own Terraform configuration examples, but in the following link, you may find a SkyAtlas-ready configuration.


If you simply change the directory to kubernetes_cluster and follow the instructions in this blog post, you’ll end up with a virtual private environment ready for the Kubernetes deployment.
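The exact workflow depends on your setup, but a typical Terraform run (assuming the kubernetes_cluster directory from the SkyAtlas configuration and that your OpenStack credentials are already exported) looks roughly like this:

$ cd kubernetes_cluster
$ terraform init
$ terraform apply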

After you run the terraform apply command, you should get an output like the one below:



The output shows the public IP of your Bastion node and the ID of the virtual subnet used by the Kubernetes cluster.


Now, we need to log in to the Bastion node. Kubespray comes with various deployment options; here we follow the approach where the cluster is managed only from the Bastion node. To log in to the Bastion node, you need the SSH key you used for the Terraform provisioning.


$ ssh -i demo-key 213.16.276.112 -l ubuntu

Then, we install Ansible and pip.


$ sudo apt -y update
$ sudo apt -y install ansible
$ sudo apt -y install python-pip

And it’s time to get Kubespray.


$ git clone https://github.com/kubernetes-sigs/kubespray.git

Go to the repository root folder and copy the sample inventory to create your own:


$ cd kubespray/
$ cp -rfp inventory/sample inventory/mycluster

Then we install the requirements:


$ pip install --user -r requirements.txt

GitLab CI Integration


Our repo has a YAML file named .gitlab-ci.yml in its root directory. This file tells the GitLab Runner which scripts to run.
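The actual pipeline in the repo may differ, but a minimal sketch of such a file, assuming a Terraform container image and OpenStack credentials stored as CI/CD variables, could look like this:

# .gitlab-ci.yml - illustrative sketch only
stages:
  - validate
  - deploy

validate:
  stage: validate
  image: hashicorp/terraform:light
  script:
    - cd kubernetes_cluster
    - terraform init
    - terraform validate

deploy:
  stage: deploy
  image: hashicorp/terraform:light
  script:
    - cd kubernetes_cluster
    - terraform init
    - terraform apply -auto-approve
  when: manual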



If you trigger this pipeline in GitLab CI, the GitLab Runner will run the steps above automatically, and if everything goes as planned, you can log in to your Bastion node and continue with the steps below.


Now we need to edit the inventory file for Ansible. If you don’t have an OpenStack API client, you can log in to the SkyAtlas Dashboard and find your Kubernetes nodes and their IP addresses under Compute/Instances.
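If you do have the OpenStack client configured, the instances and their addresses can also be listed from the command line, for example:

$ openstack server list -c Name -c Networks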



Then we edit the inventory file:


$ vim inventory/mycluster/inventory.ini

It should look like this:
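A minimal sketch of the inventory for a three-node layout (the node names and private IPs below are placeholders; use your own instances, and note that group names can vary between Kubespray versions):

[all]
k8s-master-1  ansible_host=10.0.0.11 ip=10.0.0.11
k8s-worker-1  ansible_host=10.0.0.12 ip=10.0.0.12
k8s-worker-2  ansible_host=10.0.0.13 ip=10.0.0.13

[kube-master]
k8s-master-1

[etcd]
k8s-master-1

[kube-node]
k8s-worker-1
k8s-worker-2

[k8s-cluster:children]
kube-master
kube-node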



Now we set the cloud_provider option to OpenStack in the all.yml file.


$ vim inventory/mycluster/group_vars/all/all.yml
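In that file, locate the cloud_provider line, uncomment it, and set it to openstack. A sketch of the relevant line:

cloud_provider: openstack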


All Kubernetes cluster nodes are accessible only with the SSH key we used for the Terraform provisioning, and, except for the Bastion node, they are reachable only from within their own network. So, if you add the private SSH key to your Bastion node as ~/.ssh/id_rsa and edit your hosts and inventory files correctly, Ansible will be able to reach your other Kubernetes nodes. (If you prefer another file path, you need to change your Ansible config or use the --private-key= option.)
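As a quick sanity check (assuming the key is already in place on the Bastion node), you can tighten its permissions and verify connectivity with Ansible’s ping module before running the full playbook:

$ chmod 600 ~/.ssh/id_rsa
$ ansible -i inventory/mycluster/inventory.ini all -m ping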


Kubespray needs the OpenStack environment variables from your RC file.


You can find the way to get your RC file here. Source the RC file on the Bastion node, and you’re ready to deploy the Kubernetes cluster with Ansible.
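For example, if your RC file was downloaded as demo-openrc.sh (the actual file name depends on your project):

$ source demo-openrc.sh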


$ ansible-playbook -i inventory/mycluster/inventory.ini --become --become-user=root cluster.yml

If everything goes as planned, you should see results like this:



With the private key you put on the Bastion node, you can SSH into your Kubernetes nodes. Add subnet-id and floating-network-id under the LoadBalancer section of the /etc/kubernetes/cloud_config file on all the master nodes. Your cloud_config file should look like this:
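A sketch of the relevant section, with placeholder IDs (the rest of the file, written by Kubespray, stays as-is):

[LoadBalancer]
subnet-id=<YOUR_SUBNET_ID>
floating-network-id=<YOUR_FLOATING_NETWORK_ID>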



If you don’t have an OpenStack API client installed, you can get the subnet-id and floating-network-id from the SkyAtlas Dashboard.
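With the OpenStack client, they can usually be listed like this:

$ openstack subnet list
$ openstack network list --external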




Let’s check the cluster status:
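For example, from one of the master nodes (the node names and versions in the output will be your own):

$ kubectl get nodes
$ kubectl get pods --all-namespaces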



Let’s test it!


I already have this Docker image for test purposes, and we can use that image with the Kubernetes LoadBalancer service. Kubia is a simple Node.js image that accepts HTTP requests and responds with the hostname of the machine it’s running on.


Edit the kubia.yaml file and replace “THE_FLOATING_NETWORK_ID” in the line starting with “loadbalancer.openstack.org/floating-network-id:”; it’s under the Service section.
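The exact manifest lives in the repo, but a rough sketch of the Service section (the Service name, selector, and ports here are assumptions) shows where the annotation goes:

apiVersion: v1
kind: Service
metadata:
  name: kubia
  annotations:
    loadbalancer.openstack.org/floating-network-id: "THE_FLOATING_NETWORK_ID"
spec:
  type: LoadBalancer
  selector:
    app: kubia
  ports:
    - port: 80
      targetPort: 8080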


Create Pods and services from the file:


$ kubectl create -f kubia.yaml

As a result, the kubia application should be running in 3 Pods, and those Pods should be deployed on different worker nodes, as shown below:
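You can check which worker node each Pod landed on with:

$ kubectl get pods -o wide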



The kubia.yaml file exposes the kubia app with a Kubernetes Service. The Service automatically creates a LoadBalancer on SkyAtlas.
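Assuming the Service is named kubia as in the sketch above, its external IP can be checked with:

$ kubectl get svc kubia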



When a connection is made to the service via its public IP, a random pod is selected, and all network packets belonging to that connection are sent to that single pod. A new connection may be routed to a different pod.


If you reach the application over separate connections, a different pod will respond each time.
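For example, assuming the external IP from the previous step, each of these requests opens a new connection and should be answered by a different pod, identified by its hostname in the response:

$ curl http://<EXTERNAL_IP>
$ curl http://<EXTERNAL_IP>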


 

Author: Onur Özkan

Date Published: Jan 28, 2020