Disclaimer
I previously published this post on my work blog, https://reece.tech.
Preface
We host our workloads in Docker containers within various Kubernetes clusters. To ensure consistency and repeatability across environments, we use idempotent configuration management tools such as Ansible. Kubespray is an Ansible playbook that manages the full lifecycle of a Kubernetes cluster: the initial build as well as ongoing operations such as adding or removing nodes and upgrading versions.
Requirements
We automatically provision new VMs in our vSphere environment using Ansible. Once a new node is up and running, we run Kubespray against it to install the required Kubernetes services. In this example we use a root user SSH key for passwordless authentication (a setup sketch follows the list below).
- Ansible 2.5
- Kubespray git repository
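For example, passwordless root SSH can be prepared by distributing an existing key pair to each node. A minimal sketch (the IP addresses match the example nodes used below):

ssh-keygen -t rsa -b 4096    # only needed if no key pair exists yet
for ip in 10.99.1.3 10.99.1.4 10.99.1.5; do ssh-copy-id root@${ip}; done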
Steps
Getting Kubespray
The following commands download Kubespray, list the available release branches, check out the latest release (release-2.8 at the time of writing) and install the required Python modules.
git clone https://github.com/kubernetes-sigs/kubespray.git
cd kubespray
git branch -a | grep release
git checkout release-2.8
sudo pip install -r requirements.txt
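If you would rather avoid a system-wide sudo pip install, a Python virtual environment works just as well. A sketch, assuming the python3-venv package is installed:

python3 -m venv ~/.venvs/kubespray       # create an isolated environment
source ~/.venvs/kubespray/bin/activate   # activate it for this shell
pip install -r requirements.txt          # install Kubespray's Python modules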
Minimum configuration
We now copy the sample inventory directory to a cluster-specific inventory.
cp -rfp inventory/sample inventory/onecluster
The inventory builder script populates the hosts.ini file with the IP addresses of our new cluster nodes. Supply at least three addresses to build a resilient (multi-master) cluster; an example of the generated file follows the commands below.
declare -a IPS=(10.99.1.3 10.99.1.4 10.99.1.5)
CONFIG_FILE=inventory/onecluster/hosts.ini python3 contrib/inventory_builder/inventory.py ${IPS[@]}
echo "docker_version: latest" >> inventory/onecluster/group_vars/all/all.yml
Build
The following Ansible command should build a working cluster with three combined master/worker nodes, each also running etcd in a stacked control plane configuration.
ansible-playbook -i inventory/onecluster/hosts.ini --user=root cluster.yml
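Once the playbook finishes, the cluster can be checked from one of the master nodes, where Kubespray sets up kubectl and the admin kubeconfig for root:

ssh root@10.99.1.3 kubectl get nodes

All three nodes should report a Ready status.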
Additional configuration
Host names
The auto-generated inventory simply names the servers node1, node2 and so on. This can be adjusted by replacing each name in hosts.ini, as in the example below.
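For example, the [all] section could be edited as follows (host names are purely illustrative); remember to update the same names in the [kube-master], [etcd] and [kube-node] groups:

[all]
k8s-prod-1 ansible_host=10.99.1.3 ip=10.99.1.3
k8s-prod-2 ansible_host=10.99.1.4 ip=10.99.1.4
k8s-prod-3 ansible_host=10.99.1.5 ip=10.99.1.5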
Kubernetes versions
If you are after a specific Kubernetes version - for example 1.11.7 (set as kube_version in group_vars/k8s-cluster/k8s-cluster.yml) - Kubespray may fail due to missing checksums. This can be fixed by adding the sha256 checksums to roles/download/defaults/main.yml; a way to compute them is shown after the diff.
roles/download/defaults/main.yml
@@ -76,6 +76,7 @@ hyperkube_checksums:
v1.12.0: f80336201f3152a5307c01f8a7206847398dde15c69b3d20c76a7d9520b60daf
+ v1.11.7: 9f4c22bc5fa6d3a70f748c652f21f702b0256b8808c481e2c06af2104d42bd36
v1.11.3: dac8da16dd6688e52b5dc510f5dd0a20b54350d52fb27ceba2f018ba2c8be692
@@ -96,6 +97,7 @@ kubeadm_checksums:
v1.12.0: 463fb058b7fa2591fb01f29f2451b054f6cbaa0f8a20394b4a4eb5d68473176f
+ v1.11.7: 37ec1273ec4ca85ba704740c62bea70f171f9b574ff1be06c65f13b384185c51
v1.11.3: 422a7a32ed9a7b1eaa2a4f9d121674dfbe80eb41e206092c13017d097f75aaec
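The checksums themselves can be computed from the official release binaries. A sketch, assuming the default download URL pattern used by Kubespray's download role:

curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.11.7/bin/linux/amd64/hyperkube | sha256sum
curl -sL https://storage.googleapis.com/kubernetes-release/release/v1.11.7/bin/linux/amd64/kubeadm | sha256sum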
Metrics server
group_vars/k8s-cluster/addons.yml
15c15
< metrics_server_enabled: false
---
> metrics_server_enabled: true
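After the cluster has converged, metrics should become available (the first scrape can take a minute or two):

kubectl top nodes
kubectl top pods --all-namespaces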
Helm / Tiller
group_vars/k8s-cluster/addons.yml
6c6
< helm_enabled: false
---
> helm_enabled: true
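With the addon enabled, Kubespray deploys Tiller into the kube-system namespace; this can be verified with:

kubectl -n kube-system get pods | grep tiller
helm version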
Flannel instead of Calico
group_vars/k8s-cluster/k8s-cluster.yml
74c76
< kube_network_plugin: calico
---
> kube_network_plugin: flannel
Flannel backend type vxlan
group_vars/k8s-cluster/k8s-net-flannel.yml
< # flannel_backend_type: "vxlan"
---
> flannel_backend_type: "vxlan"
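Whether flannel came up can be confirmed after the build by looking for its daemonset pods (pod naming may vary between Kubespray releases):

kubectl -n kube-system get pods -o wide | grep flannel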
Updating IP address ranges for services and pods
group_vars/k8s-cluster/k8s-cluster.yml
77c79
< kube_service_addresses: 10.233.0.0/18
---
> kube_service_addresses: 10.196.0.0/16
82c84
< kube_pods_subnet: 10.233.64.0/18
---
> kube_pods_subnet: 10.144.0.0/16
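A quick sanity check after the build: the kubernetes service in the default namespace always takes the first address of the service range, so its cluster IP should be 10.196.0.1:

kubectl -n default get svc kubernetes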
Other commands
build:
ansible-playbook -i inventory/onecluster/hosts.ini --user=root cluster.yml
scale (add nodes to hosts.ini, then):
ansible-playbook -i inventory/onecluster/hosts.ini --user=root scale.yml
removing a node (supply a comma-separated list of node names):
ansible-playbook -i inventory/onecluster/hosts.ini --user=root remove-node.yml -e "node=xxxx,yyyy,zzzz"
clean / destroy the cluster (use with care):
ansible-playbook -i inventory/onecluster/hosts.ini --user=root reset.yml
upgrade (from 1.11.3 to 1.11.7 in this case):
ansible-playbook -i inventory/onecluster/hosts.ini --user=root upgrade-cluster.yml -e kube_version=v1.11.7
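After the upgrade completes, the kubelet version reported for each node confirms the rollout:

kubectl get nodes -o wide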