RPi 4 K8s multi-master cluster using Ubuntu and Kubespray
Introduction
OS/Software stack
At the moment Raspbian is compiled for Arm32/armhf, and Kubespray will fail to install packages like Calico on it; etcd only has experimental support for arm32, so it is better to use arm64. On the other hand, the Arm64 build of Raspbian is experimental and not officially supported by Kubespray either, so it is better to avoid both of these images for now and save yourself unexpected issues.
The current latest Ubuntu 19.10 (Eoan Ermine) does have a preinstalled server image for the new Raspberry Pi 4 compiled for arm64, but it is not officially supported by Kubespray and the installation can fail, so this distro may become a good candidate in the coming months.
I wanted a stable long-term-support distro to avoid collateral issues, so I settled on Ubuntu 18.04 LTS, which will be supported for 5 years, until April 2023. However, the current official Ubuntu 18.04 LTS doesn't have a preinstalled server image for the new Raspberry Pi 4; maybe they'll release one later, but for now only the Raspberry Pi 2/3 are supported.
Setting up Raspberry Pi 4 nodes
sudo vi /etc/default/keyboard
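This file controls the console keyboard layout and follows the standard XKB format; for a US layout it would look roughly like this (the values shown are an assumption, adjust XKBLAYOUT to your own locale):
XKBMODEL="pc105"
XKBLAYOUT="us"
XKBVARIANT=""
XKBOPTIONS=""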
sudo apt-get update -y && sudo apt-get --with-new-pkgs upgrade -y
# This file is generated from information provided by
# the datasource. Changes to it will not persist across an instance reboot.
# To disable cloud-init's network configuration capabilities, write a file
# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
    ethernets:
        eth0:
            dhcp4: false
            match:
                macaddress: dd:a6:3a:1e:50:b9
            set-name: eth0
            addresses:
                - 192.168.0.10/24
            gateway4: 192.168.0.1
            nameservers:
                addresses: [1.1.1.1, 1.0.0.1]
    version: 2
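After saving the file, the static IP can be applied without a reboot; be careful if you are connected over SSH, since the session will move to the new address:
sudo netplan apply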
sudo apt-mark hold flash-kernel linux-raspi2 linux-image-raspi2 linux-headers-raspi2 linux-firmware-raspi2 linux-firmware
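You can verify that the packages were actually held with:
apt-mark showhold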
8. Repeat steps 1 to 7 for all of your Raspberry Pis.
Setting up a controller node
ssh-keygen -t rsa
ssh-copy-id 192.168.0.10
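The public key has to end up on every node, not only the first one; a small loop saves some typing (the IP list assumes the addressing plan used throughout this guide):
for ip in 192.168.0.10 192.168.0.11 192.168.0.12 192.168.0.13; do
    ssh-copy-id "$ip"
done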
# Allow members of group sudo to execute any command
%sudo ALL=(ALL:ALL) ALL
your-controller-username ALL=(ALL:ALL) NOPASSWD:ALL
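When touching sudoers rules like the ones above, it is safer to edit through visudo, which validates the syntax before saving and protects you from locking yourself out:
sudo visudo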
sudo apt install python3-pip git
sudo pip3 install --upgrade pip
pip --version
5. Add a function to your .bashrc script to be able to run commands on all Raspberry Pis at the same time:
cd
nano .bashrc
Add this function at the end of the .bashrc file:
function picmd {
    echo "pi1"
    ssh 192.168.0.10 "$@"
    echo "pi2"
    ssh 192.168.0.11 "$@"
    echo "pi3"
    ssh 192.168.0.12 "$@"
    echo "pi4"
    ssh 192.168.0.13 "$@"
}
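As an alternative, if you later add or remove Pis, the same function can be written as a loop over an IP list; this is just a sketch of that variant, not part of the original setup:
function picmd {
    for ip in 192.168.0.10 192.168.0.11 192.168.0.12 192.168.0.13; do
        echo "== $ip =="
        ssh "$ip" "$@"
    done
}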
Apply the .bashrc changes:
source .bashrc
Try it from your controller node by running picmd date; you should see a response from every Pi node, something like this:
ubuntu@CONTROLLER-NODE:~$ picmd date
pi1
Sat Jan 25 14:50:31 UTC 2020
pi2
Sat Jan 25 14:50:33 UTC 2020
pi3
Sat Jan 25 14:50:35 UTC 2020
pi4
You can now use the picmd command to execute commands (like apt update) on all your Raspberry Pi nodes without logging into them one by one.
Installing Kubespray and the Kubernetes cluster
git clone https://github.com/kubernetes-sigs/kubespray.git
2. Enter the kubespray directory and take a look at which Kubespray version you are going to install:
cd kubespray/ && git describe --tags
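If you would rather pin a released version than track master, you can check out a tag; the tag below is only an example, pick a real one from the output of git tag:
git checkout v2.12.0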
sudo pip install -r requirements.txt
cp -rfp inventory/sample inventory/mycluster
declare -a IPS=(192.168.0.10 192.168.0.11 192.168.0.12 192.168.0.13)
CONFIG_FILE=inventory/mycluster/hosts.yml python3 contrib/inventory_builder/inventory.py ${IPS[@]}
nano inventory/mycluster/hosts.yml
all:
  hosts:
    node1:
      ansible_host: 192.168.0.10
      ip: 192.168.0.10
      access_ip: 192.168.0.10
    node2:
      ansible_host: 192.168.0.11
      ip: 192.168.0.11
      access_ip: 192.168.0.11
    node3:
      ansible_host: 192.168.0.12
      ip: 192.168.0.12
      access_ip: 192.168.0.12
    node4:
      ansible_host: 192.168.0.13
      ip: 192.168.0.13
      access_ip: 192.168.0.13
  children:
    kube-master:
      hosts:
        node1:
        node2:
    kube-node:
      hosts:
        node1:
        node2:
        node3:
        node4:
    etcd:
      hosts:
        node1:
        node2:
        node3:
    k8s-cluster:
      children:
        kube-master:
        kube-node:
    calico-rr:
      hosts: {}
nano inventory/mycluster/group_vars/all/all.yml
nano inventory/mycluster/group_vars/k8s-cluster/k8s-cluster.yml
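Before launching the full playbook, it is worth checking that Ansible can actually reach all four nodes (this quick sanity check assumes your SSH key is already on every Pi, as set up earlier):
ansible -i inventory/mycluster/hosts.yml all -m ping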
ansible-playbook -i inventory/mycluster/hosts.yml --become --become-user=root cluster.yml
Manage your new cluster with Kubectl
curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl
sudo chmod +x kubectl
sudo mv kubectl /usr/local/bin/
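A quick check that the binary works:
kubectl version --client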
Copy the admin.conf file from one of the master nodes to provide the kubectl configuration. In the following bash commands, replace the user name "ubuntu" with your controller username and the IP "192.168.0.1" with your master node IP:
cd
ssh 192.168.0.1 sudo cp /etc/kubernetes/admin.conf /home/ubuntu/config
ssh 192.168.0.1 "sudo chmod +r ~/config"
scp 192.168.0.1:~/config .
mkdir .kube
mv config .kube/
ssh 192.168.0.1 "sudo rm ~/config"
Now you should be able to list all your nodes and Kubernetes pods:
kubectl get nodes -o wide
kubectl get pods -o wide --all-namespaces
DONE!
Classic error: number of masters and etcd databases
etcd requires an odd number of members (1, 3, 5...) to maintain quorum. With a wrong master/etcd layout in hosts.yml, the deployment can fail part-way through, for example like this:
[node3 -> 192.168.0.12]: FAILED! => {
    "changed": true,
    "cmd": ["bash", "-x", "/usr/local/bin/etcd-scripts/make-ssl-etcd.sh", "-f", "/etc/ssl/etcd/openssl.conf", "-d", "/etc/ssl/etcd/ssl"],
    "delta": "0:00:00.007623",
    "end": "2019-12-21 13:10:13.140942",
    "msg": "non-zero return code",
    "rc": 127,
    "start": "2019-12-21 13:10:13.133319",
    "stderr": "bash: /usr/local/bin/etcd-scripts/make-ssl-etcd.sh: No such file or directory",
    "stderr_lines": ["bash: /usr/local/bin/etcd-scripts/make-ssl-etcd.sh: No such file or directory"],
    "stdout": "",
    "stdout_lines": []
}
Skip Kubespray package upgrades
A routine apt upgrade on the nodes could bump packages that Kubespray installed at specific versions (Docker, containerd, etc.) and destabilize the cluster, so hold them:
sudo apt-mark hold apt-transport-https aufs-tools cgroupfs-mount containerd.io docker-ce docker-ce-cli ipset ipvsadm libipset3 libltdl7 libpython-stdlib libpython2-stdlib libpython2.7-minimal libpython2.7-stdlib pigz python python-apt python-minimal python2 python2-minimal python2.7 python2.7-minimal socat
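When you later upgrade the cluster on purpose (for example via Kubespray's upgrade-cluster.yml playbook), the hold can be lifted first; this shows only a subset of the packages from the list above:
sudo apt-mark unhold docker-ce docker-ce-cli containerd.io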