k3s – kubernetes for IoT on Arm processors
I have been working with Docker for some time and run several network-linked containers on an Ubuntu server, somewhere in Amsterdam. I am running a pilot project that needs only a single container per function, so I just script the build and wiring with docker-compose. Here is an example docker-compose.yml for a project that runs nginx, WordPress, MySQL and Node.js:
version: '3.7'
services:
  db:
    image: mysql:8.0
    container_name: db
    restart: unless-stopped
    env_file: .env
    environment:
      - MYSQL_DATABASE=wordpress
    volumes:
      - dbdata:/var/lib/mysql
    command: '--default-authentication-plugin=mysql_native_password'
    networks:
      - app-network
  wordpress:
    depends_on:
      - db
    image: wordpress:5.1.1-fpm-alpine
    container_name: wordpress
    restart: unless-stopped
    env_file: .env
    environment:
      - WORDPRESS_DB_HOST=db:3306
      - WORDPRESS_DB_USER=$MYSQL_USER
      - WORDPRESS_DB_PASSWORD=$MYSQL_PASSWORD
      - WORDPRESS_DB_NAME=wordpress
    volumes:
      - wordpress:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    networks:
      - app-network
  webserver:
    depends_on:
      - wordpress
    image: nginx:mainline-alpine
    container_name: webserver
    restart: unless-stopped
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - wordpress:/var/www/html
      - ./nginx-conf:/etc/nginx/conf.d
      - ../certs/:/etc/nginx/certs/
    networks:
      - app-network
  nodeapi:
    build:
      context: ./nodeapi
      dockerfile: Dockerfile
    image: nodeapi
    container_name: nodeapi
    restart: unless-stopped
    ports:
      - "3000:3000"
    volumes:
      - ./nodeapi:/usr/src/app
    networks:
      - app-network
volumes:
  wordpress:
  dbdata:
networks:
  app-network:
    driver: bridge
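For completeness, bringing the stack up is a one-liner (this assumes the .env file referenced above supplies MYSQL_USER, MYSQL_PASSWORD and friends):
docker-compose up -d --build    # build the nodeapi image and start all four containers detached
docker-compose logs -f          # tail the logs to check everything came up cleanly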
The next step would be to scale on-demand and make the solution resilient by spreading the containers across multiple machines. This is what Kubernetes is designed to do, and it’s about time I learnt more…
So, to build a simple multi-machine Kubernetes cluster, I could have spun up multiple x86 VMs or cloud servers – but I was fascinated by an article from Alex Ellis – https://blog.alexellis.io/test-drive-k3s-on-raspberry-pi/ – which introduced me to a stripped-down version of Kubernetes that can run in a constrained environment and on Arm.
I looked around my home office and found a pair of NanoPi Neo2 boards from FriendlyARM – quad-core Allwinner A53 CPU, 512MB RAM, Gigabit Ethernet – plus a rather unusual offering from Globalscale that I bought through a Kickstarter promotion: the ESPRESSObin, which features three Gigabit Ethernet ports, SATA and a mini PCIe slot, with 1GB RAM and a dual-core Marvell A53 CPU.
To keep everything stable and provide a heatsink of sorts, I mounted the SBCs on a spare piece of ACM (Aluminium Composite) sheet.

I installed the current version of Armbian, based on Debian Buster, as a stable ARM64 Linux distribution. All three SBCs run off 16GB micro SD cards. The default install for the espresso configures Ethernet ports 1 and 2 as LAN and port 3 as WAN, so plugging cables between it and the NanoPis just works!
Armbian comes with armbian-config, which makes it easy to check for updates, change the hostname, and so on. I added my default SSH key to each SBC from my Mac using the following command:
ssh-copy-id <username>@<SBC_URL>
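If you don’t yet have a key pair to copy, ssh-keygen will generate one first – for example:
ssh-keygen -t ed25519    # accept the defaults to create ~/.ssh/id_ed25519 and its .pub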
Following the blog I mentioned earlier, the k3s master install on the espresso SBC was simply:
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 sh -
Note that I had to install a specific version, as the current build failed with an error – something you get used to when pulling the latest version of an early-stage Linux app. The script should ask for your sudo password (you’re not installing as root, are you?) and then install a whole raft of essential tech, including TLS certs, Traefik and Helm – look at k3s.io for more detail. Next, copy the K3S_TOKEN to your clipboard using:
sudo cat /var/lib/rancher/k3s/server/node-token
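Before heading to the workers, it’s worth a quick check that the server is happy – a couple of commands I’d reach for (the install script registers k3s as a systemd service):
sudo systemctl status k3s    # the server runs as the 'k3s' systemd service
sudo kubectl get nodes       # the master should already report itself as Ready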
Now, on each NanoPi, create two environment variables, K3S_URL and K3S_TOKEN, then run the same curl script as before. This time, it will install the k3s worker code:
export K3S_URL="https://espresso.home:6443"
export K3S_TOKEN="<the K3S_TOKEN that you copied from the master>"
curl -sfL https://get.k3s.io | INSTALL_K3S_VERSION=v0.9.1 sh -
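On the workers, the same script installs a systemd service named k3s-agent instead, so if a node fails to appear on the master you can inspect it locally:
sudo systemctl status k3s-agent    # worker installs run as 'k3s-agent', not 'k3s'
sudo journalctl -u k3s-agent -f    # follow the agent logs while it joins the cluster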
Move back to the espresso master and try the following command – note that it took a few seconds for the workers to attain ‘Ready’ status:
sudo kubectl get nodes -o wide
[sudo] password for <username>:
NAME         STATUS   ROLES    AGE     VERSION         INTERNAL-IP     EXTERNAL-IP   OS-IMAGE                       KERNEL-VERSION    CONTAINER-RUNTIME
k3slave001   Ready    worker   5h25m   v1.15.4-k3s.1   192.168.1.83    <none>        Debian GNU/Linux 10 (buster)   4.19.63-sunxi64   containerd://1.2.8-k3s.1
k3slave002   Ready    worker   4h45m   v1.15.4-k3s.1   192.168.1.85    <none>        Debian GNU/Linux 10 (buster)   4.19.63-sunxi64   containerd://1.2.8-k3s.1
espresso     Ready    master   2d19h   v1.15.4-k3s.1   192.168.1.252   <none>        Debian GNU/Linux 10 (buster)   4.19.59-mvebu64   containerd://1.2.8-k3s.1
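One aside: a fresh k3s agent often reports its ROLES as <none> rather than ‘worker’. The label is purely cosmetic and can be added by hand – the node names here are the ones from the output above:
sudo kubectl label node k3slave001 node-role.kubernetes.io/worker=worker
sudo kubectl label node k3slave002 node-role.kubernetes.io/worker=worker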
Alex Ellis is the developer behind OpenFaaS and, as such, provides a simple example in the blog post that lets us test the k3s cluster we have just created. Just follow the section in that blog entitled ‘Deploy a Microservice’ and you should see something like this:

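If you fancy a quick sanity check of your own first, a throwaway deployment will exercise scheduling across the nodes – a minimal sketch using a stock multi-arch image (the name ‘hello’ is just an example):
sudo kubectl create deployment hello --image=nginx:alpine   # the official nginx image has an arm64 build
sudo kubectl scale deployment hello --replicas=3
sudo kubectl get pods -o wide           # pods should be spread across the NanoPis and the espresso
sudo kubectl delete deployment hello    # tidy up when done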
So, that’s it. Try k3s with any available Arm-based SBCs you may have to hand – make it easy for yourself by running Armbian on each. I intend to write a follow-up post showing some of the automated app deployment, scalability and resilience made possible even out at the IoT computing edge of your solution – happy coding!