Zuto on a Pi

Hello!

I'm Michael Horsley and I'm one of the software engineers at Zuto.

This is hopefully the first post in a series of ramblings about seeing whether I can run a copy of Zuto's APIs on a cluster of Raspberry Pis.

The posts will generally be split between a wordy bit explaining what I intended to do and why, followed by a step-by-step walkthrough of how it was done.

This one will be a bit more wordy, as it includes an introduction about why I'm doing this, followed by setting up a Pi cluster.

If you're here for the tech, as I'm sure some of you are, skip ahead to the section "Less talk more cluster".

I wonder if Zuto would run on a Raspberry Pi?

Why Raspberry Pi?

Well, mostly in a continuing effort to justify to myself and my partner (mostly the partner) how much I spend on tech, I thought this would make great use of the Raspberry Pis I already had, as well as save some cost on hosting in AWS.

Could you have not just used the cloud?

Oh absolutely, and for anything that's even remotely important or has any service-level agreement requirements, I'd go cloud based every time.

Kubernetes in the cloud has progressed really well in the last couple of years. From AWS EKS or Google Cloud Platform managing your cluster, to orchestrators such as Rancher (https://rancher.com/), getting your Docker images and containers up and running in the cloud has been made much simpler, because providers have abstracted a lot of the complexity and maintenance away from the user.

However, I feel this often skips a lot of the learning and appreciation that engineers gain from dealing with the original pain of getting it all working without these helpful services.

Also, as the cost of cloud RAM and cores reduces over time, engineers and companies seem to have less enthusiasm for optimising memory or CPU usage, as it returns less value than releasing new features or focusing on code.

Working with a Pi cluster brings a very fixed amount of usable resources, and with that a challenge: to see if you can squeeze everything out of the running applications. It also forces you to constantly monitor the running pods.

Another bonus of a Pi cluster would be getting it all completely self-contained within my home network. The idea of having a fully contained bubble sounds like fun for both development and testing, although this might be difficult with some existing services that live purely within the AWS realm.

This might even open up the ability to spin up dev environments for each of the engineering teams, allowing them to work independently of the other teams without the extra cost that would be incurred in the cloud.

What are you hoping to gain?

Mostly a better understanding of data flow through Zuto's APIs. I've been with Zuto a little over 3 months at the time of writing, and while I've built up somewhat of an understanding of the applications I work with day-to-day, I've no doubt I'm missing a lot of infrastructural knowledge.

By moving each application into the Pi cluster, and then the applications it connects to, I'll have to understand what is going on and how each flow moves through the system.

I may also be able to optimise the applications to get everything running on the cluster. This will hopefully improve Zuto's cost margins; as we scale, these costs will only increase, so any benefit here will hopefully compound later down the line.

Why K3s?

K3s is a lightweight version of Kubernetes, perfect for a Raspberry Pi cluster, and it supports ARM. It comes bundled with fairly little, but it does include an Ingress controller (that'll handle the API requests) and a control plane (that'll control where our running applications sit in the cluster).

The other main option for Raspberry Pi is MicroK8s, which boasts, among other things, high availability.

I chose K3s over MicroK8s mostly because I struggled to get a basic .NET Core application running within a MicroK8s cluster. It's almost certainly something daft I've done, but it was difficult to find any tutorial or guide that showed me how to piece together an Ingress, or any other routing option, into a usable setup.

If someone has a good tutorial or walkthrough I'd be happy to try it out; one joy of the Pi cluster is that I can rebuild it fairly quickly now.

The Cluster

So I'm going to be using a 3-node cluster consisting of 1 Raspberry Pi 4 A (4GB model) and 2 Pi 4 Bs (8GB models), making it one primary and 2 secondary nodes. The primary node by default will act as the control plane, while the 2 secondaries will run my applications. However, you should be able to follow along with even a single Raspberry Pi (ideally an RPi 4B model, although a 4A should work as well).

Less talk more cluster

For the following you'll need:

  • Raspberry Pi 4 with power supply
  • SD card and an SD card USB reader
  • Desktop or laptop to image the SD cards
  • Raspberry Pi Imager (I've grabbed it from here: https://www.raspberrypi.org/software/)
  • SSH keys generated on your machine. You will most likely already have these if you use GitHub; to check, go to your user directory and see if there's a .ssh folder (it might be hidden). If not, run ssh-keygen within a command prompt and follow the instructions
  • [Optional] Lens for Kubernetes (https://k8slens.dev/). This is optional, but it will give you a nice GUI that displays information about your cluster
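As a quick way to check the SSH key requirement, this sketch looks for any public key under ~/.ssh (the *.pub glob just covers the common default key names; adjust if yours differ):

```shell
# Check whether an SSH key pair already exists; the *.pub glob matches
# the usual default names (id_rsa.pub, id_ed25519.pub, etc.).
if ls "$HOME"/.ssh/*.pub >/dev/null 2>&1; then
  echo "SSH key found - nothing to do"
else
  echo "No SSH key found - run ssh-keygen and follow the prompts"
fi
```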

In the commands to run, I'll denote them as Desktop, Pi, or Pi-Primary. If you're following along, copy everything after the $.

So if my command was Desktop $ ssh ubuntu@ubuntu, this means that from my desktop machine I would run the command ssh ubuntu@ubuntu. Hopefully that makes sense; it should make things easier later in the step-by-step when I switch between the primary/secondary nodes and my desktop for Helm.

The Pi and Pi-Primary markers are for dealing with multiple Pis; if you're using only 1 Pi then skip the "Setting up our secondary Pi Node(s)" section.

Setting up a base Ubuntu Pi

  1. Take the SD card, place it into your card reader, and plug that into your computer

  2. Open up the Raspberry Pi Imager

  3. Go to Ubuntu and then select Ubuntu Server 21.04 (RPi 3/4/400) 64-bit server OS for arm64 architectures

  4. Then select your SD card

  5. Click Write

    • This can take up to 10 minutes or so to write and then verify, depending on your USB port and card's write speed
    • Once that is done, it should have already safely ejected the mount; if not, do this yourself before taking the reader out of your computer

[Optional Steps For Wifi]

If you would like your Pi to use WiFi on boot like I did, use the following; otherwise, skip steps 6 - 10 if you have an Ethernet cable connected to the Pi.

  6. Place the reader back into your PC to get the folder prompt back; we have some files we need to alter

  7. Open up the drive disk view and select system-boot

  8. Open the network-config file in a text editor

  9. Remove the commented-out lines under the ethernets section and replace them with something like my example

  • Just remember to be careful: this is YAML and it uses spaces; tabs will not work here, and it's 2 spaces for indentation
  • Put your WiFi name with " " surrounding it, just like your password
  • An example is below:
version: 2
ethernets:
  eth0:
    dhcp4: true
    optional: true
wifis:
  wlan0:
    dhcp4: true
    optional: true
    access-points:
      "my-wireless-network-name":
        password: "my-network-password"
  10. Safely eject the SD card from your machine

[End of Optional Steps For Wifi]
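Since a single tab in network-config will silently break the YAML (and therefore networking), a quick tab check before ejecting the card can save a reflash. A minimal sketch, re-using the example config above via a temp file as a stand-in for the real file on the system-boot partition:

```shell
# Recreate the example config in a temp file (a stand-in for the real
# network-config file on the system-boot partition).
cat > /tmp/network-config <<'EOF'
version: 2
ethernets:
  eth0:
    dhcp4: true
    optional: true
wifis:
  wlan0:
    dhcp4: true
    optional: true
    access-points:
      "my-wireless-network-name":
        password: "my-network-password"
EOF

# YAML forbids tabs in indentation; flag any line containing one.
if grep -q "$(printf '\t')" /tmp/network-config; then
  echo "tab found - fix your indentation"
else
  echo "no tabs - indentation looks okay"
fi
```

This only catches the most common mistake; a proper YAML linter such as yamllint would catch more, if you have one installed.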

  11. Plug the SD card into your Pi and power it up
  • Wait a couple of minutes and then open up a terminal (I personally use Git Bash, https://gitforwindows.org/); we are going to SSH into our Pi now
  12. You can SSH into your Pi either via its hostname, like so, or by its IP
  • Desktop $ ssh ubuntu@ubuntu

The default host should be ubuntu, or you should be able to get in via its IP address. There are a few ways to find the Pi's IP address; I usually open up my WiFi router's home page and look for it there. Hopefully ubuntu will work as a hostname, but that might depend on your router's settings.

  13. When prompted to add the key fingerprint, type yes
  • The default password is ubuntu unless you changed it within the user-data file previously
  14. It should now ask us to reset the ubuntu user's password; follow the prompts to create a new password for this user. Type what you want, but remember it
  • Completing this step will kick us out of the SSH session
  15. [Optional] If you have already set up SSH keys and would like to log into the Pi via them rather than a username/password, do the following from your machine
  • Desktop $ ssh-copy-id ubuntu@ubuntu
  • Enter the password you set in the previous step
  • Now when we SSH into the Pi, we won't get asked for a password
  16. SSH back into the Pi
  • Desktop $ ssh ubuntu@ubuntu
  17. Run the following
  • Pi $ sudo sed -i '1s/^/cgroup_enable=memory cgroup_memory=1 /' /boot/firmware/cmdline.txt
    • What does this command do? It enables cgroup memory control, allowing resource management to restrict memory usage; when installing K3s, the service will not start if this isn't configured correctly
  18. Set the hostname of the Pi; don't let this collide with any hostnames already defined on your network (I use node1, node2, node3, etc.)
  • Pi $ sudo hostnamectl set-hostname node1
  19. Then reboot
  • Pi $ sudo reboot
  20. SSH back into the Pi once it's booted: Desktop $ ssh ubuntu@node1 //or whatever you changed the host to

  21. Run package updates; it's always good to stay up to date for security patches

  • Pi $ sudo apt update && sudo apt -y dist-upgrade
    • This takes a while
    • On the version I am running, it will warn you that there's a new kernel version and that it needs to restart some services; just hit Enter when those screens appear
  22. Finally, reboot after that's done
  • Pi $ sudo reboot
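To see what the cgroup sed command above actually does, here's the same edit run against a throwaway copy of cmdline.txt (the sample kernel arguments here are made up; your real file will differ):

```shell
# A sample single-line cmdline.txt stand-in (contents are illustrative).
printf 'console=serial0,115200 root=LABEL=writable rootwait\n' > /tmp/cmdline.txt

# The same sed as in the setup: prepend the two cgroup flags to line 1.
sed -i '1s/^/cgroup_enable=memory cgroup_memory=1 /' /tmp/cmdline.txt

cat /tmp/cmdline.txt
# → cgroup_enable=memory cgroup_memory=1 console=serial0,115200 root=LABEL=writable rootwait
```

Note that cmdline.txt must remain a single line, which is why the flags are prepended to line 1 rather than appended on a new line.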

You now have a base setup of the Raspberry Pi; however, we don't yet have K3s on it. The previous steps are the same for all nodes within our cluster.

Now it diverges slightly; on your primary Pi node, do the following.

Setting up our primary Pi Node

  1. Let's get K3s
  • Pi-Primary $ curl -sfL https://get.k3s.io | sh -
  2. Once that's done, run
  • Pi-Primary $ sudo kubectl get nodes
    • This should return the node as Ready; if not, wait a few minutes and try again. Failing that, double-check the cgroup_enable and cgroup_memory settings to make sure they are correct

Setting up our secondary Pi Node(s)

Assuming you've repeated steps 1-22 for all the generic Ubuntu setup: we don't run the same curl command as on our primary for the secondaries. Instead, we install K3s and connect our Pi(s) to the primary node by doing the following:

  1. On the primary node, run the following
  • Pi-Primary $ sudo cat /var/lib/rancher/k3s/server/node-token
    • This will spit out a token; copy it for the next command
  2. SSH into your secondary nodes and run the following, using the token generated in step 1
  • Pi-Secondary $ export K3S_URL="https://node1:6443" && export K3S_TOKEN="<TokenCopied>" && curl -sfL https://get.k3s.io | sh -
    • Replace the hostname node1 if you used something different
    • Replace <TokenCopied> with the token from step 1
    • Don't forget to copy the little - at the end; it's part of the command
  3. After a few minutes you should be able to run Pi $ sudo kubectl get nodes on any of the Raspberry Pis and see the nodes as Ready.

And there we go, we have a running cluster!

But we don't want to have to SSH into the Raspberry Pi every time we want to deploy something or run any kubectl commands.

Running kubectl commands without SSH'ing into the cluster every time

For this we'll want to grab the kube config from the primary node and copy it to our main machine; this will allow us to run kubectl commands from outside the cluster.

  1. Create a new folder called .kube within your user directory on your main machine
  • For example, mine would become C:\Users\michael.horsley\.kube
  2. Within this directory, create a new extensionless file called config

  3. Open a terminal and SSH into the primary Pi node

  • Desktop $ ssh ubuntu@node1
    • My primary Pi host is node1
  4. Print the K3s kube config to the terminal
  • Pi-Primary $ sudo cat /etc/rancher/k3s/k3s.yaml
  5. Copy that from your terminal and paste it into the config file we created in step 2

  6. Open the config file; around line 5 there is a bit of text server: https://127.0.0.1:6443. Change the 127.0.0.1 to your node's hostname or IP address

  • Mine would become https://node1:6443
  7. Save that file, then re-open the terminal and try getting the cluster nodes
  • Desktop $ kubectl get nodes
  8. Now hopefully you'll be able to see all your nodes, and we've done this from outside our cluster! No more needless SSH'ing back in to run commands. This'll become important when we want to deploy applications into our cluster.
  • If this step fails, try using the IP address instead of the hostname when updating the kube config file
  9. [Optional step for Lens] Getting Lens set up with our cluster is now as easy as clicking to add a new cluster and then selecting our config file. Lens should then connect and you'll be able to browse around the namespaces and contexts; we aren't running anything yet, so it'll be a little empty.
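If you'd rather not edit the server line by hand, it can also be patched with a one-liner. Shown here against a minimal stand-in for the copied kubeconfig; node1 is my primary's hostname, so swap in your own:

```shell
# A minimal stand-in for the copied kubeconfig (only the relevant
# server line matters here).
cat > /tmp/config <<'EOF'
clusters:
- cluster:
    server: https://127.0.0.1:6443
  name: default
EOF

# Point the config at the primary node instead of localhost.
sed -i 's|https://127.0.0.1:6443|https://node1:6443|' /tmp/config

# The server line now reads https://node1:6443
grep 'server:' /tmp/config
```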

Here's a link to a kubectl cheat sheet I often refer back to, since it's hard to remember all the commands: https://kubernetes.io/docs/reference/kubectl/cheatsheet/

Next steps

My next steps are to start moving over the API applications and their dependencies. I'll be doing this one at a time, but first I'll need to figure out which ones to move.

Wrapping up

If you've made it this far, then thank you! Hopefully this post will at least have been interesting. Do let me know how your cluster went, or if you have any general feedback or advice for my posts or my Pi cluster. My Twitter handle is @Mike_Horsley
