It currently also has a "swing" mode as I'm still learning all the parts I need for this:
Ok, this is a crude but effective way of making a PoC/MVP enclosure. It's far from what I had in mind for this, but USB cord tension is a major theme and I need to solidify the structure a lot more. This will do for the time being:
It's in its temporary position, let's turn it on!
It's alive! Well, only the body; there is no SD card in it yet as I first need to update the network provisioning for this VLAN. But now I have a node to experiment with that on, while the master node does all the being-a-k8s alone (:P)
If there is anything I've learned so far, it's that building a solid enclosure for the node and SSD is harder than it looks. Partially because I don't have all the required parts, since I'm ordering as I'm learning. So I already have a couple hundred parts, but every time
I order it's mainly parts I didn't have enough of the previous time. It's a slow process, but even though I'm ordering way more than I need per node, it means I have a whole bunch of parts I'll need anyway to build the enclosures for the other nodes.
And sometimes you get interruptions like this cute little kitten:
Dark shot of the cluster in progress. This thing will become a light show when done 🤣
Argh, one step forward, two steps backward. I really love the idea of #tinkerbell_oss and everything else that you can do with it. But I haven't even gotten around to getting the workflows to run and do their job. (It's also not #tinkerbell_oss that is to blame here, for the record!)
It's that the RPis make you jump through all kinds of hoops with PXE and net booting them. I'm probably better off building my own image that streams the k3os ISO to the SSD and kexec's into that, or something.
Because all I want is a fresh node when it comes up, no reuse of whatever was previously on that node. It's maybe not what you'd normally do for a "home lab" but I'd like it because there is no litter left behind.
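Roughly the shape I have in mind for that flow, purely as a sketch (the image URL and the kernel/initrd paths are placeholders, nothing here is the actual tooling yet):

```sh
# Purely a sketch of the flow, not a working script. Paths and the image URL are
# placeholders; assumes curl, dd and kexec-tools are present and the SSD is /dev/sda.
set -euo pipefail

K3OS_IMAGE_URL="https://example.invalid/k3os-arm64.img"   # placeholder

# Stream a fresh image straight onto the SSD, clobbering whatever was on the node.
curl -fsSL "$K3OS_IMAGE_URL" | dd of=/dev/sda bs=4M conv=fsync

# Then hand control to the freshly written install without a power cycle.
mount /dev/sda1 /mnt
kexec -l /mnt/vmlinuz --initrd=/mnt/initrd \
  --command-line="$(cat /mnt/cmdline.txt)"   # placeholder file names
kexec -e
```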
So my afternoon on this project started pretty well, with #k3os booting from SD card. Next step was booting it from the SSD. Should be easy, right?
So my afternoon looked a lot like this: /boot file or directory not found. There is a huge clue as to what's wrong in the block device name.
That p in sdap1 shouldn't be there when using an SSD over USB, but it has to be there when doing this from an SD card. The script I'm using has this somewhat hardcoded, and it took me long enough to realize that the "fix" pointed out in this issue: https://t.co/RWujfwXkFF
How to correctly fix init.resizefs when booting rpi from ssd · Issue #27 · sgielen/picl-k3os-image-generator

Solves it, and makes the whole thing boot and work without a hitch. Next up is making sure I'm using the latest #k3os version, as for some reason the script doesn't pick up the latest version as provided. (Or I can just let it upgrade itself to the latest version.)
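For anyone hitting the same thing, the gist of that device-naming difference is below; this is just a sketch of the logic, not the actual picl-k3os-image-generator code:

```sh
# Sketch of the naming rule the image script has to account for:
#   /dev/mmcblk0 -> /dev/mmcblk0p1   (SD card: partition names get a "p")
#   /dev/sda     -> /dev/sda1        (SSD over USB: no "p")
DEVICE="/dev/sda"
case "$DEVICE" in
  *mmcblk*|*nvme*) BOOT_PART="${DEVICE}p1" ;;
  *)               BOOT_PART="${DEVICE}1"  ;;
esac
echo "boot partition: ${BOOT_PART}"
```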
Had to disable a few features, but it's up and running!
There is nothing running on it yet obviously, but it is up and running:
That also means there are now two Kubernetes clusters up and running in our house
One of the things I wanted SSDs for is: A) SD cards wear out fast under high I/O, B) speed, and C) https://t.co/0nq5EhdEMv for persistent volumes. (With an S3-based backup/restore for real persistence.)
Longhorn: Cloud native distributed block storage for Kubernetes
One of the things I want to try, now that I know how that script works, is to hardcode sda in it and boot from the SD card when the SSD doesn't have an MBR. When booting from the SD card it will install k3os on the SSD, and since k3os supports scripts, I'm looking into removing the
MBR after it has booted from the SSD. That way, the next time it is powered on it will reinstall just as if it's a fresh node.
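The "remove the MBR" part itself should be tiny; a sketch of what I have in mind, assuming the SSD shows up as /dev/sda:

```sh
# Sketch: after the node has booted from the SSD, blank the first sector so the
# partition table (MBR) is gone. On the next power-on the Pi falls back to the
# SD card and runs the installer again, giving a fresh node. Assumes /dev/sda.
dd if=/dev/zero of=/dev/sda bs=512 count=1 conv=fsync

# Alternative: only remove the partition-table/filesystem signatures.
# wipefs --all /dev/sda
```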
This cluster will be a beacon of light in the darkness 🤣
And I combined both nodes into a single new cluster. Nothing on it yet, but I'll #terraform apply in the morning to load some of the basics onto it:
And yes, a bare #k3s/#k3os #kubernetes cluster looks really boring :D
Smile, you're a #Kubernetes cluster!
Installed https://t.co/0nq5EgW3UX just now (through terraform, through a GitHub Actions self-hosted runner on the cluster (yes, it's a bit meta)). And due to the number of pods (24!!!!), it took the cluster a while to download all OCI images, extract them, and spin the pods up
using default settings (so 3 replicas for most of the things).
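The Terraform side of that install is tiny; roughly this (a sketch with default chart values, which is where the replica count comes from, assuming the helm provider is already pointed at the cluster):

```sh
# Sketch of the Terraform behind it: one helm_release with default values.
cat > longhorn.tf <<'EOF'
resource "helm_release" "longhorn" {
  name             = "longhorn"
  repository       = "https://charts.longhorn.io"
  chart            = "longhorn"
  namespace        = "longhorn-system"
  create_namespace = true
}
EOF
terraform init && terraform apply
```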
Alright, so with the latest #k3os and #raspberrypi firmware the #PoE+ fans are kicking in. The downside: they are audible when they ramp up to cool, which happens pretty much every 1 - 20 seconds. Need to tweak that; they are running 10 RPM higher by default, I think
Oh, the yellow/green lines are the fans, and the blue/orange is the CPU temp on the nodes
3rd node is incoming soon.
What to name the third node (the theme is infinity stones):
Since the previous poll resulted in a tie, let's have round two a.k.a. the finals (the theme is still infinity stones):
The hardware for (what looks like) Reality is in 🎉
Waiting for a #LEGO Pick a Brick order with "some" parts for the nodes' housing. One of the major lessons from the last few days was that the USB <-> SATA adapter's #LED blinking during the night can affect our sleep. And I prefer a good night's rest, so I will attempt to build a housing
that leaks less light. And those parts were missing. (Also hoarding for future and current nodes once I've settled on a design.)
This is what 300 #LEGO pieces look like. Let the building begin!
Cluster part box before and after building this new node:
It was fun to do this #LEGO build for the 3rd time with all the new insights from the 2nd build. This is what became the SSD enclosure (with the human sleep improvement change (a.k.a. let's not leak #LED light)):
And this is the #LEGO enclosure for the compute part, the #raspberrypi with #PoE+ hat:
Combine these two #LEGO builds and you get a node:
Overview shot with the previous #LEGO iterations:
Aside from the need to block the SSD USB <-> SATA adapter's LED, another thing is sound from the PoE+ hat's cooling fan. It switches between 64 and 128 RPM a lot to cool the CPU off by a few degrees Celsius. Rather noisy, especially if you can hear it in the bedroom at night.
So one of the things I took time for today was to tweak when it switches from 64 to 128 RPM. It's currently set to 55 degrees Celsius, meaning it gets about 5 degrees hotter than when it would previously kick in, and instead of every few seconds it now only kicks in a handful of times an hour.
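For reference, the knob for this lives in the PoE HAT overlay parameters in the Pi's boot config; on a stock Raspberry Pi OS setup it would look roughly like this (thresholds in millidegrees Celsius; apart from the 55000, the values are illustrative rather than my exact config):

```sh
# Sketch: the PoE HAT fan thresholds are device-tree parameters in /boot/config.txt,
# in millidegrees Celsius. 55000 matches the 55 degrees mentioned above.
cat <<'EOF' | sudo tee -a /boot/config.txt
dtparam=poe_fan_temp0=55000
dtparam=poe_fan_temp0_hyst=5000
dtparam=poe_fan_temp1=60000
dtparam=poe_fan_temp1_hyst=5000
EOF
sudo reboot
```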

On the other side of that coin, I don't want it to get hot at all, because it's still held together by #LEGO. Which can handle a maximum of 80 degrees Celsius according to: https://t.co/bP2FxZX8hx

How much heat can LEGO bricks withstand?

But judging by the mention of polycarbonate for transparent bricks, those could be interesting as well, at least for the contact points with the Pi. *runs off to the Pick a Brick page*
Oh, another neat detail is that before letting this node join the cluster, I only had to turn it on once, to get the MAC address of the board. It booted straight from USB after that 😍
Resetting the cluster to apply the PoE+ hat fan speed changes, one SSD at a time. And yes, at this point I ran a USB cable from my desktop to the cluster, because taking the SSDs off would be a hassle:
Oh, before I forget, here is the article by #geerlingguy that taught me how to tweak the PoE+ fans: https://t.co/IorNUhLt0F
Taking control of the Pi PoE HAT's overly-aggressive fan | Jeff Geerling

#geerlingguy So far, so good (also installed Longhorn again since the cluster now has 3 nodes, hence the spike in temperature in the middle):
#geerlingguy After this wipe of the cluster was done and all nodes were back up, it took 15 minutes for #terraform to reprovision every service running on the cluster: https://t.co/cfnhZvXfAo
And the fan speed tweak really worked out. It gets a tad hotter, but no more annoying spin-ups of the cooling fan all the time:
Ok, another big milestone reached: deploying a project to both my #Kubernetes clusters at the same time. This one builds a VPN between both clusters:
Ok, this might look like exactly the same thing as the previous tweet. But this time, for my current #Kubernetes cluster, #Terraform crafted the kubeconfig that was used to do the deployment. Refs: https://t.co/CYBqnPqQFb
Referenced tweet: "TL;DR: Read the documentation next time"
Also, terraforming my current cluster will secure it more, plus make a lift-and-shift, or booting up a clone in another region, a lot simpler. (Application specifics excluded.)
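The pattern here (one way to do it, anyway) boils down to wiring the provider straight to the cluster's outputs instead of a kubeconfig file lying around on disk; a sketch with placeholder variables:

```sh
# Sketch: feed the kubernetes provider from Terraform variables/outputs rather than
# a local kubeconfig file. The variables here are placeholders; the real values
# come from wherever the cluster itself is defined.
cat > provider.tf <<'EOF'
provider "kubernetes" {
  host                   = var.cluster_endpoint
  cluster_ca_certificate = base64decode(var.cluster_ca_certificate)
  token                  = var.cluster_token
}
EOF
```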
Another few hours of #TerraForm, and #longhornio can now back up to #minio (self-hosted #s3 on my NAS):
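The wiring for that is mostly one secret plus two Longhorn settings; roughly like this (all names, endpoints and keys below are placeholders):

```sh
# Sketch: Longhorn backup target pointing at MinIO. Bucket, region, endpoint and
# credentials are placeholders.
kubectl -n longhorn-system create secret generic minio-backup-credentials \
  --from-literal=AWS_ACCESS_KEY_ID='minio-access-key' \
  --from-literal=AWS_SECRET_ACCESS_KEY='minio-secret-key' \
  --from-literal=AWS_ENDPOINTS='https://nas.example.lan:9000'

# Then, via the Longhorn settings (UI or Helm values):
#   Backup Target:                    s3://longhorn-backups@us-east-1/
#   Backup Target Credential Secret:  minio-backup-credentials
```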
Did the boring thing today and added #traefik as ingress, showing my default backend here using a global ingress:
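The "global ingress" bit is just a rule with no host, so it catches anything that no other ingress claims; a sketch (service name, namespace and ingress class are placeholders):

```sh
# Sketch: a host-less, catch-all Ingress so any request that matches nothing else
# lands on the default backend service.
kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: default-backend
  namespace: default
spec:
  ingressClassName: traefik
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: default-backend
                port:
                  number: 80
EOF
```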
#traefik I was hoping to use #soloio_inc's Gloo instead of #traefik, but no arm64 images being available by default made it, for now, an easy call to go with #traefik. (Nothing against #traefik FYI, I just want to experiment with Gloo more on this cluster.)
Another important milestone today. Started preparing to move an existing project over to use #RabbitMQ on the cluster instead of running on my #Synology NAS. Went all in and set up a 3-node cluster. Up next is configuring the ingress for AMQP.
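Since AMQP is plain TCP rather than HTTP, that ingress is a TCP route; with Traefik one way to do it is an IngressRouteTCP on its own entrypoint. A sketch (everything named here is a placeholder, and the amqp entrypoint still has to exist in Traefik's static config):

```sh
# Sketch: expose AMQP (plain TCP, not HTTP) through Traefik with an IngressRouteTCP.
kubectl apply -f - <<'EOF'
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRouteTCP
metadata:
  name: rabbitmq-amqp
  namespace: rabbitmq
spec:
  entryPoints:
    - amqp
  routes:
    - match: HostSNI(`*`)
      services:
        - name: rabbitmq
          port: 5672
EOF
```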
#RabbitMQ #Synology Been iterating over that thing, and have been duplicating traffic from the one running on my NAS to see how it works with more than one node: https://t.co/mDtpYwdG6n
Referenced tweet: "Working on #Terraform adding #RabbitMQ to my home cluster, while doing some OSS, and while playing some games. (TF takes 15 minutes for some reason per run because it has to swap out the RMQ nodes.)"
And so far #LongHornIO has been invaluable for (insights about) the persistent storage for each pod:
And one of the cats somehow managed to race through the cluster and take out the master node. (It's still running, but all network is down.)