Enabling Talos system extensions
I wanted to deploy Longhorn on a Talos cluster, but didn’t realise at the time that I hadn’t included the required extensions:
- siderolabs/iscsi-tools
- siderolabs/util-linux-tools
After a bit of struggling, I found the fix on Reddit lol… create a patch file:
# patch.yaml
customization:
  systemExtensions:
    officialExtensions:
      - siderolabs/iscsi-tools
      - siderolabs/util-linux-tools
Use this patch file in a POST request to the Talos Image Factory to get the schematic ID for an image with these extensions:
curl -X POST --data-binary @patch.yaml https://factory.talos.dev/schematics
# OUTPUT:
{"id":"613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245"}
Then upgrade the Talos cluster using this new schematic (make sure the image tag matches the Talos version that is currently running - in my case 1.11.5):
talosctl upgrade --preserve --image factory.talos.dev/installer/613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245:v1.11.5 --nodes=192.168.0.5,192.168.0.6,192.168.0.7,192.168.0.8
# OUTPUT:
watching nodes: [192.168.0.5 192.168.0.6 192.168.0.7 192.168.0.8]
* 192.168.0.5: post check passed
* 192.168.0.6: post check passed
* 192.168.0.7: post check passed
* 192.168.0.8: post check passed
After the upgrade, confirm that the extensions are actually present on each node:
talosctl get extensions -n 192.168.0.5
# OUTPUT:
NODE          NAMESPACE   TYPE              ID   VERSION   NAME               VERSION
192.168.0.5   runtime     ExtensionStatus   0    1         iscsi-tools        v0.2.0
192.168.0.5   runtime     ExtensionStatus   1    1         util-linux-tools   2.41.1
192.168.0.5   runtime     ExtensionStatus   2    1         schematic          613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245
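As far as I can tell, talosctl upgrade doesn’t update the installer image stored in the machine config, so if you rely on machine.install.image (for example for future reinstalls) it may be worth pointing it at the same factory image. A sketch of such a patch - the file name is just illustrative:
# install-image.yaml
machine:
  install:
    image: factory.talos.dev/installer/613e1592b2da41ae5e265e8789429f22e121aab91cb4deb6bc3c0b6262961245:v1.11.5
It can be applied with something like talosctl patch machineconfig --nodes 192.168.0.5 --patch @install-image.yaml.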
Migrating data between PVCs
Let’s say I have a pod using a PVC that is bound to a PV backed by NFS. At some point I want to move this pod to a quicker backend like Longhorn, without losing any data in the process. One option is to use a temporary Pod to replicate the data into the new PVC. Here are the steps, along with an example of migrating the storage of a Gitea Actions runner:
- create a new PV using the new storage backend + a new PVC that binds to it
The storage backend of the new PVC is Longhorn:
# pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitea-run-1-pvc
  namespace: dev-stuff
spec:
  storageClassName: longhorn-static
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
Apply the manifest and check that the PVC is in the “Bound” state:
kubectl apply -f pvc.yaml
# OUTPUT:
persistentvolumeclaim/gitea-run-1-pvc created
kubectl get pvc -n dev-stuff
# OUTPUT:
gitea-run-1-pvc Bound pvc-17af2514-7307-4d74-b343-33c31607ad12 5Gi RWO longhorn-static <unset> 56s
- scale down the Deployment to 0 replicas
kubectl scale deployment -n dev-stuff gitea-runner-1 --replicas=0
# OUTPUT:
deployment.apps/gitea-runner-1 scaled
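Before copying anything, it’s worth double-checking that the runner pod has actually terminated, so nothing is still writing to the source volume during the copy:
kubectl get pods -n dev-stuff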
- create a temporary Deployment whose Pod mounts both the old and the new PVC
The image of choice is “busybox”, but any other image that ships the basic Linux utilities will do:
# temp.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: data-mover
  name: data-mover
  namespace: dev-stuff
spec:
  replicas: 1
  selector:
    matchLabels:
      app: data-mover
  template:
    metadata:
      labels:
        app: data-mover
    spec:
      containers:
        - args:
            - -c
            - while true; do ping localhost; sleep 60; done
          command:
            - /bin/sh
          image: busybox:latest
          name: data-mover
          volumeMounts:
            - mountPath: /source
              name: source
            - mountPath: /destination
              name: destination
      restartPolicy: Always
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: gitea-runner-1-pvc
        - name: destination
          persistentVolumeClaim:
            claimName: gitea-run-1-pvc
Apply the manifest and check that the pod is running:
kubectl apply -f temp.yaml
kubectl get pods -n dev-stuff
# OUTPUT:
NAME                          READY   STATUS    RESTARTS   AGE
data-mover-5ff6cfcbfc-9cd8f   1/1     Running   0          31s
- copy all the data across
Exec into the newly created pod and copy the contents of “/source” into “/destination” (cp -a keeps permissions and also picks up hidden files):
kubectl exec -it -n dev-stuff data-mover-5ff6cfcbfc-9cd8f -- sh
cp -a /source/. /destination/
exit
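Before tearing the data-mover down, a quick sanity check that the copy looks complete doesn’t hurt - for example, comparing the size of the two mounts (busybox’s du supports -s and -h):
kubectl exec -n dev-stuff data-mover-5ff6cfcbfc-9cd8f -- du -sh /source /destination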
- remove the “data-mover” deployment
kubectl delete -f temp.yaml
- modify the Deployment to mount the new PVC
kubectl edit deployment -n dev-stuff gitea-runner-1
# Change the line that references the PVC to use the new one:
      volumes:
        - name: runner-data
          persistentVolumeClaim:
            claimName: gitea-run-1-pvc # This was previously "gitea-runner-1-pvc"
# Then save and close. The manifest should be applied with the new values
# OUTPUT:
deployment.apps/gitea-runner-1 edited
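If you’d rather skip the interactive editor, the same change can be done with a JSON patch - this assumes “runner-data” is the first entry in the volumes list, so adjust the index if yours differs:
kubectl patch deployment -n dev-stuff gitea-runner-1 --type=json -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/persistentVolumeClaim/claimName", "value": "gitea-run-1-pvc"}]'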
- scale up the Deployment and check that everything works as expected
kubectl scale deployment -n dev-stuff gitea-runner-1 --replicas=1
# OUTPUT:
deployment.apps/gitea-runner-1 scaled
kubectl get pods -n dev-stuff
# OUTPUT:
NAME                              READY   STATUS    RESTARTS   AGE
gitea-runner-1-754f74b9c4-vlqrf   1/1     Running   0          90s
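Once the runner is confirmed healthy with its data on Longhorn, the old NFS-backed claim can be removed (whether the underlying PV goes with it depends on its reclaim policy):
kubectl delete pvc -n dev-stuff gitea-runner-1-pvc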