Why
As previously mentioned, I have Pi-hole deployed on my Kubernetes cluster, which runs on three Raspberry Pi nodes. One concern is that the Pi-hole pod can be scheduled on any node, so it needs access to the same storage no matter where it lands.
I explored Rook combined with Ceph, which offers a cloud-native distributed storage system. However, after spending a few hours researching and attempting to set it up on my cluster, I discovered that it demands a considerable amount of CPU/memory resources. While it could be a viable solution for transforming a Raspberry Pi cluster into a distributed storage system, I have other plans for my cluster and don’t intend to use it solely for storage.
Thus, my other option is NFS, which is supposedly slower but simpler to implement.
Set Up an NFS Server
I plan to move the storage to a NAS eventually, but since I don't have one right now, I followed this guide to set up an NFS server on one of the Pi nodes.
Install nfs-kernel-server
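Assuming Raspberry Pi OS (Debian-based), something like:

```bash
sudo apt update
sudo apt install -y nfs-kernel-server
```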
Create a directory to be used, e.g. /nfs/export
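A sketch, using /nfs/export as in the rest of this post and handing ownership to the Pi user (`<RPi-user>` is a placeholder):

```bash
sudo mkdir -p /nfs/export
sudo chown <RPi-user>:<RPi-user> /nfs/export
```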
Find out the uid and gid of the <RPi-user>
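The id command prints both; 1000 is just the typical default for the first user on Raspberry Pi OS:

```bash
id <RPi-user>
# e.g. uid=1000(<RPi-user>) gid=1000(<RPi-user>) groups=1000(<RPi-user>),...
```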
Update NFS access control file /etc/exports
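A sketch of an export line; the subnet, anonuid, and anongid here are assumptions — substitute your LAN range and the values from id above:

```
# /etc/exports
/nfs/export 192.168.1.0/24(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=1000)
```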
Check the guide for the explanation of the parameters
Finally, restart nfs-kernel-server
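For example, via systemd (exportfs -ra also re-reads /etc/exports without a full restart):

```bash
sudo systemctl restart nfs-kernel-server
# or, to re-export without restarting the service:
sudo exportfs -ra
```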
In the end, there should be a path (say /nfs/export) that's readable/writable for the <RPi-user> on the NFS server.
Test the Setup
From another node, mount the path and try it out.
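A sketch from a client node; `<nfs-server-ip>` is a placeholder for the NFS server's address, and nfs-common provides the NFS mount helper on Debian-based systems:

```bash
sudo apt install -y nfs-common
sudo mkdir -p /mnt/nfs-test
sudo mount -t nfs <nfs-server-ip>:/nfs/export /mnt/nfs-test
```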
You should be able to read and write any file/directory on the path. umount the path once you're done testing.
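For instance:

```bash
touch /mnt/nfs-test/hello.txt   # should succeed as a regular user
ls -l /mnt/nfs-test
sudo umount /mnt/nfs-test
```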
PersistentVolume and PersistentVolumeClaim
Now let's create a PersistentVolume (PV) on the path and use it for the Pi-hole deployment.
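A minimal sketch of the PV — the name pihole-pv, the 500Mi size, the nfs storage class name, and `<nfs-server-ip>` are all assumptions to adapt:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pihole-pv
spec:
  capacity:
    storage: 500Mi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: nfs
  nfs:
    server: <nfs-server-ip>
    path: /nfs/export
```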
Create a PersistentVolumeClaim (PVC) for the Pi-hole pod (in the same pihole namespace)
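A matching PVC sketch — the claim has to agree with the PV's storageClassName, access mode, and size (the names here are assumptions):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-pvc
  namespace: pihole
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: nfs
  volumeName: pihole-pv
  resources:
    requests:
      storage: 500Mi
```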
Update the values.yaml used for installing the Pi-hole Helm chart with
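A sketch assuming the widely used mojo2600/pihole chart, whose persistence settings look roughly like this — check your chart's default values.yaml for the exact keys:

```yaml
persistentVolumeClaim:
  enabled: true
  existingClaim: pihole-pvc
```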
And finally upgrade the Helm installation
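Something along these lines — the release name, chart reference, and namespace should match your original install:

```bash
helm upgrade pihole mojo2600/pihole --namespace pihole -f values.yaml
```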
That’s it.