Why

As previously mentioned, I have successfully deployed Pi-hole on my Kubernetes cluster, which runs on three Raspberry Pi nodes. One concern is that the Pi-hole pod can be scheduled onto any node, so it needs to be able to access the same storage wherever it lands.

I explored Rook combined with Ceph, which offers a cloud-native distributed storage system. However, after spending a few hours researching and attempting to set it up on my cluster, I discovered that it demands a considerable amount of CPU/memory resources. While it could be a viable solution for transforming a Raspberry Pi cluster into a distributed storage system, I have other plans for my cluster and don’t intend to use it solely for storage.

Thus, my other option is NFS, which is supposedly slower but simpler to implement.

Set Up the NFS Server

I plan to move this to a NAS in the future, but since I don’t have one right now, I followed this guide to set up an NFS server on one of the Pi nodes.

Install nfs-kernel-server

sudo apt install nfs-kernel-server

Create a directory to be used, e.g. /nfs/export

sudo mkdir -p /nfs/export
sudo chown <RPi-user>:<RPi-user-group> /nfs/export

Find out the uid and gid of the <RPi-user>

id <RPi-user>
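
On a default Raspberry Pi OS install the first user usually gets uid 1000 and gid 1000, which is what the anonuid/anongid values in the export entry below assume. The output should look something like this (your values may differ):

uid=1000(<RPi-user>) gid=1000(<RPi-user>) groups=1000(<RPi-user>),27(sudo)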

Update the NFS access control file /etc/exports

/nfs/export 192.168.10.0/24(rw,async,no_subtree_check,all_squash,insecure,anonuid=1000,anongid=1000)

Check the guide for an explanation of the parameters

Finally, restart nfs-kernel-server

sudo systemctl restart nfs-kernel-server

In the end, there should be a path (say /nfs/export) that’s readable/writable for the <RPi-user> on the NFS server.
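
Optionally, confirm the share is actually exported before moving on. exportfs and showmount come with the NFS packages, so this can be run directly on the server:

sudo exportfs -v
showmount -e localhost

The entry from /etc/exports should show up in both outputs.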

Test the Setup

From another node, mount the path and try it out.
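
If the client node doesn’t have NFS support yet, install the client utilities and create a mount point first (the mount point here just matches the path used in the mount command below):

sudo apt install nfs-common
mkdir -p /home/<RPi-user>/nfs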

sudo mount -t nfs <NFS-server-IP>:/nfs/export /home/<RPi-user>/nfs/

You should be able to read and write any file/directory on the path. umount it once you’re done testing.
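
For example, a quick read/write check might look like this (the test file name is only an illustration):

touch /home/<RPi-user>/nfs/test.txt
ls -l /home/<RPi-user>/nfs/
sudo umount /home/<RPi-user>/nfs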

PersistentVolume and PersistentVolumeClaim

Now let’s create a PersistentVolume (PV) backed by that path and use it for Pi-hole

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv-pihole
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /nfs/export/pihole
    server: <NFS-server-IP>
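
Note that the PV points at a pihole subdirectory under the export, which NFS won’t create on its own, so create it on the server first with ownership matching the export’s anonuid/anongid. Each worker node also needs the NFS client utilities (the same nfs-common package used for the manual test above) so kubelet can mount the volume:

# on the NFS server
sudo mkdir -p /nfs/export/pihole
sudo chown <RPi-user>:<RPi-user-group> /nfs/export/pihole

# on every worker node
sudo apt install nfs-common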

Create a PersistentVolumeClaim (PVC) for the Pi-hole pod (in the same pihole namespace)

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pihole-pvc
  namespace: pihole
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Filesystem
  resources:
    requests:
      storage: 500Mi
  storageClassName: slow
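
Apply both manifests and check that the claim binds to the volume (the file names below are just how I’d save them):

kubectl apply -f nfs-pv-pihole.yaml
kubectl apply -f pihole-pvc.yaml
kubectl -n pihole get pv,pvc

Both should report a STATUS of Bound before moving on.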

Update the values.yaml used for installing the Pi-hole Helm chart with

persistentVolumeClaim:
  enabled: true
  existingClaim: pihole-pvc

And finally upgrade the Helm installation

helm upgrade pihole mojo2600/pihole -f values.yaml --namespace pihole
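
Once the upgrade rolls out, it’s worth checking that the pod came back up and is actually using the claim (the exact pod name will differ):

kubectl -n pihole get pods
kubectl -n pihole describe pod <pihole-pod-name> | grep -i claimname

The describe output should list pihole-pvc as the ClaimName under Volumes.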

That’s it.