NFS NAS with SSD cache in front of RAID5 x 4 notebook drives (spinning rust).

Update 2021.07.25: Added the nfs-ganesha Docker containers to GitHub and hub.docker.com.

Finished off the documentation for the Docker containers. These provide NFS server functionality to the Docker swarm for the arm32, arm64 and amd64 architectures. If I've forgotten anything, please ping me on Twitter @electricbrain2.

Links:

electricbraincontainers/nfs-ganesha-docker-swarm

electricbrain-code / docker-swarm-nfs-ganesha

Docker swarm nfs-ganesha server container. Super simple instant file sharing.

What is this? The build scripts build a series of three containers. The resulting containers can be combined into a single "manifest" (a manifest list). The manifest looks and behaves like a container image, with the added advantage that the Docker registry can tell which architecture is pulling it and serve the appropriate image. This matters if your cluster is multi-architecture.

The containers are built for ARM32, ARM64 and AMD64 CPUs.
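
For reference, this is roughly how a multi-architecture manifest list is assembled with the (experimental) docker manifest commands. A sketch only: the per-architecture tag names below are made up, and the repo's build scripts are the authoritative version:

#!/bin/bash

# Illustrative only: the per-architecture tags are hypothetical,
# not the actual tags produced by the build scripts.
IMAGE=electricbraincontainers/nfs-ganesha-docker-swarm

docker manifest create $IMAGE:3.5-u20-1.0 \
    $IMAGE:3.5-u20-1.0-arm32 \
    $IMAGE:3.5-u20-1.0-arm64 \
    $IMAGE:3.5-u20-1.0-amd64

# Tag each entry with its platform so the registry can serve the right image.
docker manifest annotate $IMAGE:3.5-u20-1.0 $IMAGE:3.5-u20-1.0-arm32 --os linux --arch arm --variant v7
docker manifest annotate $IMAGE:3.5-u20-1.0 $IMAGE:3.5-u20-1.0-arm64 --os linux --arch arm64
docker manifest annotate $IMAGE:3.5-u20-1.0 $IMAGE:3.5-u20-1.0-amd64 --os linux --arch amd64

docker manifest push $IMAGE:3.5-u20-1.0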

Running the container(s). To run this container directly, i.e. not inside the docker swarm under swarm control:

#!/bin/bash

# SYS_ADMIN and DAC_READ_SEARCH capabilities are needed by Ganesha;
# 2049 is the NFS port; SIGRTMIN+3 is the signal systemd uses to shut
# down, so the container's init can stop cleanly.
docker run \
    --detach \
    --cap-add SYS_ADMIN \
    --cap-add DAC_READ_SEARCH \
    --env "GANESHA_BOOTSTRAP_CONFIG=no" \
    --hostname "nfs-ganesha" \
    --memory 512m \
    --memory-swap 512m \
    --name "nfs-ganesha" \
    --publish 662:662/tcp \
    --publish 2049:2049/tcp \
    --publish 38465:38465/tcp \
    --publish 38466:38466/tcp \
    --publish 38467:38467/tcp \
    --restart unless-stopped \
    --tmpfs /tmp \
    --tmpfs /run/dbus \
    --stop-signal SIGRTMIN+3 \
    --volume /docker.local/nfs-ganesha/data/etc/ganesha:/etc/ganesha \
    --volume /docker.local/nfs-ganesha/data/mysharefolder:/data \
    electricbraincontainers/nfs-ganesha-docker-swarm:3.5-u20-1.0
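
Once the container is running, a client should be able to mount the export. A minimal sketch, assuming the export's pseudo path in the ganesha.conf under /etc/ganesha is /data and that the server resolves as nfs-ganesha.local (both are assumptions; check your own config and hostname):

#!/bin/bash

# nfs-ganesha.local and the /data pseudo path are assumptions --
# substitute your server's address and the Path/Pseudo from ganesha.conf.
sudo mkdir -p /mnt/nas
sudo mount -t nfs -o vers=4 nfs-ganesha.local:/data /mnt/nas
df -h /mnt/nas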


Main points:

  • It works.
  • Very fast.
  • The cache uses a write-behind (LVM "writeback") policy (how much trouble am I in?). Some testing shows that at startup the cache software checks whether the SSD cache and the spinning rust are in sync. If they are not (and I have no idea how it detects this), it sets about flushing the entire cache to the disks, presumably because it cannot tell which blocks are out of sync. This flushing took a long, long time; nevertheless everything stayed available and seemed quick enough while it ran. (A minimal sketch of the cache setup follows this list.)
    As a good friend pointed out, the cache must be persistent for this to work (an SSD is, so all is good; nevertheless my fingers are crossed).
  • NFS Ganesha docker container (userspace NFS server - replaces Samba in this setup)
  • Raspberry Pi 4 (4 GB model)
  • Sabrent USB 3 four-bay 2.5 inch HDD enclosure, ordered via Amazon AU from the USA. Couldn't find this in AU.
    The unit has what seems a massively oversized 12 V 4 A PSU. The PSU has a configurable power connector but only came with a US plug (more adapters).
    Possibly the most attractive piece of plastic of 2019. Brilliant plastic design.
  • Toshiba SSD (smaller than a credit card)
  • References:
    Red Hat: Creating LVM Cache Logical Volumes
    Kernel documentation: Device Mapper Cache
  • ext4 filesystems: read this: Forcing ext4lazyinit to finish its thing.
    Essentially ext4lazyinit runs in the background, touching the RAID every few seconds. This keeps the RAID drives annoyingly awake forever, defeating the purpose of 2.5 inch drives, which spin up really quickly (compared to the slow spin-up of the old 3.5 inch iron, which drives me crazy), not to mention wasting power. (See the mkfs example below this list.)
    https://superuser.com/questions/784606/forcing-ext4lazyinit-to-finish-its-thing
  • Fix fan noise: Added a sophisticated fan controller (a 100 ohm resistor). Perfect. *Essential*. The fan will probably last 10 times longer too.
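
For the curious, the cache layering referred to above looks roughly like this, following the Red Hat document in the references. A sketch only: the device names, volume group name and sizes are all made up; adapt them to your hardware:

#!/bin/bash

# Hypothetical device and volume names throughout.
# RAID5 across the four 2.5 inch drives:
mdadm --create /dev/md0 --level=5 --raid-devices=4 /dev/sd[b-e]

# One volume group spanning the RAID array and the SSD:
vgcreate nas /dev/md0 /dev/sda

# Origin LV on the RAID; cache data + metadata LVs on the SSD:
lvcreate -n data -l 100%PVS nas /dev/md0
lvcreate -n cache0 -L 100G nas /dev/sda
lvcreate -n cache0meta -L 1G nas /dev/sda

# Combine into a cache pool and attach it to the origin in writeback mode:
lvconvert --type cache-pool --poolmetadata nas/cache0meta nas/cache0
lvconvert --type cache --cachepool nas/cache0 --cachemode writeback nas/data

# Watch how much of the cache is dirty (i.e. pending flush to the RAID):
lvs -a -o name,cache_policy,cache_dirty_blocks nas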

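And the ext4lazyinit fix from the superuser link above, for when the filesystem is first created: initialize the inode tables and journal up front so there is no background lazy init to keep the drives awake. The device path is a placeholder:

#!/bin/bash

# Slower at mkfs time, but ext4lazyinit never runs afterwards.
# /dev/nas/data is a placeholder -- use your own logical volume.
mkfs.ext4 -E lazy_itable_init=0,lazy_journal_init=0 /dev/nas/data
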
Fix fan noise hack (note to self: add heatshrink over the resistor ends).
LOL - note the sticker: +RED -BLACK (note to the Chinese manufacturer: +WHITE -BLACK, maybe):