Server: skydick (10.0.1.1), pool: dick
Network: 10GbE bonded (bond0), jumbo frames MTU 9000, subnet 10.0.0.0/16.
| Share | Server path | SMB name | NFS mount | Access |
|---|---|---|---|---|
| Public files | /srv/public | \\SKYDICK\public | /public | rw, all @storage users |
| Media library | /srv/media/library | \\SKYDICK\media | /media/library | ro, all @storage users |
| Personal files | /srv/users/<you>/files | \\SKYDICK\<you> | /users/<you> | rw, owner only |
NFS paths are relative to the NFSv4 pseudo-root (/srv on the server, exported with fsid=0).
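On the server side, this pseudo-root layout corresponds to an export table along these lines. This is a sketch only: the authoritative configuration lives in NixOS (`services.nfs.server.exports`), and the option sets shown here are assumed from the per-user export example later in this document.

```
/srv                10.0.0.0/16(ro,fsid=0,no_subtree_check)
/srv/public         10.0.0.0/16(rw,sync,no_subtree_check)
/srv/media/library  10.0.0.0/16(ro,sync,no_subtree_check)
```

With `fsid=0` on /srv, a client mounting `10.0.1.1:/public` resolves to `/srv/public` on the server.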
On Windows, open File Explorer and type one of the following in the address bar:
\\10.0.1.1\public
\\10.0.1.1\media
\\10.0.1.1\<your-username>
When prompted, enter your Samba credentials (set by the admin on skydick with smbpasswd -a <user>).
Finder → Go → Connect to Server (Cmd+K):
smb://10.0.1.1/public
smb://10.0.1.1/media
smb://10.0.1.1/<your-username>
Nautilus/Dolphin/Thunar address bar:
smb://10.0.1.1/public
smb://10.0.1.1/<your-username>
# One-off mount
sudo mount -t cifs //10.0.1.1/public /mnt/public -o username=<you>,uid=$(id -u),gid=$(id -g)

# /etc/fstab (persistent) — store password in /root/.smbcredentials (chmod 600)
//10.0.1.1/public /mnt/public cifs credentials=/root/.smbcredentials,uid=1000,gid=100,_netdev 0 0
/root/.smbcredentials:
username=<you>
password=<your-smb-password>
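The credentials file must never be world-readable, or the password leaks to every local user. A quick sketch of creating it safely (the username `alice` and the password are placeholders; on a real client write to /root/.smbcredentials rather than the working directory):

```shell
# Create the file with mode 600 from the start, so the password
# is never world-readable even briefly.
install -m 600 /dev/null smbcredentials      # real path: /root/.smbcredentials
printf 'username=%s\npassword=%s\n' alice 'example-password' > smbcredentials
ls -l smbcredentials                         # -rw------- ...
```

Creating the file before writing the password avoids the window where a default-umask file would briefly be readable by others.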
NFS uses NFSv4 with a pseudo-root at /srv. Mount paths omit /srv.
# Public shared files
sudo mount -t nfs4 10.0.1.1:/public /mnt/public

# Media library (read-only)
sudo mount -t nfs4 10.0.1.1:/media/library /mnt/media

# Your private tree (all writes become your UID via all_squash)
sudo mount -t nfs4 10.0.1.1:/users/<you> /mnt/skydick-home
10.0.1.1:/public /mnt/public nfs4 rw,hard,_netdev 0 0
10.0.1.1:/media/library /mnt/media nfs4 ro,hard,_netdev 0 0
10.0.1.1:/users/<you> /mnt/skydick-home nfs4 rw,hard,_netdev 0 0
For large transfers on 10GbE with jumbo frames, add NFS mount options:
rsize=1048576,wsize=1048576,nconnect=16
Example:
10.0.1.1:/users/ldx /mnt/skydick nfs4 rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev 0 0
Public files (/srv/public)
Collaborative shared space. All users in the storage group can read and write. New files inherit the storage group via the setgid bit (mode 2775).
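The setgid inheritance can be sketched on any scratch directory (the directory name here is arbitrary; on skydick the group would be storage):

```shell
# Mode 2775: the leading 2 is the setgid bit, which makes new files
# created inside the directory inherit the directory's group.
mkdir -p shared-demo
chmod 2775 shared-demo
stat -c '%a' shared-demo          # 2775
touch shared-demo/newfile         # group matches shared-demo's group
stat -c '%G' shared-demo/newfile
```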
- @storage group access
- root_squash export (root maps to nobody, normal UIDs pass through)

Media library (/srv/media/library)
Read-only organized media (movies, TV, music). Managed by the automation stack (qBittorrent + Sonarr/Radarr/Lidarr). Users consume but do not write here.
- @storage group access, read-only via the /media/library export

The full /srv/media dataset (including /srv/media/data with the raw torrent payload) is only writable by the qbittorrent service account (UID 900). Hardlinks between data/ and library/ work because they are directories on the same ZFS dataset.
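Hardlinks only work within a single filesystem, which is why data/ and library/ must share one dataset. A minimal sketch of the mechanism using local scratch files (not the real media tree):

```shell
mkdir -p data library
echo 'payload' > data/episode.mkv
# Hardlink: a second directory entry for the same inode, consuming no extra space.
ln data/episode.mkv library/episode.mkv
stat -c '%i' data/episode.mkv library/episode.mkv   # both print the same inode number
stat -c '%h' data/episode.mkv                       # link count is now 2
```

Deleting either name leaves the data intact under the other, which is what lets the arr stack remove torrents from data/ without touching the library.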
Personal files (/srv/users/<you>/files)
Private per-user storage. Only you can access your tree.

- SMB: [homes] share; connect as \\SKYDICK\<you> and authenticate with your Samba password
- NFS: /users/<you> export with all_squash, mapping all operations to your UID/GID

Your NFS export maps every client UID to your server-side UID. This means any process on any host in 10.0.0.0/16 that mounts your export will write as you. NFS does not authenticate: it trusts the network. For stronger isolation, use SMB (which requires a password).
/srv/users/<you>/
├── files/ ← personal files (SMB [homes] points here)
├── bt-state/ ← private torrent/arr client state
│ ├── watch/ ← .torrent watch directory
│ ├── session/ ← client session/resume data
│ └── config/ ← client configuration
└── vm/
└── files/ ← VM disk images (file-backed, accessible via NFS/SMB)
bt-state holds your torrent client's configuration and state databases. The actual media payload lives on the shared dick/media dataset, not here. This separation means your client state counts against your personal quota, while the payload itself is managed (and accounted for) at the service level.
VM zvols (block devices for iSCSI) are created as ZFS children of dick/users/<you>/vm/<name>
and are managed by the admin. They are not visible in the filesystem tree.
Admin procedure — run on skydick as root:
In hosts/skydick/default.nix:
users.users.<newuser> = {
extraGroups = [ "storage" ];
hashedPassword = "<hash>"; # mkpasswd -m yescrypt
};
In modules/users.nix (if the user needs SSH/sudo access across all hosts):
users.users.<newuser> = {
isNormalUser = true;
extraGroups = [ "wheel" ];
openssh.authorizedKeys.keys = [ "ssh-ed25519 ..." ];
};
In hosts/skydick/datapool.nix, add to systemd.tmpfiles.rules:
"d /srv/users/<newuser> 0700 <newuser> users -"
"d /srv/users/<newuser>/files 0750 <newuser> users -"
"d /srv/users/<newuser>/bt-state 0750 <newuser> users -"
"d /srv/users/<newuser>/vm 0750 <newuser> users -"
"d /srv/users/<newuser>/vm/files 0750 <newuser> users -"
Add to services.nfs.server.exports:
/srv/users/<newuser> 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=<UID>,anongid=100)
Replace <UID> with the user's actual numeric UID (id -u <newuser> after first deploy).
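Because the export line cannot reference a username, the numeric UID has to be substituted in by hand. A small sketch of generating the final line (`alice` is a placeholder user; on skydick you would run `id -u "$user"` after the first deploy rather than `id -u` for the current user):

```shell
# Substitute the new user's numeric UID into the export line.
user=alice
uid=$(id -u)    # on skydick: uid=$(id -u "$user")
printf '/srv/users/%s 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=%s,anongid=100)\n' \
    "$user" "$uid"
```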
sudo git -C /etc/nixos pull && sudo nixos-rebuild switch --flake /etc/nixos
# Get the user's UID
uid=$(id -u <newuser>)

# Create datasets
zfs create -o mountpoint=/srv/users/<newuser> -o quota=10T dick/users/<newuser>
zfs create -o recordsize=128K -o mountpoint=/srv/users/<newuser>/files dick/users/<newuser>/files
zfs create -o recordsize=16K -o mountpoint=/srv/users/<newuser>/bt-state dick/users/<newuser>/bt-state
zfs create -o recordsize=64K -o mountpoint=/srv/users/<newuser>/vm dick/users/<newuser>/vm
mkdir -p /srv/users/<newuser>/vm/files

# Set ownership
chown <newuser>:users /srv/users/<newuser> && chmod 0700 /srv/users/<newuser>
for d in files bt-state vm vm/files; do
  chown <newuser>:users /srv/users/<newuser>/$d && chmod 0750 /srv/users/<newuser>/$d
done
smbpasswd -a <newuser>
If the new export is not picked up automatically, refresh the export table:
exportfs -ra
The user can now connect via SMB and NFS.
Each user has a ZFS quota on their dick/users/<user> dataset (default 10TB). This caps the
total across all child datasets (files + bt-state + vm). Check usage:
zfs list -o name,used,quota -r dick/users
Adjust quota:
zfs set quota=20T dick/users/<user>
The shared dick/media dataset has no per-user quota — it is managed at the service level.
Check pool health:
zpool status dick
Check dataset usage:
zfs list -o name,used,avail,refer,quota -r dick
Check NFS exports:
exportfs -v
Check active NFS clients:
ss -tn | grep :2049
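To turn that into a list of distinct client hosts, the peer column can be stripped of its port and de-duplicated. A sketch using captured sample output (the sample lines below are made up stand-ins for real `ss -tn` output):

```shell
# Extract unique client IPs from `ss -tn` lines for port 2049.
# The sample stands in for: ss -tn | grep :2049
sample='ESTAB 0 0 10.0.1.1:2049 10.0.3.7:867
ESTAB 0 0 10.0.1.1:2049 10.0.3.9:901
ESTAB 0 0 10.0.1.1:2049 10.0.3.7:912'
# Field 5 is the peer address; drop the :port suffix, then de-duplicate.
printf '%s\n' "$sample" | awk '{sub(/:[0-9]+$/, "", $5); print $5}' | sort -u
```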
- Check your interface and address: ip addr
- List exports visible from the server: showmount -e 10.0.1.1
- all_squash means your local UID does not matter; everything maps to your server-side UID
- Access to the public and media shares requires membership in the storage group
- Check your usage: ssh skydick zfs list -r dick/users/<you>
- Run smbpasswd -a <user> on skydick to create/reset the Samba password
- Add nconnect=16 to mount options for parallel NFS connections
- Add rsize=1048576,wsize=1048576 for 1MB read/write blocks
- Verify link speed: ethtool <interface> | grep Speed

The arr stack (Sonarr/Radarr/Lidarr) hardlinks files from /srv/media/data/ to /srv/media/library/. If a file exists in data/ but not in library/, the arr import/rename has not run yet. Do not manually copy files into library/; let the automation stack manage it to preserve hardlinks and metadata.