
Skydick Data Pool — User & Service Guide

Server: skydick (10.0.1.1), pool: dick

Network: 10GbE bonded (bond0), jumbo frames MTU 9000, subnet 10.0.0.0/16.

Available shares

Share            Server path              SMB name           NFS mount        Access
Public files     /srv/public              \\SKYDICK\public   /public          rw, all @storage users
Media library    /srv/media/library       \\SKYDICK\media    /media/library   ro, all @storage users
Personal files   /srv/users/<you>/files   \\SKYDICK\<you>    /users/<you>     rw, owner only

NFS paths are relative to the NFSv4 pseudo-root (/srv on the server, exported with fsid=0).
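
For orientation, the server-side exports behind this look roughly like the following (a sketch only; the authoritative options live in hosts/skydick/datapool.nix and may differ in detail):

```
/srv                 10.0.0.0/16(ro,fsid=0,no_subtree_check)
/srv/public          10.0.0.0/16(rw,sync,no_subtree_check,root_squash)
/srv/media/library   10.0.0.0/16(ro,sync,no_subtree_check)
```

With fsid=0 on /srv, clients mount paths relative to that pseudo-root, which is why the table above lists /public rather than /srv/public.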

Connecting via SMB (Windows / macOS / Linux GUI)

Windows

Open File Explorer and type into the address bar:

\\10.0.1.1\public
\\10.0.1.1\media
\\10.0.1.1\<your-username>

When prompted, enter your Samba credentials (set by the admin on skydick with smbpasswd -a <user>).

macOS

Finder → Go → Connect to Server (Cmd+K):

smb://10.0.1.1/public
smb://10.0.1.1/media
smb://10.0.1.1/<your-username>

Linux (GUI)

Nautilus/Dolphin/Thunar address bar:

smb://10.0.1.1/public
smb://10.0.1.1/<your-username>

Linux (CLI / fstab)

# One-off mount
sudo mount -t cifs //10.0.1.1/public /mnt/public -o username=<you>,uid=$(id -u),gid=$(id -g)

# /etc/fstab (persistent) — store password in /root/.smbcredentials (chmod 600)
//10.0.1.1/public  /mnt/public  cifs  credentials=/root/.smbcredentials,uid=1000,gid=100,_netdev  0  0

/root/.smbcredentials:

username=<you>
password=<your-smb-password>

Connecting via NFS (Linux)

NFS uses NFSv4 with a pseudo-root at /srv. Mount paths omit /srv.

One-off mount

# Public shared files
sudo mount -t nfs4 10.0.1.1:/public /mnt/public

# Media library (read-only)
sudo mount -t nfs4 10.0.1.1:/media/library /mnt/media

# Your private tree (all writes become your UID via all_squash)
sudo mount -t nfs4 10.0.1.1:/users/<you> /mnt/skydick-home

Persistent mount (/etc/fstab)

10.0.1.1:/public         /mnt/public        nfs4  rw,hard,_netdev                         0  0
10.0.1.1:/media/library  /mnt/media         nfs4  ro,hard,_netdev                         0  0
10.0.1.1:/users/<you>    /mnt/skydick-home  nfs4  rw,hard,_netdev                         0  0

Performance tuning (10GbE)

For large transfers on 10GbE with jumbo frames, add NFS mount options:

rsize=1048576,wsize=1048576,nconnect=16

Example:

10.0.1.1:/users/ldx  /mnt/skydick  nfs4  rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev  0  0

Share details

Public (/srv/public)

Collaborative shared space. All users in the storage group can read and write. New files
inherit group storage via setgid (mode 2775).

  • SMB: read-write for @storage
  • NFS: read-write with root_squash (root maps to nobody, normal UIDs pass through)
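
The setgid behavior can be demonstrated on any Linux filesystem. A throwaway sketch under /tmp (the paths here are illustrative, not the real share):

```shell
# Demonstrate mode 2775: the leading 2 is the setgid bit.
mkdir -p /tmp/setgid-demo
chmod 2775 /tmp/setgid-demo
stat -c '%a' /tmp/setgid-demo          # prints 2775

# Any file created inside inherits the directory's group
# (on /srv/public that group is 'storage'):
touch /tmp/setgid-demo/newfile
stat -c '%G' /tmp/setgid-demo/newfile  # same group as the directory itself
```

This is why collaborators can edit each other's files in /srv/public: every new file lands in the storage group regardless of the creator's primary group.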

Media library (/srv/media/library)

Read-only organized media (movies, TV, music). Managed by the automation stack (qBittorrent +
Sonarr/Radarr/Lidarr). Users consume but do not write here.

  • SMB: read-only for @storage
  • NFS: read-only via /media/library export

The full /srv/media dataset (including /srv/media/data with raw torrent payload) is only
writable by the qbittorrent service account (UID 900). Hardlinks between data/ and
library/ work because they are directories on the same ZFS dataset.
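
The same-filesystem constraint can be sketched anywhere: a hardlink is a second directory entry for one inode, so it only works when both names live on one filesystem (illustrative paths under /tmp, mimicking the data/ and library/ layout):

```shell
# Two directories on the same filesystem, standing in for data/ and library/.
mkdir -p /tmp/hl-demo/data /tmp/hl-demo/library
echo payload > /tmp/hl-demo/data/file.mkv

# Hardlink instead of copy: no extra blocks are consumed.
ln /tmp/hl-demo/data/file.mkv /tmp/hl-demo/library/file.mkv
stat -c '%h' /tmp/hl-demo/data/file.mkv   # link count is now 2
```

Both names reference the same inode; deleting one leaves the other intact. Across two ZFS datasets the ln call would fail with EXDEV, which is why data/ and library/ are plain directories on one dataset.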

Personal files (/srv/users/<you>/files)

Private per-user storage. Only you can access your tree.

  • SMB: Samba [homes] share — connect as \\SKYDICK\<you>, authenticates with your Samba password
  • NFS: /users/<you> export with all_squash mapping all operations to your UID/GID

Your NFS export maps every client UID to your server-side UID. This means any process on any
host in 10.0.0.0/16 that mounts your export will write as you. NFS does not authenticate —
it trusts the network. For stronger isolation, use SMB (which requires a password).

Per-user subtree layout

/srv/users/<you>/
├── files/          ← personal files (SMB [homes] points here)
├── bt-state/       ← private torrent/arr client state
│   ├── watch/      ← .torrent watch directory
│   ├── session/    ← client session/resume data
│   └── config/     ← client configuration
└── vm/
    └── files/      ← VM disk images (file-backed, accessible via NFS/SMB)

bt-state holds your torrent client's configuration and state databases. The actual media
payload lives on the shared dick/media dataset, not here. This separation means:

  • No duplicate media storage across users
  • Your client state is private and independent
  • The shared media tree has one writer (the automation stack)

VM zvols (block devices for iSCSI) are created as ZFS children of dick/users/<you>/vm/<name>
and are managed by the admin. They are not visible in the filesystem tree.
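
For orientation, zvol provisioning by the admin might look like this (a hedged sketch; the size and volblocksize values are illustrative, not the ones actually used):

```shell
# Create a block volume as a child of the user's vm dataset (admin-only, on skydick).
zfs create -V 100G -o volblocksize=16K dick/users/<you>/vm/<name>

# The zvol appears as a device node, ready to back an iSCSI LUN;
# it does not show up anywhere under /srv/users/<you>/.
ls /dev/zvol/dick/users/<you>/vm/
```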

Adding a new user

Admin procedure — run on skydick as root:

1. Add the user to NixOS config

In hosts/skydick/default.nix:

users.users.<newuser> = {
  extraGroups = [ "storage" ];
  hashedPassword = "<hash>";  # mkpasswd -m yescrypt
};

In modules/users.nix (if the user needs SSH/sudo access across all hosts):

users.users.<newuser> = {
  isNormalUser = true;
  extraGroups = [ "wheel" ];
  openssh.authorizedKeys.keys = [ "ssh-ed25519 ..." ];
};

2. Add per-user tmpfiles and NFS export

In hosts/skydick/datapool.nix, add to systemd.tmpfiles.rules:

"d /srv/users/<newuser> 0700 <newuser> users -"
"d /srv/users/<newuser>/files 0750 <newuser> users -"
"d /srv/users/<newuser>/bt-state 0750 <newuser> users -"
"d /srv/users/<newuser>/vm 0750 <newuser> users -"
"d /srv/users/<newuser>/vm/files 0750 <newuser> users -"

Add to services.nfs.server.exports:

/srv/users/<newuser>  10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=<UID>,anongid=100)

Replace <UID> with the user's actual numeric UID (id -u <newuser> after first deploy).

3. Deploy NixOS config

sudo git -C /etc/nixos pull && sudo nixos-rebuild switch --flake /etc/nixos

4. Create ZFS datasets on skydick

# Get the user's UID
uid=$(id -u <newuser>)

# Create datasets
zfs create -o mountpoint=/srv/users/<newuser> -o quota=10T               dick/users/<newuser>
zfs create -o recordsize=128K -o mountpoint=/srv/users/<newuser>/files   dick/users/<newuser>/files
zfs create -o recordsize=16K  -o mountpoint=/srv/users/<newuser>/bt-state dick/users/<newuser>/bt-state
zfs create -o recordsize=64K  -o mountpoint=/srv/users/<newuser>/vm      dick/users/<newuser>/vm
mkdir -p /srv/users/<newuser>/vm/files

# Set ownership
chown <newuser>:users /srv/users/<newuser> && chmod 0700 /srv/users/<newuser>
for d in files bt-state vm vm/files; do
  chown <newuser>:users /srv/users/<newuser>/$d && chmod 0750 /srv/users/<newuser>/$d
done

5. Set Samba password

smbpasswd -a <newuser>

6. Re-export NFS

exportfs -ra

The user can now connect via SMB and NFS.

Quotas

Each user has a ZFS quota on their dick/users/<user> dataset (default 10TB). This caps the
total across all child datasets (files + bt-state + vm). Check usage:

zfs list -o name,used,quota -r dick/users

Adjust quota:

zfs set quota=20T dick/users/<user>

The shared dick/media dataset has no per-user quota — it is managed at the service level.

Monitoring

Check pool health:

zpool status dick

Check dataset usage:

zfs list -o name,used,avail,refer,quota -r dick

Check NFS exports:

exportfs -v

Check active NFS clients:

ss -tn | grep :2049

Troubleshooting

"Permission denied" on NFS mount

  • Verify your IP is in 10.0.0.0/16: ip addr
  • Check the export exists: showmount -e 10.0.1.1
  • Per-user exports use all_squash — your local UID does not matter, everything maps to
    the server-side owner UID

"Permission denied" writing to NFS

  • Public: your server-side UID must be in the storage group
  • Media: read-only for all users (write is via the automation service account only)
  • Personal: should always work — if not, check that the ZFS datasets are mounted:
    ssh skydick zfs list -r dick/users/<you>

SMB authentication fails

  • Samba uses its own password database (tdbsam), separate from Unix login passwords
  • Admin must run smbpasswd -a <user> on skydick to create/reset the Samba password
  • LDAP-backed Samba auth is not yet configured

Slow NFS transfers

  • Ensure MTU 9000 (jumbo frames) is set on both client and server interfaces
  • Add nconnect=16 to mount options for parallel NFS connections
  • Add rsize=1048576,wsize=1048576 for 1MB read/write blocks
  • Check link speed: ethtool <interface> | grep Speed

Files not showing up in media library

The arr stack (Sonarr/Radarr/Lidarr) hardlinks files from /srv/media/data/ to
/srv/media/library/. If a file exists in data/ but not in library/, the arr
import/rename has not run yet. Do not manually copy files into library/ — let the
automation stack manage it to preserve hardlinks and metadata.
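
A quick way to spot not-yet-imported files is the link count: a payload file that has been hardlinked into library/ has 2+ links, one that hasn't still has 1. Runnable sketch with a stand-in tree (on skydick, scan /srv/media/data instead):

```shell
# Build a stand-in tree: one file already imported, one still pending.
mkdir -p /tmp/arr-demo/data /tmp/arr-demo/library
echo a > /tmp/arr-demo/data/imported.mkv
echo b > /tmp/arr-demo/data/pending.mkv
ln /tmp/arr-demo/data/imported.mkv /tmp/arr-demo/library/imported.mkv

# List regular files whose link count is still 1 (no hardlink in library/ yet).
find /tmp/arr-demo/data -type f -links 1   # prints only pending.mkv
```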