
Skydick Data Pool — User & Service Guide

Server: skydick (10.0.1.1), pool: dick

Network: 10GbE bonded (bond0), jumbo frames MTU 9000, subnet 10.0.0.0/16.

Available shares

| Share | Server path | SMB name | NFS mount | Access |
|-------|-------------|----------|-----------|--------|
| Public files | /srv/public | \\SKYDICK\public | /public | rw, all @storage users |
| Media library | /srv/media/library | \\SKYDICK\media | /media/library | ro, all @storage users |
| Personal files | /srv/users/<you>/files | \\SKYDICK\<you> | /users/<you> | rw; SMB owner-authenticated, NFS trusts the network (all_squash to owner) |

NFS paths are relative to the NFSv4 pseudo-root (/srv on the server, exported with fsid=0).

The share/export paths above are live today. The dedicated dick/users/*, dick/system/*, and
dick/templates/* ZFS datasets describe the intended final layout; on a host that still has only
the legacy dick/{share,media,backup,torrent,vm} tree, they must be created (and data migrated)
explicitly.

Identity and authentication

  • skydick resolves POSIX users and groups from LDAP at ldap://10.0.0.1/, base
    dc=skyw,dc=top
  • Local /etc/passwd users still win if the same username exists both locally and in LDAP
  • SMB still uses Samba's local password database (tdbsam), not LDAP-backed SMB auth
  • NFS still does not authenticate users; it trusts client IPs and export options
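Because lookup order matters, a quick shell check shows whether an account is locally shadowed. This is a sketch: substitute the account in question; root is used here only so it runs anywhere.

```shell
# Local files are consulted before LDAP, so a matching /etc/passwd entry
# shadows the LDAP one. "root" is just a stand-in account for the demo.
user=root
if grep -q "^${user}:" /etc/passwd; then
  echo "${user}: defined locally in /etc/passwd (wins over LDAP)"
else
  echo "${user}: no local entry; resolved via LDAP if present"
fi
getent passwd "$user"   # the merged NSS view, whichever source won
```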

Connecting via SMB (Windows / macOS / Linux GUI)

Windows

Open File Explorer, type in the address bar:

\\10.0.1.1\public
\\10.0.1.1\media
\\10.0.1.1\<your-username>

When prompted, enter your Samba credentials (set by the admin on skydick with smbpasswd -a <user>).
LDAP identity on skydick does not replace SMB passwords yet.

macOS

Finder → Go → Connect to Server (Cmd+K):

smb://10.0.1.1/public
smb://10.0.1.1/media
smb://10.0.1.1/<your-username>

Linux (GUI)

Nautilus/Dolphin/Thunar address bar:

smb://10.0.1.1/public
smb://10.0.1.1/<your-username>

Linux (CLI / fstab)

# One-off mount
sudo mount -t cifs //10.0.1.1/public /mnt/public -o username=<you>,uid=$(id -u),gid=$(id -g)

# /etc/fstab (persistent) — store password in /root/.smbcredentials (chmod 600)
//10.0.1.1/public  /mnt/public  cifs  credentials=/root/.smbcredentials,uid=1000,gid=100,_netdev  0  0

/root/.smbcredentials:

username=<you>
password=<your-smb-password>
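A sketch of creating that file with safe permissions (assumes a root shell, so $HOME is /root; the placeholder values are the same as above):

```shell
# Create the credentials file readable only by its owner; umask 077 makes
# the redirection create it with mode 600. Run as root so it lands in /root.
cred="$HOME/.smbcredentials"
umask 077
printf 'username=%s\npassword=%s\n' '<you>' '<your-smb-password>' > "$cred"
stat -c '%a %n' "$cred"   # mode should show as 600
```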

Connecting via NFS (Linux)

NFS uses NFSv4 with a pseudo-root at /srv. Mount paths omit /srv.

One-off mount

# Public shared files
sudo mount -t nfs4 10.0.1.1:/public /mnt/public

# Media library (read-only)
sudo mount -t nfs4 10.0.1.1:/media/library /mnt/media

# Your private tree (all writes become your UID via all_squash)
sudo mount -t nfs4 10.0.1.1:/users/<you> /mnt/skydick-home

Persistent mount (/etc/fstab)

10.0.1.1:/public         /mnt/public        nfs4  rw,hard,_netdev                         0  0
10.0.1.1:/media/library  /mnt/media         nfs4  ro,hard,_netdev                         0  0
10.0.1.1:/users/<you>    /mnt/skydick-home  nfs4  rw,hard,_netdev                         0  0

Performance tuning (10GbE)

For large transfers on 10GbE with jumbo frames, add NFS mount options:

rsize=1048576,wsize=1048576,nconnect=16

Example:

10.0.1.1:/users/ldx  /mnt/skydick  nfs4  rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev  0  0

Share details

Public (/srv/public)

Collaborative shared space. All users in the storage group can read and write. New files
inherit group storage via setgid (mode 2775).

  • SMB: read-write for @storage
  • NFS: read-write with root_squash (root maps to nobody, normal UIDs pass through)

Media library (/srv/media/library)

Read-only organized media (movies, TV, music). Managed by the automation stack (qBittorrent +
Sonarr/Radarr/Lidarr). Users consume but do not write here.

  • SMB: read-only for @storage
  • NFS: read-only via /media/library export

The full /srv/media dataset (including /srv/media/data with raw torrent payload) is only
writable by the qbittorrent service account (UID 900). Hardlinks between data/ and
library/ work because they are directories on the same ZFS dataset.
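The constraint is easy to demonstrate anywhere (a temp-dir sketch, not ZFS-specific): hardlinked names share one inode, so the second name consumes no extra space.

```shell
# Two directory entries, one inode: this is why data/ and library/ must sit
# on the same filesystem, and why linking costs no additional storage.
dir=$(mktemp -d)
echo payload > "$dir/data.mkv"
ln "$dir/data.mkv" "$dir/library.mkv"
stat -c '%n inode=%i links=%h' "$dir/data.mkv" "$dir/library.mkv"
rm -rf "$dir"
```

Both lines print the same inode number with a link count of 2.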

Personal files (/srv/users/<you>/files)

Private per-user storage. Only you can access your tree.

  • SMB: Samba [homes] share — connect as \\SKYDICK\<you>, authenticates with your Samba password
  • NFS: /users/<you> export with all_squash mapping all operations to your UID/GID

Your NFS export maps every client UID to your server-side UID. This means any process on any
host in 10.0.0.0/16 that mounts your export will write as you. NFS does not authenticate —
it trusts the network. For stronger isolation, use SMB (which requires a password).

Per-user subtree layout

/srv/users/<you>/
├── files/          ← personal files (SMB [homes] points here)
├── bt-state/       ← private torrent/arr client state
│   ├── watch/      ← .torrent watch directory
│   ├── session/    ← client session/resume data
│   └── config/     ← client configuration
└── vm/
    └── files/      ← VM disk images (file-backed, NFS-visible by default)

bt-state holds your torrent client's configuration and state databases. The actual media
payload lives on the shared dick/media dataset, not here. This separation means:

  • No duplicate media storage across users
  • Your client state is private and independent
  • The shared media tree has one writer (the automation stack)

VM zvols (block devices for iSCSI) are created as ZFS children of dick/users/<you>/vm/<name>
and are managed by the admin. They are not visible in the filesystem tree.

Adding a new user

Admin procedure — run on skydick as root:

1. Create or verify the user in LDAP

This is the preferred path for storage-only users. The LDAP entry should already contain:

  • uid
  • uidNumber
  • gidNumber
  • homeDirectory
  • objectClass: posixAccount

Check it on skydick:

getent passwd <newuser>

2. Add a local NixOS user only if needed

Only do this if the user needs SSH login, sudo, or a fixed local override that should win over LDAP.

In hosts/skydick/default.nix:

users.users.<newuser> = {
  extraGroups = [ "storage" ];
  hashedPassword = "<hash>";  # mkpasswd -m yescrypt
};

In modules/users.nix (if the user needs SSH/sudo access across all hosts):

users.users.<newuser> = {
  isNormalUser = true;
  extraGroups = [ "wheel" ];
  openssh.authorizedKeys.keys = [ "ssh-ed25519 ..." ];
};

3. Add per-user tmpfiles and NFS export

Use numeric UID/GID in tmpfiles rules for LDAP-only users. This avoids boot-time dependence on NSS
name resolution.

First get the IDs:

uid=$(getent passwd <newuser> | cut -d: -f3)
gid=$(getent passwd <newuser> | cut -d: -f4)

In hosts/skydick/datapool.nix, add to systemd.tmpfiles.rules:

"d /srv/users/<newuser> 0700 <UID> <GID> -"
"d /srv/users/<newuser>/files 0750 <UID> <GID> -"
"d /srv/users/<newuser>/bt-state 0750 <UID> <GID> -"
"d /srv/users/<newuser>/vm 0750 <UID> <GID> -"
"d /srv/users/<newuser>/vm/files 0750 <UID> <GID> -"
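Those five rules follow one pattern, so they can be generated with the numeric IDs already substituted. A sketch; root stands in for the new user so it runs anywhere:

```shell
# Print ready-to-paste tmpfiles rules with numeric UID/GID resolved via NSS.
newuser=root   # substitute the real username
uid=$(getent passwd "$newuser" | cut -d: -f3)
gid=$(getent passwd "$newuser" | cut -d: -f4)
printf '"d /srv/users/%s 0700 %s %s -"\n' "$newuser" "$uid" "$gid"
for d in files bt-state vm vm/files; do
  printf '"d /srv/users/%s/%s 0750 %s %s -"\n' "$newuser" "$d" "$uid" "$gid"
done
```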

Add to services.nfs.server.exports:

/srv/users/<newuser>  10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=<UID>,anongid=<GID>)

Replace <UID> and <GID> with the LDAP-backed numeric IDs from getent passwd.
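The export line can be generated the same way, so the numeric IDs are never copied by hand (root again stands in for the new user):

```shell
# Print the export line with anonuid/anongid taken from the NSS lookup.
newuser=root   # substitute the real username
uid=$(getent passwd "$newuser" | cut -d: -f3)
gid=$(getent passwd "$newuser" | cut -d: -f4)
printf '/srv/users/%s  10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=%s,anongid=%s)\n' \
  "$newuser" "$uid" "$gid"
```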

4. Deploy NixOS config

sudo git -C /etc/nixos pull && sudo nixos-rebuild switch --flake /etc/nixos

5. Create ZFS datasets on skydick

# Get the user's UID/GID
uid=$(getent passwd <newuser> | cut -d: -f3)
gid=$(getent passwd <newuser> | cut -d: -f4)

# Create datasets
zfs create -o mountpoint=/srv/users/<newuser> -o quota=10T               dick/users/<newuser>
zfs create -o recordsize=128K -o mountpoint=/srv/users/<newuser>/files   dick/users/<newuser>/files
zfs create -o recordsize=16K  -o mountpoint=/srv/users/<newuser>/bt-state dick/users/<newuser>/bt-state
zfs create -o recordsize=64K  -o mountpoint=/srv/users/<newuser>/vm      dick/users/<newuser>/vm
mkdir -p /srv/users/<newuser>/vm/files

# Set ownership
chown "$uid:$gid" /srv/users/<newuser> && chmod 0700 /srv/users/<newuser>
for d in files bt-state vm vm/files; do
  chown "$uid:$gid" /srv/users/<newuser>/$d && chmod 0750 /srv/users/<newuser>/$d
done

6. Set Samba password

smbpasswd -a <newuser>

This is still required even if the user exists in LDAP, because Samba auth is not LDAP-backed yet.

Once the export is refreshed in the next step, the user can connect via both SMB and NFS.

7. Re-export NFS

exportfs -ra

Quotas

When a user's dick/users/<user> dataset exists, its ZFS quota (default 10TB in the examples
above) caps the total across all child datasets (files + bt-state + vm). Check usage:

zfs list -o name,used,quota -r dick/users
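For a percent-of-quota view, the machine-readable variant (`zfs list -Hp`, exact bytes, tab-separated) pipes cleanly into awk. The printf below simulates one line of that output so the sketch runs without ZFS present:

```shell
# Simulated `zfs list -Hp -o name,used,quota` line: 5 TiB used of a 10 TiB quota.
printf 'dick/users/alice\t5497558138880\t10995116277760\n' |
  awk -F'\t' '$3 > 0 { printf "%s %.1f%% of quota\n", $1, 100 * $2 / $3 }'
# prints: dick/users/alice 50.0% of quota
```

On the server, replace the printf with `zfs list -Hp -o name,used,quota -r dick/users`; the `$3 > 0` guard skips datasets with no quota set.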

Adjust quota:

zfs set quota=20T dick/users/<user>

The shared dick/media dataset has no per-user quota — it is managed at the service level.

Monitoring

Check pool health:

zpool status dick

Check dataset usage:

zfs list -o name,used,avail,refer,quota -r dick

Check NFS exports:

exportfs -v

Check active NFS clients:

ss -tn | grep :2049

Troubleshooting

"Permission denied" on NFS mount

  • Verify your IP is in 10.0.0.0/16: ip addr
  • Check the export exists: showmount -e 10.0.1.1
  • Per-user exports use all_squash — your local UID does not matter, everything maps to
    the server-side owner UID

"Permission denied" writing to NFS

  • Public: your server-side UID must be in the storage group
  • Media: read-only for all users (write is via the automation service account only)
  • Personal: should always work — if not, check that the ZFS datasets are mounted:
    ssh skydick zfs list -r dick/users/<you>

SMB authentication fails

  • Samba uses its own password database (tdbsam), separate from Unix login passwords and LDAP
  • Admin must run smbpasswd -a <user> on skydick to create/reset the Samba password
  • getent passwd <user> succeeding only proves LDAP/NSS lookup works; it does not create an SMB login

Slow NFS transfers

  • Ensure MTU 9000 (jumbo frames) is set on both client and server interfaces
  • Add nconnect=16 to mount options for parallel NFS connections
  • Add rsize=1048576,wsize=1048576 for 1MB read/write blocks
  • Check link speed: ethtool <interface> | grep Speed

Files not showing up in media library

The arr stack (Sonarr/Radarr/Lidarr) hardlinks files from /srv/media/data/ to
/srv/media/library/. If a file exists in data/ but not in library/, the arr import/rename has
not run yet. Do not manually copy files into library/; let the automation stack manage it to
preserve hardlinks and metadata.
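To check whether an imported file is a proper hardlink or a wasteful duplicate, the shell's `-ef` test compares device and inode. The temp-dir demo below shows both outcomes; in practice the two paths would be under /srv/media/data/ and /srv/media/library/.

```shell
# -ef is true only when both names refer to the same inode on the same device.
dir=$(mktemp -d)
echo x > "$dir/data"
ln "$dir/data" "$dir/linked"
cp "$dir/data" "$dir/copied"
[ "$dir/data" -ef "$dir/linked" ] && echo "linked: same file (hardlink)"
[ "$dir/data" -ef "$dir/copied" ] || echo "copied: separate file (duplicate)"
rm -rf "$dir"
```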