diff --git a/hosts/skydick/DATAPOOL.md b/hosts/skydick/DATAPOOL.md
new file mode 100644
index 0000000..6193625
--- /dev/null
+++ b/hosts/skydick/DATAPOOL.md
@@ -0,0 +1,323 @@
+# Skydick Data Pool — User & Service Guide
+
+Server: `skydick` (10.0.1.1), pool: `dick`
+
+Network: 10GbE bonded (bond0), jumbo frames (MTU 9000), subnet 10.0.0.0/16.
+
+## Available shares
+
+| Share | Server path | SMB name | NFS mount | Access |
+|-------|-------------|----------|-----------|--------|
+| Public files | `/srv/public` | `\\SKYDICK\public` | `/public` | rw, all @storage users |
+| Media library | `/srv/media/library` | `\\SKYDICK\media` | `/media/library` | ro, all @storage users |
+| Personal files | `/srv/users/<user>/files` | `\\SKYDICK\<user>` | `/users/<user>` | rw, owner only |
+
+NFS paths are relative to the NFSv4 pseudo-root (`/srv` on the server, exported with `fsid=0`).
+
+## Connecting via SMB (Windows / macOS / Linux GUI)
+
+### Windows
+
+Open File Explorer and type into the address bar:
+
+```
+\\10.0.1.1\public
+\\10.0.1.1\media
+\\10.0.1.1\<user>
+```
+
+When prompted, enter your Samba credentials (set by the admin on skydick with `smbpasswd -a <user>`).
+
+### macOS
+
+Finder → Go → Connect to Server (Cmd+K):
+
+```
+smb://10.0.1.1/public
+smb://10.0.1.1/media
+smb://10.0.1.1/<user>
+```
+
+### Linux (GUI)
+
+Nautilus/Dolphin/Thunar address bar:
+
+```
+smb://10.0.1.1/public
+smb://10.0.1.1/<user>
+```
+
+### Linux (CLI / fstab)
+
+```bash
+# One-off mount
+sudo mount -t cifs //10.0.1.1/public /mnt/public -o username=<user>,uid=$(id -u),gid=$(id -g)
+
+# /etc/fstab (persistent) — store the password in /root/.smbcredentials (chmod 600)
+//10.0.1.1/public /mnt/public cifs credentials=/root/.smbcredentials,uid=1000,gid=100,_netdev 0 0
+```
+
+`/root/.smbcredentials`:
+
+```
+username=<user>
+password=<password>
+```
+
+## Connecting via NFS (Linux)
+
+NFS uses NFSv4 with a pseudo-root at `/srv`. Mount paths omit `/srv`.
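The mapping is mechanical: the NFS mount path is the server path with the leading `/srv` stripped. A minimal shell sketch (the `nfs_path` helper is illustrative, not an existing tool):

```shell
# Illustrative helper: derive the NFSv4 mount path from a server-side path
# by stripping the /srv pseudo-root prefix (the fsid=0 export root).
nfs_path() {
  printf '%s\n' "${1#/srv}"
}

nfs_path /srv/public           # -> /public
nfs_path /srv/media/library    # -> /media/library
```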
+
+### One-off mount
+
+```bash
+# Public shared files
+sudo mount -t nfs4 10.0.1.1:/public /mnt/public
+
+# Media library (read-only)
+sudo mount -t nfs4 10.0.1.1:/media/library /mnt/media
+
+# Your private tree (all writes become your UID via all_squash)
+sudo mount -t nfs4 10.0.1.1:/users/<user> /mnt/skydick-home
+```
+
+### Persistent mount (/etc/fstab)
+
+```
+10.0.1.1:/public        /mnt/public       nfs4 rw,hard,_netdev 0 0
+10.0.1.1:/media/library /mnt/media        nfs4 ro,hard,_netdev 0 0
+10.0.1.1:/users/<user>  /mnt/skydick-home nfs4 rw,hard,_netdev 0 0
+```
+
+### Performance tuning (10GbE)
+
+For large transfers on 10GbE with jumbo frames, add these NFS mount options:
+
+```
+rsize=1048576,wsize=1048576,nconnect=16
+```
+
+Example:
+
+```
+10.0.1.1:/users/ldx /mnt/skydick nfs4 rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev 0 0
+```
+
+## Share details
+
+### Public (`/srv/public`)
+
+Collaborative shared space. All users in the `storage` group can read and write. New files
+inherit group `storage` via setgid (mode 2775).
+
+- SMB: read-write for `@storage`
+- NFS: read-write with `root_squash` (root maps to nobody; normal UIDs pass through)
+
+### Media library (`/srv/media/library`)
+
+Read-only organized media (movies, TV, music), managed by the automation stack (qBittorrent +
+Sonarr/Radarr/Lidarr). Users consume but do not write here.
+
+- SMB: read-only for `@storage`
+- NFS: read-only via the `/media/library` export
+
+The full `/srv/media` dataset (including `/srv/media/data` with the raw torrent payload) is only
+writable by the `qbittorrent` service account (UID 900). Hardlinks between `data/` and
+`library/` work because both are directories on the same ZFS dataset.
+
+### Personal files (`/srv/users/<user>/files`)
+
+Private per-user storage. Only you can access your tree.
+
+- SMB: Samba `[homes]` share — connect as `\\SKYDICK\<user>`; authenticates with your Samba password
+- NFS: `/users/<user>` export with `all_squash`, mapping all operations to your UID/GID
+
+Your NFS export maps every client UID to your server-side UID. This means any process on any
+host in 10.0.0.0/16 that mounts your export will write as you. NFS does not authenticate —
+it trusts the network. For stronger isolation, use SMB (which requires a password).
+
+### Per-user subtree layout
+
+```
+/srv/users/<user>/
+├── files/        ← personal files (SMB [homes] points here)
+├── bt-state/     ← private torrent/*arr client state
+│   ├── watch/    ← .torrent watch directory
+│   ├── session/  ← client session/resume data
+│   └── config/   ← client configuration
+└── vm/
+    └── files/    ← VM disk images (file-backed, accessible via NFS/SMB)
+```
+
+`bt-state` holds your torrent client's configuration and state databases. The actual media
+payload lives on the shared `dick/media` dataset, not here. This separation means:
+
+- No duplicate media storage across users
+- Your client state is private and independent
+- The shared media tree has one writer (the automation stack)
+
+VM zvols (block devices for iSCSI) are created as ZFS children of `dick/users/<user>/vm/`
+and are managed by the admin. They are not visible in the filesystem tree.
+
+## Adding a new user
+
+Admin procedure — run on skydick as root:
+
+### 1. Add the user to NixOS config
+
+In `hosts/skydick/default.nix`:
+
+```nix
+users.users.<user> = {
+  extraGroups = [ "storage" ];
+  hashedPassword = "<hash>"; # mkpasswd -m yescrypt
+};
+```
+
+In `modules/users.nix` (if the user needs SSH/sudo access across all hosts):
+
+```nix
+users.users.<user> = {
+  isNormalUser = true;
+  extraGroups = [ "wheel" ];
+  openssh.authorizedKeys.keys = [ "ssh-ed25519 ..." ];
+};
+```
+
+### 2.
Add per-user tmpfiles and NFS export
+
+In `hosts/skydick/datapool.nix`, add to `systemd.tmpfiles.rules`:
+
+```nix
+"d /srv/users/<user> 0700 <user> users -"
+"d /srv/users/<user>/files 0750 <user> users -"
+"d /srv/users/<user>/bt-state 0750 <user> users -"
+"d /srv/users/<user>/vm 0750 <user> users -"
+"d /srv/users/<user>/vm/files 0750 <user> users -"
+```
+
+Add to `services.nfs.server.exports`:
+
+```
+/srv/users/<user> 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=<uid>,anongid=100)
+```
+
+Replace `<uid>` with the user's actual numeric UID (`id -u <user>` after the first deploy).
+
+### 3. Deploy NixOS config
+
+```bash
+sudo git -C /etc/nixos pull && sudo nixos-rebuild switch --flake /etc/nixos
+```
+
+### 4. Create ZFS datasets on skydick
+
+```bash
+# Get the user's UID (needed for the anonuid= export option in step 2)
+uid=$(id -u <user>)
+
+# Create datasets
+zfs create -o mountpoint=/srv/users/<user> -o quota=10T dick/users/<user>
+zfs create -o recordsize=128K -o mountpoint=/srv/users/<user>/files dick/users/<user>/files
+zfs create -o recordsize=16K -o mountpoint=/srv/users/<user>/bt-state dick/users/<user>/bt-state
+zfs create -o recordsize=64K -o mountpoint=/srv/users/<user>/vm dick/users/<user>/vm
+mkdir -p /srv/users/<user>/vm/files
+
+# Set ownership
+chown <user>:users /srv/users/<user> && chmod 0700 /srv/users/<user>
+for d in files bt-state vm vm/files; do
+  chown <user>:users /srv/users/<user>/$d && chmod 0750 /srv/users/<user>/$d
+done
+```
+
+### 5. Set Samba password
+
+```bash
+smbpasswd -a <user>
+```
+
+### 6. Re-export NFS
+
+```bash
+exportfs -ra
+```
+
+The user can now connect via SMB and NFS.
+
+## Quotas
+
+Each user has a ZFS quota on their `dick/users/<user>` dataset (default 10T). This caps the
+total across all child datasets (files + bt-state + vm). Check usage:
+
+```bash
+zfs list -o name,used,quota -r dick/users
+```
+
+Adjust the quota:
+
+```bash
+zfs set quota=20T dick/users/<user>
+```
+
+The shared `dick/media` dataset has no per-user quota — it is managed at the service level.
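For a quick percent-of-quota view, the machine-readable output of `zfs list` (`-H` no headers, `-p` exact byte values) can be post-processed. A sketch — the `quota_report` helper is illustrative, and the sample line stands in for real `zfs list -Hp -o name,used,quota -r dick/users` output:

```shell
# Report percent of quota used from tab-separated `zfs list -Hp` lines
# (name, used bytes, quota bytes); skips datasets with no quota (quota=0).
quota_report() {
  awk -F'\t' '$3 > 0 { printf "%s %.1f%%\n", $1, 100 * $2 / $3 }'
}

# Sample input: 2 TiB used of a 10 TiB quota. On skydick, pipe in directly:
#   zfs list -Hp -o name,used,quota -r dick/users | quota_report
printf 'dick/users/ldx\t2199023255552\t10995116277760\n' | quota_report
# -> dick/users/ldx 20.0%
```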
+
+## Monitoring
+
+Check pool health:
+
+```bash
+zpool status dick
+```
+
+Check dataset usage:
+
+```bash
+zfs list -o name,used,avail,refer,quota -r dick
+```
+
+Check NFS exports:
+
+```bash
+exportfs -v
+```
+
+Check active NFS clients:
+
+```bash
+ss -tn | grep :2049
+```
+
+## Troubleshooting
+
+### "Permission denied" on NFS mount
+
+- Verify your IP is in 10.0.0.0/16: `ip addr`
+- Check that the export exists: `showmount -e 10.0.1.1`
+- Per-user exports use `all_squash` — your local UID does not matter; everything maps to
+  the server-side owner UID
+
+### "Permission denied" writing to NFS
+
+- Public: your server-side UID must be in the `storage` group
+- Media: read-only for all users (writes go through the automation service account only)
+- Personal: should always work — if not, check that the ZFS datasets are mounted:
+  `ssh skydick zfs list -r dick/users/<user>`
+
+### SMB authentication fails
+
+- Samba uses its own password database (tdbsam), separate from Unix login passwords
+- The admin must run `smbpasswd -a <user>` on skydick to create/reset the Samba password
+- LDAP-backed Samba auth is not yet configured
+
+### Slow NFS transfers
+
+- Ensure MTU 9000 (jumbo frames) is set on both client and server interfaces
+- Add `nconnect=16` to the mount options for parallel NFS connections
+- Add `rsize=1048576,wsize=1048576` for 1 MiB read/write blocks
+- Check link speed: `ethtool <iface> | grep Speed`
+
+### Files not showing up in media library
+
+The *arr stack (Sonarr/Radarr/Lidarr) hardlinks files from `/srv/media/data/` to
+`/srv/media/library/`. If a file exists in `data/` but not in `library/`, the *arr
+import/rename has not run yet. Do not manually copy files into `library/` — let the
+automation stack manage it to preserve hardlinks and metadata.
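To confirm that two paths really are hardlinks rather than copies, compare inodes: the shell's `-ef` test is true when both names resolve to the same device and inode. A sketch — the `same_file` helper is illustrative, demonstrated on temp files; on skydick you would compare a `data/` payload path with its `library/` counterpart:

```shell
# `-ef` tests whether two paths share one device+inode (i.e. are hardlinks).
# "separate copies" for a data/ vs library/ pair means the hardlink was broken
# (e.g. by a manual copy), so the file occupies disk space twice.
same_file() {
  if [ "$1" -ef "$2" ]; then echo "hardlinked"; else echo "separate copies"; fi
}

tmp=$(mktemp)
ln "$tmp" "$tmp.link"          # create a demo hardlink pair
same_file "$tmp" "$tmp.link"   # prints "hardlinked"
rm -f "$tmp" "$tmp.link"
```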
diff --git a/hosts/skydick/datapool.nix b/hosts/skydick/datapool.nix index 9776a63..4722286 100644 --- a/hosts/skydick/datapool.nix +++ b/hosts/skydick/datapool.nix @@ -54,25 +54,88 @@ # log nvme-INTEL_SSDPE21K750GAC_PHKE0163008K750BGN-part1 \ # cache nvme-INTEL_SSDPE21K750GAC_PHKE0163008K750BGN-part2 # +# === Dataset hierarchy === +# +# SHARED mount recsize compress purpose +# dick/public /srv/public 128K zstd collaborative shared files +# dick/media /srv/media 1M off shared media (one hardlink domain) +# → /srv/media/data (dir) torrent payload (*arr downloads) +# → /srv/media/library (dir) organized media (hardlinked from data/) +# +# PER-USER (template — shown for ldx UID=1000,GID=100; repeat per user) +# dick/users (canmount=off) namespace root +# dick/users/ldx /srv/users/ldx — — quota boundary +# dick/users/ldx/files /srv/users/ldx/files 128K zstd personal files +# dick/users/ldx/bt-state /srv/users/ldx/bt-state 16K zstd .torrent, resume, *arr DBs +# dick/users/ldx/vm /srv/users/ldx/vm 64K zstd VM filesystem root / parent for zvol children +# +# SYSTEM +# dick/system (canmount=off) namespace root +# dick/system/backup /srv/system/backup 1M zstd-3 archival backups +# dick/system/vm /srv/system/vm 64K zstd central VM filesystem root / parent for zvol children +# dick/templates/vm /srv/templates/vm 64K zstd shared read-only VM base images +# +# LEGACY (active migration — destroy after cutover) +# dick/share /srv/share 128K zstd → dick/public +# dick/torrent /srv/torrent 1M zstd → dick/media/data +# dick/backup /srv/backup 1M zstd-3 → dick/system/backup +# dick/vm /srv/vm 64K zstd → dick/system/vm +# +# Design rule: dataset boundary = hardlink domain = quota/tuning domain. +# dick/media keeps payload (data/) and library in ONE dataset so *arr +# hardlinks work. Per-user trees hold private state only, not payload — +# avoids duplicate media across users. One writer stack (qbittorrent + +# *arr) manages dick/media; other users get read-only access. 
+#
 # === Dataset creation ===
 #
-# zfs create -o mountpoint=/srv/share -o recordsize=128K dick/share
-# zfs create -o mountpoint=/srv/media -o recordsize=1M -o compression=off dick/media
-# zfs create -o mountpoint=/srv/backup -o recordsize=1M -o compression=zstd-3 dick/backup
-# zfs create -o mountpoint=/srv/torrent -o recordsize=1M dick/torrent
-# zfs create -o mountpoint=/srv/vm -o recordsize=64K dick/vm
+# Shared:
+# zfs create -o mountpoint=/srv/public -o recordsize=128K dick/public
+# chown root:storage /srv/public && chmod 2775 /srv/public
 #
-# # Set permissions after creation (persisted in ZFS):
-# for d in share media torrent; do chown root:storage /srv/$d && chmod 2775 /srv/$d; done
-# for d in backup vm; do chown root:root /srv/$d && chmod 0700 /srv/$d; done
+# dick/media already exists (recordsize=1M, compression=off).
+# mkdir -p /srv/media/{data,library}
+# chown qbittorrent:storage /srv/media/{data,library}
 #
-# Dataset rationale:
-# share — general multi-user storage, default recordsize
-# media — large media files, 1M for sequential throughput, compression off (pre-compressed)
-# backup — archival backups, 1M records, zstd-3 for better compression ratio
-# torrent — bittorrent download/seed, 1M records (clients write sequentially per file)
-# vm — iSCSI zvols for live VMs + backup images, 64K aligns with guest block sizes
-# Create zvols: zfs create -V <size> -o volblocksize=16K dick/vm/<name>
+# Per-user namespace:
+# zfs create -o mountpoint=none -o canmount=off dick/users
+#
+# # ldx (UID 1000, primary GID 100 = users)
+# zfs create -o mountpoint=/srv/users/ldx -o quota=10T dick/users/ldx
+# zfs create -o recordsize=128K -o mountpoint=/srv/users/ldx/files dick/users/ldx/files
+# zfs create -o recordsize=16K -o mountpoint=/srv/users/ldx/bt-state dick/users/ldx/bt-state
+# zfs create -o recordsize=64K -o mountpoint=/srv/users/ldx/vm dick/users/ldx/vm
+# mkdir -p /srv/users/ldx/vm/files
+# chown ldx:users /srv/users/ldx && chmod 0700 /srv/users/ldx
+# for d in
files bt-state vm vm/files; do chown ldx:users /srv/users/ldx/$d && chmod 0750 /srv/users/ldx/$d; done +# # File-backed VM images live under /srv/users/ldx/vm/files. +# # Block LUNs are zvol children of dick/users/ldx/vm/. +# +# # ylw (UID 1002, primary GID 100 = users) — same pattern, s/ldx/ylw/ +# zfs create -o mountpoint=/srv/users/ylw -o quota=10T dick/users/ylw +# zfs create -o recordsize=128K -o mountpoint=/srv/users/ylw/files dick/users/ylw/files +# zfs create -o recordsize=16K -o mountpoint=/srv/users/ylw/bt-state dick/users/ylw/bt-state +# zfs create -o recordsize=64K -o mountpoint=/srv/users/ylw/vm dick/users/ylw/vm +# mkdir -p /srv/users/ylw/vm/files +# chown ylw:users /srv/users/ylw && chmod 0700 /srv/users/ylw +# for d in files bt-state vm vm/files; do chown ylw:users /srv/users/ylw/$d && chmod 0750 /srv/users/ylw/$d; done +# # File-backed VM images live under /srv/users/ylw/vm/files. +# # Block LUNs are zvol children of dick/users/ylw/vm/. +# +# System: +# zfs create -o mountpoint=none -o canmount=off dick/system +# zfs create -o recordsize=1M -o compression=zstd-3 -o mountpoint=/srv/system/backup dick/system/backup +# zfs create -o recordsize=64K -o mountpoint=/srv/system/vm dick/system/vm +# mkdir -p /srv/system/vm/files +# zfs create -o recordsize=64K -o readonly=on -o mountpoint=/srv/templates/vm dick/templates/vm +# chown root:root /srv/system/{backup,vm} /srv/templates/vm && chmod 0700 /srv/system/{backup,vm} +# chown root:root /srv/system/vm/files && chmod 0700 /srv/system/vm/files +# # File-backed VM images live under /srv/system/vm/files. +# # Block LUNs are zvol children of dick/system/vm/. 
+#
+# iSCSI zvols (block service — never the same bytes as SMB/NFS):
+# zfs create -V <size> -o volblocksize=16K dick/users/<user>/vm/<name>
+# zfs create -V <size> -o volblocksize=16K dick/system/vm/<name>
 #
 # === Expanding the pool ===
 #
@@ -81,21 +144,40 @@
 # mirror <disk> <disk> \
 # mirror <disk> <disk>
 #
-# === Permission model ===
+# === Service model ===
 #
-# User-facing datasets (share, media, torrent):
-# root:storage 2775 (setgid — new files inherit storage group)
-# NFS: root_squash, Samba: @storage group
+# File services (SMB + NFS share the same filesystem datasets):
+# Public:   root:storage 2775, NFS root_squash, Samba [public] @storage
+# Media:    qbittorrent:storage, NFS rw /srv/media all_squash(900),
+#           NFS reader /srv/media/library ro, Samba [media] ro @storage
+# Home:     <user>:users 0700, explicit per-user NFS exports, Samba [homes]
+# BT-state: <user>:users 0750, NFS all_squash(uid), no Samba
+# VM files: <user>:users 0750, NFS all_squash(uid), no Samba
 #
-# System datasets (backup, vm):
-# root:root 0700
-# NFS: no_root_squash, iSCSI: vm zvols
+# Block services (iSCSI — separate zvols, never shared with SMB/NFS):
+# dick/users/<user>/vm/<name> — user-owned zvols
+# dick/system/vm/<name> — centrally managed zvols
+#
+# Quotas:
+# ZFS quota on dick/users/<user> caps total across all child datasets.
+# dick/media is shared — no per-user quota; manage via service-level controls.
+#
+# Auth:
+# NFS all_squash provides UID mapping, not authentication. Per-user NFS here
+# uses one explicit export per user, which is acceptable for a small fixed set
+# but scales linearly with user count and client ACL maintenance.
+# Samba [homes] valid users = %S gives real per-user auth via tdbsam.
+# LDAP already has posixAccount users and the Samba schema loaded, but no live
+# sambaSamAccount/sambaDomain entries on skydick yet — unified SMB auth is a
+# separate integration step. For stronger NFS isolation: use sec=krb5 or
+# tighter per-client IP restrictions.

{ config, pkgs, ...
}: { users.groups.storage = {}; + # Service account for the shared media writer (qbittorrent + *arr stack) users.users.qbittorrent = { uid = 900; group = "storage"; @@ -105,8 +187,36 @@ systemd.tmpfiles.rules = [ "d /srv 0755 root root -" - "d /srv/share 2775 root storage -" + + # Shared + "d /srv/public 2775 root storage -" "d /srv/media 2775 root storage -" + "d /srv/media/data 2775 qbittorrent storage -" + "d /srv/media/library 2775 qbittorrent storage -" + + # Per-user trees + "d /srv/users 0755 root root -" + "d /srv/users/ldx 0700 ldx users -" + "d /srv/users/ldx/files 0750 ldx users -" + "d /srv/users/ldx/bt-state 0750 ldx users -" + "d /srv/users/ldx/vm 0750 ldx users -" + "d /srv/users/ldx/vm/files 0750 ldx users -" + "d /srv/users/ylw 0700 ylw users -" + "d /srv/users/ylw/files 0750 ylw users -" + "d /srv/users/ylw/bt-state 0750 ylw users -" + "d /srv/users/ylw/vm 0750 ylw users -" + "d /srv/users/ylw/vm/files 0750 ylw users -" + + # System + "d /srv/system 0700 root root -" + "d /srv/system/backup 0700 root root -" + "d /srv/system/vm 0700 root root -" + "d /srv/system/vm/files 0700 root root -" + "d /srv/templates 0755 root root -" + "d /srv/templates/vm 0755 root root -" + + # Legacy (keep until migration complete) + "d /srv/share 2775 root storage -" "d /srv/backup 0700 root root -" "d /srv/torrent 2775 root storage -" "d /srv/vm 0700 root root -" @@ -123,12 +233,27 @@ mountdPort = 20003; exports = '' - /srv 10.0.0.0/16(rw,sync,fsid=0,crossmnt,no_subtree_check,root_squash) - /srv/share 10.0.0.0/16(rw,sync,no_subtree_check,root_squash) - /srv/media 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=900,anongid=997) - /srv/backup 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash) - /srv/torrent 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=900,anongid=997) - /srv/vm 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash) + /srv 10.0.0.0/16(rw,sync,fsid=0,crossmnt,no_subtree_check,root_squash) + + # Shared + /srv/public 
10.0.0.0/16(rw,sync,no_subtree_check,root_squash)
+      /srv/media 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=900,anongid=997)
+      /srv/media/library 10.0.0.0/16(ro,sync,no_subtree_check,root_squash)
+
+      # Per-user — explicit exports; all_squash maps every client UID to the owner
+      /srv/users/ldx 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=1000,anongid=100)
+      /srv/users/ylw 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=1002,anongid=100)
+
+      # System
+      /srv/system/backup 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
+      /srv/system/vm 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
+      /srv/templates/vm 10.0.0.0/16(ro,sync,no_subtree_check,root_squash)
+
+      # Legacy (remove after cutover)
+      /srv/share 10.0.0.0/16(rw,sync,no_subtree_check,root_squash)
+      /srv/backup 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
+      /srv/torrent 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=900,anongid=997)
+      /srv/vm 10.0.0.0/16(rw,sync,no_subtree_check,no_root_squash)
     '';
   };
@@ -140,7 +265,7 @@
     };
   };
 
-  # Samba — user-facing datasets only (Windows/Mac convenience)
+  # Samba — file-level access (SMB + NFS share the same datasets)
   services.samba = {
     enable = true;
     openFirewall = false;
@@ -165,8 +290,9 @@
       "load printers" = "no";
     };
 
-    share = {
-      path = "/srv/share";
+    # Shared datasets
+    public = {
+      path = "/srv/public";
       browseable = "yes";
       "read only" = "no";
       "guest ok" = "no";
@@ -176,20 +302,20 @@
     };
 
     media = {
-      path = "/srv/media";
+      path = "/srv/media/library";
       browseable = "yes";
       "read only" = "yes";
       "valid users" = "@storage";
     };
 
-    torrent = {
-      path = "/srv/torrent";
-      browseable = "yes";
+    # Per-user homes — Samba auto-creates \\SKYDICK\<username> from this template
+    homes = {
+      path = "/srv/users/%S/files";
+      browseable = "no";
       "read only" = "no";
-      "guest ok" = "no";
-      "valid users" = "@storage";
-      "create mask" = "0664";
-      "directory mask" = "2775";
+      "valid users" = "%S";
+      "create mask" = "0640";
+      "directory mask" = "0750";
     };
};