Server: skydick (10.0.1.1), pool: dick
Network: 10GbE bonded (bond0), jumbo frames MTU 9000, subnet 10.0.0.0/16.
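Before debugging throughput, it is worth confirming MTU 9000 actually survives the path end to end (a diagnostic sketch; the flags assume Linux iputils `ping`):

```shell
# 8972 = 9000-byte MTU minus 20-byte IP header and 8-byte ICMP header.
# -M do forbids fragmentation, so the ping fails if any hop has a smaller MTU.
ping -c 3 -M do -s 8972 10.0.1.1
```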
| Share | Server path | SMB name | NFS mount | Access |
|---|---|---|---|---|
| Public files | /srv/public | \\SKYDICK\public | /public | rw, all @storage users |
| Media library | /srv/media/library | \\SKYDICK\media | /media/library | ro, all @storage users |
| Personal files | /srv/users/&lt;you&gt;/files | \\SKYDICK\&lt;you&gt; | /users/&lt;you&gt; | rw, SMB owner-authenticated; NFS network-trusted all_squash to owner |
NFS paths are relative to the NFSv4 pseudo-root (/srv on the server, exported with fsid=0).
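For reference, the pseudo-root layout implied here might look like this in /etc/exports (a sketch, not the live config; option details on skydick may differ):

```
# Sketch of /etc/exports for the NFSv4 pseudo-root described above.
# The live config also exports per-user trees; see the admin section below.
/srv               10.0.0.0/16(ro,fsid=0,no_subtree_check)
/srv/public        10.0.0.0/16(rw,sync,no_subtree_check,root_squash)
/srv/media/library 10.0.0.0/16(ro,sync,no_subtree_check)
```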
The final storage layout is live on skydick:

- dick/public mounted at /srv/public
- dick/media mounted at /srv/media, with data/ and library/ directories in one hardlink domain
- dick/users/&lt;user&gt;/{files,bt-state,vm} for per-user private data
- dick/system/{backup,vm} for centrally managed system storage
- dick/templates/vm for shared read-only VM base images

The old dick/{share,backup,torrent,vm} layout is no longer part of the design. Torrent payload now
lives under /srv/media/data, and organized media under /srv/media/library.
Identity and authentication:

- skydick resolves POSIX users and groups from LDAP at ldap://10.0.0.1/, base dc=skyw,dc=top.
- Samba uses the LDAP passdb backend (ldapsam) against the same directory tree, with
  sambaDomainName=SKYDICK, matching the NetBIOS name, not the browse workgroup WORKGROUP.
- public / media access is carried by the LDAP posixGroup
  cn=storage,ou=posix_groups,dc=skyw,dc=top with gidNumber: 997.
- Password changes go through LDAP (ldappasswd), which keeps Samba's sambaSamAccount
  password data aligned.

Current admin users on skydick intentionally use the same canonical usernames as their LDAP
identities, for example ye-lw21. In those collisions, local NSS lookup still wins for the final
Unix UID/GID/group resolution on the server, while SMB password data still comes from LDAP.
The bootstrap LDIF for the Samba domain object, the LDAP storage group, and the machine OU is
checked in at samba-ldap-bootstrap.ldif.
Open File Explorer and type in the address bar:

```
\\10.0.1.1\public
\\10.0.1.1\media
\\10.0.1.1\<your-username>
```
When prompted, enter your SMB credentials. For a user's first SMB login, an admin must bootstrap
the account once on skydick with smbpasswd -a &lt;user&gt;, which creates the sambaSamAccount data in
LDAP for that user. After that, change passwords through the LDAP password UI
or ldappasswd so LDAP remains authoritative and SMB stays in sync.
Finder → Go → Connect to Server (Cmd+K):
```
smb://10.0.1.1/public
smb://10.0.1.1/media
smb://10.0.1.1/<your-username>
```
Nautilus/Dolphin/Thunar address bar:
```
smb://10.0.1.1/public
smb://10.0.1.1/<your-username>
```
```shell
# One-off mount
sudo mount -t cifs //10.0.1.1/public /mnt/public -o username=<you>,uid=$(id -u),gid=$(id -g)

# /etc/fstab (persistent) — store password in /root/.smbcredentials (chmod 600)
//10.0.1.1/public /mnt/public cifs credentials=/root/.smbcredentials,uid=1000,gid=100,_netdev 0 0
```
/root/.smbcredentials:

```
username=<you>
password=<your-smb-password>
```
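To create that file safely, write it with restrictive permissions from the start (a sketch; run as root):

```shell
# Create the file empty with mode 600 before writing the password into it,
# so the secret never exists on disk with looser permissions.
install -m 600 /dev/null /root/.smbcredentials
printf 'username=%s\npassword=%s\n' '<you>' '<your-smb-password>' > /root/.smbcredentials
```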
NFS uses NFSv4 with a pseudo-root at /srv. Mount paths omit /srv.
```shell
# Public shared files
sudo mount -t nfs4 10.0.1.1:/public /mnt/public

# Media library (read-only)
sudo mount -t nfs4 10.0.1.1:/media/library /mnt/media

# Your private tree (all writes become your UID via all_squash)
sudo mount -t nfs4 10.0.1.1:/users/<you> /mnt/skydick-home
```
```
10.0.1.1:/public        /mnt/public       nfs4 rw,hard,_netdev 0 0
10.0.1.1:/media/library /mnt/media        nfs4 ro,hard,_netdev 0 0
10.0.1.1:/users/<you>   /mnt/skydick-home nfs4 rw,hard,_netdev 0 0
```
For large transfers on 10GbE with jumbo frames, add NFS mount options:
rsize=1048576,wsize=1048576,nconnect=16
Example:
10.0.1.1:/users/ldx /mnt/skydick nfs4 rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev 0 0
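When provisioning several clients, the tuned fstab line can be generated with a small helper (a hypothetical convenience function, not part of the repo):

```shell
# Emit a tuned nfs4 fstab line for a user's private export.
skydick_fstab_line() {
  local user="$1" mountpoint="$2"
  printf '10.0.1.1:/users/%s %s nfs4 rw,hard,rsize=1048576,wsize=1048576,nconnect=16,_netdev 0 0\n' \
    "$user" "$mountpoint"
}

skydick_fstab_line ldx /mnt/skydick
```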
Public files (/srv/public)

Collaborative shared space. All users in the storage group can read and write. New files
inherit group storage via setgid (mode 2775).

- Group: @storage
- Exported with root_squash (root maps to nobody, normal UIDs pass through)
- Shared access is governed by LDAP membership in cn=storage,ou=posix_groups,dc=skyw,dc=top.
- skydick also keeps a local storage group at GID 997 so on-disk ownership, service accounts,
  and same-name local admin overlays stay stable.
Media library (/srv/media/library)

Read-only organized media (movies, TV, music). Managed by the automation stack (qBittorrent +
Sonarr/Radarr/Lidarr). Users consume but do not write here.

- Access: @storage members, via the read-only /media/library export
- The full /srv/media dataset (including /srv/media/data with raw torrent payload) is only
  writable by the qbittorrent service account (UID 900).
- Hardlinks between data/ and library/ work because they are directories on the same ZFS dataset.
Personal files (/srv/users/&lt;you&gt;/files)

Private per-user storage. Only you can access your tree.

- SMB: the [homes] share — connect as \\SKYDICK\&lt;you&gt;, authenticated with your Samba password
- Bootstrap: smbpasswd -a &lt;you&gt; on skydick creates your sambaSamAccount; change passwords
  afterwards with ldappasswd
- NFS: the /users/&lt;you&gt; export uses all_squash, mapping all operations to your UID/GID

Your NFS export maps every client UID to your server-side UID. This means any process on any
host in 10.0.0.0/16 that mounts your export will write as you. NFS does not authenticate —
it trusts the network. For stronger isolation, use SMB (which requires a password).
/srv/users/<you>/
├── files/ ← personal files (SMB [homes] points here)
├── bt-state/ ← private torrent/arr client state
│ ├── watch/ ← .torrent watch directory
│ ├── session/ ← client session/resume data
│ └── config/ ← client configuration
└── vm/
└── files/ ← VM disk images (file-backed, NFS-visible by default)
bt-state holds your torrent client's configuration and state databases. The actual media
payload lives on the shared dick/media dataset, not here. This separation keeps your client
state private per user while the payload itself stays on the shared dataset.
VM zvols (block devices for iSCSI) are created as ZFS children of dick/users/<you>/vm/<name>
and are managed by the admin. They are not visible in the filesystem tree.
Admin procedure — run on skydick as root:
Preferred for storage-only users. The LDAP entry should already contain:

- uid
- uidNumber
- gidNumber
- homeDirectory
- objectClass: posixAccount

If the user should see public and media, also add their LDAP uid as a memberUid of
cn=storage,ou=posix_groups,dc=skyw,dc=top.
Check it on skydick:
getent passwd <newuser>
Only do this if the user needs SSH login, sudo, or an intentional local override. If you do create
a same-name local admin user, remember that skydick will use the local Unix UID/GID for on-server
authorization while SMB passwords still come from LDAP.
In hosts/skydick/default.nix:
users.users.<newuser> = {
extraGroups = [ "storage" ];
hashedPassword = "<hash>"; # mkpasswd -m yescrypt
};
In modules/users.nix (if the user needs SSH/sudo access across all hosts):
users.users.<newuser> = {
isNormalUser = true;
extraGroups = [ "wheel" ];
openssh.authorizedKeys.keys = [ "ssh-ed25519 ..." ];
};
Use numeric UID/GID in tmpfiles rules for LDAP-only users. This avoids boot-time dependence on NSS
name resolution.
First get the IDs:
```shell
uid=$(getent passwd <newuser> | cut -d: -f3)
gid=$(getent passwd <newuser> | cut -d: -f4)
```
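Equivalently, the lookup can be wrapped in a reusable helper (hypothetical; any getent/awk pipeline works the same way):

```shell
# Print "<uid> <gid>" for a user as resolved through NSS (including LDAP).
user_ids() {
  getent passwd "$1" | awk -F: '{ print $3, $4 }'
}

user_ids root  # → "0 0"
```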
In hosts/skydick/datapool.nix, add to systemd.tmpfiles.rules:
```
"d /srv/users/<newuser>          0700 <UID> <GID> -"
"d /srv/users/<newuser>/files    0750 <UID> <GID> -"
"d /srv/users/<newuser>/bt-state 0750 <UID> <GID> -"
"d /srv/users/<newuser>/vm       0750 <UID> <GID> -"
"d /srv/users/<newuser>/vm/files 0750 <UID> <GID> -"
```
Add to services.nfs.server.exports:
/srv/users/<newuser> 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=<UID>,anongid=<GID>)
Replace <UID> and <GID> with the LDAP-backed numeric IDs from getent passwd.
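The export line can likewise be templated so the numeric IDs are never hand-typed (a hypothetical helper for illustration; the username and IDs below are examples):

```shell
# Emit the all_squash export line for a user's private tree.
mk_user_export() {
  local user="$1" uid="$2" gid="$3"
  printf '/srv/users/%s 10.0.0.0/16(rw,sync,no_subtree_check,all_squash,anonuid=%s,anongid=%s)\n' \
    "$user" "$uid" "$gid"
}

mk_user_export alice 1105 997
```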
Example: the user previously called ylw in local NixOS config is now canonicalized to
ye-lw21 everywhere, so the per-user share path is /srv/users/ye-lw21.
sudo git -C /etc/nixos pull && sudo nixos-rebuild switch --flake /etc/nixos
The shared namespace datasets (dick/public, dick/media, dick/system, dick/templates, and
dick/users) already exist on the host. For a new user, create only that user's subtree:
```shell
# Get the user's UID/GID
uid=$(getent passwd <newuser> | cut -d: -f3)
gid=$(getent passwd <newuser> | cut -d: -f4)

# Create datasets
zfs create -o mountpoint=/srv/users/<newuser> -o quota=10T dick/users/<newuser>
zfs create -o recordsize=128K -o mountpoint=/srv/users/<newuser>/files dick/users/<newuser>/files
zfs create -o recordsize=16K -o mountpoint=/srv/users/<newuser>/bt-state dick/users/<newuser>/bt-state
zfs create -o recordsize=64K -o mountpoint=/srv/users/<newuser>/vm dick/users/<newuser>/vm
mkdir -p /srv/users/<newuser>/vm/files

# Set ownership
chown "$uid:$gid" /srv/users/<newuser> && chmod 0700 /srv/users/<newuser>
for d in files bt-state vm vm/files; do
  chown "$uid:$gid" /srv/users/<newuser>/$d && chmod 0750 /srv/users/<newuser>/$d
done
```
smbpasswd -a <newuser>
This one-time step is required even if the user already exists as a POSIX account in LDAP.
smbpasswd -a creates the user's sambaSamAccount attributes in LDAP and sets an initial SMB
password.
After this bootstrap, future password changes should happen through the LDAP password UI or
ldappasswd, not routine smbpasswd use. That keeps LDAP as the password source of truth while
the LDAP server updates the Samba password hashes.
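For example, a user can change their own password with ldappasswd (a sketch; the bind DN below assumes user entries live under an ou=posix_users subtree, so adjust it to the actual directory layout):

```shell
# -W prompts for the current bind password, -S prompts twice for the new one.
ldappasswd -H ldap://10.0.0.1/ -x \
  -D 'uid=<you>,ou=posix_users,dc=skyw,dc=top' -W -S
```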
The user can now connect via SMB and NFS.
exportfs -ra
When a user's dick/users/<user> dataset exists, its ZFS quota (default 10TB in the examples
above) caps the total across all child datasets (files + bt-state + vm). Check usage:
zfs list -o name,used,quota -r dick/users
Adjust quota:
zfs set quota=20T dick/users/<user>
The shared dick/media dataset has no per-user quota — it is managed at the service level.
Check pool health:
zpool status dick
Check dataset usage:
zfs list -o name,used,avail,refer,quota -r dick
Check NFS exports:
exportfs -v
Check active NFS clients:
ss -tn | grep :2049
- If an NFS mount fails, check the client network with ip addr and confirm the export is
  visible with showmount -e 10.0.1.1.
- Your private export uses all_squash — your local UID does not matter, everything maps to
  your server-side UID.
- Confirm your datasets exist: ssh skydick zfs list -r dick/users/&lt;you&gt;.
- SMB authentication needs sambaSamAccount entries, not just the Unix userPassword.
  getent passwd &lt;user&gt; succeeding only proves Unix account lookup works; it does not create
  an SMB login. Run smbpasswd -a &lt;user&gt; once on skydick to create the sambaSamAccount, then
  use ldappasswd so Unix and SMB passwords stay in sync.
- If public or media access fails but the home share works, check LDAP storage group
  membership: the memberUid list of cn=storage,ou=posix_groups,dc=skyw,dc=top.
- For slow transfers, add nconnect=16 to mount options for parallel NFS connections and
  rsize=1048576,wsize=1048576 for 1MB read/write blocks, and verify the link speed with
  ethtool &lt;interface&gt; | grep Speed.

The arr stack (Sonarr/Radarr/Lidarr) hardlinks files from /srv/media/data/ to
/srv/media/library/. If a file exists in data/ but not in library/, the arr
import/rename has not run yet. Do not manually copy files into library/ — let the
automation stack manage it to preserve hardlinks and metadata.
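To check whether two paths really are hardlinks of the same file rather than copies, compare inode numbers (a generic sketch, runnable anywhere with GNU stat; the temp-dir demo stands in for real data/ and library/ paths):

```shell
# Two hardlinked paths share one inode; stat -c %i prints the inode number.
same_inode() {
  [ "$(stat -c %i "$1")" = "$(stat -c %i "$2")" ]
}

# Demo: ln creates a hardlink, cp creates an independent file.
tmp=$(mktemp -d)
echo payload > "$tmp/data"
ln "$tmp/data" "$tmp/linked"
cp "$tmp/data" "$tmp/copied"
same_inode "$tmp/data" "$tmp/linked" && echo "linked: same inode"
same_inode "$tmp/data" "$tmp/copied" || echo "copied: different inode"
rm -rf "$tmp"
```

On skydick, running same_inode against a file under /srv/media/data/ and its counterpart under /srv/media/library/ confirms the arr import hardlinked the file instead of copying it.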