Upgrading my TrueNAS Mini X+ from TrueNAS Scale 23.10.2 to 24.04.0 was smooth.
I had to move the node_exporter binary from /usr/bin to a new location under a separate dataset I created for local apps (not to be confused with the TrueNAS Scale Applications dataset).
The /usr directory, which is mounted from boot-pool/ROOT/24.04.0/usr, is now mounted as read-only.
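A minimal sketch of that relocation, assuming a hypothetical `tank/apps` dataset and a systemd unit named `node_exporter.service` (paths and names are placeholders, not the poster's actual setup):

```shell
# /usr is read-only on 24.04, so keep the binary on a writable dataset instead.
# Hypothetical destination; substitute your own pool/dataset path.
mkdir -p /mnt/tank/apps/bin
cp ~/node_exporter /mnt/tank/apps/bin/node_exporter
chmod +x /mnt/tank/apps/bin/node_exporter

# Point the systemd unit at the new location via a drop-in override,
# so the change survives without editing files under read-only /usr:
mkdir -p /etc/systemd/system/node_exporter.service.d
cat > /etc/systemd/system/node_exporter.service.d/override.conf <<'EOF'
[Service]
ExecStart=
ExecStart=/mnt/tank/apps/bin/node_exporter
EOF
systemctl daemon-reload
systemctl restart node_exporter
```

A drop-in override is preferable to editing the unit file itself, since TrueNAS upgrades can replace the base system files.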
This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.
A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
A few weeks ago I bought a TerraMaster NAS (because it was the only device you could install other OSes like Proxmox or TrueNAS on), and now UGREEN has published a Kickstarter with their new NASes, which can do the same but are much newer in terms of hardware…
⚠️ TrueNAS CORE 13 is the end of the FreeBSD version | The Register
「 We have no plans for a FreeBSD 14-based TrueNAS at this time, and the 13.1 release will be a longer-lived maintenance train for those who want to continue running on the BSD product before migrating to SCALE at some later date 」
After watching videos from @tomlawrence and @technotim about #TrueNAS I'm still somewhat undecided about how to set up my 12-disk server, with its mix of old and new disks, for general-purpose storage (backup, iSCSI targets for VMs, etc...), but it seems like I'll end up with this kind of setup:
sdm: system disk (SATA DOM)
8 or 12 disks as 2-disk mirrors in a pool with NVMe for SLOG and maybe 2 or 4 SSDs for L2ARC, depending on whether I can fit them inside the 2 RU server or if I need to use drive bays for that.
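The layout described above would correspond to something like the following `zpool create` invocation (device names are hypothetical, and on TrueNAS you would normally build this through the web UI rather than the shell):

```shell
# A pool of striped 2-disk mirrors, with a mirrored NVMe SLOG
# and two SSDs as L2ARC cache. Device names are placeholders.
zpool create tank \
  mirror sda sdb \
  mirror sdc sdd \
  mirror sde sdf \
  mirror sdg sdh \
  log   mirror nvme0n1 nvme1n1 \
  cache sdi sdj

# Verify the vdev layout:
zpool status tank
```

Mirroring the SLOG devices guards against losing in-flight synchronous writes if one NVMe dies; L2ARC devices need no redundancy, since a cache failure only costs performance.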
#Kubernetes/#K8S Q: I've been having an issue all this while I haven't quite been able to tackle. How do I properly mount a #Samba/#SMB/#CIFS share in a #Docker container on Kubernetes?
I definitely don't want a method that does any "pass through" outside of the container such as mounting said share on the Kubernetes node then passing it to the container, since that seems quite hacky and the deployment/pod could easily be reassigned to a different node.
Update: I've found #csi-driver-smb which seems to be perfect for my needs, and even a video of someone deploying it to their cluster for #Jellyfin.
I've deployed it successfully to my #Kubernetes cluster pretty easily, and am attempting to achieve the same thing but on #Plex rather than Jellyfin. Ran into another obstacle tho: while it seems that my #TrueNAS #SMB share is mounted to the container (shows up in df -h), my root user in the container could not ls the mount point (i.e. /mnt/smb), it'd just return the Permission denied error. Weird thing is the root user could cd into the mount point and its existing subdirectories, but not ls them or write any files to them. I could cat files inside it though, funnily enough.
The PV for said PVC has the mount options from csi-driver-smb's example, including dir_mode=0777 and file_mode=0777; the example's uid=1001 and gid=1001 I've changed to 0, which is the uid and gid of the root user. I've even tried updating them to 1000, which is the id of the plex user, but still with the same results.
Anyone have any clue why I'm getting the permission denied error?
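For reference, a PV for csi-driver-smb with those mount options looks roughly like this (share path, secret, and names here are placeholders, not the poster's actual manifest). One thing worth trying for the "can cd and cat but not ls" symptom is the cifs `noperm` option, which tells the client to skip its local permission checks and defer entirely to the server:

```shell
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-smb
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000        # match the in-container user (e.g. plex)
    - gid=1000
    - noperm          # skip client-side permission checks
  csi:
    driver: smb.csi.k8s.io
    volumeHandle: truenas-share-pv-smb   # any cluster-unique string
    volumeAttributes:
      source: //truenas.local/share
    nodeStageSecretRef:
      name: smbcreds
      namespace: default
EOF
```

The uid/gid/dir_mode options only shape how permissions are *presented* on the client side; the TrueNAS-side share ACLs still have to grant the connecting SMB user access, which is another common source of exactly this kind of mismatch.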
Having been a FreeBSD (and BSD in general) user since the 2.x days and managed a small fleet of servers since the heady 4.x days (including a few Sun UltraSPARC II and IIi servers and dealing with the ATA problems), I've been rooting for FreeNAS and TrueNAS CORE for a long time.
It's kind of disheartening to have seen support for FreeBSD from hardware, software and cloud vendors fade away. I knew that basing TrueNAS SCALE on Linux would eventually lead to CORE getting sunsetted.
I do have to say that I am a bit of a hypocrite in that I migrated from CORE to SCALE on my TrueNAS Mini X+ and only have one server running FreeBSD (even though the cloud provider no longer provides support).
So I'm using #ZFS on #TrueNAS and today I noticed that "auto-trim" is turned off on my ZFS pool. "Hmm," I asked myself "what is TRIM on ZFS?"
After a few minutes of searching, I have no idea what TRIMming does. I know a hundred ways to do "it" manually or automatically. But I don't know what it DOES.
So I finally found this presentation from 2019 that pretty well lays out what it is and why it exists. My drives, however, are rotating magnetic drives (just like in Victorian times), so I'm not sure there's any value in TRIMming my ZFS. Thoughts?
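The short answer to the question above: TRIM tells an SSD which blocks are no longer in use so its controller can erase them ahead of time, and it does nothing useful for spinning disks. The relevant commands, with a hypothetical pool name `tank`:

```shell
# Check whether automatic TRIM is enabled on the pool:
zpool get autotrim tank

# Enable it (only beneficial on SSDs; ZFS skips devices
# that don't support TRIM, so it's pointless on spinning rust):
zpool set autotrim=on tank

# Or run a one-off manual TRIM and watch its progress:
zpool trim tank
zpool status -t tank
```

So for an all-HDD pool, leaving auto-trim off is the right call; there's nothing to TRIM.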
@technotim is back with @adam to discuss the state of homelab in 2024 🧑🔬
They discuss homelab environments providing a safe place for experimentation and learning, network improvement as a gateway to homelab, trends in network connection speeds, to Unifi or not, storage trends, ZFS configurations, TrueNAS, cameras, home automation, connectivity, routers, pfSense, and more.
Weekend project: upgraded our #trueNAS system with #ZFS and a RAIDz1 pool from 4 TB to 16 TB. Easy peasy! #FreeBSD with ZFS was one of the best choices for our NAS. Combined with #restic it is the best backup solution.
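A capacity upgrade like this is typically done by swapping disks one at a time and letting ZFS resilver between swaps; a sketch with hypothetical pool/device names (on TrueNAS the same steps are driven from the UI):

```shell
# Grow the pool automatically once all disks are larger:
zpool set autoexpand=on tank

# Replace one 4 TB disk with a 16 TB disk, then wait for
# the resilver to finish before touching the next disk:
zpool replace tank ada1 ada5
zpool status tank

# ...repeat for each remaining disk in the RAIDz1 vdev...

# If the extra space doesn't appear after the last swap:
zpool online -e tank ada5
```

With RAIDz1 the pool only grows once *every* disk in the vdev has been replaced, and each resilver runs with no redundancy to spare, so good backups (restic, as above) matter during the swap.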
Let me present my next nerdy project: building my own NAS.

I'm going to replace a 2-disk QNAP with a home-built 4-disk system. I already have the case, power supply, processor, RAM, an SSD (recycled from the one I swapped out of my laptop), and I'm reusing the hard drives from the QNAP. All that's missing is the motherboard.

I'm going to use TrueNAS SCALE as the operating system.
I had the server running #TrueNAS for quite a while, but when I tried to install applications the system struck me as very inflexible and hard to customize. It's designed to be managed from the web interface, so when you want to use the command line things get complicated, and you can even lose your changes across updates, so I finally decided to wipe everything and start over from scratch with my home NAS.