ZFS on root is back in the Ubuntu installer but there’s a better way to do it, next-generation hard drives are proving to be reliable but prices are going up thanks to storage-hungry AI, why getting started with ZFS is really easy, and the best filesystem for a single SSD (take a guess).
Client (a bit clumsy but positive and honest) calls: "Help! Just came back from lunch break and accidentally deleted all files on the file server!"
Me, unfazed: "Alright, besides you, who else worked on the file server during lunch break?"
Client: "No one, we were all away and I'm the first one back. Others will be back by 15:00"
Me, looking at the clock and noticing it's 14:30: "Okay, what time did you go to lunch?"
Client: "At 13:30. How long will it take to restore from the backup? Do you think we'll be able to work tomorrow?"
Me, without flinching as I type `zfs rollback dataset@13:45-snapshot`: "Done"
Client: "All the files reappeared!"
Me: "Thank #ZFS, #FreeBSD, and whoever set up automatic snapshots every 15 minutes."
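The hero of that story is the 15-minute snapshot schedule, and that part is easy to reproduce. Here is a minimal sketch using a cron entry with a made-up dataset name (tank/files); in practice most people reach for a tool such as sanoid, zfsnap, or zfs-auto-snapshot instead:

```
# /etc/crontab -- snapshot tank/files every 15 minutes, named after the current time
# (% must be escaped as \% inside a crontab entry)
*/15 * * * * root /sbin/zfs snapshot tank/files@auto-$(date +\%H:\%M)
```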
I have an externally powered SATA backplane with 4 drive bays that attaches 3 hard disks and 1 SSD (used as a cache) to my Ubuntu Linux computer via USB. The operating system sees each device as an external USB drive. I have created a RAID-Z pool with all of the devices attached to this backplane.
This morning I was running a "scrub" when a rolling blackout occurred. When I rebooted, I got this message:
```
root@ubuntusys:/# zpool import
   pool: aquifer
     id: 5891443854XXXXXXXXX
  state: FAULTED
 status: The pool metadata is corrupted.
 action: The pool cannot be imported due to damaged devices or data.
        The pool may be active on another system, but can be imported using
        the '-f' flag.
    see: https://openzfs.github.io/openzfs-docs/msg/ZFS-8000-72
 config:

        aquifer     FAULTED  corrupted data
          raidz1-0  ONLINE
            sdb     ONLINE
            sdc     ONLINE
            sdd     ONLINE
```
I have already tried `zpool import -f -Fn`, and it immediately comes back with the same error message.
According to ZFS-8000-72, this is a totally fatal error with no way to recover without some other backup (I have no other backup).
It is pretty incredible to me that a single power failure can completely nuke a previously healthy RAID-Z system. I have to be missing something here.
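For anyone who lands in the same spot: the `-Fn` dry run tried above is the first rung of the usual escalation ladder; the next steps are a forced read-only import with transaction rewind, and then the more aggressive extreme rewind. This is only a sketch of those commands (the pool name comes from the output above), not a promise that they will recover this particular pool:

```
# Read-only recovery import: discard the last few transaction groups if needed
zpool import -o readonly=on -f -F aquifer
# More aggressive: allow ZFS to rewind further back to an older, consistent state
zpool import -o readonly=on -f -FX aquifer
```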
OK, I'm interested to find out how many of you #Linux laptop users who use an encrypted root partition of some description actually use hibernate, a.k.a. suspend-to-disk?
Feel free to leave your reasons for using it or not below.
#Linux #ZFS laptop users, do you have any recommended settings that you add to /etc/modprobe.d/zfs.conf?
Things like zfs_arc_max, zfs_txg_timeout, and so on?
I'm running on a 1 TB NVMe drive and have 16 GB of RAM currently, but I'll get around to upgrading it to its 40 GB max soon.
Any tips appreciated. 😉
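Not an answer from the thread, just a sketch of what such a file can look like; zfs_arc_max and zfs_txg_timeout are documented OpenZFS module parameters, but the values below are purely illustrative:

```
# /etc/modprobe.d/zfs.conf -- illustrative values, tune for your own workload
# Cap the ARC at 4 GiB (value is in bytes) so most of the 16 GB stays free for applications
options zfs zfs_arc_max=4294967296
# Seconds between transaction group commits (5 is the default; raising it batches writes)
options zfs zfs_txg_timeout=5
```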
Weekend project: upgraded our #trueNAS system with #ZFS and a RAIDz1 pool from 4 TB to 16 TB. Easy peasy! #FreeBSD with ZFS was one of the best choices for our NAS. Combined with #restic it is the best backup solution.
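For anyone wondering what "easy peasy" looks like in practice: the standard approach is to enable autoexpand, replace each disk in turn, and let the pool grow once the last small disk is gone. A rough sketch with a hypothetical pool name and device names:

```
# Let the pool grow automatically once every member disk is larger
zpool set autoexpand=on tank
# Swap one 4 TB disk for a 16 TB disk, wait for the resilver, then repeat for the rest
zpool replace tank ada1 ada5
zpool status tank   # check resilver progress before touching the next disk
```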
This time a short one, I'm presenting my small weekend project, a NAS running ZFS on a Raspberry Pi Zero. The idea was to have a RAID1 mirror using two USB drives, but that did not turn out so well. But hey, it's the journey that matters. And it's running ZFS so it's serious stuff.
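For context, the two-drive mirror being attempted there is normally a one-liner; a minimal sketch with made-up pool and device names:

```
# Create a two-way mirror from two USB drives; ashift=12 assumes 4K-sector devices
zpool create -o ashift=12 tank mirror /dev/sda /dev/sdb
```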
I want to buy a pre-built and tested PC for running #TrueNAS (specifically #TrueNAScore) with sufficient CPU and memory to do several TB of #ZFS well, with an internal SSD for the OS, an internal SSD or similar for cache, at least 10 front-facing hot-swappable SATA bays, a whatever-the-hell-it-is port so I can add an external box with more drives in the future, from a UK seller. Can any of you recommend anyone?
Today I pondered something: Proxmox and others boast native ZFS integration as one of their strengths. Many Proxmox features rely on ZFS's unique capabilities, and many setups are built around them. If Oracle were to send a cease and desist tomorrow, how would the situation unfold?
This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.
A lot of people (at least from what I see on Reddit) build Proxmox systems with shared file systems like Ceph, even for home lab use.
On the proper use of caching, compression, and encryption:
A 300 GB Postgres database drops to 100 GB with compression applied before encryption and storage on SSD. As a result, everything fits in RAM and there is almost no I/O left.
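The post doesn't show the dataset settings, but on ZFS that combination is just a few properties, since compression is applied before encryption at the block level. A sketch, assuming an existing pool named tank and illustrative property values:

```
# Compressed-then-encrypted dataset for PostgreSQL; recordsize=16k is a common starting point
zfs create -o compression=lz4 -o encryption=on -o keyformat=passphrase \
    -o recordsize=16k -o atime=off tank/postgres
```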
filesystem defenders act like it's natural for a computer to have a filesystem. meanwhile filesystem implementors are hard at work convincing me that not only is a filesystem a bad idea, it's also virtually impossible to implement any nontrivial optimizations in one without catastrophic data loss bugs
I do love these new SSDs that I got over the weekend. They're soooo quiet, nice and fast, in particular with no latency when they spin up. That's all my toy budget for the month gone, but I think I'll buy the same again next month to replace the volume that stores my backups. That's still got oodles of space left, but the quiet is nice, and while spin-up time doesn't matter for my backups, having basically zero seek time will really help a lot.
Re: toots from a few days ago, I'm using #APFS instead of the much better #ZFS, because ZFS just didn't work very well on macOS when I played around with it a few days ago. So in the slightly longer term I'm looking for a cheap machine on which I can run #FreeBSD and ZFS, which will support at least 6 x 2.5" SSDs in its own chassis, all hot-swappable without opening it up and without tools, with at least two eSATA ports. Recommendations for something which will Just Work with FreeBSD please!