Since ZFS (zfsutils-linux) is still not installable on Armbian due to unresolvable package conflicts, and I need a working solution fairly quickly, yesterday I put in some effort to switch to BTRFS and am now having a proper look at it.
It's overdue anyway that I engage with it more thoroughly.
filesystem defenders act like it's natural for a computer to have a filesystem. meanwhile filesystem implementors are hard at work convincing me that not only is a filesystem a bad idea, it's also virtually impossible to implement any nontrivial optimizations in one without catastrophic data loss bugs
On the proper use of caching, compression, and encryption
A 300 GB Postgres database that drops to 100 GB with compression applied before encryption and storage on SSD. As a result, everything fits in RAM and there is almost no I/O anymore
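The compress-then-encrypt ordering described above is exactly what OpenZFS does natively, so both properties can simply be set on the same dataset. A minimal sketch, assuming OpenZFS native encryption is available; the pool name "tank", the device path, and the dataset name are hypothetical:

```shell
# Sketch only; "tank" and the device path are placeholders.
# OpenZFS compresses records before encrypting them, so the
# compression win described above survives encryption at rest.
zpool create tank /dev/nvme0n1

# recordsize=8k matches the default Postgres page size.
zfs create -o compression=lz4 \
           -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o recordsize=8k \
           tank/pgdata

# Check how much the database actually shrinks on disk:
zfs get compressratio tank/pgdata
```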
In #postgresql database land today. Thinking through striping and mirroring on #ZFS. Klara Systems has unpacked a lot of this:
... This means two things: solid state and mirrors. SSD drives provide far lower latency than conventional drives possibly can. And mirrors provide far better random access performance than RAIDz can—particularly when we’re talking about small random access ops, which can’t be effectively spread across tons of disks in a wide striped vdev...
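The mirrors-not-RAIDz layout from the excerpt above can be sketched as a pool of several two-way mirror vdevs; device paths and the pool name here are hypothetical:

```shell
# Sketch of the SSD mirror layout recommended above.
# Two mirror vdevs: writes are striped across both vdevs, and
# small random reads can be served from either side of each
# mirror, which RAIDz cannot do.
zpool create pgpool \
    mirror /dev/disk/by-id/nvme-SSD_A /dev/disk/by-id/nvme-SSD_B \
    mirror /dev/disk/by-id/nvme-SSD_C /dev/disk/by-id/nvme-SSD_D

zpool status pgpool
```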
Am I missing something obvious with #zfs? All the vdevs in my pool have names like /dev/disk5s1, and the numbers can change depending on what order they power up. I can't find any way of giving them names like "ssd-with-red-sticker-on", "ssd-with-blue-sticker-on" and so on, which would make swapping out a failed disk soooo much easier.
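On Linux at least, stable names do exist: `/dev/disk/by-id` paths are derived from drive serial numbers and survive enumeration-order changes, and GPT partition labels give exactly the human-friendly names wished for above. A sketch, with hypothetical labels and pool name:

```shell
# Give each partition a human-readable GPT label (labels are
# hypothetical; sgdisk -c/--change-name takes partnum:name).
sgdisk --change-name=1:ssd-red-sticker /dev/sda
sgdisk --change-name=1:ssd-blue-sticker /dev/sdb

# Build the pool from the stable label paths:
zpool create tank mirror \
    /dev/disk/by-partlabel/ssd-red-sticker \
    /dev/disk/by-partlabel/ssd-blue-sticker

# An existing pool can be re-imported under stable names:
zpool export tank && zpool import -d /dev/disk/by-id tank
```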
Last I tried screwing around with #zfs, #KVM and #qemu on #slackware I had a bunch of fun with scripts from sbo and dependency hell – since neither is an official slack package.
How does this fare in 2024 if I were to try to get a headless host for my stuff (qubes-like but I like pain)?
This post is really a small collection of thoughts about Proxmox when used in a home lab situation and home labs in general. I was originally going to post this to Mastodon only but it didn't fit in a single post.
A lot of people (at least what I see on reddit) build Proxmox systems with shared file systems like ceph, even for home lab use.
As my old NUC was showing its age, and didn't suffice for my #homelab needs anymore, I decided to build a new one. And because I prefer running all my #selfhosted services in containers, but abhor fucking around with #docker and docker-compose, it's a single-node #k8s cluster, using #k3s, just like my old server. One big difference is that the new server has a decent amount of drives for storage. I decided to set up #zfs to manage that, and zfs is all it's cracked up to be.
ZFS on root is back in the Ubuntu installer but there’s a better way to do it, next-generation hard drives are proving to be reliable but prices are going up thanks to storage-hungry AI, why getting started with ZFS is really easy, and the best filesystem for a single SSD (take a guess).
Today I pondered something: Proxmox and others boast native ZFS integration as one of their strengths. Many Proxmox features rely on ZFS's unique capabilities, and many setups are built around them. If Oracle were to send a cease and desist tomorrow, how would the situation unfold?
I just asked Llama3 Instruct (7B Q8): "Tell me all you know about ZFS on OpenBSD"
Llama3 replied: "A great topic!
ZFS is a combined file system and logical volume manager designed by Sun Microsystems, now owned by Oracle. It's known for its advanced features, high performance, and reliability. On OpenBSD, ZFS is available as part of the base system since version 5.4 (released in 2011). Here are some key aspects of using ZFS on OpenBSD:
Installation
To install ZFS on OpenBSD, you'll need to specify zfs as your file system during installation. This will create a ZFS pool with a single vdev (virtual device) containing the root file system.
ZFS Pool
A ZFS pool is the top-level container for all ZFS storage. It can contain one or more virtual devices (vdevs), which are used to store data and metadata. On OpenBSD, you'll typically create a single pool with a single vdev containing the root file system.
VDEVs
Vdevs are the building blocks of a ZFS pool. They're composed of one or more disk devices, and can be configured in various ways (e.g., RAID-Z, mirror, single-disk). OpenBSD supports the following vdev types:
raidz: A type of RAID-5, providing redundancy with data striping.
mirror: A simple mirroring configuration for each block.
single: A single disk vdev, used for a standalone pool or for testing.
ZFS Features
Some notable ZFS features available on OpenBSD:
Snapshots: Instantaneous copies of your file system at a specific point in time. You can take snapshots, clone them, and even revert to previous states if needed.
Send/Receive: Allows you to send and receive entire ZFS pools or datasets between systems.
Checksums: Data integrity checks are performed on all data written to disk.
Compressed: Compression is supported for both data and metadata, reducing storage requirements.
Quotas: Limits can be set for individual datasets to prevent excessive usage.
OpenBSD-specific ZFS settings
Some OpenBSD-specific configuration options:
zfs_load: Controls whether the ZFS module is loaded at boot time. Set it to YES in /etc/sysctl.conf to enable ZFS support.
zfs_root: Specifies the root file system dataset name. Typically set to tank/root.
zfs_pool: Sets the default pool name (e.g., tank).
ZFS CLI
The OpenBSD ZFS command-line interface (CLI) is similar to other Unix-like systems:
# zpool list # List all pools and their status
# zfs list # List all datasets in a pool
# zfs snapshot # Create a new snapshot of a dataset
# zfs clone # Clone a snapshot or dataset
# zfs send # Send a dataset or snapshot to another system
Tips and Considerations
Be cautious when using ZFS, as some operations (e.g., zpool destroy) can be destructive.
Regularly check your pool's health with zpool status and ensure you have enough disk space.
Use zfs snapshots to create regular backups of your file systems.
This is just a brief overview of using ZFS on OpenBSD. If you have specific questions or concerns, feel free to ask!"
We're safe. And I should ask Theo why he's always lied and never announced ZFS support on OpenBSD. 😆
It is "World #Backup Day", at least according to WorldBackupDay.com. I like the idea of having such a day, to serve as another nudge and a reminder to make and check backups, though WorldBackupDay.com is awkward and does not even mention rsync in its software section. The "com" TLD looks suspicious, too, but it is better than nothing (except for potential private data leaks with online backup services).
I use primarily encrypted external HDDs (#ZFS or #LUKS with #ext4) and #rsync for personal backups, including rsync with "--dry-run --checksum" for scrubbing and checking before synchronization; quite happy that such tools are available, even though they are usually taken for granted, as are many other neat FLOSS tools we use regularly. Planning to add a USB stick to the list of storage devices, since it should be less fragile mechanically (even though less reliable otherwise).
Sharing some technical details about how I'm setting up the hosted email service. It will not be a service of BSD Cafe but tied to my own business. It will run entirely on BSD systems and on bare metal, NOT on "cloud" VPS. It will use FreeBSD jails or OpenBSD or NetBSD VMs (but on bhyve, on a leased server - I do not want user data to be stored on disks managed by others). The services (opensmtpd and rspamd, dovecot, redis, mysql, etc.) will run in separate jails/VMs, so compromising one service will NOT put the others at risk.

Emails will be stored on encrypted ZFS datasets - so all emails are encrypted at rest - and only dovecot will have access to the mail datasets. I'm also considering the possibility of encrypting individual emails with the user's login password - but I still have to thoroughly test this.

The setup will be fully redundant (double MX for SMTP, plus a domain for external IMAP access managed through smart DNS - which will distribute the connections on the DNS side and, in case of a server going down, will stop resolving its IP, sending all the connections to the other). Obviously, everything will be accessible over both IPv4 and IPv6, in two different European countries, on two different providers. Synchronization will occur through dovecot's native sync (extremely stable and tested). All technical choices will be clearly explained - the goal of this service is to provide maximum transparency to users on how things will be handled.
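The encrypted-dataset part of a setup like the one above can be sketched with OpenZFS native encryption; the pool name "zroot", the dataset name, and the mountpoint are hypothetical:

```shell
# Sketch only; names and mountpoint are placeholders.
# A dedicated encrypted dataset for mail storage:
zfs create -o encryption=aes-256-gcm \
           -o keyformat=passphrase \
           -o mountpoint=/var/mail/vhosts \
           zroot/mail

# Only the jail/VM running dovecot loads the key and mounts it:
zfs load-key zroot/mail && zfs mount zroot/mail

# With the key unloaded, the data is inaccessible at rest:
zfs unmount zroot/mail && zfs unload-key zroot/mail
```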
This looks too good not to experiment with. You have interactive modes, can query for files and their restoration -- only thing that's missing for me a.t.m. is changing my Mac's APFS volume to ZFS 😞