I was settling in to tinker with #ansible today for work and I just discovered a host called "docker0" on my #proxmox server here at the house. I have no entry for it in my #ssh config or in #keepassxc. I have no clue what users are present on the system or what its purpose is. There do appear to be some #terraform state files lying around that might be related to it?
Current plan: murder it and see which family member starts to complain so I can identify what service(s) it's running.
If I have 2x 10 GbE available, which option is better/more sensible?
Bonding to 20 GbE?
Bonding to 20 GbE with VLANs?
Separate links for Proxmox and Ceph, 10 GbE each?
Over a single 10 GbE link I manage roughly 300-500 MB/s with these CPUs. With bonding, my guess is that it could roughly double, because two cores could then work in parallel.
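For reference, a minimal sketch of what an LACP bond with a Ceph VLAN could look like in Proxmox's /etc/network/interfaces (interface names, addresses, and the VLAN ID are placeholders; 802.3ad mode needs LACP configured on the switch side, and any single TCP stream still tops out at one link's bandwidth — the doubling only shows up across multiple parallel connections):

```
auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4
    bond-miimon 100

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0

# Hypothetical VLAN 40 carrying the Ceph traffic over the same bond
auto bond0.40
iface bond0.40 inet static
    address 10.40.0.10/24
```

With layer3+4 hashing, different Ceph OSD connections can land on different links, which is roughly where the "two cores in parallel" hope would come from.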
Successfully installed Netdata on my Proxmox homelab host for monitoring. I thought I wouldn't need monitoring with notifications, but yesterday I looked at the host stats and saw that CPU usage had spiked a few days ago... Took me a while to figure out, but the issue was that one VM had run out of disk space because of InfluxDB...
Now I get a notification via Telegram if something is wrong - nice!
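For anyone wanting the same, Telegram alerts are switched on in Netdata's health_alarm_notify.conf; a minimal sketch (the bot token and chat ID below are placeholders — you create the bot via @BotFather):

```
# /etc/netdata/health_alarm_notify.conf
# (open with: /etc/netdata/edit-config health_alarm_notify.conf)
SEND_TELEGRAM="YES"
TELEGRAM_BOT_TOKEN="123456:ABC-placeholder-token"   # placeholder
DEFAULT_RECIPIENT_TELEGRAM="-1001234567890"         # placeholder chat ID
```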
Ha! Jellyfin running with full HW acceleration inside an LXC container in Proxmox, next to Home Assistant, the -arr suite, FreshRSS, Nextcloud and Immich! (And a Win 10 VM if needed, with GPU passthrough.) Sipping just ~12 W while still being fast enough. I love this setup and may even have two Rpi4 2GB units to sell ;)
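For the curious: exposing the host's /dev/dri render device to an LXC guest for VAAPI transcoding typically boils down to a couple of lines in the container config. A sketch (container ID 101 is an example; this assumes a cgroup v2 host, and character major 226 is the DRM subsystem):

```
# /etc/pve/lxc/101.conf
lxc.cgroup2.devices.allow: c 226:* rwm
lxc.mount.entry: /dev/dri dev/dri none bind,optional,create=dir
```

Inside the container, the Jellyfin user also needs permission on the device nodes (e.g. membership in the group owning /dev/dri/renderD128).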
Got slow-to-start #proxmox VMs timing out during backup?
Finally got around to looking at why our gaming VM wasn't getting backups and walked down the codepath. Obviously the nice long-term solution would be a global or per-VM timeout setting (ideally both), but for now this stopgap works (tm):
Well, that's it: I've bought a fanless x86 mini PC; I'll install Proxmox on it and run my stuff that way. The Rpi4 is powerful enough for almost everything I need, but it's very brittle and sensitive. I've never had luck using an SSD with it for some reason (probably an overheating USB controller), and the form factor, with its flimsy power cable, is anything but robust. I need more stability in my life ;)
@stooovie Good call! I migrated away from my armada of #raspberrypis last year (to an old Zotac that runs only #homeassistant, a dedicated #nas, and a beefy OptiPlex for #plex et al.) and have never looked back. I guess running constantly at the limits of performance isn't good for any system. #proxmox is nice, but centralising everything onto one machine, even in containers and virtual machines, has not worked well for me. I hope you have better luck!
Today's lesson - don't add a new host to a #Proxmox cluster while one of the existing members is down: the new node raises the expected vote count, the cluster loses quorum, and it all stops working. Crash course in rebuilding/fixing.
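For anyone hitting the same wall: the usual escape hatch when a cluster has lost quorum is to check the vote state and, as a strictly temporary measure on one node, lower the expected vote count so /etc/pve becomes writable again while you repair things (a sketch, use with care — this weakens split-brain protection until the cluster is healthy):

```
pvecm status       # shows membership, votes, and whether the cluster is quorate
pvecm expected 1   # temporary: let this node reach quorum with a single vote
```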
Bought a couple of NUCs for small #Proxmox servers. All absolutely fine, except it's impossible to configure the BIOS without a mouse. Most settings are fine with just a keyboard, but anything that needs an option selected from a list (specifically power state), forget it. Almost half an hour of hunting before I found a spare mouse. Grr.
I can see why #proxmox would be considered more of a hobbyist hypervisor!! Wanted to pass through an iGPU, and wow, the hurdles I had to jump through... Still not really working right. Partly because it's a pretty niche thing I wanted to do, but the process itself is pretty involved.
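For context, the rough shape of iGPU passthrough to a VM on an Intel host is: enable the IOMMU, keep the host's driver off the device, bind it to vfio-pci, then attach it to the guest. A condensed sketch (the PCI vendor:device ID 8086:9bc8 and VM ID 100 are examples — use whatever `lspci -nn` reports on your box; details vary a lot by kernel and hardware, which is exactly where the hurdles live):

```
# /etc/default/grub — enable the IOMMU, then run update-grub and reboot
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modprobe.d/vfio.conf — bind the iGPU to vfio-pci at boot
# (replace 8086:9bc8 with your ID from `lspci -nn`)
options vfio-pci ids=8086:9bc8

# /etc/modprobe.d/blacklist.conf — keep the host driver off the device
blacklist i915

# /etc/pve/qemu-server/100.conf — attach the device to the VM
hostpci0: 0000:00:02.0
```

After editing the modprobe files, regenerate the initramfs (`update-initramfs -u`) so the changes take effect at boot.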
@AngryAnt Thanks a lot for the welcome!! As you say, I'm enjoying it a lot!! Yes, the next step is to look into how to do backups with #ProxmoxBackupServer. A long road ahead to travel and learn!! #proxmox