Anybody running a desktop environment on the same machine as #Proxmox? Because of space/location constraints, I want the same computer to run Proxmox and KDE. I know it's not recommended, but if I can make it work relatively securely and dependably, it would make my setup much easier.
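For what it's worth, since Proxmox VE sits on a plain Debian base, a desktop can be layered on with ordinary packages. A minimal sketch, assuming PVE 8 (Debian 12 base) and accepting the extra attack surface this puts on a hypervisor host:

apt update
apt install kde-plasma-desktop sddm   # Plasma desktop plus a display manager
systemctl enable --now sddm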
OK, I'm completely out of ideas; I need some #help with #networking on this server... If you have an idea, or if you can share this around, I'm at the end of my rope.
The server running this instance is on #proxmox. Until now, everything was fine.
Yesterday, after a crash, I had to reboot the server (a VPS at Ionos). After the restart, I couldn't reach anything: the Proxmox interface, the services in the containers, nothing.
The configuration was as follows:
An external interface (ens6) and 2 bridges:
vmbr0, with ens6 as its bridge port (so it holds the public IP), used for administration
vmbr1, with an IP in a 192.168.2.0/24 network, serving the containers (a reverse proxy on one and Docker on the other)
In my interfaces file, I have this for vmbr1:
After some debugging, I saw that traffic flowed again as soon as I brought vmbr0 down.
So, as an emergency fix, I changed my rules to replace vmbr0 with ens6, the name of the "physical" interface.
With that, everything came back: access to the Proxmox interface, access to the containers... everything except one important thing: the server can no longer reach its own public IP.
For example, the instance can't send mail (the mail container sits behind the same IP), it doesn't work even from the Proxmox shell... and the reverse-proxy container can't renew its certificates either.
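That last symptom looks like classic hairpin NAT: traffic originating behind the bridge that targets the public IP never gets re-translated. A minimal sketch of reflection rules, assuming hypothetical values (PUBLIC_IP for the VPS address, a container at 192.168.2.10 behind a plain DNAT port forward on ens6):

# Re-apply the DNAT for traffic arriving from the internal bridge:
iptables -t nat -A PREROUTING -i vmbr1 -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
# Masquerade the hairpinned traffic so replies return via the host:
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 192.168.2.10 -p tcp --dport 443 -j MASQUERADE

The host's own shell traffic goes through the nat OUTPUT chain rather than PREROUTING, so a matching -t nat -A OUTPUT rule would be needed to cover the Proxmox-shell case.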
If I have 2x 10 GbE available, what's the better/more sensible option?
Bonding at 20 GbE?
Bonding at 20 GbE with VLANs?
Separate links for Proxmox and Ceph at 10 GbE each?
Over a single 10 GbE link I get roughly 300-500 MB/s with these CPUs. With bonding, my guess is that this could roughly double, because two cores can then work in parallel.
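For reference, a minimal sketch of an LACP bond in /etc/network/interfaces, assuming hypothetical NIC names (enp1s0f0/enp1s0f1) and a switch configured for 802.3ad. One caveat: LACP balances per flow, so a single TCP stream still tops out at 10 GbE; the doubling only shows up with multiple parallel streams, which Ceph does generate.

auto bond0
iface bond0 inet manual
    bond-slaves enp1s0f0 enp1s0f1
    bond-miimon 100
    bond-mode 802.3ad
    bond-xmit-hash-policy layer3+4

auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports bond0
    bridge-stp off
    bridge-fd 0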
So, on Cassiope Proxmox Server:
1.) Plex container in a Debian template
2.) Xubuntu VM
Note: I didn't realize that in Proxmox, if you run a VM with a GUI, the console shows the GUI. That's why I can now make my media server entirely self-sufficient: I can run both Plex and HandBrake in there.
Currently Plex is running pretty well; I got some friends to test it in several states and three countries. But there is more buffering than I like.
I decided to do a little more experimentation with Proxmox! Since moving my Proxmox VMs to my Intel NUC cluster, I've had to tighten up my resources, specifically RAM. I went from 256GB to only 64GB, so I turned memory ballooning back on for all of my VMs because I noticed that swap usage was growing.
Now KSM is reporting almost 11GB on one of my nodes! It seems almost 11GB of RAM has been reclaimed by deduplicating identical pages in memory! Are these results typical?
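If anyone wants to check the raw numbers, the kernel exposes KSM counters under /sys; a rough sketch of estimating the savings, assuming standard 4 KiB pages:

# pages_sharing ~ duplicate pages folded into shared copies, i.e. the savings
awk '{printf "%.1f GiB saved\n", $1 * 4096 / 1024^3}' /sys/kernel/mm/ksm/pages_sharing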
It's kinda mesmerizing watching the core speeds on one of the i7-12700Ts in the #Proxmox Cluster (the one that runs Tiggi.es presently) dynamically shift anywhere from 1,400 MHz to 4.62 GHz, quickly and on an as-needed basis. The low base speed saves energy, but it'll instantly pop cores up to speeds within its thermal/power limit envelope, so for lightly threaded workloads it can perform as well as beefier chips. #homelab #servers #cpu
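For anyone who wants the same show in a shell, a standard Linux way (nothing Proxmox-specific) to watch the per-core clocks tick:

watch -n1 'grep "cpu MHz" /proc/cpuinfo'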
Something happened either with the new #Proxmox 8.1 or the latest #nextcloud AIO (7.7.0), but it's noticeably faster. Nextcloud Office documents open within 1-2s, compared to roughly 5-6s before.
Let me share a trick I hadn't yet had the occasion to use with #proxmox (I only run it at home, not professionally).
A VM with a disk that was poorly sized from the start. Anyway, I saw that you can do something like this:
qm resize [VM_ID] [DISK_NAME] +[SIZE_INCREASE]G
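For instance, with hypothetical values, growing disk scsi0 of VM 104 by 20 GB:

qm resize 104 scsi0 +20G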
I then booted into a SystemRescueCd, ran a quick GParted, and finally grew the LVM volume (I didn't want to add an extra disk).
lvextend -l +100%FREE /dev/mapper/vg--root-lv--root
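One step that's easy to miss after lvextend, unless GParted already handled it: the filesystem itself still needs growing. A sketch, assuming an ext4 root (XFS would use xfs_growfs instead):

resize2fs /dev/mapper/vg--root-lv--root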
1/2
My #Proxmox journey started a month ago. I moved all my #docker containers and my #HomeAssistant VM from my #Synology NAS to a testing MiniPC with Proxmox.
This test was very positive, so I decided to move to a new server. I'm using a cluster to migrate my LXCs and VMs, and I'm very impressed by how simple the process is for a newbie like me. I just need to remove my snapshots first.
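In case it helps another newbie, snapshots can be listed and removed from the node's shell before migrating (the IDs and snapshot names below are made up):

qm listsnapshot 100           # snapshots on VM 100
qm delsnapshot 100 pre-update
pct listsnapshot 101          # same for an LXC container
pct delsnapshot 101 pre-update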
For quite a while now, I have relied on a terminal into Windows Subsystem for Linux on my main workstation as my daily driver. While it works all right in most cases, there are certain compatibility issues that require a "... in WSL" search term for documentation/issues.
For close to a month now, I have been using a #Ubuntu #terminal-only VM on my #homelab #Proxmox cluster. For those who can roll this out, this seems like the best approach.
Here's an early #ff for you all :) If you're technical and self identify as "sysadmin" even just a little bit, you should be following @jimsalter - He wrote for @arstechnica for quite a while and now drops science on us all every week on the @25admins podcast. I have learned a TON about #ZFS and *NIX in general from him through the years and prize his advice rather highly.
Except for his dislike of #Proxmox, which I personally think is a pretty great way to herd VMs for non-production use :)
Four years of running #homeassistant from a (fast and resilient, but still) SD card conditioned me to be super careful and judicious with things like add-ons, backups, and turning off unnecessary entities to minimize writes to the card... Running HA as a #Proxmox VM on an NVMe drive is so much nicer and faster, and none of that care is necessary.
Spent much of the day working towards this. I'm still failing. 😦
Proxmox is up on VLAN 5. I can get at the UI over the network, so I know the trunk is letting that VLAN through, but I can't figure out how to reach OPNsense. No traffic seems to pass the bridge, though the docs say that bit should just work.
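For comparison, here's a sketch of the /etc/network/interfaces shape I'd expect to work, assuming hypothetical names (physical NIC eno1, host management on VLAN 5); the OPNsense VM's virtual NIC would then get its VLAN tag set in the VM's hardware options:

auto vmbr0
iface vmbr0 inet manual
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0
    bridge-vlan-aware yes
    bridge-vids 2-4094

auto vmbr0.5
iface vmbr0.5 inet static
    address 192.168.5.2/24
    gateway 192.168.5.1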
As an experiment, I decided to shut down one of my 1U servers running Proxmox to see if my Intel NUC could handle the workload. To my surprise, it handled all 6 VMs without breaking a sweat!
My 1U server was pulling 140 watts!
Intel NUC is only pulling 26 watts!
Going to let this run for a few days but impressed so far!
Ooh, I figured out how to get #macOS #Sonoma installed on my new #proxmox server. This is exciting for me because my creative workstation is overworked, and one of the things it does is operate a 512GB #iCloud Content Cache.
And if you have a household of more than 3 devices that use iCloud, you should probably consider running one. Even if you only used it for OS updates, you'd be happy you did. Probably. (As far as I know, there aren't any easily exploited privacy risks.)
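If you'd rather script it than click through System Settings, macOS ships a CLI for Content Caching; a sketch (subcommand behavior may vary by macOS version):

sudo AssetCacheManagerUtil activate   # turn Content Caching on
AssetCacheManagerUtil status          # inspect cache state and size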
I’ve officially got itchy feet. I really want to jump to Nix in as many places as I can.
I’m contemplating spending some of my long weekend replacing my existing, extremely mature and well-tested infra repo with #nixos in as many places as possible.
Core hypervisor host will likely still be #proxmox though.
I have a #proxmox problem where it looks like the IDs in the web GUI are wrong.
If I connect to the command prompt, it connects to the wrong VM. #homelab
Anybody see that problem before?
And know what to do perhaps?
It happens on #alpine VMs without kvm-client running.
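A sanity check from the node's shell that might help narrow this down (the VMID below is just an example): compare what the GUI shows against what the cluster actually has registered.

qm list          # VMIDs, names, and status as Proxmox sees them
qm config 101    # inspect a specific VM's definition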