Sergio, to proxmox
@Sergio@fosstodon.org

Anybody running a desktop environment on the same machine as Proxmox? Because of space/location constraints, I'm wanting to have the same computer run Proxmox and KDE. I know it's not recommended, but if I can make it work relatively securely and dependably, it would make my setup much easier.
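For what it's worth, Proxmox VE 8 runs on top of Debian 12, so a desktop can in principle be pulled in with plain apt. A minimal sketch, assuming stock Debian package names, and keeping in mind that this is explicitly unsupported by Proxmox:

# On the Proxmox host itself (Debian 12 underneath) -- unsupported, at your own risk
apt update
apt install kde-plasma-desktop sddm
systemctl enable --now sddm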

I won't be upset if you Boost 😅

marud, to proxmox

Well, I'm completely out of ideas and I need help with Proxmox on this server... If you have an idea, or if you can share this around, please do; I really can't take it anymore.

The server running this instance sits on a Proxmox host. Until now, everything had been fine.
Yesterday, after a crash, I had to reboot the server (a VPS at Ionos). After the reboot, nothing was reachable anymore: the Proxmox interface, the services in the containers, nothing.

The configuration was as follows:

An external interface (ens6) and two bridges:

  • vmbr0, bridged onto ens6 (so carrying the public IP), used for administration
  • vmbr1, with an IP in a 192.168.2.0/24 network that serves the containers (a reverse proxy in one and Docker in the other)

For vmbr1 I have this in my interfaces file:

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE
        post-up /script/dnat.sh
        post-down iptables -t nat -D POSTROUTING -s 192.168.2.0/24 -o vmbr0 -j MASQUERADE

For the port openings, dnat.sh contains entries like this one (example for port 443):

iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
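For illustration, a dnat.sh in that style might look roughly like this; only the 443 entry comes from the post, the other destinations are made up:

#!/bin/sh
# Hypothetical sketch of /script/dnat.sh: one PREROUTING DNAT rule per published port
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 80  -j DNAT --to-destination 192.168.2.10:80
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
iptables -t nat -A PREROUTING -i vmbr0 -p tcp --dport 25  -j DNAT --to-destination 192.168.2.11:25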

After some debugging, I saw that traffic started flowing again when I brought vmbr0 down.

So, as an emergency fix, I changed my rules to drop vmbr0 and replace it with ens6, which is the name of the "physical" interface.

With that, everything seemed to be back: access to the Proxmox interface, access to the containers... everything except one important thing: the server can no longer use its own public IP.

For example, the instance can't send mail (the mail container sits behind the same IP), it fails even from the Proxmox shell... and the reverse-proxy container can't renew its certificates.
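(Side note for anyone reading along: that last symptom looks like the classic hairpin-NAT case. With the DNAT rules now matching only -i ens6, connections that the host or the containers make to the public IP never arrive on ens6, so they are never redirected. A rough, untested sketch of the kind of reflection rules that usually cover it, reusing the addresses from above; PUBLIC_IP is a placeholder:)

# Hairpin / NAT-reflection sketch -- PUBLIC_IP is a placeholder, one set of rules per service
iptables -t nat -A PREROUTING  -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
iptables -t nat -A OUTPUT      -d PUBLIC_IP -p tcp --dport 443 -j DNAT --to-destination 192.168.2.10:443
iptables -t nat -A POSTROUTING -s 192.168.2.0/24 -d 192.168.2.10 -p tcp --dport 443 -j MASQUERADE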

[1/2]

ij, to random German
@ij@nerdculture.de

A quick question about Proxmox and/or Ceph:

If I have 2x 10 GbE available, what is better/more sensible?

  1. Bonding to 20 GbE?
  2. Bonding to 20 GbE with VLANs?
  3. Separate 10 GbE links each for Proxmox and Ceph?

Over a single 10 GbE link I get roughly 300-500 MB/s with these CPUs. With bonding my guess would be that this could roughly double, because two cores can then work in parallel.
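A minimal sketch of what option 2 could look like in /etc/network/interfaces; the NIC names, the LACP mode, and the VLAN ID used for Ceph are assumptions, not from the post:

auto bond0
iface bond0 inet manual
        bond-slaves enp1s0f0 enp1s0f1
        bond-mode 802.3ad
        bond-xmit-hash-policy layer3+4

# dedicated Ceph network on a tagged VLAN over the bond
auto bond0.100
iface bond0.100 inet static
        address 10.10.10.1/24

# Proxmox management/VM bridge on the untagged bond
auto vmbr0
iface vmbr0 inet static
        address 192.168.1.10/24
        gateway 192.168.1.1
        bridge-ports bond0
        bridge-stp off
        bridge-fd 0

One caveat on the doubling assumption: with LACP hashing, a single TCP stream still lands on one physical link, so the gain mostly shows up with several parallel streams (which Ceph usually provides).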

seperis, to proxmox

Adventures with #Proxmox

So, on Cassiope Proxmox Server:
1.) Plex container in a Debian template
2.) Xubuntu VM

Note: I did not realize that in Proxmox, if you run a VM with a GUI, the console shows the GUI. Which is why I can now make my media server entirely self-sufficient; I can run both Plex and Handbrake in there.

Currently Plex is running pretty well; I got some friends to test it in several states and three countries. But there is more buffering than I like.

AngryAnt, to proxmox
@AngryAnt@mastodon.gamedev.place

Considering adding a cloud storage solution as an alternate cold storage mirror of about 40TB of drives.

Any recommendations? Feels like I’m going in circles trying to find a decently priced vendor with a viable interface.

technotim, to proxmox
@technotim@mastodon.social

I decided to do a little more experimentation with Proxmox! Since moving my Proxmox VMs to my Intel NUC cluster I had to tighten up my resources, specifically RAM. I went from 256GB to only 64GB, so I decided to turn memory ballooning back on for all of my VMs because I noticed that the SWAP was growing.

I noticed that now my KSM is almost 11GB for one of my nodes! It seems there is almost 11GB of RAM reclaimed because of deduping items in memory! Are these results typical?
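Those figures can be cross-checked against the kernel's own KSM counters; a quick sketch (pages_sharing times the page size, usually 4 KiB, roughly equals the memory saved):

cat /sys/kernel/mm/ksm/pages_sharing
echo $(( $(cat /sys/kernel/mm/ksm/pages_sharing) * $(getconf PAGE_SIZE) / 1024 / 1024 )) MiB shared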

#proxmox

LeoBurr, to proxmox
@LeoBurr@tiggi.es

It's kinda mesmerizing watching the core speeds on one of the i7-12700Ts in the cluster (the one that runs Tiggi.es presently) dynamically shift anywhere from 1400 MHz to 4.62 GHz, quickly and on an as-needed basis. The low base speed saves energy, but it'll instantly pop cores up to speeds within its thermal/power limit envelope, so for lightly-threaded workloads it can perform as well as beefier chips.

Screen grab of watch -n0 "grep Hz /proc/cpuinfo" off of a Proxmox node: 20 logical CPU cores with a base of 1400 MHz flickering up to as high as 4.62 GHz per core as processes require it.

stooovie, to proxmox
@stooovie@mas.to

Something happened with either the new #Proxmox 8.1 or the latest #nextcloud AIO (7.7.0), but it's noticeably faster. Nextcloud Office documents open within 1-2 s, compared to roughly 5-6 s before.

tux, to fediverse German

Does anyone here actually have experience with … on a …?
@askfedi_de

stooovie, to proxmox
@stooovie@mas.to

Uh oh, #Proxmox suddenly goes "read-only file system"

#bugmagnet

solimanhindy, to proxmox French
@solimanhindy@mastodon.lovetux.net

I'm sharing something I hadn't yet had the chance to do with #proxmox (I only use it at home, not professionally).
A VM whose disk was badly sized from the start. Anyway, I saw that you can do something like:
qm resize [VM_ID] [DISK_NAME] +[SIZE_INCREASE]G
I then booted into SystemRescue, ran a quick gparted, and finally grew the LVM volume (I didn't want to add an extra disk).
lvextend -l +100%FREE /dev/mapper/vg--root-lv--root
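For anyone following along, the whole sequence might look roughly like this; the VM ID, disk name, size, and the final filesystem-grow step are assumptions (that last step may well be what part 2 covers), and resize2fs only applies to ext4:

# 1. Grow the virtual disk from the Proxmox host (example: VM 100, disk scsi0, +20 GiB)
qm resize 100 scsi0 +20G
# 2. From SystemRescue/gparted: grow the partition and the LVM physical volume
# 3. Grow the logical volume over all free space in the volume group
lvextend -l +100%FREE /dev/mapper/vg--root-lv--root
# 4. Grow the filesystem itself (ext4 example)
resize2fs /dev/mapper/vg--root-lv--root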
1/2

lucas3d, to proxmox
@lucas3d@mastodon.social

My journey started a month ago. I moved all my containers and VMs from my NAS to a testing mini PC running Proxmox.

This test was very positive, so I decided to move to a new server. I'm using a cluster to migrate my LXCs and VMs. I'm very impressed by how simple the process is for a newbie like me; I just have to remove my snapshots first.

Keep learning in my journey! 😊

rysiek, to linux
@rysiek@mstdn.social

I am completely at a loss. I am trying to get cryptroot to work on #Debian 11 with a #Proxmox VE (5.15.*-pve) kernel. And it just won't budge.

Apparently the cryptroot gets unlocked, booting proceeds (seems like root gets mounted), and then just… stops. Last message on screen is:

systemd[1]: Mounting POSIX Message Queue File System

It works just fine with the stock Debian kernel (5.10.*). Tried disabling AppArmor, no budge. Debugging is difficult, remote server with shitty KVM.
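One low-effort way to squeeze more detail out of a hang like that, offered only as a guess since these are plain systemd/kernel options and nothing Proxmox-specific, is a one-off boot with verbose logging on the console:

# Appended to the kernel command line for one boot (edit the entry in GRUB), or via /etc/default/grub + update-grub
systemd.log_level=debug systemd.log_target=kmsg printk.devkmsg=on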

#SysAdmin

vyoma, to ubuntu
@vyoma@mastodon.world

Good bye and hello

For quite a while now, I have relied on a terminal into my Windows Subsystem for Linux on my main workstation as my daily driver. While it works all right in most cases, there are certain compatibility issues that require a "... in WSL" search term for documentation and issues.

For close to a month now I have been using a VM on my cluster instead. For those who can roll this out, this seems like the best approach.

feoh, to FF

Here's an early #ff for you all :) If you're technical and self-identify as "sysadmin" even just a little bit, you should be following @jimsalter - He wrote for @arstechnica for quite a while and now drops science on us all every week on the @25admins podcast. I have learned a TON about #ZFS and *NIX in general from him through the years and prize his advice rather highly.

Except for his dislike of #ProxMox, which I personally think is a pretty great way to herd VMs for non-production use :)

stooovie, to homeassistant
@stooovie@mas.to

Four years of running #homeassistant from a (fast and resilient, but still) SD card has conditioned me to be super careful and judicious with things like addons, backups, and turning off unnecessary entities to minimize writing to the card... Running HA as a #Proxmox VM on an NVMe drive is so much nicer and faster, and none of this is necessary.

pieceofthepie, to homelab
@pieceofthepie@n8e.dev

Spent much of the day working towards this. I'm still failing. 😦

Proxmox is up on VLAN 5. I can get at the UI over the network, so I know the trunk is letting that traffic through, but I can't figure out how to get at OPNsense. No traffic seems to pass the bridge, but the docs say that bit should just work.
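For comparison, a typical VLAN-aware bridge stanza on the Proxmox side looks something like the following; the NIC name and addressing are assumptions based on the post, with management kept on VLAN 5:

auto vmbr0
iface vmbr0 inet manual
        bridge-ports eno1
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094

auto vmbr0.5
iface vmbr0.5 inet static
        address 192.168.5.10/24
        gateway 192.168.5.1

The OPNsense VM's virtual NICs then either get a VLAN tag set in their Proxmox NIC settings, or receive the trunk untagged and do the tagging inside OPNsense.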

#HomeLab #SelfHosted #Proxmox #OPNSense

tux, to proxmox German

A question for all #Proxmox #Admins: what do you use for a #LAMP stack and for #Docker with #Portainer? A #VM or an #LXC #Container, and why? Put differently, what is recommended?
Feel free to share this #Frage around. 😉

@askfedi_de #FragDieFediverse #FragDasFediverse #FollowerPower

technotim, to proxmox
@technotim@mastodon.social

As an experiment I decided to shut down one of my 1u servers running Proxmox to see if my Intel NUC could handle the workload. To my surprise it handled all 6 VMs without breaking a sweat!

My 1u server was pulling 140 watts!
Intel NUC is only pulling 26 watts!

Going to let this run for a few days but impressed so far!

#proxmox #opensource #intelnuc #nuc #1u #server #lowpower #homelab

emory, to macos
@emory@soc.kvet.ch

ooh i figured out how to get installed on my new server. this is exciting for me because my creative workstation is overworked and one of the things it does is operate a 512GB Content Cache.

and if you have a household of more than 3 devices that use iCloud, you should probably consider running one. even if you just did OS updates you'd be happy you did. probably. (as far as i know there aren't any easily exploited privacy risks)

ironicbadger, to homelab
@ironicbadger@techhub.social

It took a decade but I WAS RIGHT! I never felt comfortable trusting vmware with my hypervisor #homelab needs. Long live #proxmox

ironicbadger, to NixOS
@ironicbadger@techhub.social

I’ve officially got itchy feet. I really want to jump to Nix in as many places as I can.

I’m contemplating spending some of my long weekend replacing my existing extremely mature and well-tested infra repo with Nix in as many places as possible.

The core hypervisor host will likely still be Proxmox, though.

hanscees, to proxmox Dutch
@hanscees@mas.to

I have a #proxmox problem where it looks like the IDs in the web GUI are wrong.
If I connect to the command prompt, it connects to the wrong VM.
#homelab
Has anybody seen that problem before?
And does anyone know what to do, perhaps?
It happens on #alpine VMs without kvm-client running.
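One way to sanity-check what the web GUI shows against what the node itself thinks, using the standard Proxmox CLI, offered as a guess at a next step:

qm list              # VMIDs, names and status as this node sees them
qm config 101        # full config of one VM (101 is an example ID); compare name and MAC with the GUI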

luna, to proxmox

Sometimes you have to work with what you've got, and I think that's the story behind this image of my cluster at college...

technotim, to proxmox

If you were looking to upgrade to Proxmox 8 today, I wrote a quick guide to help! I've already tested it on my production cluster and it works great!

https://technotim.live/posts/upgrade-proxmox-to-8/
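Broad strokes of that upgrade, as a rough sketch from memory rather than a substitute for the linked guide: run the bundled checker, switch the Debian repositories from bullseye to bookworm, then dist-upgrade:

pve7to8 --full       # preflight checks shipped with Proxmox VE 7.4
sed -i 's/bullseye/bookworm/g' /etc/apt/sources.list /etc/apt/sources.list.d/*.list
apt update && apt dist-upgrade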

#proxmox #opensource #hypervisor #virtualization
