

I make a unique user for each VM - the root account is secured with SSH login disabled and a unique password, which is stored in my password manager.
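For context, the "SSH login disabled" part is just standard sshd hardening - here's a minimal sketch of the relevant /etc/ssh/sshd_config lines (exact defaults vary by distro, so check yours):

```
# /etc/ssh/sshd_config
PermitRootLogin no         # root can still log in on the VM console, just not over SSH
PasswordAuthentication no  # optional: keys only for the regular user too
```

Restart sshd afterwards (`systemctl restart sshd` on most distros) for the change to take effect.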
Also, don’t use VirtualBox. It’s Oracle garbage. Use virt-manager instead.
Also find me on sh.itjust.works and Lemmy.world!
https://sh.itjust.works/u/lka1988
https://lemmy.world/u/lka1988
What else could that possibly mean?
I do both - older vehicles always needing attention, and self-hosting shit
My problem isn’t directly with the program - my problem lies with VC funding in general. Because they will come back for their money, and the project will inevitably enshittify and shove out enthusiasts in the never-ending search for infinite money.
The solution is getting rid of VC bullshit entirely. But we all know that will never happen.
Tailscale uses WireGuard though, so it’s fundamentally the same thing. Like you said - just run Headscale on a VPS.
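If anyone wants to try that, here's a rough sketch of a Headscale config on a VPS - headscale.example.com is a placeholder, and the exact keys vary between Headscale versions, so compare against the example config that ships with your release:

```
# /etc/headscale/config.yaml (abbreviated sketch)
server_url: https://headscale.example.com:443  # public URL your clients will use
listen_addr: 0.0.0.0:8080                      # put a TLS reverse proxy in front
ip_prefixes:
  - 100.64.0.0/10                              # same CGNAT range Tailscale uses
```

Then point the stock Tailscale client at it with `tailscale up --login-server https://headscale.example.com`.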
The problem, though, is that VC-funded projects bite off way more than they can chew from the start, and then have to enshittify to keep shareholders happy at that scale.
Growth for the sake of growth is a fundamentally broken concept. Tailscale provides a free service that many use. They already offer a paid support tier for companies, like certain other FOSS projects do, so why not call it good there? Grow based on actual customer needs, instead of shareholder bullshit “needs” (line must go up 🙄).
Tailscale never sat right with me. The convenience was nice, but - like other VC-funded projects - it followed that ever-familiar pattern of an “easy” service popping up out of nowhere and gaining massive popularity seemingly overnight. 🚩🚩🚩
I can’t say I’m surprised by any of this.
Why are we running Docker inside LXC? That’s not a wise decision - both the Docker and Proxmox devs specifically call it out as a big “no-no”.
VMs don’t use as many resources as you might think. I’ve got multiple VMs full of Docker stacks (along with other VMs running various game servers, and several LXCs for various “not set up for Docker” services) spread across three i7-7700T servers; none of them are even close to being taxed.
Right, but asking for 2-4 drive bays…
🤔
A PCIe expansion card full of M.2 NVMe drives might do the trick.
I’d argue for something a bit bigger, physically. The Optiplex SFF systems don’t have a whole lot of interior space for hard drives; in fact, the 7050 SFF can only handle a single 3.5", a single 2.5", and a single NVMe.
I have an older HP EliteDesk 8300 SFF that holds 3x 3.5" drives and 2x 2.5" drives, and can boot from an M.2 NVMe on a PCIe adapter card (I modded the BIOS). But it’s limited to 3rd-gen Intel 🫤
I use Planka pretty regularly to track some of my projects. They just pushed out a release candidate for v2 a few weeks ago, which brought some nice features.
Uploaded your mind to the cloud
“planning on running a number of docker containers and a couple of vms.”
Just FYI, you can probably do ALL of that on a $200 Dell Optiplex 7050 SFF.
Source: my $200 Dell Optiplex 7050 SFF running 3 VMs, 3 LXC containers, and 16 Docker containers - not including the multiple containers within the Nextcloud AIO “mastercontainer”. There is plenty of headroom to spare.
It just sounds overly complicated.