Hello everyone,
I am about to renovate my selfhosting setup (software-wise) and thought about how I could help my favourite Lemmy community become more active. Since I am still learning many things and am far from being a sysadmin, I don’t (just) want to tell my point of view, but thought about a series of posts:
Your favourite piece of selfhosting
I thought about asking everyone of you for your favourite piece of software for a specific use case. But we have to start at the bottom:
Operating systems and/or type 1 hypervisors
You don’t have to be an expert or a professional. You don’t even have to be using it. Tell us about your thoughts about one piece of software. Why would you want to try it out? Did you try it out already? What worked great? What didn’t? Where are you stuck right now? What are your next steps? Why do you think it is the best tool for this job? Is it aimed at beginners or veterans?
I am eager to hear about your thoughts and stories in the comments!
And please also give me feedback to this idea in general.
Debian on the servers, DietPi on the SBCs, all containerized.
I’m pretty happy with Debian as my server’s OS. I recently gave in to temptation and switched from stable to testing. On my home systems I run Arch because I like to have the most up-to-date stuff, but with my servers that’s a bit less important. Even so, Debian testing is usually pretty stable itself, so I’m not worried much about things breaking because of it.
Rocky Linux. Been using Debian, but I like firewalld a bit more than ufw, and I don’t trust myself enough to touch iptables.
You can run Firewalld anywhere
I know. But having it come out of the box is nicer.
I’m interested in learning more about NixOS, but until I get there, Proxmox all day.
TrueNAS CORE, because I’m a BSD guy at heart. With that all but dead, I’m trying to decide between bare FreeBSD or XigmaNAS.
I have an Arch Linux box for things that don’t run on BSD.
I’ve been using NixOS on my server. Having all the server’s config in one place gives me peace of mind that the server is running exactly what I tell it to and I can rebuild it from scratch in an afternoon.
I don’t use it on my personal machine because the lack of FHS compliance feels like it’d be a problem, but when selfhosting, most things are popular enough to have a NixOS module already.
I’ve got several Debian stable servers operating in my stack. Almost all of them host a range of VMs in addition to a plethora of containers. Some house large arrays, others focus on application gruntwork. I chose Debian because I know it; I’ve been using it since the early 00s. It’s 👌.
I think this is a great idea. With such a foundational deployment concept like the OS there are so many options, and each can change the very core of one’s selfhosting journey. And then expanding to different services, and the different ways to manage everything, could be a great discussion for every experience level.
I myself have been considering Proxmox with LXCs deployed via the Community Scripts repo versus bare metal running a declarative OS with Docker compose or direct packages versus a regular Ubuntu/Debian OS with Docker compose. I am hoping to create a self-documenting setup with versioning via the various config and compose files, but I don’t know what would end up being the most effective for me.
I think my overarching deployment strategy is portability. If it’s easy to take a replacement PC, get a base install loaded, then have a setup script configure the base software/user(s) and pull config/compose files and start services, and then be able to swap out the older box with minimal switchover or downtime, I think that’s my goal. That may require several OS tools (Ansible, NixOS config, Docker compose, etc.) but I think once the tooling is set up it will make further service startups and full box swaps easier.
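The base-install-plus-setup-script flow described above could be sketched roughly like this, assuming Debian/Ubuntu, Docker Compose, and a git repo holding the versioned config/compose files (the repo URL, package names, and directory layout are all placeholders, not a real setup):

```shell
#!/bin/sh
# Hypothetical bootstrap for a replacement box:
# install the base software, pull versioned config/compose
# files, and start every service.
set -eu

CONFIG_REPO="https://example.com/me/homelab-config.git"  # placeholder repo
CONFIG_DIR="$HOME/homelab"

# Base software (package names vary by distro/Docker repo)
sudo apt-get update
sudo apt-get install -y git docker.io docker-compose-plugin

# Pull the versioned config and compose files
git clone "$CONFIG_REPO" "$CONFIG_DIR"

# Bring up every service that has a compose file in the repo
for f in "$CONFIG_DIR"/*/compose.yaml; do
    docker compose -f "$f" up -d
done
```

With something like this, swapping boxes becomes: base install, run the script, move the data, repoint DNS.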
Currently I have a single machine that I started spinning up services with Docker compose but without thought to those larger goals. And now if I need to fiddle with that box and need to reboot or take it offline then all my services go down. I think my next step is to come up with a deployment strategy that remains consistent, but I use that strategy to segment services across several physical machines so that critical services (router, DNS, etc.) wouldn’t be affected if I was testing out a new service and accidentally crashed a machine.
I love seeing all the different ways folks deploy their setups because I can see what might work well for me. I’m hoping this series of discussions will help me flesh out my deployment strategy and get me started on that migration.
I’m gonna be simple: Synology DSM with Portainer.
Hardware and software. Simple, for my simple needs.
My old DS916+ is great at the file services but too weak for computing, so I have a reclaimed business laptop for the services. I could not imagine running anything else on the DS.
I run Jellyfin, FreshRSS, Actual Budget and a few other services.
Just what I need :)
Cool idea with that thread series! I tried a similar thing with the Selfhosting Sunday posts and I always enjoy seeing what everyone’s up to.
I’ve been running Docker containers on plain Linux (Debian mostly) for a long time (and native applications before that, but I’m glad I migrated most of it), but last year I switched to Proxmox for my own hardware. I was mostly interested in the super comfortable automated VM snapshots, but after adding a second node I’m also glad to have High Availability. To maintain a proper quorum (at least 3 nodes for decisions) I run corosync on a Raspi. It’s been super reliable once set up properly. I have a NAS for backups/snapshots which is native TrueNAS (it was simply the best GUI for ZFS and NFS).
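For anyone curious, the external-vote setup on a Raspi boils down to running corosync-qnetd there and registering it with the Proxmox cluster as a QDevice, roughly like this (the IP address is a placeholder):

```shell
# On the Raspberry Pi: install the external vote daemon
sudo apt install corosync-qnetd

# On every Proxmox node: install the qdevice client
apt install corosync-qdevice

# On one Proxmox node: register the Pi as a QDevice
pvecm qdevice setup 192.168.1.50

# Verify the cluster now counts three votes
pvecm status
```

This gives a two-node cluster a tiebreaker vote without the cost of a third full Proxmox node.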
Thought back and forth about setting up K3s and migrating everything, but I decided it’s not worth the effort and would just be for practice, and I can’t be arsed to set it up just for that. (I do K8s at work, but we have managed clusters, so there’s barely any low-level tinkering.)
I’ve been using Ubuntu server on my server for close to a decade now and it has been just rock solid.
I know Ubuntu gets (deserved) hate for things like snap shenanigans, but the LTS is pretty great. Not having to worry about a full OS upgrade for up to 10 years (5 years standard, 10 years if you go with Ubuntu Pro, which is free for personal use) is great.
A couple of times I’ve considered switching my server to another distro, but honestly, I love how little I worry about the state of my server OS.
OS: Unraid
It’s primarily NAS software, with a form of software raid functionality built in.
I like it mainly because it works well and the GUI makes it very easy to use and work with. On top of that you can run Docker containers, so it is very versatile as well.
I use it to host the following services on my network:
- Nextcloud
- Jellyfin
- CUPS
It costs a bit of money up-front, but for me it was well-worth the investment.
+1 for Unraid. Nice OS that lets me easily do what I want.
Love Unraid. Been using it for a few years now on an old Dell server. I’m about to transform my current gaming PC into the main server so I can utilize GPU pass-through and CPU pinning for things like running one VM just for LLM/AI and another running EndeavourOS for gaming. I just need to figure out how to keep my old server working somehow, because of all the drive storage I already have set up, which my PC doesn’t have space for without a new case.
For anyone looking to setup Unraid, I highly recommend the SpaceInvaderOne YouTube channel. It helped tremendously when I got started.
Stage 1: Ubuntu server
Stage 2: Ubuntu server + docker
Stage 3: Ansible/OpenTofu/Kubernetes
Stage 4: Proxmox
Kubernetes is overkill for most things, not just selfhosting. If you need to learn it, great; otherwise don’t waste your time on it. It’s extremely complicated for what it provides.
fr, unless you’re horizontally scaling something or managing hundreds of services what’s the point
I agree with this thread, but to answer your question, I think the point is to tinker with it “just because”. We’re all in this for fun, not profit.
oops straight to stage 4.
but wait stage 3 looks daunting
Don’t get me wrong, I use libvirt where it makes sense, but why would anyone go to Proxmox from a full IaC setup?
I do 2 at home, and 3 at work, coming from 4 at both and haven’t looked back.
Because it is much simpler to provision a VM
Maybe for the initial setup, but nothing is more repeatable than automation. The more manual steps you have to build your infra, the harder it is to recover/rebuild/update later
You automate the VM deployments.
if you’re automating the creation and deployment of vms, and the downstream operating systems, and not doing some sort of HA/failover meme setup… proxmox makes things way more complicated than raw libvirt/qemu/kvm.
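As an example of how little sits on top of raw libvirt here, an unattended VM can be provisioned with a single virt-install call (the VM name, sizes, and the preseed file are just illustrative, not a recommended setup):

```shell
# Sketch: create a Debian VM non-interactively via libvirt.
# Assumes a preseed.cfg that answers the installer's questions.
virt-install \
  --name web01 \
  --memory 2048 --vcpus 2 \
  --disk size=20 \
  --os-variant debian12 \
  --location https://deb.debian.org/debian/dists/bookworm/main/installer-amd64/ \
  --initrd-inject preseed.cfg \
  --extra-args "auto=true priority=critical" \
  --noautoconsole
```

Wrap that in a loop or an Ansible task and you have repeatable VM deployments with no GUI in the path.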
Can you please elaborate on this? I am currently using MicroOS and am thinking about NixOS because of the quick setup, but also about Proxmox with NixOS on top. Where would libvirt fit into this scenario?
Linux
Proxmox + AlmaLinux
I also have a few Debian VMs kicking around