Recently, I’ve found myself walking several friends through what is essentially the same basic setup:

  • Install Ubuntu server
  • Install Docker
  • Configure Tailscale
  • Configure Dockge
  • Set up automatic updates on Ubuntu/Apt and Dockge/Docker
  • Self-host a few web apps, some publicly available, some only on the tailnet

After realizing that this setup works well for relative newcomers to self-hosting and is pretty stable (in the sense that it keeps running and stays up to date without much human intervention), I decided to write a few blog posts about how it works so that other people can set it up for themselves.
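
In rough outline, the base layer boils down to a handful of commands. The following is a minimal sketch using each project’s upstream convenience script and Dockge’s documented quickstart as of writing; the articles go into more detail, so treat this as orientation rather than the canonical steps:

```bash
# Docker, via the official convenience script
curl -fsSL https://get.docker.com | sh

# Tailscale: install, then join your tailnet (prints an auth URL)
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up

# Dockge: stacks live in /opt/stacks, Dockge itself in /opt/dockge
sudo mkdir -p /opt/stacks /opt/dockge
cd /opt/dockge
sudo curl -fsSL https://dockge.kuma.pet/compose.yaml --output compose.yaml
sudo docker compose up -d   # web UI listens on port 5001 by default
```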

As of right now, there’s:

Coming soon:

  • Immich
  • Jellyfin
  • Elementary monitoring with Homepage
  • Cloudflare Tunnels

Feedback is always appreciated.

  • cyclicircuit@lemmy.dbzer0.com (OP) · 11 days ago

    That’s reasonable; however, my personal bias is towards security, and I feel that if I don’t push people towards automated updates, they will leave vulnerable, un-updated containers exposed to the web. I think a better approach is to push for backups with versioning alongside the automation. I forgot to mention that I’m also planning a “backups with Syncthing” article; I’ll take this into consideration, add it to that article, and use it to demonstrate recovery in the event of such an issue.
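
    One common way to get both halves of that automation (a sketch of one possible setup, not necessarily what the articles will use) is unattended-upgrades on the apt side and Watchtower on the Docker side:

    ```bash
    # OS packages: enable unattended security updates
    sudo apt install -y unattended-upgrades
    sudo dpkg-reconfigure -plow unattended-upgrades  # writes /etc/apt/apt.conf.d/20auto-upgrades

    # Containers: Watchtower recreates running containers when their
    # images update; --schedule takes a 6-field cron expression
    # (seconds first), here 04:00 daily, so updates land at a
    # predictable low-traffic time
    docker run -d --name watchtower \
      -v /var/run/docker.sock:/var/run/docker.sock \
      containrrr/watchtower --schedule "0 0 4 * * *"
    ```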

    • WhyJiffie@sh.itjust.works · 11 days ago

      It’ll still cause downtime, and they’ll probably have a hard time restoring from backup the first few times it happens, if for no other reason than stress, especially when it updates at the wrong moment or on the wrong day.

      they will leave vulnerable, un-updated containers exposed to the web

      That’s the point. Services shouldn’t be exposed to the web unless the person really knows what they’re doing, has taken the precautions, and applies updates soon after release.

      Exposing it to the VPN and to the LAN should be plenty for most. There’s still a risk, but it’s much lower.
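
      With Docker, that can be as simple as publishing a service’s port on a specific interface instead of 0.0.0.0. A sketch with placeholder addresses (the real tailnet address comes from `tailscale ip -4`):

      ```bash
      # Publish Jellyfin only on the host's Tailscale address
      # (100.101.102.103 is a placeholder, not a real assignment)
      docker run -d --name jellyfin \
        -p 100.101.102.103:8096:8096 \
        jellyfin/jellyfin

      # LAN-only works the same way with the LAN interface address,
      # e.g. -p 192.168.1.10:8096:8096 (placeholder again)
      ```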

      “backups with Syncthing”

      Consider warning the reader that it will not be obvious if backups have stopped, or if a sync folder on the backup PC is left in an inconsistent state as a result, since errors are only shown on the web interface or by third-party tools.
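
      A cron-able check against Syncthing’s REST API is one way to surface that. A sketch; the API key is a placeholder and comes from the Syncthing GUI (Actions → Settings) or config.xml:

      ```bash
      #!/usr/bin/env bash
      # Fail loudly if Syncthing has recorded any errors.
      API_KEY="your-api-key-here"          # placeholder
      SYNCTHING="http://localhost:8384"    # default GUI/API address

      # /rest/system/error returns a JSON list of error entries
      errors=$(curl -fsS -H "X-API-Key: $API_KEY" "$SYNCTHING/rest/system/error")
      if echo "$errors" | grep -q '"message"'; then
          echo "Syncthing reported errors: $errors" >&2
          exit 1   # non-zero exit so cron/monitoring can alert on it
      fi
      ```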

    • Onomatopoeia@lemmy.cafe · 11 days ago

      My experience after 35 years in IT: I’ve had 10x more outages caused by automatic updates than by everything else combined.

      Also, after 35 years of running my own stuff at home and practically never updating anything, I’ve never had an outage caused by a lack of updates.

      Let’s not act like auto updates are without risk. Just look at how often Microsoft has to roll out a fix for something an update broke. Inexperienced users are going to be clueless when an update breaks something.

      We should be teaching new people how to manage systems. This includes update checks on a proper cycle, appropriate validation that everything works afterwards, and the ability to roll back if there’s an issue.

      This isn’t an enterprise, where you simply can’t manually manage updates across hundreds or thousands of servers and tens of thousands of workstations; this is a single-admin, small environment.

      I do monthly update checks, update where I feel it’s warranted, and verify systems afterwards.
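
      That kind of cycle can be scripted so that rollback stays trivial; the key is pinning explicit image tags in the compose file rather than using :latest. A sketch with a placeholder stack path and a placeholder health-check URL:

      ```bash
      # Manual update pass for one compose stack, with verify + rollback.
      cd /opt/stacks/myapp              # placeholder stack directory
      cp compose.yaml compose.yaml.bak  # keep the known-good pinned tags

      # (bump the pinned image tags in compose.yaml first, then:)
      docker compose pull
      docker compose up -d

      sleep 30   # give services time to come up
      if curl -fsS http://localhost:8080/health >/dev/null; then  # placeholder check
          echo "update verified"
          rm compose.yaml.bak
      else
          echo "health check failed; rolling back" >&2
          mv compose.yaml.bak compose.yaml
          docker compose up -d          # recreate from the previous tags
      fi
      ```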

      • Mordikan@kbin.earth · 11 days ago

        This is really the truth. Auto-updating is bad form when you’re getting into server management. My first admin position back in the day had a rule: no automatic updates run at all, a manual update can only be applied one month after its release, and it has to have its accompanying documentation confirmed before it can be approved. The one time we didn’t follow that, we ended up having to re-image the server in question from backup (as that was the quickest way to get it back online).

      • cyclicircuit@lemmy.dbzer0.com (OP) · 11 days ago

        I don’t disagree with any of that; I’m merely making a different value judgement, namely that a breach that could’ve been prevented by automatic updates is worse than an outage caused by them.

        I will, however, make this choice more explicit in the articles and outline the risks.

    • LandedGentry@lemmy.zip · edited · 10 days ago

      I’m with you on this. It has to feel at least somewhat low-fuss/turnkey or people aren’t going to stick with it. The people who don’t get this are the same people who can’t see why Plex is more popular than Jellyfin despite the latter’s overall superiority.

    • rumba@lemmy.zip · 10 days ago

      Been in it since the web was a thing, and I agree wholeheartedly. If people don’t run auto updates (and newbies will not run manual updates), you’re just teaching them how to create vulnerabilities.

      Let them learn how to fix an automatic update failure rather than how to recover from ransomware. No contest here.