• 1 Post
  • 10 Comments
Joined 2 years ago
Cake day: July 7th, 2023

  • Ahh ok, that makes sense. Hah, magical algorithm.

    Yeah, it’s about 30 TB of photos/videos. I only recently got into videography, which takes up a ton of space. About 25% of that is videos converted into an editing codec, but I don’t have those backed up to external drives. I also have some folders excluded that I know have duplicates. A winter project of mine will be to clear out some of the duplicates, and then cull the photos/videos I definitely don’t need. I got into a bad data-hoarding habit and kept everything even after selecting the keepers.
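    Sketching out what that winter duplicate pass might look like (a rough, untested idea; the grouping-by-size-then-hashing approach is just one common way to do it, and all names here are placeholders):

```python
import hashlib
import os
from collections import defaultdict

def sha256_of(path, chunk=1 << 20):
    """Hash a file in 1 MiB chunks so large videos never load fully into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def find_duplicates(root):
    """Group files by size first (cheap), then confirm real duplicates with a hash."""
    by_size = defaultdict(list)
    for dirpath, _, names in os.walk(root):
        for name in names:
            p = os.path.join(dirpath, name)
            by_size[os.path.getsize(p)].append(p)
    by_hash = defaultdict(list)
    for paths in by_size.values():
        if len(paths) < 2:
            continue  # unique size -> cannot be a duplicate
        for p in paths:
            by_hash[sha256_of(p)].append(p)
    return [group for group in by_hash.values() if len(group) > 1]
```

    The size pre-filter matters at 30 TB: most files get skipped without ever being read, and only same-size candidates pay for a full hash.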

    I have an in-progress folder where I dump everything, then folders by year/month for projects and keepers. I need to do better with culling as I go.

    I like that idea, I will incorporate it into my strategy.

    Thank you for taking the time to help me out with this, much appreciated!


  • I didn’t consider that, excellent point. Forgive my ignorance because I’m not certain how the backup systems work, and feel free to ignore this if you don’t know. I presume they compare some metadata or a hash of a file against the backed-up copy and then decide whether it needs to be backed up again? Let’s say I have a file that I’ve already backed up, and then some ransomware encrypts my files. Would the backup software make a second copy of the file?
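    My rough mental model of that comparison, as a sketch (hypothetical function names; real tools like rsync typically check size/mtime first and only hash on request): a plain mirror just asks “does the destination match the source?”, so an encrypted file looks like a normal change and would overwrite the good backup copy, which is exactly why versioned backups matter here.

```python
import hashlib
import os

def sha256_of(path, chunk=1 << 20):
    """Hash a file in chunks rather than reading it whole."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def needs_copy(src, dst):
    """Mirror-style sync check: copy when the destination is missing
    or the contents differ. A ransomware-encrypted source file hashes
    differently, so a plain mirror would happily replace the good copy."""
    if not os.path.exists(dst):
        return True
    if os.path.getsize(src) != os.path.getsize(dst):
        return True  # sizes differ, contents must differ
    return sha256_of(src) != sha256_of(dst)
```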

    So for most of the important files, I just do a sync to an external drive periodically, basically whenever I know there have been a lot of changes. For example, I went on a trip last year and came back with nearly 2 TB of photos/videos. After ingesting the files into Unraid, I synced my external drive. Since I haven’t done much with those files since that first sync, I haven’t done a periodic sync since then. But now you’ve opened my eyes that even this could be a problem. How would the G-F-S strategy work in this case?
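    As I understand grandfather-father-son, the idea is dated snapshots pruned into tiers rather than one overwritten mirror, so an old clean copy survives even if a recent sync picked up bad files. A toy sketch of the pruning rule (tier counts are arbitrary placeholders):

```python
import datetime

def gfs_keep(snapshot_dates, sons=7, fathers=4, grandfathers=12):
    """Grandfather-father-son pruning: keep the `sons` most recent daily
    snapshots, the newest snapshot of each of the last `fathers` weeks,
    and the newest snapshot of each of the last `grandfathers` months.
    `snapshot_dates` is an iterable of datetime.date objects."""
    dates = sorted(set(snapshot_dates), reverse=True)
    keep = set(dates[:sons])  # sons: most recent dailies
    weekly, monthly = {}, {}
    for d in dates:
        weekly.setdefault(d.isocalendar()[:2], d)   # newest per ISO week
        monthly.setdefault((d.year, d.month), d)    # newest per month
    keep.update(sorted(weekly.values(), reverse=True)[:fathers])
    keep.update(sorted(monthly.values(), reverse=True)[:grandfathers])
    return keep
```

    Everything not returned by the rule gets deleted, so storage stays bounded while the history thins out gracefully: days of recent coverage, weeks of medium-term, months of long-term.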

    I thought about ZFS or Btrfs, but my Unraid array is unfortunately XFS and it’s too large at this point to restart from scratch.

    Haha, that would be a lot of Blu-rays.



  • Ahh gotcha, I misunderstood that then. I could probably set up a VPN there but don’t want to overcomplicate it. An always-on Pi will be fine I think, they are low power. I could also add a smart switch and set up a schedule or something, but I don’t think that’s worth the hassle considering the low power usage of a Pi.

    Hmmm, that’s a good point about Syncthing backing up corrupt files. I was thinking of using it because I already use it extensively and I wouldn’t need to mess with port forwarding or anything of the sort.

    I previously kept multiple copies of files as a backup “strategy” and it got way out of hand; I have like 1.5 million photos lol. What do you recommend as an alternative to Syncthing?



  • I think this is the play. I’ll likely just get an enclosure for the two 4 TB drives I have, and I can always buy an external drive in the future and get them to plug it in.

    I don’t have any experience setting up WireGuard, so I’ll have to look into that. I was thinking of using Syncthing since that sidesteps the VPN setup, but I think someone in the thread mentioned it may not be ideal in case of file corruption.

    Do you just have Raspbian on the Pi?



  • Thanks for the detailed reply.

    So my main NAS is Unraid, and I also have a couple of Proxmox boxes. Though I’m less concerned about the Proxmox boxes, as the main files are on the NAS, and I have a Proxmox Backup Server VM set up on Unraid with regular backups there.

    For most of my important files on Unraid, I have an external drive that I periodically sync and store in a safe.

    I also have access to a VPS with over 1 TB of space, which I’m still figuring out how to best integrate into my backup strategy.

    For what I’m asking here, I just want a simple solution that I can tuck away, have remote access to, and keep updated with Syncthing or something.