I didn’t consider that, excellent point. Forgive my ignorance, since I’m not certain how backup software actually works, and feel free to ignore this if you don’t know. I presume it compares some metadata or a hash of the source file against the backed-up copy and then decides whether it needs to back the file up again? Let’s say I have a file that I’ve already backed up, and then some ransomware encrypts my files. Would the backup software make a second copy of the file?
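In my head the check looks something like this. This is purely a guess at the logic in Python, not how any particular tool does it; the rsync-style size+mtime shortcut and the paranoid hash mode are my own assumptions:

```python
import hashlib
import os

def _sha256(path: str) -> str:
    """Hash a file in 1 MiB chunks so big video files don't eat RAM."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def needs_backup(src: str, dst: str, paranoid: bool = False) -> bool:
    """My guess at a sync tool's 'has this file changed?' decision."""
    if not os.path.exists(dst):
        return True  # never backed up before -> copy it
    s, d = os.stat(src), os.stat(dst)
    # rsync-style quick check: different size or mtime -> treat as changed
    if s.st_size != d.st_size or int(s.st_mtime) != int(d.st_mtime):
        return True
    # paranoid mode: compare actual contents (very slow at 30 TB scale)
    if paranoid and _sha256(src) != _sha256(dst):
        return True
    return False
```

If that’s roughly how it works, I guess a plain one-way sync would just see the encrypted file as “changed” and overwrite my good copy, rather than keeping a second one?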
So for most of the important files, I just sync to an external drive periodically, basically whenever I know there have been a lot of changes. For example, I went on a trip last year and came back with nearly 2 TB of photos/videos. After ingesting the files to unRAID, I synced my external drive. Since I haven’t done much with those files since that first sync, I haven’t done a periodic sync since then. But now you’ve opened my eyes that even this could be a problem. How would the G-F-S (grandfather-father-son) strategy work in this case?
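From the reading I’ve done so far, I think the retention rule looks roughly like this. Just a Python sketch of my understanding; the keep windows are made-up example values, not from any specific tool:

```python
import datetime as dt

def gfs_keep(backup_date: dt.date, today: dt.date) -> bool:
    """Keep dailies for a week, weeklies (Sundays) for a month,
    monthlies (1st of the month) for a year -- example windows only."""
    age = (today - backup_date).days
    if age <= 7:
        return True   # son: every daily backup from the last week
    if age <= 31 and backup_date.weekday() == 6:
        return True   # father: one backup per week survives the month
    if age <= 365 and backup_date.day == 1:
        return True   # grandfather: one backup per month survives the year
    return False
```

So an older monthly would survive even if today’s sync copied over garbage, if I’m understanding it right. I just don’t see how to rotate something like that with a single external drive.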
I thought about ZFS or btrfs, but my unRAID array is unfortunately XFS, and it’s too large at this point to rebuild from scratch.
Haha, that would be a lot of Blu-rays.
Ahh ok, that makes sense. Hah, magical algorithm.
Yeah, it’s about 30 TB of photos/videos. I only recently got into videography, which takes up a ton of space. About 25% of that is videos transcoded into an editing codec, but I don’t have those backed up to external drives. I also have some folders excluded that I know contain duplicates. A winter project of mine will be to clear out the duplicates and then cull the photos/videos I definitely don’t need. I got into a bad data-hoarding habit and kept everything even after selecting the keepers.
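For the winter dedupe project I’m thinking of something simple like grouping files by size and then hashing only the candidates. A quick Python sketch; the /mnt/user/photos path is just an example:

```python
import hashlib
from collections import defaultdict
from pathlib import Path

def find_duplicates(root: str) -> dict:
    """Group files by size first, then hash only same-size candidates."""
    by_size = defaultdict(list)
    for p in Path(root).rglob("*"):
        if p.is_file():
            by_size[p.stat().st_size].append(p)

    groups = defaultdict(list)
    for size, paths in by_size.items():
        if len(paths) < 2:
            continue  # a unique size can't have a duplicate
        for p in paths:
            h = hashlib.sha256()
            with p.open("rb") as f:
                for chunk in iter(lambda: f.read(1 << 20), b""):
                    h.update(chunk)
            groups[(size, h.hexdigest())].append(p)

    # only keep groups that actually contain more than one file
    return {k: v for k, v in groups.items() if len(v) > 1}

# e.g.: for paths in find_duplicates("/mnt/user/photos").values(): print(paths)
```

Hashing only the same-size groups should save a lot of time, since most of the big video files probably have unique sizes anyway.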
I have an in-progress folder where I dump everything, then folders by year/month for projects and keepers. I need to get better about culling as I go.
I like that idea; I’ll incorporate it into my strategy.
Thank you for taking the time to help me out with this, much appreciated!