Averting Disaster - A Guide To Computer Backups (2014)
by Brett Howse on May 21, 2014 9:00 AM EST
Introduction
We all store more and more of our lives in digital form: spreadsheets, résumés, wedding speeches, novels, tax information, schedules, and of course digital photographs and video. All of this data is easy to store, transmit, copy, and share, but how easy is it to get back?
Losing that data can be a harsh reminder that computers are not without fault. For years, storage costs have been dropping while the amount of storage in any one computer has been increasing almost exponentially. We are at a point where a single hard drive can contain multiple terabytes of information, and a single mishap can lose it all forever. Everyone knows someone who has had a computer fail and wanted their information back.
It’s always been possible to safeguard your data, but with the explosion of personal data it is now not only necessary, it is also more affordable than ever. When you think of the cost of backing up your data, just remember what it would cost you to lose it all. This guide will walk you through saving your data in multiple ways, with the end goal of a backup system that is simple, effective, and affordable. In this day and age, you really can have it all.
It’s prudent at this point to define what a backup is, because there are a lot of misconceptions out there which can cause much consternation when the unthinkable happens, and people who thought they were protected find out they were not.
Backups are simply duplicates of data which are archived, and which can be restored to a previous point in time. The key is the data must be duplicated, and you have to be able to go back to an earlier time. Anything that doesn’t meet both of those requirements is not a backup.
As an example, many people trust their data to network storage devices with RAID (Redundant Array of Independent Disks). Without going into the intricacies of the various RAID levels, none of these Network Attached Storage (NAS) devices are any sort of backup on their own. RAID is designed to protect a system from a hard disk failure and nothing more. Depending on the RAID level, it either mirrors disks or calculates parity data from which the original data can be reconstructed if a disk fails. While RAID is an excellent mechanism to keep a system operational in the event of a disk failure, it is not a backup: if a file is changed or deleted, that change is instantly reflected on all disks, and there is no way to roll it back. RAID is excellent for use as a file share, and can even be effectively utilized as the target for backups, but it still requires a file backup system if important data is kept on the array.
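To make the parity idea concrete, here is a minimal sketch in Python (an illustration only, not how any real RAID controller works) showing how XOR parity lets the contents of one lost disk be recomputed from the survivors:

```python
# Toy illustration of RAID-5-style XOR parity; not a real RAID implementation.
def xor_blocks(blocks):
    """XOR equal-length byte blocks together."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            result[i] ^= byte
    return bytes(result)

data_disks = [b"AAAA", b"BBBB", b"CCCC"]   # one block from each of three data disks
parity = xor_blocks(data_disks)            # the parity block stored on a fourth disk

# Disk 1 dies. XOR the parity with the surviving data blocks to rebuild it.
rebuilt = xor_blocks([data_disks[0], data_disks[2], parity])
print(rebuilt == data_disks[1])            # True - the lost block is recovered
```

Note that the parity is recomputed the moment data is written, which is exactly why RAID can survive a dead disk but cannot take you back to an earlier version of a file.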
Another similar example is cloud storage. Properly configured, cloud storage can be a backup target, and some services can even perform proper backups, but the average person with a typical Google Drive or OneDrive account can't simply copy files there and assume they are protected. As with RAID, it is more robust storage than any single hard drive, but if you delete a file, or overwrite one with another, it can be difficult or impossible to go back to a previous version.
Both RAID and cloud storage suffer from the same problem: you can't go back to an earlier point in time, and therefore neither is a true backup on its own. True backups allow you to recover from practically any scenario: fire, flood, theft, equipment failure, or the inevitable user error. This guide will walk you through several methods of performing backups, starting with simple approaches and moving up to elaborate systems that will truly protect your data. These methods work for home and business alike; only the type of equipment will likely differ.
There is some common terminology that should be defined before we start discussing the intricacies of backups (a short sketch after the list shows how the archive flag drives the full, differential, and incremental types):
- Archive Flag: A bit setting on all files which states whether or not the file has been modified since the last time the flag was cleared.
- Full Backup: A backup of all files which resets the archive flag.
- Differential Backup: A backup of all files with the archive flag set, but it does not clear the archive flag.
- Incremental Backup: A backup of all files with the archive flag set which resets the archive flag.
- Image or System Based Backup: A complete disk level backup which would allow you to image a machine back to a previous state.
- Deduplication: A software algorithm which removes all duplicate file parts to reduce the amount of storage required.
- Source Deduplication: Removing duplicate file information on the client end. This requires more CPU and memory usage on the client, but allows a much smaller amount of data to be transferred to the backup target.
- Target Deduplication: Removing duplicate file information on the target end. This saves client CPU and memory, and reduces the amount of storage space required on the backup target.
- Block Level: A backup or system process which accesses a sequence of bytes of data directly on the disk.
- File Level: A backup or system process which accesses files by querying the Operating System for the entire file.
- Versioning: A list of previous versions of a file or folder.
- Recovery Point Objective (RPO): The maximum amount of data, measured as time since the last backup, that is acceptable to lose in a disaster scenario. For example, if you perform backups nightly, your RPO is the previous night's backup; anything created between backups is assumed to be recoverable through other means, or an acceptable loss.
- Recovery Time Objective (RTO): The amount of time deemed acceptable between the loss of data and the recovery of data. For home use, there’s really no RTO but many commercial companies will have this defined either with in-house IT or with a Service Level Agreement (SLA) to a support company.
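To tie several of these terms together, here is a minimal sketch (Python, with a toy in-memory file list invented purely for illustration) of how the archive flag drives full, differential, and incremental backups:

```python
# Toy model of the archive flag: each file carries a "modified since last
# backup" bit. Fulls and incrementals clear it; differentials do not.

files = {
    "report.docx": {"archive_flag": True},
    "photo.jpg":   {"archive_flag": True},
    "taxes.xlsx":  {"archive_flag": True},
}

def full_backup(files):
    """Copy every file and clear every archive flag."""
    backed_up = list(files)
    for meta in files.values():
        meta["archive_flag"] = False
    return backed_up

def differential_backup(files):
    """Copy files changed since the last full backup; leave flags set."""
    return [name for name, meta in files.items() if meta["archive_flag"]]

def incremental_backup(files):
    """Copy files changed since the last backup of any kind; clear their flags."""
    changed = [name for name, meta in files.items() if meta["archive_flag"]]
    for name in changed:
        files[name]["archive_flag"] = False
    return changed

print(full_backup(files))                     # all three files, flags cleared
files["report.docx"]["archive_flag"] = True   # user edits a file
print(differential_backup(files))             # ['report.docx'] - flag stays set
print(differential_backup(files))             # ['report.docx'] again
print(incremental_backup(files))              # ['report.docx'] - flag cleared
print(incremental_backup(files))              # [] - nothing changed since last backup
```

The practical trade-off follows directly from the flag handling: differential backups grow over time but a restore needs only the last full plus the latest differential, while incrementals stay small but a restore needs the last full plus every incremental since.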
Comments
bernstein - Wednesday, May 21, 2014 - link
after fifteen years of backupping i can share the following:
- user initiated (e.g. all usb, some network/cloud) backups agree with less than 0.1% of the human population (but hey it is better than nothing)
- with consumer hdds, raid5 nas are totally overrated and *in the real world* rarely protect data better than jbod.
- raid6 is better than jbod. but with consumer disks the only real alternative to jbod is zfs (linux md (which all nas employ) + consumer hdds = shaky unless you have a 24/7 sysadmin...)
so either build/buy a zfs nas or backup to the cloud.
bernstein - Wednesday, May 21, 2014 - link
or buy insanely expensive enterprise disks

DanNeely - Wednesday, May 21, 2014 - link
Are there any consumer grade NASes with ZFS support enabled by default now? The peanut gallery on yesterday's Synology review was arguing for ZFS being a key reason for rolling your own instead of buying an off the shelf NAS.

questionlp - Wednesday, May 21, 2014 - link
There's FreeNAS Mini, which is a 4-bay NAS. I'm actually considering getting one to replace my current file server at home.

DanNeely - Thursday, May 22, 2014 - link
Yikes! At $1k diskless, that's well above the typical price for a consumer nas.

bsd228 - Thursday, May 22, 2014 - link
but none of them have 16G of ECC memory and a processor at this level, with dual intel gig nics and expansion ports to support a 10G or other parts. It appears to be able to transcode 3 HD streams without the benefit of the acceleration shown in this product. So, perhaps a bit of premium, but not that much.

CadentOrange - Wednesday, May 21, 2014 - link
I personally use RAID1 with a 2 bay NAS that's worked fine so far. Granted that I don't have very rigorous needs, but then this isn't for enterprise critical data.

Kevin G - Wednesday, May 21, 2014 - link
I've had good luck with RAID5 on small scale arrays. The main reason to go to RAID6 is due to the chance of a disk failing during the rebuild process. Consumer NAS typically are not under that much load so the rebuild times are short. In an enterprise environment where disk counts are higher in an array as well as the load on the array, using RAID6 makes sense.

Personally I have an 8 drive RAIDZ2 array in a NAS4Free system that I use at home. Portability and reliability are some of the reasons I went with ZFS. So far it has been purely hands off once I got the system up and running. Admittedly it took a bit longer to get up and running as I'm doing some odd things like hosting virtual machines on the same system.
MrBungle123 - Friday, May 23, 2014 - link
The reason for RAID 6 is because statistically 1 bit out of every 10^14 bits (12TB) is bad on a hard drive... with all drives operational a RAID 5's parity can compensate for said bad bit, with a degraded RAID 5 the sector will be un-recoverable and you'll lose data. RAID 6 has double parity so even with a drive down if (when) a bad sector is encountered it is still possible to recover the data. RAID 5 is obsolete for large arrays.

Kevin G - Friday, May 23, 2014 - link
If you know what sector is bad in RAID5, you can still recover the data. The trickier thing is silent corruption where all blocks appear to be OK. There, an error can be detected but not necessarily which block contains it.
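The arithmetic behind MrBungle123's comment above is worth spelling out. A rough back-of-the-envelope calculation (assuming the commonly quoted consumer-drive spec of one unrecoverable read error per 10^14 bits read, and treating bit errors as independent, which real drives do not strictly obey) looks like this:

```python
import math

# Rough chance of hitting an unrecoverable read error (URE) while rebuilding
# a degraded RAID 5 array. Assumes the consumer-drive spec of one URE per
# 1e14 bits read and independent errors; illustration only.
URE_RATE = 1e-14  # probability that any single bit read fails

def rebuild_failure_probability(disk_tb, surviving_disks):
    """Chance of at least one URE while reading every surviving disk in full."""
    bits_read = surviving_disks * disk_tb * 1e12 * 8   # TB -> bits
    # P(at least one error) = 1 - (1 - rate)^bits, computed stably
    return -math.expm1(bits_read * math.log1p(-URE_RATE))

# 4 x 4 TB RAID 5 with one failed disk: the 3 survivors are read end to end.
print(f"{rebuild_failure_probability(4, 3):.0%}")   # about 62% under these assumptions
```

Under those assumptions, a rebuild that must read roughly 12 TB of surviving data has better-than-even odds of hitting at least one unrecoverable sector, which is the commenter's point about RAID 5 on large consumer arrays.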