We have a NAS (Network-Attached Storage) device on our network which is basically used as a repository for the backups from our laptops. It’s a Synology DS418, and I bought it nearly 4 years ago when we moved into our house. At the time, I bought four WD Red 2TB drives to go into it in a RAID format (Redundant Array of Independent Disks) and it’s done some stellar work since then.
However, what with the pandemic year last year (which of course continues into this one) and a few other reasons, I’ve been running into free space problems on a regular basis. So, time for an upgrade! My thought was to replace all four drives with WD Red Plus 4TB drives, essentially doubling the space.
This is where the fun starts. The whole point of a RAID system is that if a drive fails, the system doesn’t go down, no data is lost, and you can just replace the drive (hence the “Redundant” part of the name RAID). Strictly speaking, the data isn’t duplicated wholesale: it’s spread across all of the drives along with parity information, which is enough to reconstruct the contents of any one failed drive. So, because my NAS system is using a RAID configuration (I’m using Synology’s SHR RAID type), I could replace a drive, let the system recover, and then go on and replace the next one.
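To see why one drive is expendable, here’s a toy sketch of single-parity redundancy, the RAID 5-style idea that SHR builds on. This is emphatically not Synology’s actual implementation, just an illustration: XOR all the data blocks together to get a parity block, and then any one missing block can be rebuilt by XOR-ing the survivors with the parity.

```python
# Toy sketch of single-parity (RAID 5-style) redundancy.
# NOT Synology's actual SHR implementation -- just the core idea that
# losing any one drive still lets you reconstruct its contents.

def make_parity(blocks):
    """XOR all data blocks together to produce one parity block."""
    parity = bytes(len(blocks[0]))
    for block in blocks:
        parity = bytes(a ^ b for a, b in zip(parity, block))
    return parity

def reconstruct(surviving_blocks, parity):
    """Rebuild the one missing block by XOR-ing survivors with the parity."""
    missing = parity
    for block in surviving_blocks:
        missing = bytes(a ^ b for a, b in zip(missing, block))
    return missing

# Three "drives" hold data; a fourth holds the parity.
d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"
p = make_parity([d1, d2, d3])

# Suppose the drive holding d2 fails: rebuild it from the other drives.
rebuilt = reconstruct([d1, d3], p)
assert rebuilt == d2
```

The rebuild the NAS does after a drive swap is essentially this reconstruction, applied block by block across terabytes, which is part of why it takes hours.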
Step 1: buy four new drives. This was just wacky: I’d normally buy them from Amazon, but for some obscure reason, they would only let me buy three. I waited a couple of days, thinking they were waiting for more to come into stock, but no change. Yeah, I could’ve bought two, and then a couple of days later, tried to buy another two, but instead decided to buy four from Newegg.
Steps 2, 3, 4, …: They were delivered on Wednesday afternoon last week: yay! I powered down the NAS, and then … saw that I needed a little key to unlock each drive’s panel on the front of the device so that the drive could be removed/replaced. Er, where was that key? Yeah, minor panic – where did I put it four years ago? – but I found it. I removed the first 2TB drive and replaced it with the first 4TB drive, then powered up the NAS. Once it was running, I logged into the NAS and brought up the Storage Manager. Yep, the new drive was seen and the storage pool was marked as “Degraded”. I clicked Repair on the Action menu, selected the new drive, and let it do its business. Which it did. Slowly. Verrrrry slowly.
In fact, it took over eight hours to repair that first drive, which meant that the second drive went in on Thursday morning. Another eight-plus hours later, the third drive went in, which meant I was ready to put in the fourth drive on Friday morning. Except… it wasn’t done. In essence, at that point the NAS had three 4TB drives and went into a “grow the file system” repair mode, which took most of Friday. So the last drive went in on Friday afternoon, and it took a full day for the system to basically recognize that it had four equal-sized drives and repair/grow/polish the whole storage pool.
So on Saturday afternoon, I ended up with all four new drives installed and a RAID storage pool that was over 10TB in size. Well worth it.
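That “over 10TB” figure checks out, by the way. With four equal drives, SHR (like RAID 5) reserves one drive’s worth of capacity for redundancy, and the drive maker’s “4TB” is decimal terabytes while DSM reports binary TiB. A rough back-of-the-envelope calculation, assuming those two conventions:

```python
# Rough usable-capacity arithmetic for four equal drives under SHR,
# which with equal drives behaves like RAID 5: one drive's worth of
# capacity is reserved for redundancy.
drives = 4
size_tb = 4                                      # marketing terabytes, 10**12 bytes

usable_bytes = (drives - 1) * size_tb * 10**12   # 12 "TB" of usable space
usable_tib = usable_bytes / 2**40                # binary TiB, as DSM reports it

print(round(usable_tib, 2))                      # 10.91 -- the "over 10TB" pool
```

So the pool size isn’t the 16TB printed on the boxes: a quarter goes to redundancy, and the rest shrinks a little further in the decimal-to-binary conversion.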
However, having gone through essentially three days of watching the NAS rebuild itself over and over again, my overarching thought was: dayum, I should have copied all of the backup files etc. from the NAS onto some external drive(s), replaced all four drives in one go, and reinstalled the operating system as if it were a brand new system. Then I could have copied all the files back. OK, more work for me, but surely that would have taken less time?
And guess what? I’m not going to experiment and find out.