tl;dr – When adding more drives you can mix and match, but when replacing existing drives you must use a drive at least as large as the largest drive in the array.
In the past, whenever I added drives to my Synology, the pool and volume grew automatically. With my most recent purchase, however, this no longer appeared to be true. So what changed?
We recently bought four 14TB drives to replace the existing 6TB drives in our main array. As we popped drives out of the main array, we decided to recycle them into a backup array that had only 3TB drives in it. After a week of popping and repairing (the backup array was at 82% capacity and couldn’t do a fast repair), the array size wouldn’t change.
We kept checking the RAID Calculator, and according to our calculations it should have been at roughly 47TB for SHR; instead it remained unchanged at 29.1TB. We rebooted, we scrubbed, we updated the OS, we ran S.M.A.R.T. tests, we even swore at it, but nothing would get the array to grow. Storage Manager showed that we had 6TB drives (or 5.5TB, but whatever), so it did recognize them, and as noted, these came from another array, so we knew they were working.
So we dug deeper. We double-checked the drive compatibility table, and the drives were in fact listed. We checked our model number against the maximum volume size, and 108TB was listed as the max.
So then we searched. And searched. And searched. “Grow Synology array”. “Synology can’t increase volume size”. “Synology modify allocated size greyed out”.
Synology had an FAQ that seemed perfect:
I added new drives to my Synology NAS, but the available capacity didn’t increase. What can I do?
Nope, can’t increase; it’s greyed out.
Also nope, same reason.
Add Drives to Expand the Storage Pool Capacity
Sounds exactly like what I’m trying to do. (I’m actually wrong; more on this one later.)
How I upgraded my Synology NAS to a bigger disk
Yikes. Almost a decade old, with commands run manually against the partition tables. It also includes a bunch of “should do the trick but didn’t work for me” statements.
Adding, extending, and removing Linux disks and partitions in 2019
More recent, though not Synology-specific. I did run the fdisk and parted commands for inspection, and they confirmed that the 6TB drives were only using 3TB (as far as I could interpret, at least). And if I’m being honest, I did try crossing my fingers and running one of the rescan commands, hoping for magic, but it didn’t fix anything and I didn’t want to push my luck.
So finally I ended up in the forums with a post:
Grow Synology after adding larger drives
One person was curious about the drives, and the other person, although they technically answered the question literally, didn’t really address the problem or provide anything useful. Granted, this is a free forum filled with community members who are probably answering in their free time, so I’m not knocking that; the fact that I got a discussion going fairly fast was great. But coming from SO, when we state a fact we try to back it up with a source. The same day I posted my question to Synology, I commented on another question on SO where I pointed the person to more information. Same with the day before. Short and sweet, but still leading them somewhere. (It also doesn’t help that a prior post literally said RTM.) But that’s a side problem. Once again, I’m not bashing anyone; I’m grateful for any help I could get.
After getting the technical answer I continued searching, and that’s when I finally noticed the text “installing additional drives” on this page. Yes, that page title is also Add Drives, and I fully acknowledge that I was wrong: I scanned too fast and was searching for the wrong things.
What I really needed was Replace a Drive.
In that article we get:
If the existing drives are of different sizes, then the replacement drive must be equal to or larger than the largest existing drive. You must replace the smaller drives first to optimize the usage of storage pool capacity. For example, if your SHR storage pool consists of three drives of different sizes (i.e., 4 TB, 3 TB, and 2 TB), then the replacement drive must be at least 4 TB. Replace the 3 TB or 2 TB drives first.
In the past, when I had performed this operation, I had always been replacing drives in the array with larger drives. Our most recent growth prior to this, however, was to add two 14TB drives, since they were super stupid cheap at the time. This meant that for the first time ever I was swapping in drives that weren’t the largest in the entire array.
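To see why the pool stayed flat, the SHR arithmetic can be sketched in a few lines. For SHR with one-drive redundancy, usable capacity works out to roughly the sum of all drives minus the largest single drive. The drive mix below is my guess at the situation described above (two large drives plus a set of 3TB drives being swapped for 6TB), and the “replacement only counts up to the size of the drive it replaced” behavior is my reading of the rule quoted earlier, not something Synology documents in these exact terms:

```python
def shr1_usable(drives):
    """Rough usable capacity of an SHR pool with one-drive redundancy:
    the sum of all drives minus the largest single drive."""
    if len(drives) < 2:
        raise ValueError("SHR needs at least two drives for redundancy")
    return sum(drives) - max(drives)

# The example quoted from the docs: 4 TB + 3 TB + 2 TB drives.
print(shr1_usable([4, 3, 2]))  # 5 TB usable (9 TB raw minus the 4 TB largest)

# A hypothetical mix like ours: two 14 TB drives plus four 3 TB drives.
old = [14, 14, 3, 3, 3, 3]
print(shr1_usable(old))  # 26 TB usable

# What the RAID Calculator promises once the 3 TB drives become 6 TB:
new = [14, 14, 6, 6, 6, 6]
print(shr1_usable(new))  # 38 TB usable

# But per the rule above, a replacement smaller than the largest
# existing drive (6 TB < 14 TB) appears to be used only up to the
# size of the drive it replaced, so in practice the pool stays put:
capped = [14, 14] + [min(6, 3)] * 4
print(shr1_usable(capped))  # 26 TB usable, unchanged
```

The gap between `new` and `capped` is exactly the mystery above: the calculator models the drives you own, while DSM’s replace-and-repair flow only grows the pool when the replacement is at least as large as the largest existing drive.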
I was wrong. I didn’t RTM. I know now. Hopefully you do, too.
Update: Right after posting this, I saw a response in the forums giving me a little bit of closure: the only fix is to back up and start over. So at least I now have the answer I was looking for.