I have been setting up a new NAS1 at home, and unlike the old one, this one has two NVMe slots for cache or fast storage. And since I tend to fully spec out my systems, I wanted to add two SSDs and 32 GB of RAM from the get-go.

There were two options: either buy the Synology original parts, or go with compatible third-party ones and hard-override the genuineness checks. Naturally I went the janky way and added two 16GB Crucial sticks and two 2TB WD Red NVMe drives. As expected, the NVMe drives were present but weren’t recognised as compatible, which this handy script from GitHub changed. It created the bare Storage Pool, from which Volumes could then be managed in the Storage Manager app as usual. Everything was fine and worked.

The other day I woke up to a message from my NAS in my inbox, telling me that the Storage Pool was degraded:

Storage Pool 2 on $hostname has degraded (total number of drives: 2; number of active drives: 1).

Several reasons may result in storage pool degradation. Please go to Storage Manager > Storage to understand the cause of degradation, or refer to this article to learn how to repair a degraded storage pool.

From Synology - $hostname

Did I mention this thing beeps like crazy?! You can temporarily disable the alarm under Control Panel > Hardware & Power > Beep Control. Thankfully this stops the beeping for the current problem only and keeps all further alarms enabled.

I then proceeded to shut down the entire system and remove and visually inspect the drives. They looked fine, so back in they went, and after booting up again the beeping continued.

The Storage Manager told me that Storage Pool 2 was degraded because there was no redundancy, but didn’t offer to re-add the second drive (due to the janky grafted-on parts). Okay, no way to fix it in the WebUI; to the terminal then. I dissected the M2_Volume script to figure out how it works, and it seems to be a fancy semi-automatic wrapper around Synology’s synostgpool. Unfortunately I couldn’t get it to modify an existing Storage Pool: it only supports creating a new one, which means the existing data gets wiped.
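For orientation at the terminal: Synology storage pools sit on top of standard Linux md RAID, so the degraded state is also visible over SSH. A minimal sketch, assuming root access; the /dev/md3 device number is a guess for Storage Pool 2, so check /proc/mdstat for the real one first:

```shell
# check_md prints the kernel's md RAID overview, or a notice when the
# md driver is absent (a degraded mirror shows [U_] instead of [UU]).
check_md() {
    if [ -r /proc/mdstat ]; then
        cat /proc/mdstat
    else
        echo "no md arrays on this system"
    fi
}
check_md
# Per-member detail (active/faulty/removed); /dev/md3 is an assumption.
command -v mdadm >/dev/null 2>&1 && mdadm --detail /dev/md3 2>/dev/null || true
```

This only inspects state, it changes nothing, so it is safe to run while deciding what to do next.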

After some back and forth I decided to rebuild the Storage Pool. I stopped the Container Manager and Plex Server that were still running off of the degraded pool (thankfully config/database only; the media files live on spinning rust), then re-created the Storage Pool using the script again. It finished without any indication of an error, and a new Storage Pool showed up in the Storage Manager. But with only one drive (I had selected both, and the correct ones; I re-checked my inputs in the terminal to make sure).
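Stopping the packages can also be done from the shell with Synology's synopkg tool instead of clicking through Package Center. A sketch; the package ids here are assumptions for DSM 7, so confirm the exact names with synopkg list:

```shell
# stop_pkg stops a DSM package by id, or just prints the intent when
# run on a machine that doesn't have the synopkg tool.
stop_pkg() {
    if command -v synopkg >/dev/null 2>&1; then
        synopkg stop "$1"
    else
        echo "would stop $1"
    fi
}
# Package ids are assumptions -- confirm with: synopkg list
stop_pkg ContainerManager
stop_pkg PlexMediaServer
```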

I then stumbled over this by accident; it is by the same author and promises to overwrite the Compatibility Database so that the Storage Manager can use the drives as if they were supported.

I executed the script, went to the Storage Manager, manually added the second NVMe drive to the one-drive pool, and off we went. I should have used the HDD script from the beginning: it modifies the NAS, and those modifications have to be re-applied after every OS update. But that is manageable (perhaps even with a startup script?) and lets me conveniently use the GUI, while the M.2 Volume script only creates the pool initially and then you’re on your own.
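The startup-script idea should be doable with DSM's Task Scheduler: a Triggered Task with the Boot-up event, run as root, that simply re-runs the patch script. A minimal sketch of such a task; the script path and log location are assumptions:

```shell
#!/bin/sh
# Boot-time task: re-apply the drive compatibility patch after DSM updates.
# SCRIPT is an assumed path -- point it at wherever you keep the patch script.
SCRIPT=/volume1/scripts/syno_hdd_db.sh
LOG=/tmp/hdd_db_boot.log

if [ -x "$SCRIPT" ]; then
    "$SCRIPT" >>"$LOG" 2>&1
    echo "patch script finished with exit $?" >>"$LOG"
else
    echo "patch script not found at $SCRIPT" >>"$LOG"
fi
```

In DSM this would live under Control Panel > Task Scheduler as a Triggered Task (event: Boot-up, user: root), so the patch survives reboots and updates without manual intervention.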

I’m writing this as the Storage Pool is rebuilding, having lost a few hours of time. I did wonder whether buying official Synology NVMe drives would have prevented the issue. But they don’t make the drives either; they buy something off the shelf as well and slap their own sticker on. Plus, WD Red drives are explicitly made for NAS use.

Oh, well. I will report back if it happens again or if I find out what caused this.

  1. Synology DS923+