How to kick an older machine into the modern age with an NVMe SSD.
With mobile and cloud taking the world by storm, it's easy to forget that the workstation is still advancing with Moore's Law gusto. A most remarkable technology has come to the PC and laptop in the form of solid state storage and the NVMe interface. While the average laptop or PC of 5 years ago managed sequential read/write speeds of roughly 50MB/s under demanding workloads, and far worse for random access, SSDs evolved to fully max out the SATA3 bus at 6Gb/s (~500MB/s in practice), random or sequential, all with a fraction of the energy use and failure rate*. What a surprise to have the interface become the bottleneck for your hard disk. It was time for NVMe: essentially a direct injection into the PCIe bus, with real-world speeds of up to 3GB/s in a consumer device. All this while healthy competition drops prices like rocks. Even before the elusive memristor arrives, this is truly the golden age of storage.
With all these advances in tech, it seems a shame to replace a perfectly good machine just to get the benefits of NVMe. Let me walk you through a recent build I did using an older HP Z600 workstation and Fedora 25 Linux dual-booted with Windows 7. By the end of this build, I have an extremely cost-effective development workstation capable of running scores of VMs and hundreds of Docker containers without a single pokey progress bar or I/O wait to delay me. First of all, how does one get a tiny modern NVMe device into such a large, cumbersome box? PCI-E to M.2 adapters are commonplace from resellers, and luckily I have plenty of PCI-E slots. Have a look at this helpful image I yoinked from eBay:
Here we see a standard PCI-E 4x adapter with a single slot (actually, it looks like there is a second one on the back). At first you may think, "there's room for so much more. How about 2 or 4 slots?" But remember that NVMe is basically a direct mapping to the PCI-E bus, so one device can typically max out the entire PCI-E slot. Be careful with cards that advertise more slots, as they may be meant to RAID standard M.2 SATA devices, which are slot-compatible with NVMe drives but won't have nearly the same performance. In my case, I settled on this simple adapter from Startech [£26 - Amazon]. As an experiment, and to keep costs reasonable, my device will be Samsung's 950 Pro [£173 - Amazon]. Any suitable vendor should suffice so long as the device supports NVMe and delivers the performance you're looking for. Installation is really straightforward: I just slip everything into an available PCI-E 4x slot and away we go.
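Before going any further, it's worth confirming that the card and drive are actually detected. A quick sanity check from any Linux environment looks something like this (the nvme command comes from the nvme-cli package; the device name is just what you'd typically see):

    # Confirm the drive shows up on the PCI-E bus
    lspci | grep -i "non-volatile"

    # List detected NVMe namespaces (requires nvme-cli)
    nvme list

    # NVMe drives appear as /dev/nvme0n1 rather than /dev/sdX
    lsblk /dev/nvme0n1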
It's one thing to have 256GB of direct storage in your PCI-E slot. It's an entirely different thing for your legacy BIOS to recognise it as bootable media. Don't panic! There are a few really simple ways around this. First, you can continue to boot off your conventional media and use the new device as add-on storage. Secondly, you can use the NVMe device as a cache for your existing media with something like dm-cache (a rough sketch follows below). Thirdly, and my preferred method: boot off the media anyway (!)
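For that second option, here's a minimal sketch of a dm-cache setup via LVM's lvmcache frontend. It assumes an existing volume group called vg0 with a slow logical volume vg0/home, and the NVMe drive at /dev/nvme0n1; all the names are illustrative, so adjust them to your layout:

    # Add the NVMe device to the existing volume group
    pvcreate /dev/nvme0n1
    vgextend vg0 /dev/nvme0n1

    # Carve out a cache volume on the NVMe device
    lvcreate -L 200G -n cache0 vg0 /dev/nvme0n1

    # Convert it to a cache pool and attach it to the slow volume
    lvconvert --type cache-pool vg0/cache0
    lvconvert --type cache --cachepool vg0/cache0 vg0/home

From then on, hot blocks of vg0/home are served from NVMe transparently, while the spindle remains the system of record.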
Yes, there is a small caveat when I say my BIOS can't boot off of NVMe: it's only the bootloader that can't be read from the device. By simply booting off a Fedora USB stick, I'm able to install onto the NVMe device, and Anaconda will even happily offer to install the bootloader (GRUB 2) there. Of course, when I restart, the device isn't available for selection. At this point, it's as simple as installing Fedora a second time onto an existing SATA device; I'm adding a spare 128GB 2.5" SSD I had lying around. On this second install, I elect to have the bootloader installed onto the SATA device, and I use different LVM volume names because confusion is no fun. During install, grub2-mkconfig searches other devices for bootable operating systems; it finds both my NVMe install of Fedora and the factory-installed Windows 7 on a 1TB spindle. On reboot, I can now select my SATA device, which contains GRUB 2, and proceed to boot directly into my NVMe install. Voila! Boot now takes 25 seconds from spindle, 14 seconds from SSD, and 4 seconds from NVMe. If you know how to install and configure GRUB 2 manually, you can skip the second install (a sketch follows below), but this approach keeps things easy for the average end user, or for anyone who finds GRUB 2's complexity heavier going than plain old GRUB 1.
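For those comfortable doing it by hand, the manual equivalent on a legacy-BIOS Fedora system is roughly the following, assuming /dev/sdb is the SATA SSD the BIOS will boot from (device names are examples; double-check yours with lsblk first):

    # Install GRUB 2 to the MBR of the SATA device the BIOS can see
    grub2-install /dev/sdb

    # Regenerate the config; os-prober picks up the Fedora install
    # on NVMe as well as the factory Windows 7 on the spindle
    grub2-mkconfig -o /boot/grub2/grub.cfg

The net effect is the same two-stage arrangement: the BIOS loads GRUB 2 from SATA, and GRUB 2 hands off to the kernel on NVMe.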
What about enterprise applications? If I were building an SDS (software-defined storage) solution, NVMe cache would be my preference. One or more NVMe tiers caching slower SSDs or large, cheap SATA spindles would give a huge performance boost to any Gluster, Ceph, or even NFS or Samba deployment. Splitting my PCI-E slots between NVMe storage and 10GigE would make a highly effective enterprise storage node. In such a case the primary bottleneck becomes the network rather than the commodity SATA storage that chokes most SDS deployments: even bonded 2x10Gb fibre tops out around 2.5GB/s, short of NVMe's ~3GB/s. Such environments lend themselves well to hybrid installs with compute and network in one box. Nodes can act as full-performance storage replicas and still have storage and CPU bandwidth available for things like container workloads. Even some older blade servers offer PCI-E slots capable of taking an NVMe adapter, so there's no need to replace an entire server line just to get the built-in M.2 or NVMe features of newer models. The possibilities are limitless.
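As one concrete illustration of this tiering idea: on newer Ceph releases, the BlueStore backend can keep its write-ahead log and RocksDB metadata on a fast device while the bulk data lives on cheap spindles. A minimal sketch, assuming /dev/sdb is a SATA spindle and /dev/nvme0n1p1 is a partition you've reserved on the NVMe tier:

    # Create an OSD with data on the spindle and the
    # metadata DB + WAL on the NVMe partition
    ceph-volume lvm create --bluestore \
        --data /dev/sdb \
        --block.db /dev/nvme0n1p1

One modest NVMe device can serve as the DB/WAL tier for several spindle-backed OSDs, which is exactly the kind of split that shifts the bottleneck from disk to network.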
In my use case (a development workstation), a fresh environment that's completely NVMe-based, with both sustained and random I/O now capable of 2.8GB/s, gives me a world-class machine. With 2x6-core hyperthreaded Xeons (24 logical cores in Linux) and 48GB of ECC DDR3 RAM, it's amazing value at about £800. The original SATA hard disk averages 40MB/s sustained, but I'm now getting 2600MB/s whether sustained or random. That's about a 65x performance gain. Not 65%, not 650%, but 6500%, all for about £200 in upgrades. True, the architecture in the machine isn't the latest, and DDR3 is giving way to DDR4, but just have a look at some of these PTS benchmarks (z600) and kernel compile times (kernel-build-1). When it comes to development and testing, this value is hard to beat, even in the cloud. Access time and bandwidth surpass most enterprise SANs! More disk benchmarks to come. (Top entry of all benchmarks)
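If you want to reproduce numbers like these on your own hardware, fio is the standard tool. Something along these lines, run against a scratch file on the NVMe filesystem, is a reasonable starting point (the path and parameters are just examples):

    # Sequential read throughput, 1MiB blocks, direct I/O
    fio --name=seqread --filename=/mnt/nvme/testfile --size=4G \
        --rw=read --bs=1M --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=30 --time_based

    # Random 4KiB reads, the workload where spindles fall apart
    fio --name=randread --filename=/mnt/nvme/testfile --size=4G \
        --rw=randread --bs=4k --iodepth=32 --ioengine=libaio \
        --direct=1 --runtime=30 --time_based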
I hope this helps you breathe new life into your legacy systems, rather than writing off a generation of hardware with years of potential left in it. Stay tuned, as there is more to come!
* Note that while classic spindle hard disks tend to fail all at once, modern SSDs tend to wear out gradually: individual cells fail over time, and the drive monitors and error-corrects them.