Compatibility Issues

One of the major new features of Intel's Tiger Lake mobile processors is support for PCIe 4.0 lanes coming directly off the CPU. The chipset's PCIe lanes are still limited to PCIe 3.0 speeds, but SSDs or a discrete GPU can now get twice the bandwidth.
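The "twice the bandwidth" figure follows directly from the link rates: PCIe 4.0 doubles the per-lane transfer rate of PCIe 3.0 while keeping the same 128b/130b encoding. A quick back-of-the-envelope calculation (raw line rate only; real-world throughput is lower due to packet and protocol overhead):

```python
# Back-of-the-envelope PCIe bandwidth per direction.
# PCIe 3.0: 8 GT/s per lane; PCIe 4.0: 16 GT/s per lane.
# Both use 128b/130b encoding, so gen4 is exactly double gen3.

def pcie_bandwidth_gbps(gt_per_s: float, lanes: int) -> float:
    """Raw link bandwidth in GB/s after 128b/130b line-code overhead."""
    return gt_per_s * lanes * (128 / 130) / 8  # bits -> bytes

gen3_x4 = pcie_bandwidth_gbps(8, 4)   # ~3.94 GB/s
gen4_x4 = pcie_bandwidth_gbps(16, 4)  # ~7.88 GB/s
print(f"PCIe 3.0 x4: {gen3_x4:.2f} GB/s, PCIe 4.0 x4: {gen4_x4:.2f} GB/s")
```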

This change is relevant because of how Intel's Optane Memory caching software interacts with the system's hardware and firmware. Earlier generations of Optane Memory and Intel's NVMe RAID solutions for their consumer platforms all relied on the NVMe SSDs being attached through the chipset. They used an ugly hack to hide NVMe devices from standard NVMe driver software and make them accessible only through the chipset's SATA controller, where only Intel's drivers could find them. Using chipset-attached NVMe devices with the standard NVMe drivers included in operating systems like Windows or Linux required changing the system's BIOS settings to put the SATA controller in AHCI mode rather than RAID/RST mode. Most of the PC OEMs that didn't provide that BIOS option were eventually shamed into adding it, or into activating the NVMe remapping mode only when an Optane Memory device is installed.
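On Linux, this remapping hack is visible in the kernel log: when the SATA controller is in RAID/RST mode, the ahci driver reports NVMe devices it can see but cannot drive. A small sketch of spotting that condition (the sample log lines are illustrative, and the exact wording varies by kernel version):

```python
# Sketch: detect Intel RST "NVMe remapping" from Linux kernel log lines.
# With the SATA controller in RAID/RST mode, the ahci driver warns about
# NVMe devices hidden behind it. Sample lines are illustrative only.
import re

sample_dmesg = """\
ahci 0000:00:17.0: Found 1 remapped NVMe devices.
ahci 0000:00:17.0: Switch your BIOS from RAID to AHCI mode to use them.
"""

def remapped_nvme_count(log: str) -> int:
    """Return how many NVMe devices the ahci driver reported as remapped."""
    m = re.search(r"Found (\d+) remapped NVMe devices", log)
    return int(m.group(1)) if m else 0

print(remapped_nvme_count(sample_dmesg))  # 1
```

In practice you would feed this the output of `dmesg` on the affected machine; a nonzero count means the drive is hidden from the standard NVMe driver until the BIOS is switched to AHCI mode.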

For Tiger Lake and CPU-attached NVMe drives, Intel has brought over a feature from their server and workstation platforms. The Intel Volume Management Device (VMD) is a feature of the CPU's PCIe root complex. VMD leaves NVMe devices visible as proper PCIe devices, but enumerated in a separate PCI domain from all the other devices in the system. In the server space, this was a clear improvement: it made it easier to handle error containment and hotplug in the driver without involving the motherboard firmware, and VMD served as the foundation for Intel's Virtual RAID on CPU (VROC) NVMe software RAID on those platforms. In the client space, VMD still accomplishes Intel's goal of ensuring that the standard Windows NVMe driver can't find the NVMe drive, leaving it available for Intel's drivers to manage.
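That separate PCI domain is easy to see on Linux, where `lspci -D` prints the domain as the first field of each address and the VMD driver enumerates child devices in a synthetic domain (commonly 10000 and up) instead of the usual 0000. A sketch of picking them out (the sample addresses and device names below are illustrative, not captured from real hardware):

```python
# Sketch: spot VMD-managed NVMe devices in `lspci -D` output by their
# PCI domain. Devices behind VMD live outside the normal domain 0000.
# Sample output is illustrative only.

sample_lspci = """\
0000:00:0e.0 RAID bus controller: Intel Corporation Volume Management Device NVMe RAID Controller
10000:e1:00.0 Non-Volatile memory controller: Sandisk Corp WD Black SN850
"""

def vmd_devices(lspci_output: str) -> list:
    """Return addresses of devices enumerated outside PCI domain 0000."""
    found = []
    for line in lspci_output.splitlines():
        addr = line.split()[0]        # e.g. "10000:e1:00.0"
        domain = addr.split(":")[0]   # domain is the first field
        if domain != "0000":
            found.append(addr)
    return found

print(vmd_devices(sample_lspci))  # ['10000:e1:00.0']
```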

Unfortunately, this switch seems to mean we're going through another round of compatibility headaches with missing BIOS options to disable the new functionality. It's not currently possible to do a clean install of Windows 10 onto these machines without providing an Intel VMD driver at the beginning of the installation process. Without it, Windows simply cannot detect the NVMe SSD in the CPU-attached M.2 slot. As a result, all of the Windows-based benchmark results in this review were obtained using the Intel RST drivers (except for the Enmotus FuzeDrive SSD, which has its own driver). Normally we don't bother with vendor-specific drivers and stick with Microsoft's NVMe driver included with Windows, but that wasn't an option for this review.

We had planned to include a direct comparison of Intel's Optane Memory H20 against the Enmotus FuzeDrive P200 SSD, but Intel's VMD+RST situation on Tiger Lake prevents the Enmotus drivers from properly detecting the FuzeDrive SSD. On most platforms, installing the FuzeDrive SSD will cause Windows Update to fetch the Enmotus drivers and associate them with that particular NVMe device. Their Fuzion application can then be downloaded from the Microsoft Store to configure the tiering. Instead, on this Tiger Lake notebook, the Fuzion application reports that no FuzeDrive SSD is installed even when the FuzeDrive SSD is the only storage device in the system. It's not entirely clear whether the Intel VMD drivers merely prevent the FuzeDrive software from correctly detecting the drive as one of their own and unlocking the tiering capability, or if there's a more fundamental conflict between the Intel VMD and Enmotus NVMe drivers that prevents them from both being active for the same device. We suspect the latter.

Ultimately, this mess is caused by a combination of Intel and Enmotus wanting to keep their storage software functionality locked to their hardware (though Enmotus also sells their software independently), and Microsoft's inability to provide a clean framework for layering storage drivers the way Linux can (while allowing for the hardware lock-in these vendors demand). Neither of these reasons is sufficient justification for shipping such convoluted "solutions" to end users. It's especially disappointing to see that Intel's new and improved method for supporting Optane Memory caching now breaks a competitor's solution even when the Optane Memory hardware is removed from the system. The various software implementations of storage caching, tiering, RAID, and encryption available in the market are powerful tools, but they're at their best when they can be used together. Intel and Microsoft need to step up and address this situation, or attempts at innovation in this space will continue to be stifled by unnecessary complexity that makes these storage systems fragile and frustrating.

Comments

  • Billy Tallis - Thursday, May 20, 2021 - link

    It's a general property of caching that if your workload doesn't actually fit in the cache, then it will run at about the same speed as if that cache didn't exist. This is as true of storage caches as it is of a CPU's caches for RAM. Of course, defining whether your workload "fits" in a cache is a bit fuzzy, and depends on details of the workload's spatial and temporal locality, and the cache replacement policy.
  • scan80269 - Thursday, May 20, 2021 - link

    That Intel Optane Memory H20 stick may be the source of the "coil whine". Don't be so sure about this noise always coming from the main board. A colleague has been bothered by a periodic high-pitched noise from her laptop, up until the installed Optane Memory H10 stick was replaced by a regular m.2 NAND SSD. The noise can come from a capacitor or inductor in the switching regulator circuit on the m.2 stick.
  • scan80269 - Thursday, May 20, 2021 - link

    Oh, and Intel Optane Memory H20 is spec'ed at PCIe 3.0 x4 for the m.2 interface. I have the same HP Spectre x360 15.6" laptop with Tiger Lake CPU, and it happily runs the m.2 NVMe SSD at PCIe Gen4 speed, with a sequential read speed of over 6000 MB/s as measured by winsat disk. So this is the H20 not supporting PCIe Gen4 speed as opposed to the HP laptop lacking support of that speed.
  • Billy Tallis - Thursday, May 20, 2021 - link

    I tested the laptop with 10 different SSDs. The coil whine is not from the SSD.

    I tested the laptop with a PCIe gen4 SSD, and it did not operate at gen4 speed. I checked the lspci output in Linux and the host side of that link did not list 16 GT/s capability.

    Give me a little credit here, instead of accusing me of being wildly wrong about stuff that's trivially verifiable.
  • Polaris19832145 - Wednesday, September 22, 2021 - link

    What about using an Intel 660p Series M.2 2280 2TB PCIe NVMe 3.0 x4 QLC SSD (SSDPEKNW020T8X1) as extra CPU L2 or even L3 cache, at 1-8TB, in a PCIe 4.0 slot, if Intel and AMD will allow this, to get rid of GPU and HDD bottlenecking in the PCH and CPU lanes on the motherboard? Is it even possible to format an SSD for use as additional L2/L3 cache, to speed up the iGPU on an APU, or GPUs running in mGPU on AMD or SLI on Nvidia, if someone can mod the second M.2 PCIe 4.0 slot for this sort of additional CPU cache?
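The cache-fit effect described in the first comment above can be demonstrated with a toy LRU simulation: once the working set exceeds the cache, hit rate collapses and performance approaches the no-cache case. This is purely illustrative and does not model Optane Memory's actual replacement policy:

```python
# Toy LRU cache simulation: a cyclic scan over a working set either fits
# the cache (high hit rate) or thrashes it (near-zero hit rate).
from collections import OrderedDict

def hit_rate(cache_size: int, working_set: int, accesses: int = 10_000) -> float:
    cache = OrderedDict()
    hits = 0
    for i in range(accesses):
        block = i % working_set            # cyclic scan over the working set
        if block in cache:
            hits += 1
            cache.move_to_end(block)       # LRU: refresh on hit
        else:
            cache[block] = None
            if len(cache) > cache_size:
                cache.popitem(last=False)  # evict least recently used
    return hits / accesses

# Working set fits: nearly every access after warm-up is a hit.
print(f"fits:    {hit_rate(cache_size=100, working_set=50):.2f}")
# Working set just slightly too big: a cyclic scan against LRU never hits.
print(f"too big: {hit_rate(cache_size=100, working_set=101):.2f}")
```

The sharp cliff at `working_set = cache_size + 1` is a worst case specific to cyclic scans under LRU; real workloads with better temporal locality degrade more gradually, which is the fuzziness the comment alludes to.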
