Recently we had a partner express interest in deploying a Microsoft Hyper-V cluster using a relatively new (and evolving) technology called Storage Spaces Direct, and I was tasked with making that happen.  While I am well versed in many Microsoft technologies, this was a new one for me.  This is part one of a blog series that will include a technology overview, technical details, and lessons learned throughout the design and deployment of Storage Spaces Direct on Windows Server 2016.

What is Storage Spaces Direct?

Storage Spaces Direct (also known as S2D) is an evolution of Storage Spaces, a software-defined storage technology introduced in Windows Server 2012.  It pools the local storage in each server to provide a highly available, clustered storage solution.  S2D builds on several existing Windows features, including the following:

  • Failover Clustering
  • Cluster Shared Volume (CSV) file system
  • Server Message Block (SMB) 3

The new component introduced in Windows Server 2016 for S2D is the Software Storage Bus, which spans the cluster so that each node can see every other node’s local drives.  Below is an overview of the Storage Spaces Direct stack as defined by Microsoft.
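Because all of these building blocks ship in the box, preparing a node is largely a matter of installing the right roles and features.  As a rough illustration (the node names below are placeholders, and your exact feature list may differ), the prerequisites can be installed remotely with PowerShell:

```powershell
# Hypothetical node names; substitute your own
$nodes = "S2D-Node1", "S2D-Node2", "S2D-Node3", "S2D-Node4"

foreach ($node in $nodes) {
    # Hyper-V for the compute role, Failover Clustering for the cluster
    # itself, and the File Server role for SMB3 access to the volumes
    Install-WindowsFeature -ComputerName $node `
        -Name Hyper-V, Failover-Clustering, FS-FileServer `
        -IncludeManagementTools -Restart
}
```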

Deployment Types

Storage Spaces Direct can be deployed in two basic configurations: converged or hyper-converged.  In a converged deployment, compute and storage resources are separated into two clusters.

In a hyper-converged deployment, compute and storage resources are contained in a single cluster, with virtual machines running directly on the nodes that provide the storage.

The design for my implementation of S2D called for a hyper-converged configuration, so that is what this series of blog posts will focus on.
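To make the hyper-converged model a little more concrete, below is a heavily simplified sketch of what standing up such a cluster looks like in PowerShell.  The node names, cluster name, IP address, and volume size are all placeholders; the real configuration steps will be covered in the next post.

```powershell
$nodes = "S2D-Node1", "S2D-Node2", "S2D-Node3", "S2D-Node4"

# Validate the nodes, including the S2D-specific checks
Test-Cluster -Node $nodes -Include "Storage Spaces Direct", "Inventory", "Network", "System Configuration"

# Create the cluster without claiming any shared storage
New-Cluster -Name "S2D-Cluster" -Node $nodes -NoStorage -StaticAddress 10.0.0.100

# Enable S2D; eligible local drives are discovered and pooled automatically
Enable-ClusterStorageSpacesDirect

# Carve a resilient, cluster-shared volume out of the pool for Hyper-V
New-Volume -StoragePoolFriendlyName "S2D*" -FriendlyName "Volume1" `
    -FileSystem CSVFS_ReFS -Size 2TB
```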

Network Hardware

To provide the highest possible level of storage performance, S2D has very specific network hardware and configuration requirements.  The core technology S2D is designed to use is Remote Direct Memory Access (RDMA), which allows nodes to move storage traffic directly between NICs with minimal CPU overhead.  There are two competing implementations of RDMA: iWARP and RoCE (RDMA over Converged Ethernet).  Intel NICs utilize iWARP, while Mellanox NICs utilize RoCE.

Both technologies require at least 10 Gb Ethernet connectivity, so that is an important detail to keep in mind when designing an S2D solution.  I’m not here to say one is better than the other, but we opted to use RoCE with Mellanox NICs for a couple of reasons:

  1. The Mellanox NICs were significantly less expensive than comparable Intel NICs.
  2. After researching the two protocols, we concluded that RoCE was the better option for us because of its simplicity at the protocol level compared to iWARP.

I won’t get into detail regarding the specific differences, but the diagram below will give you the basic idea.
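One practical consequence of choosing RoCE is worth calling out: RoCE expects a lossless Ethernet fabric, which means configuring Data Center Bridging (DCB) and Priority Flow Control (PFC) on both the switches and the hosts.  The sketch below shows the host-side pieces on one node; the adapter name and the priority and bandwidth values are examples only, not recommendations.

```powershell
# DCB is a separate Windows feature
Install-WindowsFeature -Name Data-Center-Bridging

# Tag SMB Direct (RDMA) traffic, which rides on port 445, with priority 3
New-NetQosPolicy "SMB" -NetDirectPortMatchCondition 445 -PriorityValue8021Action 3

# Enable Priority Flow Control only for the SMB priority
Enable-NetQosFlowControl -Priority 3
Disable-NetQosFlowControl -Priority 0,1,2,4,5,6,7

# Reserve bandwidth for SMB and ignore DCBX settings pushed by the switch
New-NetQosTrafficClass "SMB" -Priority 3 -BandwidthPercentage 50 -Algorithm ETS
Set-NetQosDcbxSetting -Willing $false

# Apply QoS on the storage NIC (hypothetical adapter name) and verify RDMA
Enable-NetAdapterQos -Name "SLOT 3 Port 1"
Get-NetAdapterRdma
```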

Server Hardware

Along with the specific network requirements for S2D, there are also very specific server hardware requirements.  Most hardware vendors provide pre-configured hardware packages designed to support S2D.  These tend to be the best option, as they ensure that the disks, controllers, and NICs are validated by Microsoft for S2D, and the vendor can then support your implementation knowing that the hardware configuration is officially supported.

We opted to go with Dell hardware for our implementation.  Dell offers a solution called a “Ready Node,” which is packaged with supported hardware for S2D and also comes with software support for your configuration.  Our Ready Node configuration from Dell included PowerEdge R730xd rack-mount servers with the following disk configuration.

As you can see, the servers contain a mix of SSDs and HDDs.  The SSDs are used solely for caching, while the HDDs handle the actual data storage.  I will go into further detail about this configuration in my next post.
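Before enabling S2D, it is worth confirming that the drives enumerate the way the cache/capacity split expects.  Here is a quick way to check (output will vary with your hardware):

```powershell
# MediaType separates the SSD cache devices from the HDD capacity tier;
# CanPool shows whether a drive is eligible for the S2D pool
Get-PhysicalDisk |
    Sort-Object MediaType |
    Format-Table FriendlyName, MediaType, CanPool, Size -AutoSize

# Once S2D is enabled, Get-ClusterStorageSpacesDirect reports the cache state
Get-ClusterStorageSpacesDirect
```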

Microsoft Licensing

One important note regarding Storage Spaces Direct on Windows Server 2016 is that it requires Datacenter edition licensing from Microsoft.  To me, this is a big mistake on Microsoft’s part.  Datacenter licensing is quite expensive compared to the lower editions, which will make the solution cost-prohibitive for many customers, especially small and mid-sized ones.  Fortunately, that was not a problem for this project, but I suspect many potential customers will rule out this solution because of this requirement.

What’s Next?

This post is intended to provide a high-level overview of Storage Spaces Direct.  In my next post, I will go into technical details regarding the configuration of hardware, storage, and the hypervisor.  My third post on this topic will cover lessons learned during the deployment and suggestions for those who are looking to leverage this technology.  Stay tuned!

About the author

Mike Witt is a Microsoft and Citrix certified Principal Consultant for X-Centric IT Solutions with over 20 years of experience designing and deploying solutions for clients of all sizes and business types.