Azure Stack storage - under the covers


When you buy an Azure Stack appliance, you have several options regarding the configuration, one of them being the physical storage that is supplied. The most common option is a mixture of SATA HDDs and SSDs, due to the price point. So, how does Azure Stack use this storage? Can you configure how it's used? The SSDs are reserved for temp disks / Premium Storage accounts just like Public Azure, right?

I was having a discussion recently about the questions above and, whilst I had some answers, I certainly didn't have them all. Microsoft have given some details on the architecture and technology used, but how does it all work together to provide an Azure-consistent experience?

I decided to spend some time having a poke around to better my understanding of how the technology works.

Storage Architecture

Azure Stack is a hyper-converged appliance, built on Windows Server 2016. The storage element is provided by a combination of Storage Spaces Direct (S2D), Windows Server Failover Clustering and high-speed networking (RDMA), which together deliver a performant, scalable and resilient software-defined storage service. I'll not delve into how S2D works, as it's a broad topic, but if you are interested, I recommend the following: https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/storage-spaces-direct-overview

What I will highlight, though, is how S2D uses the physical storage provisioned in each of the servers in Azure Stack. All available disks (except those set aside for the boot OS) are allocated to S2D, and the fastest are automatically assigned as cache drives. From https://docs.microsoft.com/en-us/windows-server/storage/storage-spaces/understand-the-cache:

"Storage Spaces Direct features a built-in server-side cache to maximize storage performance. It is a large, persistent, real-time read and write cache. The cache is configured automatically when Storage Spaces Direct is enabled. In most cases, no manual management whatsoever is required. How the cache works depends on the types of drives present."

So, if you had a combination of NVMe, SSD and HDD drives, only the NVMe drives would be allocated to the cache; the SSD and HDD drives would be allocated to capacity (storage).
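
To make that behaviour concrete, here is a minimal Python sketch of the selection logic as I understand it from the documentation above. The drive names, counts and speed ranking are purely illustrative; the real logic, of course, lives inside Storage Spaces Direct itself.

```python
# Sketch of how S2D splits drives into cache and capacity tiers, based on the
# behaviour described above. Everything here is illustrative.

# Faster bus/media types rank lower (NVMe is the fastest).
SPEED_RANK = {"NVMe": 0, "SSD": 1, "HDD": 2}

def assign_drives(drives):
    """Split drives into cache and capacity tiers.

    If only one drive type is present there is no cache tier at all;
    otherwise the fastest type becomes cache and the rest becomes capacity.
    """
    types_present = {d["type"] for d in drives}
    if len(types_present) == 1:
        return [], drives  # single drive type: no cache, everything is capacity

    fastest = min(types_present, key=SPEED_RANK.get)
    cache = [d for d in drives if d["type"] == fastest]
    capacity = [d for d in drives if d["type"] != fastest]
    return cache, capacity

# Example: a node with NVMe + SSD + HDD -> only the NVMe drives land in the cache.
node = (
    [{"id": f"nvme{i}", "type": "NVMe"} for i in range(2)]
    + [{"id": f"ssd{i}", "type": "SSD"} for i in range(4)]
    + [{"id": f"hdd{i}", "type": "HDD"} for i in range(8)]
)
cache, capacity = assign_drives(node)
print([d["id"] for d in cache])  # ['nvme0', 'nvme1']
print(len(capacity))             # 12 (the SSDs and HDDs)
```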

The diagram below shows how read/write caching works with the different mix of drive types:

For Azure Stack, the S2D storage pool is configured to use three-way mirroring, so every piece of data is written as three copies, each held on a different server, to ensure resilience.
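
As a rough illustration of what three-way mirroring means for usable space, here is a quick back-of-the-envelope calculation. The node and drive counts are made up, and it ignores cache drives, reserve capacity and metadata overhead.

```python
# Back-of-the-envelope capacity maths for a three-way mirror: every byte is
# stored three times, so usable capacity is roughly raw capacity divided by 3.
# The figures below are invented for illustration only.
nodes = 7
capacity_drives_per_node = 8
drive_size_tb = 4

raw_tb = nodes * capacity_drives_per_node * drive_size_tb
usable_tb = raw_tb / 3  # three copies of every piece of data

print(f"Raw: {raw_tb} TB, usable with a three-way mirror: ~{usable_tb:.0f} TB")
```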

Now we know how the underlying storage is provisioned, how does Azure Stack consume it?

The diagram below shows how a hyper-converged deployment looks, with Cluster Shared Volumes running on Storage Spaces Direct to store:

  • Core infrastructure VM config / VHDs

  • Azure Consistent Storage blob/table/queue data

  • Core infrastructure config files, binaries, SQL DBs, etc.

  • Tenant VM config, plus OS, data and temp disk VHDs

As part of the Azure Stack installation routine, a number of Cluster Shared Volumes (CSVs) are created.

Here is how the storage volumes are configured on a 7-node integrated appliance (a weird number of nodes, I know; it's a long story!):

Within each of the volumes, several directories and shares are created. Some key ones:

  • Infrastructure_1-3 volumes host files required by the core Azure Stack infrastructure VMs and fabric to operate, such as config files, Windows Defender updates, SQL database files, and so on.

  • VmTemp_1-x volumes host the temp drives for each tenant VM, equivalent to what you would see in Public Azure IaaS VMs.

  • ObjStore_1-x volumes host the following directories:

      • BlobServiceData is where tenant blob data is stored. If you were to look at the files contained in each directory, they are named with hex strings and are not human-readable.

      • The ACS folder appears to host the Table service data.

  • SU1_ObjStore_1-x volumes host core infrastructure and tenant VM config and VHD files.

The key thing to point out here is that the temp disks created for each tenant VM DO NOT directly utilize the physical SSDs installed in each node, but are VHDs stored on a CSV. With Public Azure VMs, the temp disk is created directly on SSD storage on the host running the VM, and I've seen some blog posts suggesting that it works the same way on Azure Stack.

VM Storage

So, how does Azure Stack enforce the IOPS limits assigned to each VM series and to Standard/Premium storage accounts (500 IOPS per VHD stored on Standard, for instance)?

The answer: Hyper-V Storage QoS!

https://docs.microsoft.com/en-us/windows-server/administration/performance-tuning/role/hyper-v-server/storage-io-performance#advanced-storage-features

For each VHD attached to a VM, minimum and maximum IOPS can be specified. So, for a VM using Standard storage, the maximum is capped at 500 IOPS per disk.
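
One detail worth knowing is that Storage QoS counts I/O in normalized units (8 KB by default on Windows Server 2016), so larger I/Os consume more than one unit of the cap. The sketch below is simply my illustration of that accounting; the function names and figures are my own, not part of any Microsoft API.

```python
import math

# Rough model of how a Storage QoS-style cap counts I/O. Each I/O is
# normalized into 8 KB units, so one 64 KB write costs 8 "IOPS" against
# the limit. Illustrative only.

NORMALIZATION_SIZE_KB = 8

def normalized_units(io_size_kb):
    """An I/O of 8 KB or less counts as one unit; larger I/Os count as size/8, rounded up."""
    return max(1, math.ceil(io_size_kb / NORMALIZATION_SIZE_KB))

def max_real_iops(cap, io_size_kb):
    """How many actual I/Os per second fit under a normalized-IOPS cap."""
    return cap // normalized_units(io_size_kb)

# A Standard storage VHD capped at 500 normalized IOPS:
for size_kb in (4, 8, 64, 256):
    print(f"{size_kb:>3} KB I/O -> ~{max_real_iops(500, size_kb)} real IOPS under a 500 IOPS cap")
```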

Below, you can see the quota applied to an App Service VM running on an ASDK deployment:

It's interesting to note that the temp disk is capped at 500 IOPS too.

For a tenant VM utilizing premium storage, I got the following:

Temp disk IOPS is set to a max of 4000 this time 😊

On this VM (DS1_v2), I also added a number of data disks, as I wanted to see what I could get from the system.

Although the portal states that I should only be able to add 2 data disks, with a maximum of 3200 IOPS across all attached disks, I found I could add 4 data disks, with each attached VHD given a maximum of 2300 IOPS.
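
A quick sanity check of those numbers (purely my own arithmetic) shows how far the sum of the per-disk caps sits above the documented per-VM limit:

```python
# Sum of the observed per-disk caps vs. the documented DS1_v2 limit of
# 3200 IOPS across all data disks (figures taken from the observations above).
data_disks = 4
per_disk_cap = 2300
documented_vm_cap = 3200

total_per_disk_caps = data_disks * per_disk_cap
print(total_per_disk_caps)                      # 9200
print(total_per_disk_caps > documented_vm_cap)  # True - well above the documented limit
```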

Below is a screen grab from the Azure Stack tenant portal. The VM size is not consistent with its Public Azure equivalent; Microsoft has recognised this as a bug, and it should be addressed in an upcoming release.

https://docs.microsoft.com/en-us/azure/azure-stack/user/azure-stack-vm-considerations

The data disks are attached as SCSI devices. Here you can see 4 are attached, each with a max of 2300 IOPS.

Hopefully this post has given some insight into how Azure Stack uses storage and has filled in some of the knowledge gaps that are out there.