Wednesday, September 7, 2016

Virtual SAN Availability Part 1 - Intro and Basics

Introduction


VMworld 2016 U.S. featured many popular breakout sessions covering VMware storage and availability products including Virtual SAN, Site Recovery Manager, and Virtual Volumes. One of these sessions was STO8179 - Understanding the Availability Features of Virtual SAN, which was delivered by GS Khalsa and me. Most VMworld sessions are available for playback online, but considering the popularity of the session, I thought it made sense to create a blog series on this topic. Only a finite amount of content can be delivered within the 60-minute time frame of a VMworld breakout session. A series of blog articles offers another way to consume the information and allows for the addition of supplemental content. This article is the first of the series. As stated in the video recording of the VMworld session, this discussion assumes you have some basic knowledge of Virtual SAN, or "VSAN" as it is often called. If you need a primer, start with the VMware Virtual SAN 6.5 Data Sheet.


VSAN Architecture Basics


Before digging into the various VSAN technologies that support availability, it is important to understand some VSAN architecture basics and how VSAN stores data. Let's quickly cover some architecture concepts. The diagram below offers a high-level "quick view" of the VSAN architecture, followed by a more detailed explanation of the various concepts.



A VSAN cluster consists of physical vSphere hosts, or "nodes" as they are sometimes called, from a minimum of 2 to a maximum of 64. You might have heard that a VSAN cluster requires at least 3 nodes, which is true: a 2-node configuration requires a virtual witness appliance running ESXi, which acts as the third node. This particular "2-node" configuration is a topic of discussion for another day.
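The sizing rules above can be sketched as a small validation helper. This is purely an illustrative model in Python, not part of any VMware API; the function name and return fields are hypothetical.

```python
# Illustrative model of the VSAN cluster sizing rules described above:
# 2 to 64 nodes, with a 2-node cluster requiring a witness appliance
# to act as the third node. Not a VMware API.

def validate_cluster(node_count: int) -> dict:
    """Return a summary of whether a proposed node count is valid."""
    if not 2 <= node_count <= 64:
        raise ValueError("A VSAN cluster requires between 2 and 64 nodes")
    return {
        "nodes": node_count,
        # A 2-node cluster needs a virtual witness appliance as the third node
        "witness_appliance_required": node_count == 2,
    }
```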

Physical nodes in a VSAN cluster are commonly connected using a standard 10GbE network, although 1GbE is supported for VSAN hybrid configurations - more on configuration types in a moment.

Each physical node in a VSAN cluster usually contains a number of local disk devices that serve as storage for a VSAN cluster. It is possible for a host to participate in a VSAN cluster and utilize the VSAN datastore without contributing storage capacity to the cluster, but this is uncommon. For the purposes of this article and blog series, we will assume every host in the cluster is configured similarly and contributing local disk resources to the VSAN datastore.

Storage Devices in a VSAN Cluster


The storage devices can be a combination of flash and magnetic media, which is referred to as a hybrid configuration. Considering the dramatic decrease in the cost of flash devices over the past several months, it is more common these days for organizations to use only flash devices for VSAN storage. This is called an all-flash configuration, which offers multiple advantages over hybrid configurations, such as higher performance and space efficiency features (deduplication, compression, etc.).

Disks contributing storage to a VSAN datastore are organized into disk groups. Each host contains a minimum of 1 disk group and a maximum of 5 disk groups. Storage devices in a disk group are dedicated to VSAN. In other words, partitioning a disk and using part of it for VSAN and the rest for other purposes is not supported.

VSAN currently utilizes a 2-tier architecture. There is a cache tier and a capacity tier. The cache tier does not contribute to the overall capacity of a VSAN datastore. It is used exclusively for read caching and/or write buffering depending on whether the configuration is hybrid or all-flash.

A disk group contains exactly 1 flash device for the cache tier and from 1 to 7 magnetic or flash devices for the capacity tier. Using a magnetic disk for the cache tier is not supported. Mixing flash and magnetic devices in the capacity tier is not supported. In a hybrid configuration, the flash device in the cache tier serves as a read cache and a write buffer. 70% of the device capacity is used for read caching and the remaining 30% is used to buffer writes. In an all-flash configuration, the cache device is used for write buffering. Reads come directly from the capacity tier of an all-flash VSAN configuration. This is how VSAN delivers great performance with hybrid configurations and extreme performance with all-flash configurations.
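The disk group composition rules and the hybrid 70/30 cache split above can be captured in a short sketch. Again, this is an illustrative model only; none of these names come from a VMware API.

```python
# Illustrative model of the disk group rules described above: exactly 1
# flash cache device, 1 to 7 capacity devices of a single media type,
# and the 70/30 read-cache/write-buffer split for hybrid configurations.

READ_CACHE_PERCENT = 70  # hybrid cache tier: 70% read cache, 30% write buffer

def validate_disk_group(cache_is_flash: bool, capacity_types: list) -> None:
    """capacity_types is a list of 'flash' or 'magnetic' strings."""
    if not cache_is_flash:
        raise ValueError("The cache tier device must be flash")
    if not 1 <= len(capacity_types) <= 7:
        raise ValueError("The capacity tier requires 1 to 7 devices")
    if len(set(capacity_types)) > 1:
        raise ValueError("Mixing flash and magnetic capacity devices is not supported")

def hybrid_cache_split(cache_gb: float) -> tuple:
    """Return (read_cache_gb, write_buffer_gb) for a hybrid disk group."""
    read_cache = cache_gb * READ_CACHE_PERCENT / 100
    return read_cache, cache_gb - read_cache
```

For example, a hybrid disk group with a 400GB cache device would use 280GB for read caching and 120GB for write buffering.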

VSAN Objects and Components


Now that we have a better understanding of how VSAN utilizes local disks to enable a high-performing, shared datastore architecture, let's wrap up the first part in this series with a brief discussion on how VSAN stores a virtual machine (VM). VSAN is an object datastore with a primarily flat hierarchy. This diagram summarizes the various objects that are typically observed when a VM resides on VSAN.


As seen above, a VM Home namespace is created. It contains familiar files such as a VM's configuration (VMX), NVRAM, and log files. Other objects such as virtual disks (VMDK) and snapshot delta objects can also be observed. Each object consists of one or more components. The maximum size of a component is 255GB. If an object is larger than 255GB, it is split up into multiple components. The number of components that make up an object is also determined by the type and level of availability rules that are applied as part of a storage policy. We will save this discussion for future articles in this series.
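The 255GB limit implies a simple lower bound on the number of components per object. Here is a back-of-the-envelope sketch (illustrative only; the availability rules covered later in this series add further components beyond this minimum):

```python
import math

# Minimum number of components a VSAN object needs based solely on the
# 255GB component size limit described above. Storage policy rules
# (covered later in this series) typically increase the actual count.

COMPONENT_MAX_GB = 255

def min_components(object_size_gb: float) -> int:
    """Lower bound on component count due to the 255GB size limit."""
    return max(1, math.ceil(object_size_gb / COMPONENT_MAX_GB))
```

A 600GB virtual disk, for instance, would be split into at least 3 components.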

Up next: A closer look at components and how they are distributed across disks and hosts to achieve high levels of availability. Part 2

@jhuntervmware
