First, Virtual SAN's New Name: vSAN
The previous seven blog articles are titled "Virtual SAN Availability..." while this article starts with "vSAN". Why the name change, you ask? Duncan Epping discusses it in this blog article: Virtual SAN >> vSAN, and grown to 5500 customers.
Now that the name change is out of the way, let's get back to the topic of vSAN availability. Earlier posts discussed availability within a single site. The next few articles will cover resiliency across sites using a vSAN stretched cluster. We will begin with a short introduction to this vSAN feature.
vSAN stretched clusters provide resiliency against the loss of an entire site. vSAN is integrated tightly with vSphere HA. If a site goes offline unexpectedly, vSphere HA will automatically restart the VMs affected by the outage at the other site with no data loss. The VMs will begin the restart process in a matter of seconds, which minimizes downtime.
vSAN stretched clusters are also very beneficial in planned downtime and disaster avoidance situations. Issues such as an impending storm or rising flood waters typically provide at least some time to make preparations before disaster strikes. VMs at one site in a vSAN stretched cluster can be migrated using vMotion to the other site out of harm's way with no VM downtime and no data loss.
Consider the levels of availability that can be achieved when combining app-level resiliency with a vSAN stretched cluster. An excellent example is Oracle RAC as detailed in the Oracle Real Application Clusters on VMware Virtual SAN reference architecture paper.
A Few Deployment Scenarios and Networking Requirements
vSAN features the capability to create a stretched cluster across two sites. The two sites could be separate rooms at opposite ends of the same building, two buildings on the same campus, two campuses in separate cities, and so on. There are a number of possibilities.
What is possible is limited mainly by network bandwidth and round trip time (RTT) latency. A number of stretched cluster solutions require an RTT latency of 5ms or less. A common requirement is that writes must be committed at both sites before they are acknowledged; RTT latencies higher than 5ms introduce performance issues. vSAN is no exception. A 10Gbps network connection with 5ms RTT latency or less is required between the two sites of a vSAN stretched cluster.
Up to 15 hosts per site are currently supported. In addition to the hosts at each site, a "witness" must be deployed to a third site. The witness is a VM appliance running ESXi that is configured specifically for use with a vSAN stretched cluster. Its purpose is to enable the cluster to achieve quorum when one of the two main data sites is offline. The witness also acts as a "tie-breaker" in scenarios where a network partition occurs between the two data sites, sometimes referred to as a "split-brain" scenario. The witness does not store virtual machine data such as virtual disks (VMDK files); only metadata such as witness components (see Part 2 of this series) is stored there.
Up to 200ms RTT latency is supported between the witness site and the data sites. The bandwidth requirements between the witness site and the data sites vary, depending primarily on the number of vSAN objects stored at each site. A minimum bandwidth of 100Mbps is required, and the general rule of thumb is at least 2Mbps of available bandwidth for every 1000 vSAN objects. The vSAN Stretched Cluster Bandwidth Sizing Guide provides more details on networking requirements.
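The sizing rule above is simple enough to sketch in a few lines. Here is a minimal, hypothetical calculation assuming only the two figures stated in the text (a 100Mbps floor and 2Mbps per 1000 objects); the function name is illustrative and not part of any vSAN tool:

```python
def witness_bandwidth_mbps(num_objects: int) -> float:
    """Estimate witness-link bandwidth in Mbps using the rule of thumb
    described above: 2 Mbps per 1000 vSAN objects, never below 100 Mbps."""
    MIN_BANDWIDTH_MBPS = 100.0      # stated minimum requirement
    MBPS_PER_1000_OBJECTS = 2.0     # stated rule of thumb
    estimate = (num_objects / 1000.0) * MBPS_PER_1000_OBJECTS
    return max(MIN_BANDWIDTH_MBPS, estimate)

# With 10,000 objects the rule of thumb gives only 20 Mbps,
# so the 100 Mbps minimum applies.
print(witness_bandwidth_mbps(10_000))   # 100.0

# With 60,000 objects the rule of thumb exceeds the minimum.
print(witness_bandwidth_mbps(60_000))   # 120.0
```

Note that the 100Mbps floor dominates until the object count grows past 50,000, which is why the minimum matters for most deployments. Consult the vSAN Stretched Cluster Bandwidth Sizing Guide for authoritative sizing.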
vSAN Stretched Cluster Fault Domains
We discussed vSAN fault domains in Part 5 of this series. A vSAN stretched cluster consists of exactly three fault domains. The physical hosts in one data site make up one fault domain; the physical hosts at the other data site are the second fault domain; the witness is the third fault domain.
When a VM is deployed to a vSAN stretched cluster, a RAID-1 mirroring policy is applied to the objects that comprise the VM. One copy of each object, such as the VM Home namespace and VMDKs, is placed on hosts in the first data site; another copy is placed in the second site; witness objects are placed on the witness at the third site. If any one of the three sites goes offline, enough components survive to achieve quorum, so the objects remain accessible. If you need a refresher on the concepts of vSAN objects and components, see Part 1 and Part 2 of this blog series.
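The quorum behavior described above can be modeled as a simple majority check across the three fault domains. This is an illustrative sketch only; the names are hypothetical, and real vSAN tracks quorum with per-component votes rather than whole sites:

```python
# The three fault domains of a vSAN stretched cluster, as described above:
# each object has a data copy at each data site plus metadata on the witness.
FAULT_DOMAINS = {"site_a", "site_b", "witness"}

def object_accessible(online_domains: set) -> bool:
    """An object remains accessible while a strict majority of its three
    component locations survive (2 of 3)."""
    surviving = len(FAULT_DOMAINS & online_domains)
    return surviving > len(FAULT_DOMAINS) / 2

# Any single site failure leaves 2 of 3 fault domains: still accessible.
print(object_accessible({"site_a", "witness"}))   # True

# Losing both data sites leaves only witness metadata: inaccessible.
print(object_accessible({"witness"}))             # False
```

This also shows why the witness matters in a split-brain scenario: when the two data sites lose contact with each other, whichever site still reaches the witness holds the majority.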
Up Next in the vSAN Availability Series
In Part 9, we will see how easy it is to configure a vSAN stretched cluster. Subsequent articles will then examine stretched cluster recovery behavior in various failure scenarios.