Thursday, May 12, 2016

Erasing VMware Virtual SAN Partitions

There are some cases where you might want to reuse storage devices from an old Virtual SAN cluster in a new cluster, or perhaps you are reinstalling everything from scratch on the same hardware. You might find that devices previously used for Virtual SAN still contain the Virtual SAN partitions.

Recent versions of vSphere include an option to erase partitions from the vSphere Web Client. With Virtual SAN partitions, however, this does not always work even when Virtual SAN is turned off; the operation fails with the error "Cannot change the host configuration."

Fortunately, there is a fairly easy way to do this from the CLI. The first step is to temporarily enable SSH access (start the SSH daemon) on the vSphere host containing the disks you wish to erase.

Open Terminal, PuTTY, or whatever SSH client you use and connect to the host by typing:

ssh root@<vsphere-host-name-or-ip-address>
Then, enter the root password.

View the list of storage devices in the host with this command:

ls -l /vmfs/devices/disks/
You will get a fairly long list, but Virtual SAN devices are easy to spot: each base device will have two partition entries listed alongside it.
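To take a closer look at a suspect device, its partition table can be inspected with partedUtil. The device ID below is a placeholder; substitute an ID from the listing above.

```shell
# Show the partition table of one device (device ID is a placeholder).
# A device still claimed by Virtual SAN typically reports its gpt label
# followed by two partition entries.
partedUtil getptbl /vmfs/devices/disks/naa.xxxxxxxxxxxxxxxx
```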

Erasing the flash device used for the cache tier of a Virtual SAN disk group will also erase the capacity devices that were part of the same disk group (good work, VSAN engineers!). It is always good to verify you have located the correct cache-tier device by comparing the device ID with what is shown in the vSphere Web Client.
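One way to cross-check from the same SSH session is to ask Virtual SAN itself which devices it has claimed. Field names may vary slightly between versions, but cache-tier devices are flagged as SSDs and members of a disk group share its disk group UUID.

```shell
# List all devices claimed by Virtual SAN on this host. Cache-tier
# devices are reported with "Is SSD: true"; capacity devices in the
# same disk group share its "VSAN Disk Group UUID".
esxcli vsan storage list
```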

Running this command will erase the cache device and all capacity devices that were part of the same disk group:
esxcli vsan storage remove --ssd=<disk-id>

It might take a few moments for the process to complete. When finished, verify the partitions were removed using either the CLI or the vSphere Web Client.
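From the CLI, a quick check is to list the device directory again. The device ID below is a placeholder; substitute the one you erased.

```shell
# Before removal this listing shows :1 and :2 partition entries
# alongside the base device; afterwards only the base device should
# remain. (Device ID is a placeholder.)
ls /vmfs/devices/disks/ | grep naa.xxxxxxxxxxxxxxxx
```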

At this point, the devices are "clean" and can be used for whatever new project you are embarking on. This can obviously be rather time-consuming if you have many hosts and/or many disk groups in your cluster. I suspect there are other, more efficient ways to do this, but hopefully this article has been helpful to you.
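For hosts with many disk groups, the per-device removal could in principle be scripted. The following is a rough, untested sketch, not a supported procedure: it assumes the `esxcli vsan storage list` output format in which the device ID is the unindented line and the per-device fields are indented beneath it, and it should only ever be run on a host whose data has already been evacuated.

```shell
# Hypothetical sketch: remove every Virtual SAN disk group on this host
# by removing each cache-tier (SSD) device, which also releases that
# group's capacity devices. Assumes the data has been evacuated.
esxcli vsan storage list |
  awk '/^[^ ]/ { dev = $1 } /Is SSD: true/ { print dev }' |
  while read -r ssd; do
    esxcli vsan storage remove --ssd="$ssd"
  done
```

Repeating this over each host in the old cluster (for example with an outer SSH loop) would clean everything in one pass, at the cost of the safety that comes from checking each device ID by hand.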

@jhuntervmware on Twitter
