As you might know, in a VMware virtual machine, besides the traditional virtual disk (VMDK), you can also add a so-called Raw Device Mapping (RDM). This type of disk is used in special situations: when SAN snapshots are required, when VMFS virtual disks become too large, for applications that need to make direct calls to the block table (SAN-aware applications), or when Microsoft Cluster Service (MSCS) is used. When you add an RDM to a virtual machine, data is stored directly on the storage area network (SAN), as opposed to being stored in a VMDK file on a VMFS datastore. As you will see later in the lab demonstration, when you add an RDM to a virtual machine, a pointer file is also created in the VM folder that points to the raw device (LUN). This file has a .vmdk extension and will even report the size of the raw device in the Datastore Browser, but it contains only the mapping information required to manage and proxy disk access; it simply tells the VMkernel where to send disk instructions. Now you might be tempted to use RDMs on all your VMs all the time, thinking you will gain some performance by doing so, but you won't: VMFS and RDM produce similar input/output (I/O) throughput, meaning they perform the same. VMware recommends VMFS for most datacenter applications and advises using RDMs only when justified. Only LUNs presented over FC, FCoE, and iSCSI are supported for RDMs; local disks are not, at least not for now.
There are two compatibility modes that can be used with RDMs:
Physical – In this compatibility mode, RDMs have almost complete direct access to the SCSI device, which gives you control at much lower levels. The VMkernel passes through all SCSI commands (with the exception of the REPORT LUNS command), which allows the VMkernel to isolate the LUN to the virtual machine that owns it. Physical compatibility mode is used for Microsoft clustering (cluster across boxes (CAB)) and for SAN-aware applications that need direct access to the raw device.
Virtual – In this compatibility mode, an RDM acts like a regular VMDK file. Because the RDM hides the underlying hardware, LUNs are more portable when moving to new storage equipment, and snapshots can be used. This compatibility mode is recommended when using a Microsoft cluster in a box (CIB).
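For those who prefer the command line, both modes can also be created with `vmkfstools` directly on the ESXi host. A minimal sketch, with placeholder paths and a made-up device identifier (find yours under /vmfs/devices/disks/):

```shell
# Run on the ESXi host; datastore, VM folder and naa ID are placeholders.
cd "/vmfs/volumes/datastore1/myvm"

# Virtual compatibility mode (-r): RDM behaves like a VMDK, snapshots work
vmkfstools -r /vmfs/devices/disks/naa.60060160a0b012345678 myvm-rdm.vmdk

# Physical compatibility mode (-z): SCSI commands are passed through
vmkfstools -z /vmfs/devices/disks/naa.60060160a0b012345678 myvm-rdmp.vmdk
```

Either command creates the small mapping .vmdk in the VM folder, which you then attach to the VM as an existing disk.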
Until now I’ve been telling you all the good things about RDMs, but they also have some limitations:
- RDMs are not available for block devices or RAID devices (local storage)
- RDMs are not available for devices attached to a shared adapter
- RDMs are available with VMFS volumes only
- No redo log in physical compatibility mode
- Snapshots cannot be used with RDM disks in physical compatibility mode. If you try to snapshot a VM that has such RDM disks attached, you will get the error message below:
Cannot take a memory snapshot, since the virtual machine is configured with independent disks.
In case you need more information about RDMs you can read the official VMware documentation which can be downloaded from here.
Without further ado, let’s move on and add an RDM disk to a VM. Right-click the chosen VM (powered on or off, it doesn’t matter; it works either way) and go to Edit Settings.
On the Virtual Hardware tab, move down to the New Device section, click the drop-down box, and choose RDM Disk. Once the hardware is selected, click Add.
As soon as you click the Add button, a new window pops up that lets you choose which LUN(s) you want to present to this VM as an RDM disk. If the list is empty, it might be because your ESXi host does not see the newly provisioned LUNs, because it does not support the type of storage you have, or because the LUNs are already provisioned as datastores.
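If the host simply hasn't noticed the new LUNs yet, a rescan from the ESXi shell usually fixes it. A quick sketch using the standard esxcli storage namespace:

```shell
# Rescan all HBAs so the host picks up newly presented LUNs
esxcli storage core adapter rescan --all

# List the SCSI devices the host can now see;
# your new LUN should appear in this output
esxcli storage core device list
```

If the LUN still doesn't show up, check the zoning/masking on the SAN side before blaming the host.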
[notice]You can’t create an RDM disk from a LUN that is also a datastore. If that’s the case, you will need to delete the datastore in order to add the LUN as an RDM disk.[/notice]
By default, RDMs are added in physical compatibility mode when using the vSphere Web Client. If you want to change this, just expand the New Hard Disk options and click the drop-down box in the Compatibility Mode section.
Before you click OK, it is recommended to put the new RDM on a different SCSI controller rather than leaving it on the same one as the OS disk. I could not find this spelled out as a general rule in the VMware documentation, but I’ve seen it done in large enterprise environments; for Microsoft clustering, at least, shared disks must sit on a separate SCSI controller with bus sharing enabled, so it is a good habit either way.
If you now browse the datastore where the VM resides, you can see a VMDK file reporting the full size of your raw disk. Keep in mind that this is just the mapping file: it reports the size of the RDM LUN in the Datastore Browser, but it actually consumes only a small amount of space on the VMFS datastore, since it holds mapping metadata rather than the data itself.
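This "reports big, occupies little" behavior is easy to demonstrate outside ESXi with any sparse file; like the RDM mapping .vmdk, it advertises a large size while allocating almost no blocks. A quick illustration on a regular Linux box (filename is arbitrary):

```shell
# Create a 1 GiB sparse file -- apparent size is large,
# actual block usage is near zero, much like an RDM mapping file.
truncate -s 1G sparse-demo.vmdk
ls -lh sparse-demo.vmdk   # reports an apparent size of 1.0G
du -h sparse-demo.vmdk    # reports almost no allocated blocks
rm sparse-demo.vmdk
```

This is only an analogy for how the mapping file shows up in the Datastore Browser; the RDM mapping file itself is not a sparse data file.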
The GUI limits what you can see in the datastore, but if you open an SSH connection to the ESXi host and go to your VM's folder, you can see both the descriptor file and the RDM mapping file.
~ # cd "/vmfs/volumes/YOUR DATASTORE/YOUR VM"
Inside the descriptor file, you can see the type of disk used (an RDM in this case) and the name of the device it points to.
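For illustration, a physical-mode RDM descriptor looks roughly like the fragment below (all values are made up; the giveaways are the createType and the VMFSRDM extent type pointing at the mapping file):

```
# Disk DescriptorFile
version=1
CID=fffffffe
parentCID=ffffffff
createType="vmfsPassthroughRawDeviceMap"

# Extent description
RW 41943040 VMFSRDM "myvm-rdmp.vmdk"

# The Disk Data Base
ddb.adapterType = "lsilogic"
```

A virtual-mode RDM looks similar but with createType="vmfsRawDeviceMap" instead.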
The last step is to initialize and format the drive so you can store data on it. Once that is taken care of, you can use it for your applications. If you want to add another RDM disk, just repeat the process, and you should be all set.
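Inside a Linux guest, for example, the new RDM typically shows up as the next free SCSI device. A sketch assuming it appears as /dev/sdb (this is destructive; verify the device name with lsblk first):

```shell
# Confirm which device is the new, empty RDM before touching anything
lsblk

# Partition, format and mount it (assumes the RDM is /dev/sdb)
parted -s /dev/sdb mklabel gpt mkpart primary ext4 0% 100%
mkfs.ext4 /dev/sdb1
mkdir -p /mnt/rdm
mount /dev/sdb1 /mnt/rdm
```

On a Windows guest the equivalent is bringing the disk online and formatting it in Disk Management.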
As you can see, RDMs are very useful in some situations because you can save on hardware by virtualizing applications that need raw disk access. Use them carefully, and test in a lab before going to production.