vSphere Lab Storage Tests

Having just completed an upgrade of my vSphere lab equipment (and having nothing better to do over a weekend :) ), I decided to test several free shared storage options. And I have to admit – the results were… pretty surprising!

What I used for the test:

  • server OS: vSphere 5.1
  • client virtual machine: IOMeter running on Windows 7, 4 vCPUs, 4GB RAM. To avoid any issues with contention, I dedicated an entire server just for this machine.
  • storage VMs: Ubuntu Linux 12.04 LTS (NFS), FreeNAS (iSCSI and NFS), and NexentaStor Community Edition (iSCSI and NFS). Each VM had 16GB of RAM and 4 vCPUs (with a notable exception for Ubuntu, mentioned below). Again, an entire server was dedicated to each machine, to avoid contention issues.
  • storage subsystem: 4x1TB SATA disks, 7200 RPM, RAID5 on a dedicated controller. The array was presented as an RDM to each storage VM in turn (in case you’re wondering how I got local SATA to work via RDM, this link will help; in my case, I used vmkfstools -z – see the sketch after this list). In the storage VM, the RDM was connected to a separate SCSI controller and configured as an independent persistent disk.
  • network connectivity: 1Gbps Ethernet
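
For reference, creating the RDM pointer from the ESXi shell looked roughly like this – the device identifier and datastore path below are placeholders, so substitute your own (the device names live under /vmfs/devices/disks/):

    # -z creates a physical-mode (passthrough) RDM pointer file for the local array
    vmkfstools -z /vmfs/devices/disks/<local-RAID5-device-id> \
        /vmfs/volumes/datastore1/rdm/storage-rdm.vmdk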

As you can see, there is nothing “high performance” about this setup. My goal was not to see which software is best in a production environment (there are plenty of such benchmarks online, and plenty of other factors affecting that decision). I was simply trying to answer a simpler question – given a small lab deployment, what storage solution would allow me to play with vSphere’s advanced features, while at the same time being easy to set up and offering decent performance?

I made no attempt to improve performance – I just took the RDMs and presented them via NFS or iSCSI, without changing any other settings (with the exception of the NFS synchronous/asynchronous mode, which I discuss in the comments below).
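
On the vSphere side, attaching each presented export boils down to something like the commands below (the IP address, share path and adapter name are just examples – the same thing can of course be done from the vSphere Client):

    # mount an NFS export as a datastore
    esxcli storage nfs add --host=192.168.1.10 --share=/export/vsphere --volume-name=lab-nfs
    # point the software iSCSI initiator at a target and rescan
    esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=192.168.1.10:3260
    esxcli storage core adapter rescan --adapter=vmhba33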

As a control, I also ran the tests against a Synology DS411j box (4x2TB 7200rpm SATA drives, RAID5), as well as against a vmdk on a pair of local drives (2x320GB 10K SAS drives, RAID1).

Moving on to IOMeter, here are the settings I used:

  • 4 workers
  • 1 outstanding I/O request for each worker
  • 0% random (so sequential requests only)
  • 100% read or write
  • 512B/4KB/32KB request size

Each test was run for 3 minutes, with a 10 second ramp-up time. The same 10GB test file, driven by the same .icf configuration, was used for each test – I just Storage vMotioned the corresponding disk from one storage device to another.
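
If you would rather reproduce a similar workload from the Linux command line, a roughly equivalent fio run would look like this (I used IOMeter for all the numbers below, so treat this purely as an approximation – the file path is a placeholder):

    # 4 workers, queue depth 1, sequential 4KB reads against a 10GB file,
    # 180s runtime after a 10s ramp – swap --rw=read for --rw=write for the write pass
    fio --name=seq-read-4k --filename=/mnt/test/lab-bench.dat --size=10g \
        --ioengine=libaio --direct=1 --iodepth=1 --numjobs=4 \
        --rw=read --bs=4k --time_based --runtime=180 --ramp_time=10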

And here are the results:

IOPS Results

Bandwidth Results

Some comments:

  • I was amazed to see that the “basic” NFS configuration on Ubuntu was by far the fastest (IOPS-wise), surpassing both Nexenta and FreeNAS by a margin of more than 50% (!). I was even more amazed, in fact, considering that the Ubuntu VM I had available was running on only 1 vCPU and 1GB of RAM (!!). That is why I ran the test two more times – once after changing the settings to match the other VMs (4 vCPUs and 4GB RAM), and once more after finishing all the other tests (just to make sure that this was not a glitch). The results stayed the same.
  • For FreeNAS, the initial NFS tests were disappointing (writes would peak at around 500 IOPS). After some digging around, it looks like even after enabling asynchronous mode under the NFS settings, vSphere writes are still treated as synchronous. So I ran the test once again after forcing asynchronous mode (zfs set sync=disabled volume/name – see the snippet after this list), with improved results. Quick note – do NOT do this unless you have an NVRAM/battery-backed storage unit (or you don’t care about the data getting corrupted)!
  • You can see the same write penalty on NexentaStor. In that case, the NFS settings were left as “default” (neither forced sync nor forced async), which led to the same issue with synchronous writes.
  • Other than the NFS sync/async issue, there was no significant difference between iSCSI and NFS.
  • There is also a write penalty on the local disks that I cannot explain (it goes well beyond the 2:1 penalty expected from RAID-1 mirroring).
  • The bandwidth graphs are limited by the network – at larger request sizes, most solutions are easily able to saturate the 1Gbps link I had available (1Gbps works out to roughly 125MB/s raw, so slightly less than that in usable throughput).
  • The Synology unit was limited by its hardware. The j-series are the low-end 4-bay units, and this impacts performance – during testing, the CPU would stay above 90% utilization. Had I had access to a higher-end unit (a DS15XX, or even a RackStation), I am sure that the results would have been very different. But this was all I could afford at the moment :)
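
For reference, here is the ZFS-level change mentioned above (volume/name is a placeholder for your own pool/dataset):

    # check the current sync policy
    zfs get sync volume/name
    # acknowledge writes before they hit stable storage – lab use / battery-backed setups only!
    zfs set sync=disabled volume/name
    # revert to the default behaviour afterwards
    zfs set sync=standard volume/name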

The conclusion? If you want to play with shared storage in your lab (and have some Linux experience), running a simple NFS server is both the easiest to configure and the fastest solution (you can find a tutorial here – just don’t forget to use async instead of sync if you have battery backup – or if performance means more to you than your data :) ).
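
If it helps, the bare-minimum Ubuntu setup looks something like the commands below (the export path and subnet are placeholders, and the async option carries the same data-loss warning as above):

    # install the NFS server and create something to export
    sudo apt-get install nfs-kernel-server
    sudo mkdir -p /export/vsphere
    # async = fast but unsafe without battery/UPS backup; no_root_squash is needed for ESXi
    echo '/export/vsphere 192.168.1.0/24(rw,async,no_subtree_check,no_root_squash)' | sudo tee -a /etc/exports
    sudo exportfs -ra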

If you don’t like the Linux CLI, I highly recommend FreeNAS. It is easy to install and configure, and offers decent performance.

And, of course, if you’d like to play with a more advanced system (and you’re not afraid of the steep learning curve), NexentaStor might just be the one for you. The Community Edition is completely free (up to 18TB of used storage capacity).

Before people start saying “software X is way better than that, why didn’t you do Y?”, please keep in mind that this was NOT meant to be a comparison of the software solutions. If I had taken the time to follow the official recommendations to improve performance, I am sure that the results would be very different. But I just wanted to compare the results “out of box” – present the disks to the software, and see what it can do with them.
