Nutanix takes out the cpcn rtqdg…to replace it with a fknfq : re-investigating Nutanix 4-node cluster performance

Nutanix takes out the anal probe

Introduction

So, we’ve decided to test the Nutanix 4-node cluster performance one more time. We’ve tuned our lab with new Intel SSD DC P3700 drives and 100 GbE switches to make our benchmark completely unbiased, open-minded, and honestly… just to give Nutanix CE (Community Edition) one more chance to… hwem things up completely. You’re not expecting something new, right?

Suspect:

Nutanix CE (v.2018.01.31) four-node all-flash hyperconverged cluster.

We won’t say much about the subject of today’s study since we’ve said enough in this article:

http://www.cultofanarchy.org/perfomance-test-nutanix-4-node-cluste/

Status:

Possible relapse detected. Re-investigation required.

Mission:

Measure the performance of a Nutanix CE cluster built on a Storage Pool made of Intel SSD DC P3700 drives. To that end, we’ll keep escalating the load on the Storage Pool by increasing the number of VMs until we hit the saturation point where performance growth stops, or until we have 12 VMs running in the cluster.

Methodology, Considerations & Milestones:

Today we’re kicking the ujkv out of a 4-node Nutanix CE Cluster one more time. But now, we’re doing it with better disks and switches. Hopefully, we won’t be disappointed this time. We’ve picked the lightning-fast Intel SSD DC P3700 for building a virtual storage pool, which should boost VM I/O performance under any hypervisor. By combining the disks into a single pool, we should allow the hypervisor to effectively load-balance the input-output operations among all the VMs running in the cluster. Multiply all that by the hypervisor’s scalability, and we should get a highly available cluster (LOL) where an increased number of VMs (proportional to the number of disks in the pool dedicated to VMs) should (at least) keep VM I/O performance at the same level.

Also, you’ll see the 100 GbE switch in action this time. Actually, two 100 GbE switches. You see, everything’s nice, fast, and redundant. Like in real life (or even better). So, if Nutanix CE trips off and falls this time, that won’t be our fault 😉.

Testing methodology

  1. Measure whether the “raw” Intel SSD DC P3700 2TB NVMe performance in a Windows Server 2016 environment corresponds to the manufacturer-claimed performance level.
  2. Deploy a 4-node Nutanix CE cluster with a Storage Pool comprised of 8 Intel SSD DC P3700 2TB.
  3. Create a Windows Server 2016 VM and pin it to a node. At this point, we also create a 256 GB Nutanix Virtual Disk that the VM uses as its second disk. This guy will be our workhorse today.
  4. Set up the testing tools. At this stage, we’ll figure out the optimal number of threads and Outstanding I/O values for launching DiskSPD and FIO.
  5. Investigate Nutanix 256GB Virtual Disk performance on a sole Windows Server 2016 VM.
  6. Clone the VM to the second Nutanix node and measure the virtual disk performance on all VMs at the same time.
  7. Clone, test, and clone again. Repeat step 6 until the overall VM performance stops growing, or until there are 12 VMs running.
  8. Let’s get a reference! For this purpose, we measure the maximum performance of a single Intel SSD DC P3700 2TB with Windows Server 2016 installed. The results obtained will serve as the baseline for comparing the Nutanix CE cluster performance.

The toolkit

Hardware

Here is the setup for measuring the “raw” Intel SSD DC P3700 performance in Windows Server 2016 environment.

Node: Dell R730 chassis, 2x Intel Xeon E5-2683 v3 CPUs @ 2.00 GHz, 64GB of RAM
Storage: 1x Intel SSD DC P3700 2TB NVMe
OS: Windows Server 2016 x64 Datacenter Edition

And, the setup for testing 4-node Nutanix CE cluster:

4x Nodes: Dell R730 chassis, 2x Intel Xeon E5-2683 v3 CPUs @ 2.00 GHz, 14C/28T, 64GB of RAM
Storage: 2x Intel SSD DC P3700 2TB, 1x Intel SSD DC S3500 480GB
LAN: 2x Mellanox ConnectX-4 100GbE CX465A.
Hypervisor: Nutanix CE v.2018.01.31. Nutanix Community Edition is available for free after signing up on the vendor’s site. Again, today we use the Acropolis Hypervisor. Psst, want some advice? Never give your personal data to guys like that. Use a 10-minute mail instead to protect yourself from all that marketing junk these guys generate.

https://www.nutanix.com/products/register/

OK, here is the network interconnection diagram:

4-node Nutanix

Now that we can see the whole cluster setup and hardware behind it, we would like to quote the legendary Michael Buffer, saying: “Let’s get ready to rumble!”.

Investigating if “raw” Intel SSD DC P3700 2TB NVMe performance matches the vendor-claimed value

Below, you can see the table with the performance characteristics for the P3700 drive series.

Table with the performance characteristics for the P3700 drive series

No tricks, we took these numbers from the official Intel website here.

So, according to Intel, Intel SSD DC P3700 2TB performance can reach 460K IOPS with 4 workers and Queue Depth 32.

Now, it’s time to get proof. For this purpose, we’re gonna check the claimed performance under the 4k random read pattern.
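
For reference, the single run that hits the vendor’s 4 workers / Queue Depth 32 point looks roughly like this (a sketch in the same DiskSPD notation we use throughout; the #1 physical drive index and the log file name are assumptions for the test host):

diskspd.exe -t4 -b4k -r -w0 -o32 -d60 -Sh -L #1 > c:\log\raw-t4-o32-4k-rand-read.txt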

The plots below highlight the testing results under varying Queue Depth values. The first plot depicts data obtained with DiskSPD v2.17, while the second one refers to FIO v3.5 measurements.

Performance Intel SSD DC P3700 2TB (RAW): DiskSPD (4k random read)

Performance Intel SSD DC P3700 2TB (RAW): FIO (4k random read)

Mini-conclusion

Overall, our testing results match the vendor-claimed performance values for Intel SSD DC P3700 2TB drive. Particularly, DiskSPD showed that the closest to 460K IOPS performance was reached under the 4k random read pattern with 4 workers and Queue Depth 32.

Let’s shift gears and proceed to Nutanix CE v.2018.01.31 deployment and testing

And…we have some ujkv here already. Even though Nutanix CE installed successfully on the NVMe drive, we couldn’t launch the Controller Virtual Machine (CVM) on the test host afterwards. The thing is, Nutanix CE v2018.01.31 can’t use an NVMe drive as the CVM boot device and requires a plain SSD instead. Here’s what Nutanix has on its forum about this issue:

Awesome! And, as often happens in big companies, no one gives a hwem, so no one has fixed that issue yet. To keep this study going, we came up with a simple workaround: adding an extra Intel SSD DC S3500 480GB to each host. Fortunately, the trouble with launching the CVM was gone once we added these guys to our setup. However, it should be noted that the additional SSDs are automatically added to the Storage Pool during the 4-node Nutanix CE Cluster creation, which will slightly affect the benchmark results. But, hey, Nutanix left us no choice…think they did that on purpose? Conspiracy… 😊

Checking the throughput

Once Nutanix CE is installed on all hosts, we check the network throughput between the AHVs and CVMs with iperf. We do this to figure out whether the throughput can skew the study results. So, let’s keep our fingers crossed and run the test.
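
For the record, a throughput check like this boils down to a couple of iperf runs, something along these lines (a sketch; the IP is a placeholder, and the stream count and duration are just example values):

On the receiving host (AHV or CVM):
iperf -s

On the sending host:
iperf -c 10.0.0.1 -P 4 -t 30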

The throughput between the AHVs ranged from 93 to 94 Gbit/s, while the connection between the CVMs reached only 20 Gbit/s. As you can see, the network cannot be the bottleneck for the 4-node Nutanix CE cluster performance. So, it’s fair play today.

OK, we have a 4-Node Nutanix CE Cluster (redundancy factor=2) with a 12.93TB storage pool comprised of 8x Intel SSD DC P3700 2TB and 4x Intel SSD DC S3500 480GB:

Creating a test VM

Where can you use a Storage Pool based on fast SSDs? Well, for example, to keep an MS SQL database there. Taking this into account, we’ve decided to pick the VM configuration from one of the standard options for working with MS SQL provided by…let’s say, Microsoft Azure.

Setting VM parameters

So our testing VM will have:
– 4x vCPU
– RAM: 7GB
– Disk0 (Type SCSI) – 25GB (that’s where Windows Server 2016 is running)
– Disk1 (Type SCSI) – 256GB (the Nutanix Virtual Disk 256GB that serves as the second disk for each VM)

NOTE: To add to our troubles, Nutanix CE creates only thin-provisioned Virtual Disks. That’s why we have to fill the disk with data to simulate a “normally” used disk volume before testing. We’ll use the dd.exe utility for this purpose. This operation should be performed prior to each test whenever a new VM Virtual Disk is created (or its volume is changed). Below are the dd utility launch parameters.

dd.exe bs=1M if=/dev/random of=\\?\Device\Harddisk1\DR1 --progress

Now, let’s create a test VM and assign it to the AHV #1 host.
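
If you prefer the command line to Prism, the whole thing can be done from a CVM with acli, roughly like this (a sketch only: the VM name, container name, and host IP are placeholders, and the exact parameter spelling should be double-checked against your acli version):

acli vm.create TestVM-01 num_vcpus=4 memory=7G
acli vm.disk_create TestVM-01 create_size=256G container=default-container
acli vm.affinity_set TestVM-01 host_list=<AHV-1 IP>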

Investigating Nutanix 256GB Virtual Disk performance on a sole Windows Server 2016 VM

The VM created is used to estimate the optimal number of threads and Outstanding I/O value for launching DiskSPD and FIO.

Here, we’ll run a series of performance tests against the Nutanix Virtual Disk 256GB under the 4k random read pattern with a varying number of threads and Outstanding I/O values. We do this to find the saturation point where performance growth stops and latency increases, together with the corresponding number of threads and Outstanding I/O value.

Testing with DiskSPD v2.17

Here is an example of DiskSPD listing for threads=1, Outstanding I/O=1, 2, 4, 8, 16, 32, 64, 128:

diskspd.exe -t1 -b4k -r -w0 -o1 -d60 -Sh -L #1 > c:\log\t1-o1-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o2 -d60 -Sh -L #1 > c:\log\t1-o2-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o4 -d60 -Sh -L #1 > c:\log\t1-o4-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o8 -d60 -Sh -L #1 > c:\log\t1-o8-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o16 -d60 -Sh -L #1 > c:\log\t1-o16-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o32 -d60 -Sh -L #1 > c:\log\t1-o32-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o64 -d60 -Sh -L #1 > c:\log\t1-o64-4k-rand-read.txt
timeout 10
diskspd.exe -t1 -b4k -r -w0 -o128 -d60 -Sh -L #1 > c:\log\t1-o128-4k-rand-read.txt
timeout 10
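
The listing above covers threads=1; the same Outstanding I/O sweep is then repeated for the other thread counts. To avoid copy-pasting, the whole matrix can be scripted in a .bat file along these lines (a sketch; the thread values are just an example set, and the %% escaping assumes it runs as a batch file rather than interactively):

for %%t in (1 2 4 8) do for %%o in (1 2 4 8 16 32 64 128) do (
    diskspd.exe -t%%t -b4k -r -w0 -o%%o -d60 -Sh -L #1 > c:\log\t%%t-o%%o-4k-rand-read.txt
    timeout 10
)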

Here are the test results:

Performance Nutanix Virtual Disk 256GB (RAW): 4k random read (DiskSPD without CPU affinity)

Mini-conclusion

So, what do these numbers actually tell us? The maximum performance of the Nutanix Virtual Disk 256GB for a single VM is limited to an average of 30,000-31,000 IOPS with a disk latency of 1.01-1.09 ms. Performance like that can be reached with any number of threads. Given that, we’ll carry out further testing with 4 threads and Outstanding I/O=8.

Configuring testing tools 

Taking into account the results obtained, we now have the optimal parameters for launching our test utilities.

So, once again, we used DiskSPD v2.17 and FIO v3.5 for testing.

Tests were run under the following patterns:

– 8k random 70%read/30%write
– 1M sequential read
– 4k random write
– 4k random read
– 64k random write
– 64k random read

Below are the utilities’ launch parameters with 4 threads, Outstanding I/O=8, and a 60-second test duration:

DiskSPD
diskspd.exe -t4 -b4k -r -w100 -o8 -d60 -Sh -L #1 > c:\log\4k-rand-write.txt
timeout 10
diskspd.exe -t4 -b4k -r -w0 -o8 -d60 -Sh -L #1 > c:\log\4k-rand-read.txt
timeout 10
diskspd.exe -t4 -b64k -r -w100 -o8 -d60 -Sh -L #1 > c:\log\64k-rand-write.txt
timeout 10
diskspd.exe -t4 -b64k -r -w0 -o8 -d60 -Sh -L #1 > c:\log\64k-rand-read.txt
timeout 10
diskspd.exe -t4 -b8k -r -w30 -o8 -d60 -Sh -L #1 > c:\log\8k-rand-70read-30write.txt
timeout 10
diskspd.exe -t4 -b1M -s -w0 -o8 -d60 -Sh -L #1 > c:\log\1M-seq-read.txt
FIO
[global]
numjobs=4
iodepth=8
loops=1
time_based
ioengine=windowsaio
direct=1
runtime=60
filename=\\.\PhysicalDrive1

[4k rnd write]
rw=randwrite
bs=4k
stonewall

[4k random read]
rw=randread
bs=4k
stonewall

[64k rnd write]
rw=randwrite
bs=64k
stonewall

[64k random read]
rw=randread
bs=64k
stonewall

[OLTP 8k]
bs=8k
rwmixread=70
rw=randrw
stonewall

[1M seq read]
rw=read
bs=1M
stonewall
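
To run the whole set, save the listing above to a job file (the name below is just an example) and point FIO at it; thanks to the stonewall directives, the jobs run one after another:

fio.exe nutanix-vdisk-tests.fio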

Testing 4-node Nutanix cluster performance

At this step, we’ll check the Nutanix Virtual Disk 256GB performance under several patterns, starting with one VM. Then, we clone the VM to the next node and test the Virtual Disk performance on all VMs simultaneously. We plan to carry out tests until either the performance stops growing or there are 12 VMs running in the cluster.
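
One practical detail: the runs have to start on all VMs at (roughly) the same time. One possible way to do that on Windows (just an illustration, not necessarily how we did it) is a one-shot scheduled task on every VM that fires a wrapper script with the DiskSPD/FIO commands (run.cmd and the start time below are hypothetical):

schtasks /create /tn DiskSpdRun /tr "c:\tests\run.cmd" /sc once /st 14:00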

Performance Nutanix Virtual Disk 256GB (RAW): 8k random 70%read/30%write

Performance Nutanix Virtual Disk 256GB (RAW): 8k random 70%read/30%write, (MB/s)

Performance Nutanix Virtual Disk 256GB (RAW): 1M seq read

Performance Nutanix Virtual Disk 256GB (RAW): 4k random write

Performance Nutanix Virtual Disk 256GB (RAW): 4k random write, (MB/s)

Performance Nutanix Virtual Disk 256GB (RAW): 4k random read

Performance Nutanix Virtual Disk 256GB (RAW): 4k random read, (MB/s)

Performance Nutanix Virtual Disk 256GB (RAW): 64k random write

Performance Nutanix Virtual Disk 256GB (RAW): 64k random write, (MB/s)

Performance Nutanix Virtual Disk 256GB (RAW): 64k random read

Performance Nutanix Virtual Disk 256GB (RAW): 64k random read, (MB/s)

Pretrial detention

According to FIO, the cluster stopped showing any signs of performance growth under the 4k random read pattern with 4 VMs. We carried out the tests again, but nothing changed. We could not figure out why FIO shows such results; if you have any ideas, feel free to share them with us. To double-check the results obtained under this pattern, we repeated the tests with DiskSPD, and they showed the same outcome as earlier.

Gathering more evidence

We benchmarked a single Intel SSD DC P3700 2TB on Windows Server 2016 under the same testing patterns. This allowed us to estimate how gathering 8 NVMe drives into a single Nutanix storage pool affects the overall system performance. At this point, it’s worth mentioning that Nutanix uses disk balancing as well as storage tiering and prioritization. The former ensures uniform data distribution within the cluster, while the latter spreads the data between hot and cold storage. The Nutanix Bible has some nicely written marketing dwnnujkv on this matter, but, as usual, these guys fail to say how balancing and tiering affect performance. You should understand that a fcop “higher” tells us nothing. We need an exact number! Thus, we’ll make some simple calculations based on logic. We expect 8 Intel SSD DC P3700 drives to deliver 8 times the performance a single drive can. Now, let’s look at what we can squeeze out of the storage.
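
Just to put a rough number on that expectation: under 4k random read, perfect scaling would mean something like 8 × ~460K ≈ 3.7M IOPS (taking the vendor figure from the table above as the per-drive ceiling); for the other patterns, we apply the same ×8 logic to our own single-drive measurements.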

Comparative performance diagrams (DiskSPD test utility)

Verdict and imprisonment

While testing Nutanix CE performance under the 4k random read/write patterns, the overall Nutanix Virtual Disk performance grew linearly up to 180,000 IOPS. That was with 8 of the 12 VMs running on the same storage; adding more of them did not result in significant performance growth. These results look similar to what we got in the 64k random write pattern – a linear performance increase up to 24,000 IOPS all the way to 12 VMs. The 64k random read pattern showed a linear performance increase up to 49,000 IOPS, corresponding to the overall performance of 8 VMs. In that pattern, the performance growth was followed by a huge drop. In other words, there can be only 2 virtual machines running per node. Wait… DOES THIS THING SCALE AT ALL?!

The only thing that made us happy was the 64k write performance. In that pattern, we observed nice linear performance growth up to 24,000 IOPS. This means that 12 VMs, and hopefully more, could run on the same storage pool. As for the mixed 8k random 70%read/30%write workload, both test utilities showed more-or-less identical results (45,000 IOPS). In that pattern, performance grew until 4 VMs were running in the cluster; increasing the number of VMs further led to a performance drop across all the Virtual Disks.

Comparing the performance of a single Intel SSD DC P3700 2TB on Windows Server 2016 to the Nutanix Virtual Disk performance makes us think that Nutanix cannot deliver the massive performance we expected from it.

Now, let the numbers talk. The overall combined Virtual Disks performance is:

6%(DiskSPD), 7%(FIO) of estimated theoretical value under the 4k random write pattern

6%(DiskSPD), 2%(FIO) of estimated theoretical value under the 4k random read pattern

10%(DiskSPD), 8%(FIO) of estimated theoretical value under the 64k random write pattern

10%(DiskSPD), 2%(FIO) of estimated theoretical value under the 64k random read pattern

1.4%(DiskSPD), 1.4%(FIO) of estimated theoretical value under the 8k random 70%read/30%write pattern

22%(DiskSPD), 19%(FIO) of estimated theoretical value under the 1M seq read pattern

Just look at the plots above! If you are too lazy to scroll up and down, look at the plot below to find out how disappointing the overall performance of the Virtual Disks is (“extent of disappointment” here stands for how many times lower the overall Virtual Disk performance is than we expected):

Diagram of shame

Taking things together, Nutanix CE sucks. It cannot deliver the jaw-dropping performance it should. So, if you want things to run slower than they should, or need your environment to scale poorly for some reason, go ahead and let this guy into your environment! There’s nothing else it’s good for. So, we took the cpcn rtqdg out just to replace it with a huge fknfq.
