BALALAIKAS ASIDE! Testing StarWind Virtual SAN performance on 2 physical nodes

MISSION

So while we were testing VMware vSAN on a 2-node cluster powered with NVMes, the Russian bear slammed in screaming (imagine a typical Russian mafia accent) “Dnacv, hold my vodka, give me two balalaika servers, and I show you how to scale from mother Russia to America”. We figured we don’t wanna mess with drunken bears, so here we are: testing StarWind Virtual SAN for Hyper-V (yes, there is a Linux version too; seems these guys just scale their solution instead of performance?) on a two-node VMware ESXi cluster powered with NVMe drives. Wait, what?! VSAN for Hyper-V on top of ESXi? Yes, we have two ESXi hosts with one StarWind VM on each, and the VMs run Windows Server 2016, providing highly available storage (NTFS, it’s Windows, baby) further as a VMFS datastore to ESXi…This is madness…NO! THIS IS STARWIND!!

OK, just so you get the idea of what this setup looks like, check the network interconnection diagram below:

network interconnection diagram

HARDWARE ARSENAL

You’re right, guys, the hardware we use stays the same across our tests. Well, we wanna keep our game fair, don’t we? So take a look one more time at what we’re using today.

ESXi Host 1, ESXi Host 2
Servers:
Dell R730, CPU 2x Intel Xeon E5-2683 v3 @ 2.00 GHz, RAM 64GB

Storage 1: 1x Intel SSD DC P3700 2TB

Storage 2: 1x Intel SSD DC P3700 2TB

LAN: 2x Mellanox ConnectX-4 100Gbit/s

Hypervisor: VMware ESXi 6.7

SOFTWARE INVENTORY FOR STARWIND VSAN VMS:

CPU: 56 vCPUs (2x Sockets, 28x Cores per Socket)

Memory: 4GB

Hard disk 1: 25GB (vmdk, SCSI controller 0 – LSI Logic SAS)

Hard disk 2: 1820GB (vmdk, Thick Provisioned Eager Zeroed, SCSI controller 1 – VMware Paravirtual); Disk D (NTFS)

Hard disk 3: 1820GB (vmdk, Thick Provisioned Eager Zeroed, SCSI controller 2 – VMware Paravirtual); Disk E (NTFS)

Network adapter: 3x VMXNET 3

OS: Windows Server 2016 Datacenter

Virtual SAN: StarWind Virtual SAN for Hyper-V (Windows-based) (StarWind_8.0_R6_20180507_12166_6395_405_R6U2-release)

edit settings

If for some reason you want to download StarWind Virtual SAN for Hyper-V (the one that operates on top of Windows), you can follow this link. ATTENTION!! Marketing spam filters should be turned on!

download

Since we already started describing the software part, let’s take a look at the testing VM settings:
CPU: 4 vCPUs (4x Sockets, 1x Core per Socket)

Memory: 4GB

Hard disk 1: 25GB (vmdk, SCSI controller 0 – LSI Logic SAS), location – ESXi local datastore, OS Windows Server 2016 Datacenter

Hard disk 2: 80GB (vmdk, Thick Provisioned Eager Zeroed, SCSI controller 1 – VMware Paravirtual), location – StarWind iSCSI disk

edit settings
We believe you are pretty much familiar with our workflow by now. So, just to make sure we’re on the same page, check our testing strategy.

TESTING METHODOLOGY

1. We do not trust anyone, so we need to check whether the Intel SSD DC P3700 meets the vendor-claimed performance level on Windows Server 2016.

2. Do some dirty work: deploy the testing lab. If you wanna double-check that we made no mistakes configuring StarWind VSAN, you can always find their installation guide here.

2.1 Now you know how it works with Russian bears…you’d better do what they say. So, we do the initial ESXi hosts preparation, create 2 Windows-based VMs, and deploy StarWind Virtual SAN for Hyper-V (Windows-based) on top of them. We won’t waste your time on all that reading (nothing special there) and will just jump into the HA devices creation.

2.2 Create StarWind HA devices (Storage1, Storage2):

ATTENTION! StarWind HA device (Storage 1) will be located on the D drive of each StarWind VM, and StarWind HA device (Storage 2) goes to the E drive accordingly.

3. Create a VM (Windows Server 2016) and keep it pinned to an ESXi host. We don’t give a hwem which host we create and pin it on since both are identical. The test VMware Virtual Disk is 80GB (VMware Paravirtual), located on datastore DS1, and used by the VM as a second disk.

4. Find the optimal number of threads and Outstanding I/O value that we’re gonna use further in the DiskSPD and FIO testing tools.

5. Test VMware Virtual Disk 80GB speed on a single VM.

6. Clone that VM and pin it to the second ESXi host with an 80GB VMware Virtual Disk on a separate datastore. Run the performance test simultaneously on both VMs.

7. As usual, clone as many VMs as we can till we hit the saturation point and performance growth stops.

8. Benchmark the performance of a single Intel SSD DC P3700 2TB disk on Windows Server 2016. The results will serve as a baseline to compare the performance of StarWind Virtual SAN for Hyper-V (Windows-based) against.

PREPARING THE RUSSIAN BEAR FOR HARDCORE PERFORMANCE

OK, let’s do some clicking. After StarWind Virtual SAN for Hyper-V (Windows-based) is installed, open the StarWind Management Console on one of the VMs and hit Add Device (advanced). We go with Hard Disk Device.

Add Device

Select Virtual Disk here. Pretty obvious, right?

Select Virtual Disk

As mentioned earlier, we name our devices Storage 1 and Storage 2. We wanted to name them “Lenin” and “Stalin” but our editor said that this could result in an international conflict…so we didn’t. Also, we set the size here to 1800 GB.

Create a new virtual disk

We make it thick-provisioned (we don’t even know what the hwem LSFS is) and set the 512-byte sector size (yep, although it’s Windows, we have VMware below and are gonna present these disks as datastores to the ESXi hosts, so there’s no point in 4K here).

Thick-provisioned

Hwem caching! Behold, the pure power of NVMe!

Specify device RAM cache parameters

Flash + flash = ummm…flash? No need for that really since we work on NVMe storage.

Specify device RAM cache parameters

We leave everything as is here for fear of breaking something.

Target parameters

And finally, we can hit create!

Creation page

Everything is ready so we can close this.

Device Creation completed

And…everything went smoothly here. We honestly expected something else from the crazy Russians (we are not 100% sure, maybe one of these buttons launches missiles…who knows?).

imagefile1

And totally the same goes for StarWind HA device Storage 2.

imagefile2

OK, so as not to jump between the two VMs and two StarWind consoles, we can add our partner VM (shown as a server here) to the console on VM1.

add new starwind server

OK, partner VM is there.

wp-image-2222

So now, we want our Storage 1 device to be replicated to the partner VM so we launch the replication manager, choose Add Replica, and select Two-Way Replication.

wp-image-2223

Give the partner VM IP or name.

wp-image-2224

We choose Heartbeat as the Failover Strategy. We have no other option here since we have two nodes.

wp-image-2225

Good, now, create a new partner device.

wp-image-2226

Select the path for the partner replica on the second StarWind VM.

wp-image-2227

Now, we have to set up the iSCSI and synchronization links. Go to Change Network Settings.

wp-image-2228

We’ve got one synchronization link plus iSCSI and management links for heartbeat.

wp-image-2229

Sure, we want to synchronize from the existing device.

wp-image-2230

Once all is set, we can hit Create Replica.

wp-image-2231

It says the replicated device has been created. OK, let’s trust it and close the wizard.

wp-image-2232

And now we can go and have some vodka! The synchronization process takes a while.

wp-image-2233

We created the second device in the exact same way, so no need to copy-paste all of that here, right?

wp-image-2234

Time to connect StarWind HA devices to the ESXi hosts!

Now, since StarWind operates over iSCSI, we need to enable the software iSCSI storage adapter on each ESXi host. That is done in the host web interface: go to Storage->Adapters->Configure iSCSI, mark iSCSI as Enabled, and save the configuration.
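By the way, if you prefer SSH over clicking, the same can be done with esxcli; a quick sketch (the vmhba name you get may differ from host to host):

# Enable the software iSCSI adapter (same as Storage->Adapters->Configure iSCSI in the GUI)
esxcli iscsi software set --enabled=true

# See which vmhba the software iSCSI adapter landed on
esxcli iscsi adapter list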

wp-image-2235

After that, we can do everything else we need in the vSphere Client web interface. We know…too many consoles…

On each of our ESXi hosts, in Dynamic Discovery, we add the iSCSI IPs of both StarWind VMs and hit Rescan Adapter.
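Again, the CLI fans can do the same with esxcli; here is a rough sketch where vmhba65 and the 172.16.10.x addresses are just placeholders for your software iSCSI adapter and the StarWind VMs’ iSCSI IPs:

# Add both StarWind VMs as Dynamic Discovery (Send Targets) addresses
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=172.16.10.1:3260
esxcli iscsi adapter discovery sendtarget add --adapter=vmhba65 --address=172.16.10.2:3260

# Rescan the adapter so the StarWind devices show up
esxcli storage core adapter rescan --adapter=vmhba65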

wp-image-2236

Everything went well so far, and we can see two StarWind devices added and a total of four paths.

wp-image-2237

wp-image-2238

Now, we have to create two datastores (DS1 and DS2) on top of the two StarWind HA devices. Just choose any ESXi host and create a new datastore.

wp-image-2239

The wizard is pretty simple, and to save you some boring reading, we just pasted several screenshots.

wp-image-2240
wp-image-2241

wp-image-2242

wp-image-2243

wp-image-2244

And the same goes for the DS2 Datastore. We now have two of them on ESXi 1.
wp-image-2245

And two datastores are also connected on ESXi 2.

wp-image-2246

Now, to make sure we utilize all the paths to the storage simultaneously, we need to set the Path Selection Policy to Round Robin (VMware) for each StarWind device backing our datastores. On each ESXi host, go to Configuration->Storage Devices->STARWIND iSCSI Disk->Properties->Edit Multipathing.

wp-image-2247

wp-image-2248

Now, let’s connect to the ESXi host over SSH and find the ID of each StarWind iSCSI Disk using the “esxcli storage nmp device list” command.

wp-image-2249

For each StarWind iSCSI Disk, change the Round Robin IOPS limit from 1000 to 1 using the “esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=<device-id>” command. Why? Well, when it comes to VMware, the default Round Robin path selection limit is 1000 IOPS. This means that the next path comes into play only after 1000 I/Os have been issued down the first path. So, to boost performance, we reduced the limit to 1 to get higher path utilization. Hwem, VMware guys, why can’t you do it by default??
wp-image-2250

Repeat the same steps for the second ESXi host for each StarWind iSCSI Disk.
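Just to have the whole multipathing tuning in one place, here is roughly what it looks like from the ESXi shell (eui.xxxx is a placeholder for the real device ID you get from the list command):

# Find the IDs of the STARWIND iSCSI Disks
esxcli storage nmp device list | grep -i starwind

# Make sure the device uses the Round Robin path selection policy
esxcli storage nmp device set --device=eui.xxxx --psp=VMW_PSP_RR

# Switch paths after every single I/O instead of the default 1000
esxcli storage nmp psp roundrobin deviceconfig set --type=iops --iops=1 --device=eui.xxxx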

We’re not finished with the tricks yet, stay calm. On each ESXi host, we need to set Disk.DiskMaxIOSize to 512. Once again, why? By default, ESXi passes I/O requests of up to 32767 KB directly to the storage device, and requests larger than that are split into several smaller ones. This can cause performance degradation, and since we use a 512 sector size for StarWind, we’re gonna tune it to 512 accordingly. Go to Advanced System Settings, find Disk.DiskMaxIOSize and edit it.

wp-image-2251

Set the mentioned 512 KB max I/O size and we’re almost good to go.

wp-image-2252
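If you’d rather stay in the SSH session, the same advanced option can be flipped with esxcli; a sketch:

# Cap the I/O size ESXi passes down to the device at 512 KB
esxcli system settings advanced set --option=/Disk/DiskMaxIOSize --int-value=512

# Double-check the value
esxcli system settings advanced list --option=/Disk/DiskMaxIOSize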

OK, we need a new cluster. Let’s create it, shall we?

wp-image-2253

Now, add both ESXi hosts to the new Cluster and turn on vSphere HA. We do everything HA! We need as much availability as possible, both for storage (StarWind) and our VMs (vSphere). Go to Cluster->Configure->vSphere Availability and Edit.

wp-image-2254

Turn on vSphere HA and Host Monitoring. The Host Failure Response can be set to Restart VMs. We don’t really care much about this part, we’re not here for failover testing.

wp-image-2255

Finally, we’re done with our testing lab.

TRUTHS AND A LIE: TESTING IF INTEL SSD DC P3700 2TB PERFORMS AS VENDOR CLAIMS

Below, you can see the performance spec table for the P3700 disk series (source).

wp-image-2256

According to Intel, the claimed speed was measured with 4 workers and Queue Depth 32:

wp-image-2257

OK, let’s try to verify this with a random read test using DiskSPD v2.0.20a and Fio v3.8.
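For reference, the DiskSPD run for this check looks something like this (same style as our other commands: 4 threads, Outstanding I/O=32, against the raw disk):

diskspd.exe -t4 -b4k -r -w0 -o32 -d60 -Sh -L #1 > c:\log\t4-o32-4k-rand-read.txt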

Here are the results:

Performance Intel SSD DC P3700 2TB (RAW) – DiskSPD (4k random read)

Performance Intel SSD DC P3700 2TB (RAW) – FIO (4k random read)

Well, the Intel guys don’t seem to be lying much about the Intel SSD DC P3700 2TB speed. Under the 4k random read pattern with 4 workers and Queue Depth 32, DiskSPD showed values pretty close to the 460K IOPS claimed for the Intel SSD DC P3700 2TB.

3…2…1…LAUNCH: TESTING STARWIND VIRTUAL SAN FOR HYPER-V (WINDOWS-BASED)

OK, now that the ESXi hosts are ready and the networks are configured, let’s test their interconnection bandwidth with iperf. Even though we did our best, put in 100Gb/s NICs, and connected them directly port to port, the speed between the ESXi hosts was just 43Gb/s.
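For the record, the iperf run was nothing fancy: start the server on one side and run the client with several parallel streams from the other, something along these lines (iperf3 syntax shown, 8 streams for 30 seconds, the IP is just a placeholder):

iperf3 -s

iperf3 -c 172.16.20.1 -P 8 -t 30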

wp-image-2258

wp-image-2259

wp-image-2260

wp-image-2261

We tried even more! We installed the VMware ESXi 6.7 nmlx5-core 4.17.13.8 driver for the Mellanox ConnectX-4 100Gb/s CX465A. Unfortunately, this didn’t help. Still, 43Gb/s will be enough!

OK, just in case you don’t trust us, let’s do some simple math. One Intel disk can give us 460K IOPS under the 4k random read pattern. One Mellanox ConnectX-4 port gives us roughly 40Gb/s (5GB/s) here, so two ports give 10GB/s of throughput. Therefore, one ESXi host has a network bandwidth limit of (10GB/s * 1024 * 1024 KB) / 4KB ≈ 2620K IOPS. The combined performance of two Intel SSD DC P3700 drives is 2*460K=920K IOPS, which is still well below the 2620K IOPS limit.

NOTE. Yes, we created our VM testing disks as Thick Provisioned Eager Zeroed. But we still filled the drives with some junk data to bring them to their nominal volume prior to testing. We also do this prior to each test when creating a new Virtual Disk (or changing its size) in a VM. We use the dd.exe utility for this.

Here is an example of launching dd:

dd.exe bs=1M if=/dev/random of=\\?\Device\Harddisk1\DR1 --progress

LET THE BATTLE BEGIN!

Here, we have created our first testing VM pinned to the first ESXi host. Prior to testing StarWind VSAN, let’s find the optimal parameters (threads, Outstanding I/O) for working with DiskSPD and Fio.

We’re gonna test the 80GB VMware Virtual Disk over the StarWind HA device under the 4k random read pattern while varying the number of threads and the Outstanding I/O value. This should help us find the saturation point after which performance growth stops, along with the corresponding threads and Outstanding I/O values.

Testing utilities: DiskSPD v2.0.20a, Fio v3.8

DiskSPD testing parameters for threads=1 and Outstanding I/O=1,2,4,8,16,32,64,128 (the same set of runs was repeated for threads=2, 4, and 8):

diskspd.exe -t1 -b4k -r -w0 -o1 -d60 -Sh -L #1 > c:\log\t1-o1-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o2 -d60 -Sh -L #1 > c:\log\t1-o2-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o4 -d60 -Sh -L #1 > c:\log\t1-o4-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o8 -d60 -Sh -L #1 > c:\log\t1-o8-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o16 -d60 -Sh -L #1 > c:\log\t1-o16-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o32 -d60 -Sh -L #1 > c:\log\t1-o32-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o64 -d60 -Sh -L #1 > c:\log\t1-o64-4k-rand-read.txt

timeout 10

diskspd.exe -t1 -b4k -r -w0 -o128 -d60 -Sh -L #1 > c:\log\t1-o128-4k-rand-read.txt

timeout 10
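The Fio side of the sweep follows the same idea; a single combination from it would look roughly like this (windowsaio engine, the raw second disk as the target, log file name is ours):

fio --name=4k-rand-read --ioengine=windowsaio --direct=1 --rw=randread --bs=4k --numjobs=1 --iodepth=16 --runtime=60 --time_based --filename=\\.\PhysicalDrive1 --output=c:\log\fio-t1-o16-4k-rand-read.txt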

TESTING RESULTS

VMware Virtual Disk 80GB (RAW) over StarWind HA Device – 4k random read (DiskSPD)

VMware Virtual Disk 80GB (RAW) over StarWind HA Device – 4k random read (DiskSPD)
| | threads=1 IOPS | MB/s | Latency (ms) | threads=2 IOPS | MB/s | Latency (ms) | threads=4 IOPS | MB/s | Latency (ms) | threads=8 IOPS | MB/s | Latency (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| QD=1 | 3015 | 12 | 0,33 | 6147 | 24 | 0,33 | 10461 | 41 | 0,38 | 19123 | 75 | 0,42 |
| QD=2 | 6172 | 24 | 0,32 | 10566 | 41 | 0,38 | 19063 | 74 | 0,42 | 36991 | 145 | 0,43 |
| QD=4 | 10737 | 42 | 0,37 | 19426 | 76 | 0,41 | 36557 | 143 | 0,44 | 73744 | 288 | 0,43 |
| QD=8 | 19260 | 75 | 0,42 | 37170 | 145 | 0,43 | 73777 | 288 | 0,43 | 125395 | 490 | 0,51 |
| QD=16 | 37081 | 145 | 0,43 | 73105 | 286 | 0,44 | 128513 | 502 | 0,50 | 128922 | 504 | 0,99 |
| QD=32 | 73432 | 287 | 0,44 | 127380 | 498 | 0,50 | 130919 | 511 | 0,98 | 131278 | 513 | 1,95 |
| QD=64 | 123179 | 481 | 0,52 | 131449 | 513 | 0,97 | 131142 | 512 | 1,95 | 130178 | 509 | 3,93 |
| QD=128 | 129986 | 508 | 0,98 | 131351 | 513 | 1,95 | 131600 | 514 | 3,89 | 129416 | 506 | 7,91 |

VMware Virtual Disk 80GB (RAW) over StarWind HA Device – 4k random read (FIO)

VMware Virtual Disk 80GB (RAW) over StarWind HA Device – 4k random read (FIO)
| | threads=1 IOPS | MB/s | Latency (ms) | threads=2 IOPS | MB/s | Latency (ms) | threads=4 IOPS | MB/s | Latency (ms) | threads=8 IOPS | MB/s | Latency (ms) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| QD=1 | 2794 | 11 | 0,35 | 5590 | 22 | 0,35 | 10048 | 39 | 0,39 | 18976 | 74 | 0,41 |
| QD=2 | 5737 | 22 | 0,34 | 10142 | 40 | 0,38 | 18758 | 73 | 0,42 | 36187 | 141 | 0,43 |
| QD=4 | 10281 | 40 | 0,38 | 18814 | 74 | 0,41 | 36234 | 142 | 0,43 | 70077 | 274 | 0,45 |
| QD=8 | 18595 | 73 | 0,42 | 35663 | 139 | 0,44 | 70459 | 275 | 0,44 | 123666 | 483 | 0,51 |
| QD=16 | 34956 | 137 | 0,44 | 69778 | 273 | 0,44 | 124190 | 485 | 0,50 | 130097 | 508 | 0,97 |
| QD=32 | 69012 | 270 | 0,44 | 122640 | 479 | 0,50 | 131962 | 515 | 0,95 | 130100 | 508 | 1,96 |
| QD=64 | 117637 | 460 | 0,50 | 134665 | 526 | 0,93 | 132723 | 518 | 1,91 | 129832 | 507 | 3,93 |
| QD=128 | 132602 | 518 | 0,93 | 133784 | 523 | 1,86 | 131640 | 514 | 3,85 | 128926 | 504 | 7,93 |

From the tests, we can see that the optimal parameters for our testing utilities are threads=4 and Outstanding I/O=16.

BRACE YOURSELVES COMRADES! TESTING 80GB VMWARE VIRTUAL DISK ON TOP OF STARWIND HIGHLY AVAILABLE DEVICE

OK, now we’re gonna test the 80GB VMware Virtual Disk performance starting with one VM. After this, we clone that VM with a new 80GB virtual disk to the second ESXi host and pin it there. Then, we repeat the test on all 80GB VMware Virtual Disks at once.

Testing tools: DiskSPD v2.0.20a, Fio v3.8

Testing patterns:

– 4k random write

– 4k random read

– 64k random write

– 64k random read

– 8k random 70%read/30%write

– 1M sequential read

Parameters for our testing VMs: threads=4, Outstanding I/O=16, time=60sec

DiskSPD

diskspd.exe -t4 -b4k -r -w100 -o16 -d60 -Sh -L #1 > c:\log\4k-rand-write.txt

timeout 10

diskspd.exe -t4 -b4k -r -w0 -o16 -d60 -Sh -L #1 > c:\log\4k-rand-read.txt

timeout 10

diskspd.exe -t4 -b64k -r -w100 -o16 -d60 -Sh -L #1 > c:\log\64k-rand-write.txt

timeout 10

diskspd.exe -t4 -b64k -r -w0 -o16 -d60 -Sh -L #1 > c:\log\64k-rand-read.txt

timeout 10

diskspd.exe -t4 -b8k -r -w30 -o16 -d60 -Sh -L #1 > c:\log\8k-rand-70read-30write.txt

timeout 10

diskspd.exe -t4 -b1M -s -w0 -o16 -d60 -Sh -L #1 > c:\log\1M-seq-read.txt

FIO

[global]

numjobs=4

iodepth=16

loops=1

time_based

ioengine=windowsaio

direct=1

runtime=60

filename=\\.\PhysicalDrive1

[4k rnd write]

rw=randwrite

bs=4k

stonewall

[4k random read]

rw=randread

bs=4k

stonewall

[64k rnd write]

rw=randwrite

bs=64k

stonewall

[64k random read]

rw=randread

bs=64k

stonewall

[OLTP 8k]

bs=8k

rwmixread=70

rw=randrw

stonewall

[1M seq read]

rw=read

bs=1M

stonewall
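To run it, we just save the sections above into a job file (say, starwind-tests.fio, the name is ours) and feed it to Fio like this:

fio starwind-tests.fio --output=c:\log\fio-starwind-tests.txt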

TESTING RESULTS

Performance VMware Virtual Disk 80GB (RAW) – 4k random write (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 4k random write (MB/s)

Performance VMware Virtual Disk 80GB (RAW) – 4k random read (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 4k random read (MB/s)

Performance VMware Virtual Disk 80GB (RAW) – 64k random write (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 64k random write (MB/s)

Performance VMware Virtual Disk 80GB (RAW) – 64k random read (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 64k random read (MB/s)

Performance VMware Virtual Disk 80GB (RAW) – 8k random 70%read/30%write (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 8k random 70%read/30%write (MB/s)

Performance VMware Virtual Disk 80GB (RAW) – 1M seq read (IOPS)

Performance VMware Virtual Disk 80GB (RAW) – 1M seq read (MB/s)

wp-image-2262

TESTING A SINGLE INTEL SSD DC P3700 2TB DISK PERFORMANCE

We have benchmarked the Intel SSD DC P3700 2TB on Windows Server 2016 under the same patterns. We need this as a baseline to see how much of the performance of the 4 Intel SSD DC P3700 2TB disks provisioned to the two StarWind HA devices actually gets through.

TESTING LAB CONFIGURATION FOR INTEL SSD DC P3700 2TB

Dell R730, CPU 2x Intel Xeon E5-2683 v3 @ 2.00 GHz, RAM 64GB

Storage: 1x Intel SSD DC P3700 2TB

OS: Windows Server 2016

To calculate the theoretical performance values in the most straightforward way, let’s make the following assumptions (a quick worked example follows the list):

1. Since read operations can be served by all NVMes simultaneously, let’s count the theoretical read performance as 4x that of a single Intel SSD DC P3700.

2. Writes can also go to all NVMes simultaneously. However, since each block written to a StarWind HA device is then replicated to the partner, we’re gonna use RAID-1 (mirroring) logic here. That’s why the theoretical write speed should be calculated as ((IOPS-Write-one-disk)*N)/2, where N is the number of disks utilized for simultaneous writes (4 in our case).

3. As for the 8k random 70%read/30%write pattern, the theoretical performance value is calculated as (IOPS-Read-one-disk*N*0.7)+((IOPS-Write-one-disk*N*0.3)/2), where N is the number of disks utilized simultaneously for reads/writes (yes, it’s still 4).
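For instance, plugging the DiskSPD single-disk numbers from the table below into these formulas, the 4k random read theoretical value is 423210*4=1692840 IOPS, and the 4k random write one is (409367*4)/2=818734 IOPS.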

DiskSPD

| | Intel SSD DC P3700 (Windows Server 2016) IOPS | MB/s | Theoretical value for 4x Intel SSD DC P3700 IOPS | MB/s | Max performance of VMware Virtual Disk over StarWind HA Device IOPS | MB/s | Ratio of measured to theoretical, % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4k random write | 409367 | 1599 | 818734 | 3198 | 70685 | 276 | 8,63 |
| 4k random read | 423210 | 1653 | 1692840 | 6612 | 163582 | 639 | 9,66 |
| 64k random write | 30889 | 1931 | 61778 | 3862 | 34687 | 2168 | 56,15 |
| 64k random read | 51738 | 3234 | 206952 | 12936 | 57319 | 3582 | 27,70 |
| 8k random 70%read/30%write | 403980 | 3156 | 1419600 | 5410 | 104862 | 819 | 7,39 |
| 1M seq read | 3237 | 3237 | 12948 | 12948 | 2960 | 2960 | 22,86 |

Both measured columns (single Intel disk and VMware Virtual Disk over StarWind HA Device): threads=4, Outstanding I/O=16.

Fio

| | Intel SSD DC P3700 (Windows Server 2016) IOPS | MB/s | Theoretical value for 4x Intel SSD DC P3700 IOPS | MB/s | Max performance of VMware Virtual Disk over StarWind HA Device IOPS | MB/s | Ratio of measured to theoretical, % |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 4k random write | 351489 | 1373 | 702978 | 2746 | 74819 | 292 | 10,64 |
| 4k random read | 333634 | 1303 | 1334536 | 5212 | 162622 | 635 | 12,19 |
| 64k random write | 31113 | 1945 | 62226 | 3890 | 33733 | 2110 | 54,21 |
| 64k random read | 37069 | 2317 | 148276 | 9268 | 54106 | 3382 | 36,49 |
| 8k random 70%read/30%write | 351240 | 2744 | 1178000 | 4616 | 104450 | 816 | 8,87 |
| 1M seq read | 3231 | 3233 | 12924 | 12932 | 2584 | 2600 | 19,99 |

Both measured columns (single Intel disk and VMware Virtual Disk over StarWind HA Device): threads=4, Outstanding I/O=16.

Comparative performance diagrams (DiskSPD test utility)

Comparative performance diagrams (Fio test utility)

ANOTHER ONE BITES THE DUST

It seems that the Russian bear got too drunk and went cycling to the circus. Well, they sure can do that. Now, let the numbers talk. The testing results show that StarWind Virtual SAN for Hyper-V (Windows-based) performance under all testing patterns grows only with the addition of the second VM. Further increasing the number of VMs doesn’t give us any performance growth. Also, with reads on 4k and 64k blocks, we can see a linear performance decline as new VMs are added.

Under the 4k random read pattern, we can observe a sharp performance drop when growing the number of VMs from two to four (from 162K IOPS to 130K IOPS, respectively). Increasing the number of VMs further only brings it down to…124K IOPS.

When it comes to 64k random reads, there are no surprise drops. Instead, the performance just smoothly decreases from 3400MB/s to 2600MB/s as the VM count grows from 2 to 12.

For the 8k random 70%read/30%write pattern, speed growth can be seen only when we add a second VM to the cluster (104K IOPS is the highest). After subsequent VMs are added, the performance drops and then remains stable at some 87K-88K IOPS.

OK, what do 1M seq reads give us? Stable performance (2500-3000MB/s) starting from the second VM.

Should we even compare that single Intel SSD DC P3700 on Windows Server 2016 and StarWind Virtual SAN for Hyper-V (Windows-based)? Yes? Well, we can clearly see that StarWind Virtual SAN for Hyper-V (Windows-based) speed is significantly lower than that of a single Intel SSD DC P3700. The only exception is the 64k random read and write patterns, where StarWind runs faster. But it’s still much slower than it should be in theory.

THEORY AND PRACTICE

Let’s try interpreting the results. Compared to the theoretical values we calculated previously, StarWind gave us:

9% (DiskSPD), 11% (Fio) of the theoretical value under the 4k random write pattern

10% (DiskSPD), 12% (Fio) of the theoretical value under the 4k random read pattern

56% (DiskSPD), 54% (Fio) of the theoretical value under the 64k random write pattern

28% (DiskSPD), 36% (Fio) of the theoretical value under the 64k random read pattern

7% (DiskSPD), 9% (Fio) of the theoretical value under the 8k random 70%read/30%write pattern

23% (DiskSPD), 20% (Fio) of the theoretical value under the 1M seq read pattern

In other words, take a look at how many times lower the overall performance of all VMware Virtual Disks over StarWind HA devices is than the theoretical values in our good old Diagram of Shame:

What do Russian bears do during winter? That’s right, drink vodka and play the balalaika. They don’t really give a hwem about their software performance. Hey, Russian bear, you’d better go play balalaika.
