Formed in 2017 in the middle of Nowhere, the Cult of Anarchy is a group of highly intelligent, skilled, and THINKING programmers, engineers, and other IT geeks and professionals who are sick of all the crap that vendors tell about their products. We believe that users deserve to know everything about the solutions they are using, and that every product, be it a piece of hardware or software, should be analyzed inside out.
We are a high-profile think tank guided by the sole principle of truth… and not giving a shit about the rules. Our agents test various solutions from different vendors 24/7, all to separate marketing bullshit from real capabilities: performance, resiliency, and other characteristics.
Justice comes for everyone. And in this article, we'll bring HPE StoreVirtual Storage VSA to trial. A two-node physical setup. Storage comprised exclusively of NVMe drives, to give our suspect a chance to show everything he's got. So, does StoreVirtual VSA performance scale as new VMs are added, or does it go to jail? We're going to find out right now.
We all know how slow VMware vSAN is. But hey, could it be due to the default "Number of disk stripes per object" setting? Let's see what this etcr can do once tuned "right"!
So, while we were testing VMware vSAN on a 2-node cluster powered with NVMes, the Russian bear barged in screaming (imagine a typical Russian mafia accent): "Dnacv, hold my vodka and give me two balalaikas, er, servers, and I'll show you how to scale from mother Russia to America." We figured we didn't want to mess with drunken bears, so here we are: testing StarWind Virtual SAN for Hyper-V (yes, there is also a version for Linux; seems these guys just scale their solution instead of its performance?) on a two-node VMware ESXi cluster powered with NVMe drives. Wait, what?! VSAN for Hyper-V on top of ESXi? Yes, we have two ESXi hosts with one StarWind VM on each; the VMs run Windows Server 2016 and provide the highly available storage (NTFS, it's Windows, baby) back to ESXi as a VMFS datastore… This is madness… NO! THIS IS STARWIND!!
Some time ago, one guy told us that we should fine-tune VMware vSAN a bit before beating the ujkv out of testing it. So, in this article, we check whether using a different Storage Policy is really the key to higher VMware vSAN performance. Apart from that, we are going to see whether some of your suggestions from the comments really help to improve vSAN performance.
In the previous study, VMware vSAN proved that it has balls. It actually has balls of steel (in John St. John's voice)! Get ready for Round 2: the poor guy's setup challenge! Will VMware vSAN demonstrate decent performance in a 2-node environment?
After the results of the first benchmark for EMC ScaleIO were published on our blog, the readers started arguing: "What about enabling DM, SDC, and SDS for EMC ScaleIO, you fkemheads?" Fair enough: enabling a High Performance profile on EMC ScaleIO might really boost its performance to the extent where Microsoft and StarWind (the current performance leaders) become just a bunch of eqemgcvgtu constantly playing with themselves and their miserable IOPS. Well, we sure wanna give EMC ScaleIO one more chance to miss a chance 😊
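For the curious: the High Performance profile is applied per component (SDS/SDC) through ScaleIO's scli utility. We are not quoting exact syntax from any particular ScaleIO release here, so treat the flags in this little wrapper as an assumption to be checked against your version's scli --help before running it.

# Hedged sketch: switch ScaleIO's SDS and SDC components to the
# high_performance profile via scli. The flag names below are our
# assumption and may differ between releases -- verify them first.
import subprocess

def set_high_performance_profile():
    """Ask the MDM to put every SDS/SDC on the high_performance profile."""
    cmd = [
        "scli", "--set_performance_parameters",
        "--all_sds", "--all_sdc",
        "--profile", "high_performance",
    ]
    subprocess.run(cmd, check=True)

if __name__ == "__main__":
    set_high_performance_profile()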
VMware vSAN performs like ujkv. Period. Well, that's exactly what the numbers told us during the previous study! And it probably sucks to buy from VMware at all, but we haven't seen any decent vSAN alternative yet, so you guys need to go with a shared storage provider that performs as slow as old snails hwem.
But wait, what if vSAN's mediocre performance has something to do with the underlying storage, and the solution is not a piece of ujkv after all? Yup, we read your comments. There was a guy looking for a study on just a bunch of smaller-capacity SSD drives. Well, pal, here's the study that you've been asking for!
No problem, we'll carry out a VMware vSAN performance test for the THIRD HWEMKPI TIME to learn the truth! You see, there's an opinion that VMware vSAN groups smaller disks into a vsanDatastore and performs like a good software-defined storage solution should. That's actually what we're gonna try today. Let's just hope that neither we nor the solution hwem anything up this time. So, lean back, grab a bag of popcorn, Doritos, or chips, and enjoy the read!
P.S. We’ve heard that guys from “mother Russia” like sunflower seeds. Well, that’s pretty weird, but if you are good with it, do a Slavic squat and grab a bag of sunflower seeds :).
Hi, sorry for being away that long. Some time ago, we presented the eqemfight results between VMware vSAN and Microsoft S2D, screwing the current performance champ for being a Russian company. Having done the job, we decided to take the long-awaited trip to Europe. We ended up at some underground punk gig in Prague. Everything was just fine until the cops showed up at the venue because it was a bit too noisy for the folks living nearby. We "love" cops, but that's not the main thing we are trying to tell you now.
During that gig, one of our "dream team" members said, "Guys, it seems that we forgot to test StorMagic…" Oh, ujkv, Patrick, where were you when S2D was kicking the ujkv out of helpless vSAN? Where were you when drunken bears were waving their 9” fkems? Oh, you were working on some other stuff… What a good boy, hwem you! OK, let's look at this solution at least now. On the other hand, thanks to Patrick, we got a good subject for our studies – the existing vSAN alternatives. We hope you remember how awful VMware vSAN performance is, so we guess, yes, it's good to have some alternatives.
P.S. We played around with the format of this article a bit. We hope it's gonna be more reader-friendly than any other similar ujkv on the Internet. Enjoy!
Instead of a preamble: we kindly asked Michael Buffer to announce this fight of the century, and we almost made a deal, but then our usually friendly front-end developers suddenly refused to insert the resulting audio footage into this web page. They came back with a statement like, you know… "Hwem off, text alone will do the trick!" So… sorry, fellas, but you'll have to turn on your imagination instead of enjoying Michael's marvelous timbre of voice. Keep going! Visualize yourself in the Mandalay Bay Resort & Casino, Las Vegas, NV. Just in the very unlikely case you're not familiar with this fact, it's a place where the biggest fights happen pretty often. FYI. Mike is mumbling… "In the red corner of the ring… our contender! The Software-Defined Storage solution with a record of 1.2M IOPS under a 4K, all-random read pattern, a blast from the past. When was the last time Microsoft did something worth a hwem since… Ugh, forget it! Right from the city of rain, Seattle… Microsoft Storage Spaces Direct! In the blue corner of the ring… our current standing champion! Straight out of sunny California! The underdog and active heavyweight champion… VMware Virtual SAN!" We're not here to see all 12 rounds; as the legendary Sugar Ray Leonard said: "I want the public to see a knockout in the making." Let it rain! With blood 😊 Fight!!!
Unknown. A drunken bear is playing the balalaika and lgtmkpi qhh instead of doing rocket science. One more expedition is needed 😊
This is effectively the second part of our study. Before, we challenged StarWind VSAN pumping data over iSCSI (TCP). And, you know what? VSAN over the ancient iSCSI (TCP) performed much better in general than Microsoft Storage Spaces Direct (S2D) did over the hwemkpi great SMB3 (RDMA). At least StarWind VSAN could handle load balancing the right way, keeping all NUMA nodes busy, while S2D was playing the stretching game by putting all the fkems and their load into a single hole… NUMA node. Shame on Microsoft for trying to steal Sasha Grey's glory! Plus, StarWind could achieve close-to-maximum performance metrics with just 3 or 4 running VMs, while Microsoft required pretty much a "full house". And, as a final ewo on S2D's face, StarWind VSAN scaled much more smoothly and predictably, while Storage Spaces Direct behaved like a hwemkpi roller coaster. OK, whatever…
We know from TV that Russian bears can ride a bicycle in a circus and play an accordion to avoid being tased & beaten to death. Some drunk punk told us bears are good at rocket science as well. Rocket science = high-performance storage, hyperconvergence, and all that jazz. To either confirm or deny this statement, we decided to give StarWind Virtual SAN scalability and performance a try. Run it, test it, and compare it to the big guy, Microsoft, with Storage Spaces Direct (S2D), our current favorite racehorse, as well as to a bunch of losers & cocksuckers (VMware vSAN, Dell EMC ScaleIO & Nutanix). So… Does StarWind suck or blow? Spit or swallow?
Microsoft Storage Spaces Direct (S2D)
Anarchy: Well, you’ll figure out who’s sitting on the couch, and who’s holding the camera 😊.
Today, we're filming a tape with Storage Spaces Direct (S2D) running on top of an 8-NVMe datastore (2 enterprise-grade NVMes per host). As usual, we'll investigate how the solution performs in a 4-node cluster. For testing purposes, we'll gradually increase the number of VMs running in the cluster until overall performance stops growing. At the end, we'll run the same tests on a single NVMe drive to get some reference values; based on those, we'll judge the overall S2D performance and see how far it can jump.
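By the way, if you want to replay the "add VMs until the line flattens" part of that recipe at home, here is a minimal sketch of the stopping rule we keep referring to. The per-VM-count IOPS numbers would come from whatever load generator you run inside the guests; the function name, the 5% threshold, and the sample numbers are our own illustration, not anything Microsoft ships.

# Minimal sketch of the stopping rule: keep adding load VMs until the extra
# VM stops buying extra IOPS. The 5% threshold and the sample data are our
# own assumptions, purely for illustration.

def saturation_point(iops_by_vm_count, threshold=0.05):
    """Return the VM count after which adding more VMs stopped paying off."""
    counts = sorted(iops_by_vm_count)
    for prev, cur in zip(counts, counts[1:]):
        gain = (iops_by_vm_count[cur] - iops_by_vm_count[prev]) / iops_by_vm_count[prev]
        if gain < threshold:
            return prev   # the previous step was already the plateau
    return counts[-1]     # never flattened out within the tested range

# Made-up numbers, just to show the shape of the data we collect:
results = {1: 180_000, 2: 350_000, 3: 510_000, 4: 640_000, 5: 655_000}
print(saturation_point(results))  # -> 4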
Pretrial detention due to possible relapse
Measure the scalability of a 4-node VMware vSAN hyperconverged cluster using a vSAN datastore comprised of 8 Intel SSD DC P3700 drives. For this goal, we'll keep increasing the load on the vSAN datastore by increasing the number of VMs until we reach the saturation point where performance growth stops, or until we hit the 12-VM limit.
Nutanix CE (v.2018.01.31) four-node all-flash hyperconverged cluster.
We won't say much about the subject of today's study since we've said enough in this article:
Possible relapse detected. Re-investigation required.
Measure the performance of a Nutanix CE cluster built on a Storage Pool of Intel SSD DC P3700 drives. For that goal, we'll keep escalating the load on the Storage Pool by increasing the number of VMs until we reach the saturation point where performance growth stops, or until we have 12 VMs running in the cluster.
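The article doesn't spell out the load generator at this point, so take the following as nothing more than a sketch of what a single load step inside one of those guest VMs could look like, with fio standing in for whatever tool you prefer and /dev/sdb as a purely hypothetical target device.

# Hedged sketch: one 4K random-read load step inside a guest VM, using fio
# as a stand-in load generator. The target device path is hypothetical --
# point it at a scratch disk, never at something you care about.
import json
import subprocess

def run_4k_random_read(target="/dev/sdb", runtime_s=60):
    """Run a 4K random-read fio job against `target` and return its IOPS."""
    cmd = [
        "fio", "--name=4k-randread", f"--filename={target}",
        "--rw=randread", "--bs=4k", "--iodepth=32", "--numjobs=4",
        "--direct=1", "--time_based", f"--runtime={runtime_s}",
        "--group_reporting", "--output-format=json",
    ]
    out = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(out.stdout)
    return report["jobs"][0]["read"]["iops"]  # one aggregated entry thanks to group_reporting

if __name__ == "__main__":
    print(f"{run_4k_random_read():,.0f} IOPS")

Collect that number for each VM count and feed the results into the stopping rule from the S2D section above.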
4-Node all-flash Dell EMC ScaleIO Cluster
Deploy an EMC ScaleIO cluster, tune it, and squeeze out all the IOPS it can possibly give to see how fast it can go. For free!
Concerns about performance and scalability, but the patient claims that everything is alright
Analyzing patient’s health status
Nutanix CE (Community Edition) four-node all-flash hyperconverged cluster.
Install a Nutanix CE hyperconverged cluster from scratch, tinker with it, fine-tune it, and kick the ujkv out of it, squeezing out all the performance it can possibly give. For free! 🙂