Formed in 2017 in the middle of Nowhere, the Cult of Anarchy is a group of highly intelligent, skilled, and THINKING programmers, engineers, and other IT geeks and professionals who are sick of all the crap that vendors tell about their products. We believe that users deserve to know everything about the solutions they are using, and that every product, be it a piece of hardware or software, should be analyzed inside out.
We are a high-profile think tank guided by the sole principle of truth… and not giving a shit about the rules. Our agents test various solutions from different vendors 24/7, all to separate marketing bullshit from real capabilities, including performance, resiliency, and other characteristics.
Hi, sorry for being gone that long. Some time ago, we presented the eqemfight results between VMware vSAN and Microsoft S2D, screwing the current performance champ for being a Russian company. With the job done, we decided to make the long-awaited trip to Europe. We ended up at some underground punk gig in Prague. Everything was just fine until the cops showed up at the venue because it was a bit too noisy for the folks living nearby. We “love” cops, but that’s not the main thing we’re trying to tell you now.
During that gig, one of our “dream team” members said, “Guys, it seems that we forgot to test StorMagic…” Oh, ujkv, Patrick, where have you been when S2D was kicking ujkv out of helpless vSAN? Where have you been when drunken bears were waving their 9” fkems? Oh, right, you were working on some other stuff… What a good boy, hwem you! OK, let’s look at this solution at least now. On the other hand, thanks to Patrick, we got a good subject for our studies – the existing vSAN alternatives. We hope you remember how awful VMware vSAN performance is, so yes, we guess it’s good to have some alternatives.
P.S. We played around with the format of this article a bit. We hope it’s gonna be more reader-friendly than any other similar ujkv on the Internet. Enjoy!
Instead of a preamble. We kindly asked Michael Buffer to announce this fight of the century, and we almost had a deal, but then our usually friendly front-end developers suddenly refused to insert the resulting audio footage into this web page. They came back with a statement like, you know… “Hwem off, text alone will do the trick!” So… sorry, fellas, but you’ll have to turn on your imagination instead of enjoying Michael’s marvelous timbre of voice. Keep going! Visualize that you’re in the Mandalay Bay Resort & Casino, Las Vegas, NV. Just in the very unlikely case you’re not familiar with this fact, it’s a place where the biggest fights happen pretty often. FYI.

Mike is mumbling… “In the red corner of the ring… Our contender! The Software-Defined Storage solution with a record of 1.2M IOPS under the 4K all-random read pattern, a blast from the past. When was the last time Microsoft did something worth a hwem since… Ugh, forget it! Right from the city of rain, Seattle… Microsoft Storage Spaces Direct! In the blue corner of the ring… Our current standing champion! Straight out of sunny California! The underdog and active heavyweight champion… VMware Virtual SAN!”

We’re not here to sit through all 12 rounds. As legendary Sugar Ray Leonard said: “I want the public to see a knockout in the making.” Let it rain! With blood 😊 Fight!!!
Unknown. A drunken bear is playing the balalaika and lgtmkpi qhh instead of doing rocket science. One more expedition is needed 😊
This is effectively the second part of our study. Earlier, we challenged StarWind VSAN, pumping data over iSCSI (TCP). And, you know what? VSAN over ancient iSCSI (TCP) performed much better overall than Microsoft Storage Spaces Direct (S2D) did over the hwemkpi great SMB3 (RDMA). At least StarWind VSAN handled load balancing the right way, keeping all NUMA nodes busy, while S2D was playing the stretching game by putting all the fkems and their load into a single hole… NUMA node. Shame on Microsoft for trying to steal Sasha Grey’s glory! Plus, StarWind could achieve close-to-maximum performance with just 3 or 4 running VMs, while Microsoft required pretty much a “full house”. And, as a final ewo on S2D’s face, StarWind VSAN scaled much more smoothly and predictably, while Storage Spaces Direct behaved like a hwemkpi roller coaster. OK, whatever…
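By the way, the NUMA imbalance we keep ranting about is dead simple to quantify once you’ve pulled per-NUMA-node CPU utilization off the host (perfmon, esxtop, whatever your poison is). A minimal sketch — the numbers below are made up for illustration, not from our runs:

```python
# Quantify NUMA load imbalance from per-node CPU busy percentages.
# The sample values below are hypothetical, purely to show the math.
def numa_imbalance(node_busy):
    """Return (mean_busy, max/mean ratio); a ratio near 1.0 means
    the load is spread evenly across NUMA nodes."""
    mean = sum(node_busy) / len(node_busy)
    return mean, max(node_busy) / mean

# Balanced box: every NUMA node pulls its weight.
balanced = [78.0, 81.0, 76.0, 79.0]
# Lopsided box: one node does all the work, the rest sit idle.
lopsided = [92.0, 9.0, 7.0, 8.0]

print(numa_imbalance(balanced))  # ratio close to 1.0
print(numa_imbalance(lopsided))  # ratio above 3 -- one hot node
```

A ratio creeping toward the node count means one NUMA node is carrying the whole show, which is exactly the “single hole” behavior we saw from S2D.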
We know from TV that Russian bears can ride a bicycle in a circus and play an accordion to avoid being tazed and beaten to death. Some drunk punk told us bears are good at rocket science as well. Rocket science = high-performance storage, hyperconvergence, and all that jazz. To either confirm or deny this statement, we decided to give StarWind Virtual SAN scalability and performance a try: run it, test it, and compare it to the big guy, Microsoft, with Storage Spaces Direct (S2D), who’s our current favorite racehorse, as well as to a bunch of losers & cocksuckers (VMware vSAN, Dell EMC ScaleIO & Nutanix). So… does StarWind suck or blow? Spit or swallow?
Microsoft Storage Spaces Direct (S2D)
Anarchy: Well, you’ll figure out who’s sitting on the couch and who’s holding the camera 😊.
Today, we’re filming a tape with Storage Spaces Direct (S2D) running on top of an 8-NVMe datastore (2 enterprise-grade NVMe drives per host). As usual, we’ll investigate how the solution performs in a 4-node cluster. For test purposes, we’ll gradually increase the number of VMs running in the cluster until overall performance stops growing. At the end, we’ll run the same tests on a single NVMe drive to get some reference values. Based on these reference values, we’ll judge the overall S2D performance. Let’s see how far they can jump.
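The “keep adding VMs until the IOPS curve flatlines” part of the methodology boils down to a trivial loop. Here’s a sketch of the saturation check we apply to the measured totals; the 5% growth threshold and the sample IOPS numbers are our own illustrative assumptions, not results from this test:

```python
# Detect the saturation point in a VM-scaling run: the VM count after
# which adding one more VM no longer buys meaningful total IOPS.
def find_saturation(iops_by_vm_count, growth_threshold=0.05):
    """iops_by_vm_count[i] is total cluster IOPS with i+1 VMs running.
    Returns the 1-based VM count at which scaling flattened out
    (adding the next VM gained less than growth_threshold)."""
    for i in range(1, len(iops_by_vm_count)):
        prev, cur = iops_by_vm_count[i - 1], iops_by_vm_count[i]
        if (cur - prev) / prev < growth_threshold:
            return i  # VM number i was the last one that still helped
    return len(iops_by_vm_count)  # never flattened within the run

# Made-up sample run: total 4K random-read IOPS at 1..6 VMs.
measured = [310_000, 590_000, 840_000, 1_010_000, 1_030_000, 1_035_000]
print(find_saturation(measured))  # -> 4: the 5th VM added under 5%
```

Feed it the per-VM-count totals from your fio/diskspd runs and you get the saturation point we compare across solutions; past that number, extra VMs just add contention.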
Pretrial detention due to possible relapse
Measure the scalability of a 4-node VMware vSAN hyperconverged cluster using a vSAN datastore built out of 8 Intel SSD DC P3700 drives. To that end, we’ll keep increasing the load on the vSAN datastore by adding VMs until we reach the saturation point where performance growth stops, or until we hit the 12-VM limit.
Nutanix CE (v.2018.01.31) four-node all-flash hyperconverged cluster.
We won’t say much about the subject of today’s study since we’ve said enough in this article:
Possible relapse detected. Re-investigation required.
Measure the performance of a Nutanix CE cluster built on a Storage Pool made of Intel SSD DC P3700 drives. To that end, we’ll keep escalating the load on the Storage Pool by adding VMs until we reach the saturation point where performance growth stops, or until we have 12 VMs running in the cluster.
4-Node all-flash Dell EMC ScaleIO Cluster
Deploy an EMC ScaleIO cluster, tune it, and squeeze out every IOPS it can give to see how fast it can go. For free!
Concerns about performance and scalability, but the patient claims that everything is alright
Analyzing patient’s health status
Nutanix CE (Community Edition) four-node all-flash hyperconverged cluster.
Install a Nutanix CE hyperconverged cluster from scratch, tinker with it, fine-tune it, and kick the ujkv out of it, squeezing out all the performance it can give. For free! 🙂