Nutanix
Contact Name:
Rodney Reinhardt
Company Name:
Project Reference: Ntnx
Date:
29.5.2020
Evaluation Report for:
Include Matrix Summary Table
Include Matrix Table with all details
Date: 29-May-20
Total # of vendors in this comparison: 12
Total # of products in this comparison: 23
Total # of technical evaluation points (features): 127
1st: ECP [HCI] / 5.16 Ultimate: 88.5%
2nd: vSAN [SDS] / 6.7 U3 Enterprise: 73.9%
3rd: VxRail [HCI] / 4.7.410 x86: 70.1%
Comparison Product Evaluation Report
Comparison Summary: SDS and HCI
Detailed Matrix Scores
Nutanix: 73.3%
VMware: 61.2%
Dell EMC: 58.1%
Matrix Score - by Category
Category Nutanix VMware Dell EMC
Total 73.3% 61.2% 58.1%
General N/A N/A N/A
Design & Deploy 75.0% 83.3% 66.7%
Workload Support 96.2% 92.3% 76.9%
Server Support 88.5% 96.2% 76.9%
Storage Support 91.7% 87.5% 83.3%
Data Availability 91.7% 73.3% 78.3%
Data Services 81.0% 44.8% 43.1%
Management 92.9% 82.1% 85.7%
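The vendor totals in this table are aggregated from the 127 individual feature scores, so they cannot be reproduced by simply averaging the seven category percentages. A minimal Python sketch (illustrative only; the per-feature weighting used by WhatMatrix is not published in this report) shows the gap for Nutanix:

```python
# Unweighted average of the Nutanix category scores from the table above,
# compared with the published matrix total. The difference reflects that
# categories contain different numbers of (and differently weighted) features.
nutanix_categories = {
    "Design & Deploy": 75.0,
    "Workload Support": 96.2,
    "Server Support": 88.5,
    "Storage Support": 91.7,
    "Data Availability": 91.7,
    "Data Services": 81.0,
    "Management": 92.9,
}

simple_average = sum(nutanix_categories.values()) / len(nutanix_categories)
print(f"Unweighted category average: {simple_average:.1f}%")  # ~88.1%
print("Published matrix total:      73.3%")
```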
All Vendors Ranking
1st: ECP [HCI] 5.16 (Ultimate) 88%
2nd: SSY [SDS] 10.0 PSP10 (x86) 83%
3rd: DVX [HCI] 5.0.1.3 (x86) 75%
4th: vSAN [SDS] 6.7 U3 (Enterprise) 73%
5th: Acuity [HCI] 10.6.1 (Datacenter) 72%
6th: SimpliVity [HCI] 4.0.0 (380) 71%
7th: NetApp HCI [HCI] 1.7 (x86) 70%
8th: VxRail [HCI] 4.7.410 (x86) 70%
9th: HyperFlex [HCI] 4.0 (x86) 69%
10th: Virtual SAN [SDS] 13279 (x86) 68%
11th: S2D [SDS] 2019 (Datacenter) 67%
12th: HC3 [HCI] 8.6.5 (x86) 56%
Active Use Case: Default
Matrix Content
Content Creator WhatMatrix WhatMatrix WhatMatrix
Overview
Nutanix (NEW): Name: Enterprise Cloud Platform (ECP); Type: Hardware+Software (HCI); Development Start: 2009; First Product Release: 2011
VMware: Name: vSAN; Type: Software-only (SDS); Development Start: Unknown; First Product Release: 2014
Dell EMC: Name: VxRail; Type: Hardware+Software (HCI); Development Start: 2015; First Product Release: feb 2016
Maturity
Nutanix (NEW): GA Release Dates: AOS 5.16: jan 2020, AOS 5.11: aug 2019, AOS 5.10: nov 2018, AOS 5.9: oct 2018, AOS 5.8: jul 2018, AOS 5.6.1: jun 2018, AOS 5.6: apr 2018, AOS 5.5: dec 2017, AOS 5.1.2 / 5.2*: sep 2017, AOS 5.1.1.1: jul 2017, AOS 5.1: may 2017, AOS 5.0: dec 2016, AOS 4.7: jun 2016, AOS 4.6: feb 2016, AOS 4.5: oct 2015, NOS 4.1: jan 2015, NOS 4.0: apr 2014, NOS 3.5: aug 2013, NOS 3.0: dec 2012
VMware (NEW): GA Release Dates: vSAN 6.7 U2: apr 2019, vSAN 6.7 U1: oct 2018, vSAN 6.7: may 2018, vSAN 6.6.1: jul 2017, vSAN 6.6: apr 2017, vSAN 6.5: nov 2016, vSAN 6.2: mar 2016, vSAN 6.1: aug 2015, vSAN 6.0: mar 2015, vSAN 5.5: mar 2014
Dell EMC (NEW): GA Release Dates: VxRail 4.7.410 (vSAN 6.7.3): dec 2019, VxRail 4.7.300 (vSAN 6.7.3): sep 2019, VxRail 4.7.212 (vSAN 6.7.2): jul 2019, VxRail 4.7.200 (vSAN 6.7.2): may 2019, VxRail 4.7.100 (vSAN 6.7.1): mar 2019, VxRail 4.7.001 (vSAN 6.7.1): dec 2018, VxRail 4.7.000 (vSAN 6.7.1): nov 2018, VxRail 4.5.225 (vSAN 6.6.1): oct 2018, VxRail 4.5.218 (vSAN 6.6.1): aug 2018, VxRail 4.5.210 (vSAN 6.6.1): may 2018, VxRail 4.5 (vSAN 6.6): sep 2017, VxRail 4.0 (vSAN 6.2): dec 2016, VxRail 3.5 (vSAN 6.2): jun 2016, VxRail 3.0 (vSAN 6.1): feb 2016
Hardware Pricing Model Per Node N/A Per Node
Software Pricing Model Nutanix: Per Core + Flash TiB (AOS), Per Node (AOS, Prism Pro), Per Concurrent User (VDI), Per VM (ROBO), Per TB (Files), Per VM (Calm), Per VM (Xi Leap) | VMware: Per CPU Socket, Per Desktop (VDI use cases only), Per Used GB (VCPP only) | Dell EMC (NEW): Per Node
Support Pricing Model Nutanix: Per Node | VMware: Per CPU Socket, Per Desktop (VDI use cases only) | Dell EMC: Per Node
Consolidation Scope Hypervisor Compute Storage Data
Protection (limited) Management
Automation&Orchestration
Hypervisor Compute Storage Data
Protection (limited) Management
Automation&Orchestration
Hypervisor Compute Storage Network
(limited) Data Protection (limited)
Management Automation&Orchestration
Network Topology 1, 10, 25, 40 GbE 1, 10, 40 GbE 1, 10, 25 GbE
Overall Design Complexity High Medium Medium
External Performance Validation Nutanix: Login VSI (may 2017), ESG Lab (feb 2017), SAP (nov 2016) | VMware: StorageReview (aug 2018, aug 2016), ESG Lab (aug 2018, apr 2016), Evaluator Group (oct 2018, jul 2017, aug 2016) | Dell EMC: StorageReview (dec 2018), Principled Technologies (jul 2017, jun 2017)
Evaluation Methods Nutanix: Community Edition (forever), Hyperconverged Test Drive in GCP, Proof-of-Concept (POC), Partner Driven Demo Environment, Xi Services Free Trial (60-days) | VMware: Free Trial (60-days), Online Lab, Proof-of-Concept (POC) | Dell EMC: Proof-of-Concept (POC), vSAN: Free Trial (60-days), vSAN: Online Lab
Deployment Architecture Single-Layer (primary) Dual-Layer
(secondary)
Single-Layer Single-Layer
Deployment Method Turnkey (very fast; highly automated) BYOS (fast, some automation) Pre-
installed (very fast, turnkey approach)
Turnkey (very fast; highly automated)
Hypervisor Deployment Virtual Storage Controller Kernel Integrated Kernel Integrated
Hypervisor Compatibility Nutanix: VMware vSphere ESXi 6.0U1A-6.7U2, Microsoft Hyper-V 2012 R2 and 2016*, Microsoft CPS Standard, Nutanix Acropolis Hypervisor (AHV), Citrix XenServer 7.1-7.5 | VMware (NEW): VMware vSphere ESXi 6.7U3 | Dell EMC: VMware vSphere ESXi 6.7U3
Hypervisor Interconnect NFS SMB3 iSCSI NEW
vSAN (incl. WSFC)
vSAN (incl. WSFC)
Bare Metal Compatibility Nutanix: Microsoft Windows Server 2008R2/2012R2/2016, Red Hat Enterprise Linux (RHEL) 6.7/6.8/7.2, SLES 11/12, Oracle Linux 6.7/7.2, AIX 7.1/7.2 on POWER, Oracle Solaris 11.3 on SPARC, ESXi 5.5/6 with VMFS (very specific use-cases) | VMware: Many | Dell EMC: N/A
Bare Metal Interconnect Nutanix: iSCSI | VMware (NEW): iSCSI | Dell EMC: N/A
Container Integration Type Built-in (native) NEW
Built-in (Hypervisor-based, vSAN
supported)
Built-in (Hypervisor-based, vSAN
supported)
Container Platform Compatibility Docker EE 1.13+ Node OS CentOS 7.5
Kubernetes 1.11-1.14
Docker CE 17.06.1+ for Linux on ESXi
6.0+ Docker EE/Docker for Windows
17.06+ on ESXi 6.0+
Docker CE 17.06.1+ for Linux on ESXi
6.0+ Docker EE/Docker for Windows
17.06+ on ESXi 6.0+
Container Platform Interconnect Docker Volume plugin (certified) Docker Volume Plugin (certified) +
VMware VIB
Docker Volume Plugin (certified) +
VMware VIB
Container Host Compatibility Virtualized container hosts on all
supported hypervisors Bare Metal
container hosts
Virtualized container hosts on VMware
vSphere hypervisor
Virtualized container hosts on VMware
vSphere hypervisor
Container Host OS Compatibility CentOS 7 Red Hat Enterprise Linux
(RHEL) 7.3 Ubuntu Linux 16.04.2
Linux Windows 10 or Windows Server
2016
Linux Windows 10 or 2016
Container Orch. Compatibility NEW
Kubernetes
NEW
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
VCP: Kubernetes 1.6.5+ on ESXi 6.0+
CNS: Kubernetes 1.14+
Container Orch. Interconnect Kubernetes Volume plugin Kubernetes Volume Plugin Kubernetes Volume Plugin
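To illustrate what "Kubernetes Volume Plugin" means in practice for all three products: persistent storage is consumed through an ordinary PersistentVolumeClaim against a StorageClass provided by the respective plugin or CSI driver. A minimal sketch with the official Kubernetes Python client follows; the StorageClass name is a placeholder, and the real class comes from whichever Nutanix, VCP or CNS integration is deployed:

```python
# Sketch: request a 10 GiB volume from a StorageClass backed by one of the
# listed volume plugins. "hci-block" is a hypothetical StorageClass name;
# substitute the class created by your Nutanix/vSphere storage integration.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside a pod

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="demo-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteOnce"],
        storage_class_name="hci-block",  # hypothetical; set to your StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "10Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```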
VDI Compatibility VMware Horizon Citrix XenDesktop
(certified) Citrix Cloud (certified) Parallels
RAS Xi Frame
VMware Horizon Citrix XenDesktop VMware Horizon Citrix XenDesktop
VDI Load Bearing VMware: up to 170 virtual desktops/node
Citrix: up to 175 virtual desktops/node
VMware: up to 200 virtual desktops/node
Citrix: up to 90 virtual desktops/node
VMware: up to 160 virtual desktops/node
Citrix: up to 140 virtual desktops/node
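As a worked example of how the per-node densities above feed a sizing estimate (illustrative only; real sizing also depends on desktop profile, failover policy and CPU/RAM headroom):

```python
# Nodes needed for a given VMware Horizon desktop count, using the per-node
# densities quoted in this row plus one spare node for N+1 resilience.
import math

horizon_desktops_per_node = {
    "Nutanix ECP": 170,
    "VMware vSAN": 200,
    "Dell EMC VxRail": 160,
}

desktops = 1000  # example target

for product, per_node in horizon_desktops_per_node.items():
    nodes = math.ceil(desktops / per_node) + 1  # +1 node for failover capacity
    print(f"{product}: {nodes} nodes for {desktops} Horizon desktops")
```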
Hardware Vendor Choice Super Micro (Nutanix branded) Super
Micro (source your own) Dell EMC (OEM)
Lenovo (OEM) Fujitsu (OEM) IBM (OEM)
HPE (Select) Cisco UCS (Select) Crystal
(Rugged) Klas Telecom (Rugged) Many
(CE only)
Many Dell
Models NEW
5 Native Models (SX-1000, NX-1000, NX-
3000, NX-5000, NX-8000) 15 Native
Model sub-types
Many 5 Dell (native) Models (E-, G-, P-, S- and
V-Series)
Density Native: 1 (NX1000, NX3000, NX6000,
NX8000) 2 (NX6000, NX8000) 4
(NX1000, NX3000) 3 or 4 (SX1000)
nodes per chassis
1, 2 or 4 nodes per chassis 1 or 4 nodes per chassis
Mixing Allowed Yes Yes Yes
CPU Config NEW
Flexible: up to 10 options
Flexible NEW
Flexible
Memory Config Flexible: up to 10 options Flexible Flexible
Storage Config Flexible: capacity (up to 7 options per disk
type); number of disks (Dell, Cisco) Fixed:
Number of disks (hybrid, most all-flash)
Flexible: number of disks + capacity Flexible: number of disks (limited) +
capacity
Network Config Flexible: up to 4 add-on options Flexible Flexible (1, 10 or 25 Gbps)
GPU Config NVIDIA Tesla (specific appliance models
only)
NVIDIA Tesla AMD FirePro Intel Iris Pro NVIDIA Tesla
Scale-up Memory Storage GPU CPU Memory Storage GPU Memory Storage Network GPU
Scale-out Compute+storage Compute-only (NFS;
SMB3) Storage-only
Compute+storage Compute-only (vSAN
VMKernel)
Compute+storage
Scalability 3-Unlimited nodes in 1-node increments 2-64 nodes in 1-node increments 3-64 storage nodes in 1-node increments
Small-scale (ROBO) 3 Node minimum (data center) 1 or 2
Node minimum (ROBO) 1 Node minimum
(backup)
2 Node minimum 2 Node minimum
Layout Distributed File System (ADSF) Object Storage File System (OSFS) Object Storage File System (OSFS)
Data Locality Full Partial Partial
Storage Type(s) Direct-attached (Raw) Direct-attached (Raw) Direct-attached (Raw)
Composition Hybrid (Flash+Magnetic) All-Flash Hybrid (Flash+Magnetic) All-Flash Hybrid (Flash+Magnetic) All-Flash
Hypervisor OS Layer SuperMicro (G3,G4,G5): DOM
SuperMicro (G6): M2 SSD Dell: SD or
SSD Lenovo: DOM, SD or SSD Cisco: SD
or SSD
SD, USB, DOM, HDD or SSD SSD
Memory Layer DRAM DRAM DRAM
Memory Purpose Read Cache Read Cache Read Cache
Memory Capacity Configurable Non-configurable Non-configurable
Flash Layer Nutanix: SSD, NVMe | VMware: SSD, PCIe, UltraDIMM, NVMe | Dell EMC (NEW): SSD, NVMe
Flash Purpose Read/Write Cache Storage Tier Hybrid: Read/Write Cache All-Flash: Write
Cache + Storage Tier
Hybrid: Read/Write Cache All-Flash: Write
Cache + Storage Tier
Flash Capacity Nutanix (NEW): Hybrid: 1-4 SSDs per node; All-Flash: 3-24 SSDs per node; NVMe-Hybrid: 2-4 NVMe + 4-8 SSDs per node | VMware: Hybrid: 1-5 Flash devices per node (1 per disk group); All-Flash: 40 Flash devices per node (8 per disk group, 1 for cache and 7 for capacity) | Dell EMC: Hybrid: 4 Flash devices per node; All-Flash: 2-24 Flash devices per node
Magnetic Layer Hybrid: SATA Hybrid: SAS or SATA Hybrid: SAS or SATA
Magnetic Purpose Persistent Storage Persistent Storage Persistent Storage
Magnetic Capacity NEW
2-20 SATA HDDs per node
1-35 SAS/SATA HDDs per host/node 3-5 SAS/SATA HDDs per disk group
Persistent Write Buffer Flash Layer (SSD; NVMe) Flash Layer (SSD;PCIe;NVMe) Flash Layer (SSD, NVMe)
Disk Failure Protection 1-2 Replicas (2N-3N) (primary) Erasure
Coding (N+1/N+2) (secondary)
Hybrid/All-Flash: 0-3 Replicas (RAID1;
1N-4N), Host Pinning (1N) All-Flash:
Erasure Coding (RAID5-6)
Hybrid/All-Flash: 0-3 Replicas (RAID1;
1N-4N) All-Flash: Erasure Coding
(RAID5-6)
Node Failure Protection 1-2 Replicas (2N-3N) (primary) Erasure
Coding (N+1/N+2) (secondary)
Hybrid/All-Flash: 0-3 Replicas (RAID1;
1N-4N), Host Pinning (1N) All-Flash:
Erasure Coding (RAID5-6)
Hybrid/All-Flash: 0-3 Replicas (RAID1;
1N-4N) All-Flash: Erasure Coding
(RAID5-6)
Block Failure Protection Block Awareness (integrated) Failure Domains Failure Domains
Rack Failure Protection Rack Fault Tolerance (ESXi and AHV only)
Failure Domains Failure Domains
Protection Capacity Overhead RF2 (2N) (primary): 100% RF3 (3N)
(primary): 200% EC-X (N+1) (secondary):
20-50% EC-X (N+2) (secondary): 50%
Host Pinning (1N): Dependent on # of
VMs Replicas (2N): 100% Replicas (3N):
200% Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
Replicas (2N): 100% Replicas (3N):
200% Erasure Coding (RAID5): 33%
Erasure Coding (RAID6): 50%
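To make the overhead percentages concrete, the short calculation below converts each scheme into usable capacity for an example 100 TB raw pool (the 3+1 and 4+2 stripe widths are the usual layouts behind the 33% and 50% erasure-coding figures; real clusters also reserve space for rebuilds and metadata, which is not modelled here):

```python
# usable = raw / (1 + overhead), where overhead is the extra raw space consumed
# relative to the protected data, as listed in the row above.
raw_tb = 100.0  # example raw pool size

schemes = {
    "Replicas RF2 (2N), 100% overhead": 1.00,
    "Replicas RF3 (3N), 200% overhead": 2.00,
    "Erasure Coding RAID5 (3+1), 33% overhead": 0.33,
    "Erasure Coding RAID6 (4+2), 50% overhead": 0.50,
}

for name, overhead in schemes.items():
    usable = raw_tb / (1 + overhead)
    print(f"{name}: ~{usable:.1f} TB usable of {raw_tb:.0f} TB raw")
```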
Data Corruption Detection Read integrity checks Disk scrubbing
(software)
Read integrity checks Disk scrubbing
(software)
Read integrity checks Disk scrubbing
(software)
Snapshot Type Built-in (native) Built-in (native) Built-in (native)
Snapshot Scope Local + Remote Local Local
Snapshot Frequency Nutanix (NEW): GUI: 1-15 minutes (nearsync replication); 1 hour (async replication) | VMware: GUI: 1 hour | Dell EMC: GUI: 1 hour
Snapshot Granularity Per VM Per VM Per VM
Backup Type Built-in (native) External (vSAN Certified) External (vSAN Certified)
Backup Scope To local single-node To local and remote
clusters To remote cloud object stores
(Amazon S3, Microsoft Azure)
N/A N/A
Backup Frequency NearSync to remote clusters: 1-15
minutes* Async to remote clusters: 1 hour
AWS/Azure Cloud: 1 hour
N/A N/A
Backup Consistency File System Consistent (Windows)
Application Consistent (MS Apps on
Windows)
N/A N/A
Restore Granularity Entire VM or Single File (local snapshots) Entire VM (local snapshots) Entire VM (local snapshots)
Restore Ease-of-use Entire VM: GUI Single File: GUI, nCLI Entire VM: GUI (local snapshots) Entire VM: GUI (local snapshots)
Remote Replication Type Built-in (native) Built-in (Stretched Cluster only) Built-in
Remote Replication Scope To remote sites To AWS and MS Azure
Cloud To Xi Cloud (US and UK only)
VR: To remote sites, To VMware clouds RPVM: To remote sites, To VMware
clouds
Remote Replication Cloud Function Data repository (AWS/Azure) DR-site (Xi
Cloud)
VR: DR-site (VMware Clouds) RPVM: DR-site (VMware clouds)
Remote Replication Topologies Single-site and multi-site VR: Single-site and multi-site RPVM: Single-site and multi-site
Remote Replication Frequency NEW
Synchronous to remote cluster:
continuous NearSync to remote clusters:
1-15 minutes* Async to remote clusters: 1
hour AWS/Azure Cloud: 1 hour Xi Cloud:
1 hour
VR: 5 minutes (Asynchronous) vSAN:
Continuous (Stretched Cluster)
RPVM: near-synchronous
(Asynchronous) vSAN: Continuous
(Stretched Cluster)
Remote Replication Granularity VM iSCSI LUN VR: VM RPVM: VM
Consistency Groups Yes VR: No RPVM: Yes
DR Orchestration VMware SRM (certified) Xi Leap (native;
US and UK)
VMware SRM (certified) RPVM: integrated VR: VMware SRM
(certified)
Stretched Cluster (SC) vSphere: Yes Hyper-V: Yes AHV: No VMware vSphere: Yes (certified) VMware vSphere: Yes (certified)
SC Configuration vSphere: 3-sites = two active sites + tie-
breaker in 3rd site Hyper-V: 3-sites = two
active sites + tie-breaker in 3rd site
3-sites: two active sites + tie-breaker in
3rd site
3-sites: two active sites + tie-breaker in
3rd site
SC Distance <=5ms RTT / <400 KMs <=5ms RTT <=5ms RTT
SC Scaling No set max. # Nodes; Mixing hardware
models allowed
<=15 hosts at each active site <=15 nodes at each active site
SC Data Redundancy Replicas: 1N at each active site Erasure
Coding (optional): Nutanix EC-X at each
active site
Replicas: 0-3 Replicas (1N-4N) at each
active site Erasure Coding: RAID5-6 at
each active site
Replicas: 0-3 Replicas (1N-4N) at each
active site Erasure Coding: RAID5-6 at
each active site
Dedup/Compr. Engine Software All-Flash: Software Hybrid: N/A All-Flash: Software Hybrid: N/A
Dedup/Compr. Function Efficiency (full) and Performance (limited) Efficiency (Space savings) Efficiency (Space savings)
Dedup/Compr. Process Perf. Tier: Inline (dedup post-ack / compr
pre-ack) Cap. Tier: Post-process
All-Flash: Inline (post-ack) Hybrid: N/A All-Flash: Inline (post-ack) Hybrid: N/A
Dedup/Compr. Type Dedup Inline: Optional Dedup Post-
Process: Optional Compr. Inline: Optional
Compr. Post-Process: Optional
All-Flash: Optional Hybrid: N/A All-Flash: Optional Hybrid: N/A
Dedup/Compr. Scope Dedup Inline: memory and flash layers
Dedup Post-process: persistent data layer
(adaptive) Compr. Inline: flash and
persistent data layers Compr. Post-
process: persistent data layer (adaptive)
Persistent data layer Persistent data layer
Dedup/Compr. Radius Storage Container Disk Group Disk Group
Dedup/Compr. Granularity 16 KB fixed block size 4 KB fixed block size 4 KB fixed block size
Dedup/Compr. Guarantee N/A N/A N/A
Data Rebalancing Full NEW
Full
NEW
Full
Data Tiering N/A N/A N/A
Task Offloading vSphere: VMware VAAI-NAS (full) Hyper-V: SMB3 ODX; UNMAP/TRIM AHV: Integrated
vSphere: Integrated vSphere: Integrated
QoS Type IOPs Limits (maximums) IOPs Limits (maximums) IOPs Limits (maximums)
QoS Granularity Per VM Per VM/Virtual Disk Per VM/Virtual Disk
Flash Pinning VM Flash Mode: Per VM/Virtual
Disk/iSCSI LUN
Cache Read Reservation: Per VM/Virtual
Disk
Cache Read Reservation: Per VM/Virtual
Disk
Data Encryption Type Built-in (native) Built-in (native) Built-in (native)
Data Encryption Options Hardware: Self-encrypting drives (SEDs)
Software: AOS encryption; Vormetric VTE
(validated), Gemalto (verified)
Hardware: N/A Software: vSAN data
encryption; HyTrust DataControl
(validated)
Hardware: N/A Software: vSAN data
encryption; HyTrust DataControl
(validated)
Data Encryption Scope Hardware: Data-at-rest Software AOS:
Data-at-rest Software VTE/Gemalto:
Data-at-rest + Data-in-transit
Hardware: N/A Software vSAN: Data-at-
rest Software Hytrust: Data-at-rest +
Data-in-transit
Hardware: N/A Software vSAN: Data-at-
rest Software Hytrust: Data-at-rest +
Data-in-transit
Data Encryption Compliance Hardware: FIPS 140-2 Level 2 (SEDs)
Software: FIPS 140-2 Level 1 (AOS, VTE)
Hardware: N/A Software: FIPS 140-2
Level 1 (vSAN); FIPS 140-2 Level 1
(HyTrust)
Hardware: N/A Software: FIPS 140-2
Level 1 (vSAN); FIPS 140-2 Level 1
(HyTrust)
Data Encryption Efficiency Impact Hardware: No Software AOS: No
Software VTE/Gemalto: Yes
Hardware: N/A Software: No (vSAN); Yes
(HyTrust)
Hardware: N/A Software: No (vSAN); Yes
(HyTrust)
Fast VM Cloning Yes No No
Hypervisor Migration ESXi to AHV (integrated) AHV to ESXi
(integrated) Hyper-V to AHV (external)
Hyper-V to ESXi (external) Hyper-V to ESXi (external)
Fileserver Type Built-in (native) External (vSAN Certified) External (vSAN Certified)
Fileserver Compatibility Windows clients Apple Mac clients Linux
clients
N/A N/A
Fileserver Interconnect SMB NFS N/A N/A
Fileserver Quotas Share Quotas, User Quotas N/A N/A
Fileserver Analytics Yes N/A N/A
Object Storage Type NEW
S3-compatible
N/A N/A
Object Storage Protection Versioning N/A N/A
Object Storage LT Retention WORM N/A N/A
GUI Functionality Centralized Centralized Centralized
GUI Scope Prism: Single-site Prism Central: Multi-site Single-site and Multi-site Single-site and Multi-site
GUI Perf. Monitoring Advanced NEW
Advanced
Advanced
GUI Integration VMware: Prism (subset) Microsoft: SCCM
(SCOM and SCVMM) AHV: Prism Nutanix
Files: Prism Xi Frame: Prism Central
VMware HTML5 vSphere Client
(integrated) VMware vSphere Web Client
(integrated)
VMware vSphere Web Client (integrated)
Policies Partial (Protection) Full Full
API/Scripting REST-APIs PowerShell nCLI REST-APIs Ruby vSphere Console (RVC)
PowerCLI
REST-APIs Ruby vSphere Console
(RVC) PowerCLI
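As an illustration of the scripting options listed here, a minimal Python call against a Prism-style REST endpoint (host, credentials and the exact payload/response fields are assumptions to adapt to your environment; the vSphere-based products would instead be driven via their REST APIs, PowerCLI or RVC):

```python
# Sketch: list VMs through the Nutanix Prism Central v3 REST API.
import requests

prism = "https://prism-central.example.local:9440"  # hypothetical address
auth = ("admin", "secret")                          # use real credentials

resp = requests.post(
    f"{prism}/api/nutanix/v3/vms/list",
    json={"kind": "vm", "length": 20},  # request up to 20 VM entities
    auth=auth,
    verify=False,  # lab setups often use self-signed certificates
    timeout=30,
)
resp.raise_for_status()

for vm in resp.json().get("entities", []):
    print(vm["spec"]["name"], vm["status"]["resources"]["power_state"])
```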
Integration OpenStack VMware vRealize Automation
(vRA) Nutanix Calm
OpenStack VMware vRealize Automation
(vRA)
OpenStack VMware vRealize Automation
(vRA)
Self Service AHV only N/A (not part of vSAN license) N/A (not part of VxRail license bundle)
SW Composition Unified Partially Distributed Partially Distributed
SW Upgrade Execution Rolling Upgrade (1-by-1) NEW
Rolling Upgrade (1-by-1)
Rolling Upgrade (1-by-1)
FW Upgrade Execution 1-Click Rolling Upgrade (1-by-1) 1-Click
Single HW/SW Support Yes (Nutanix; Dell; Lenovo; IBM) Yes (most OEM vendors) Yes
Call-Home Function Full Partial (vSAN Support Insight) Full
Predictive Analytics Full NEW
Full (not part of vSAN license)
Full (not part of VxRail license bundle)