Storage for Virtual Environments
Stephen Foskett, Foskett Services and Gestalt IT
Live footnotes: @Sfoskett #VirtualStorage

This is Not a Rah-Rah Session
Agenda
Introducing the Virtual Data Center
This Hour's Focus: What Virtualization Does
- Introducing storage and server virtualization
- The future of virtualization
- The virtual datacenter
- Virtualization confounds storage
- Three pillars of performance
- Other issues
- Storage features for virtualization
- What's new in VMware

Virtualization of Storage, Server and Network
- Storage has been stuck in the Stone Age since the Stone Age!
  - Fake disks, fake file systems, fixed allocation
  - Little integration and no communication
- Virtualization is a bridge to the future
  - Maintains functionality for existing apps
  - Improves flexibility and efficiency
A Look at the Future
Server Virtualization is On the Rise
Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010

Server Virtualization is a Pile of Lies!
- What the OS thinks it's running on… vs. what the OS is actually running on…
(Diagram labels: Physical Hardware, VMkernel, Binary Translation / Paravirtualization / Hardware Assist, Guest OS VMs, Scheduler and Memory Allocator, vNIC, vSwitch, NIC Driver, vSCSI/PV, VMDK, VMFS, I/O Driver)
And It Gets Worse Outside the Server!
The Virtual Data Center of Tomorrow
(Diagram labels: Management, Applications, The Cloud™, Legacy Applications, CPU, Network, Backup, Storage)

The Real Future of IT Infrastructure: Orchestration Software
Three Pillars of VM Performance
Confounding Storage Presentation
- Storage virtualization is nothing new…
  - RAID and NAS virtualized disks
  - Caching arrays and SANs masked volumes
  - New tricks: thin provisioning, automated tiering, array virtualization
- But we wrongly assume this is where it ends
  - Volume managers and file systems
  - Databases
- Now we have hypervisors virtualizing storage
  - VMFS/VMDK = storage array?
  - Virtual storage appliances (VSAs)

Begging for Converged I/O
(Diagram labels: 4G FC Storage, 1 GbE Network, 1 GbE Cluster)
- How many I/O ports and cables does a server need?
  - Typical server has 4 ports, 2 used
  - Application servers have 4-8 ports used!
- Do FC and InfiniBand make sense with 10/40/100 GbE? When does commoditization hit I/O?
  - Ethernet momentum is unbeatable
- Blades and hypervisors demand greater I/O integration and flexibility
- Other side of the coin – need to virtualize I/O

Driving Storage Virtualization
- Server virtualization demands storage features
  - Data protection with snapshots and replication
  - Allocation efficiency with thin provisioning
  - Performance and cost tweaking with automated sub-LUN tiering
  - Improved locking and resource sharing
- Flexibility is the big one
  - Must be able to create, use, modify and destroy storage on demand
  - Must move storage logically and physically
  - Must allow the OS to move too

"The I/O Blender" Demands New Architectures
- Shared storage is challenging to implement
- Storage arrays "guess" what's coming next based on allocation (LUN), taking advantage of sequential performance
- Server virtualization throws I/O into a blender – all I/O is now random I/O! (See the sketch below.)
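To make the point concrete, here is a minimal Python sketch (illustrative only, with made-up block addresses and a random stand-in for hypervisor scheduling) of how several perfectly sequential guest streams arrive at the array as an effectively random stream:

```python
# Toy model of the "I/O blender": each guest reads sequentially from its own
# region of a shared LUN, but the hypervisor interleaves their requests.
import random

def guest_stream(start_lba, length):
    """One VM reading its virtual disk sequentially."""
    return [start_lba + i for i in range(length)]

# Four VMs, each with a 1,000-block virtual disk carved from the same LUN
streams = [guest_stream(vm * 1000, 1000) for vm in range(4)]

blended = []
while any(streams):
    s = random.choice([s for s in streams if s])  # stand-in for hypervisor scheduling
    blended.append(s.pop(0))

# How often does the array still see back-to-back sequential blocks?
sequential = sum(1 for a, b in zip(blended, blended[1:]) if b == a + 1)
print(f"Sequential transitions at the array: {sequential}/{len(blended) - 1}")
```

Run it and only roughly a quarter of the transitions remain sequential; the array's prefetch logic has little left to work with.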
Server Virtualization Requires SAN and NAS
- Server virtualization has transformed the data center and storage requirements
- VMware is the #1 driver of SAN adoption today!
  - 60% of virtual server storage is on SAN or NAS
  - 86% have implemented some server virtualization
- Server virtualization has enabled and demanded centralization and sharing of storage on arrays like never before!
Source: ESG, 2008

Keys to the Future For Storage Folks
(Ye Olde Seminar Content!)

Primary Production Virtualization Platform
Data: InformationWeek Analytics 2010 Virtualization Management Survey of 316 business technology professionals, August 2010
Storage Features for Virtualization
Which Features Are People Using?
Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers

What's New in vSphere 4 and 4.1
- VMware vSphere 4 (AKA ESX/ESXi 4) is a major upgrade for storage
  - Lots of new features like thin provisioning, PSA, any-to-any Storage VMotion, PVSCSI
  - Massive performance upgrade (400K IOPS!)
- vSphere 4.1 is equally huge for storage
  - Boot from SAN
  - vStorage APIs for Array Integration (VAAI)
  - Storage I/O Control (SIOC)

What's New in vSphere 5
- VMFS-5 – scalability and efficiency improvements
- Storage DRS – datastore clusters and improved load balancing
- Storage I/O Control – cluster-wide and NFS support
- Profile-Driven Storage – provisioning, compliance and monitoring
- FCoE software initiator
- iSCSI initiator GUI
- Storage APIs – Storage Awareness (VASA)
- Storage APIs – Array Integration (VAAI 2) – thin stun, NFS, T10
- Storage vMotion – enhanced with mirror mode
- vSphere Storage Appliance (VSA)
- vSphere Replication – new in SRM

And Then, There's VDI…
- Virtual desktop infrastructure (VDI) takes everything we just worried about and amplifies it:
  - Massive I/O crunches (see the back-of-the-envelope numbers below)
  - Huge duplication of data
  - More wasted capacity
  - More user visibility
  - More backup trouble
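A back-of-the-envelope calculation of the I/O crunch; the per-desktop IOPS figures below are assumptions for illustration, not measurements:

```python
# Rough VDI sizing arithmetic with assumed per-desktop IOPS figures.
desktops = 1000
steady_state_iops_per_desktop = 10   # assumed average
boot_storm_iops_per_desktop = 50     # assumed during a boot/login storm

print("Steady state:", desktops * steady_state_iops_per_desktop, "IOPS")  # 10,000
print("Boot storm:  ", desktops * boot_storm_iops_per_desktop, "IOPS")    # 50,000
```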
What's Next: Vendor Showcase and Networking Break

Technical Considerations - Configuring Storage for VMs
The mechanics of presenting and using storage in virtualized environments

This Hour's Focus: Hypervisor Storage Features
- Storage vMotion
- VMFS
- Storage presentation: shared, raw, NFS, etc.
- Thin provisioning
- Multipathing (VMware Pluggable Storage Architecture)
- VAAI and VASA
- Storage I/O Control and Storage DRS

Storage vMotion
- Introduced in ESX 3 as "Upgrade vMotion"
- ESX 3.5 used a snapshot while the datastore was in motion
- vSphere 4 used changed-block tracking (CBT) and recursive passes
- vSphere 5 Mirror Mode mirrors writes to in-progress migrations, and also supports migration of vSphere snapshots and Linked Clones (see the sketch below)
- Can be offloaded via VAAI for block storage (but not NFS)
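A conceptual sketch of the Mirror Mode idea, not VMware's implementation: the disk is copied in a single pass, and any guest write that lands on an already-copied region is mirrored to the destination as well, so no changed-block re-copy passes are needed. The block values and write timings below are invented for the example.

```python
# Toy single-pass migration with write mirroring (conceptual only).
source = {lba: f"old{lba}" for lba in range(8)}
destination = {}
# (copy progress when the write arrives, LBA, data) -- invented guest writes
guest_writes = [(3, 1, "new1"), (3, 6, "new6")]

for progress, lba in enumerate(sorted(source)):
    for when, wlba, data in guest_writes:
        if when == progress:
            source[wlba] = data              # the write always hits the source
            if wlba in destination:          # already copied: mirror it across
                destination[wlba] = data
    destination[lba] = source[lba]           # the bulk copy proceeds, one pass

assert destination == source
print("Single pass kept source and destination consistent")
```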
vSphere 5: What's New in VMFS-5
- Max VMDK size is still 2 TB minus 512 bytes
- Virtual (non-passthrough) RDM is still limited to 2 TB
- Max LUNs per host is still 256

Hypervisor Storage Options: Shared Storage
- The common/workstation approach
  - VMware: VMDK image in a VMFS datastore
  - Hyper-V: VHD image in a CSV datastore
  - Block storage (direct-attached or FC/iSCSI SAN)
- Why?
  - Traditional, familiar, common (~90%)
  - Prime features (Storage VMotion, etc.)
  - Multipathing, load balancing, failover*
- But…
  - Overhead of two storage stacks (5-8%)
  - Harder to leverage storage features
  - Often shares a storage LUN and queue
  - Difficult storage management
(Diagram labels: VM Host, Guest OS, VMFS, VMDK, DAS or SAN storage)

Hypervisor Storage Options: Shared Storage on NAS
- Skip VMFS and use NAS – NFS or SMB is the datastore
- Wow!
  - Simple – no SAN
  - Multiple queues
  - Flexible (on-the-fly changes)
  - Simple snap and replicate*
  - Enables full VMotion
  - Link aggregation (trunking) is possible
- But…
  - Less familiar (ESX 3.0+)
  - CPU load questions
  - Limited to 8 NFS datastores (ESX default)
  - Snapshot consistency for multiple VMDKs
(Diagram labels: VM Host, Guest OS, NAS storage, VMDK)
Hypervisor Storage Options: Guest iSCSI
- Skip VMFS and use iSCSI directly in the guest
  - Access a LUN just like any physical server
  - VMware ESX can even boot from iSCSI!
- OK…
  - Storage folks love it!
  - Can be faster than ESX iSCSI
  - Very flexible (on-the-fly changes)
  - Guest can move and still access storage
- But…
  - Less common to VM folks
  - CPU load questions
  - No Storage VMotion (but doesn't need it)
(Diagram labels: VM Host, Guest OS, iSCSI storage, LUN)

Hypervisor Storage Options: Raw Device Mapping (RDM)
- Guest VMs access storage directly over iSCSI or FC
  - VMs can even boot from raw devices
  - Hyper-V pass-through LUN is similar
- Great!
  - Per-server queues for performance
  - Easier measurement
  - The only method for clustering
  - Supports LUNs larger than 2 TB (60 TB passthrough in vSphere 5!)
- But…
  - Tricky VMotion and Dynamic Resource Scheduling (DRS)
  - No Storage VMotion
  - More management overhead
  - Limited to 256 LUNs per data center
(Diagram labels: VM Host, Guest OS, I/O, mapping file, SAN storage)

Hypervisor Storage Options: Direct I/O
- VMware ESX VMDirectPath – guest VMs access I/O hardware directly
  - Leverages AMD IOMMU or Intel VT-d
- Great!
  - Potential for native performance
  - Just like RDM, but better!
- But…
  - No VMotion or Storage VMotion
  - No ESX Fault Tolerance (FT)
  - No ESX snapshots or VM suspend
  - No device hot-add
  - No performance benefit in the real world!
(Diagram labels: VM Host, Guest OS, I/O, SAN storage)
Which VMware Storage Method Performs Best?
(Charts: mixed random I/O and CPU cost per I/O for VMFS, RDM (physical) and RDM (virtual))
Source: "Performance Characterization of VMFS and RDM Using a SAN", VMware Inc., ESX 3.5, 2008

vSphere 5: Policy or Profile-Driven Storage
- Allows storage tiers to be defined in vCenter based on SLA, performance, etc.
- Used during provisioning, cloning, Storage vMotion, Storage DRS
- Leverages VASA for metrics and characterization
- All HCL arrays and types (NFS, iSCSI, FC)
- Custom descriptions and tagging for tiers
- Compliance status is a simple binary report (see the sketch below)
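A minimal sketch of that compliance check: datastores are tagged with capabilities (hand-written here as stand-ins for what VASA would report) and a VM's storage profile either matches or it does not. The names and capability keys are invented for the example.

```python
# Hypothetical datastore capabilities and a simple binary compliance check.
datastores = {
    "gold-fc-01":    {"raid": "RAID10", "replicated": True,  "thin": False},
    "silver-nfs-01": {"raid": "RAID5",  "replicated": False, "thin": True},
}

def compliant(profile, capabilities):
    return all(capabilities.get(key) == value for key, value in profile.items())

vm_profile = {"replicated": True, "raid": "RAID10"}   # assumed SLA for this VM

for name, caps in datastores.items():
    status = "compliant" if compliant(vm_profile, caps) else "non-compliant"
    print(name, "->", status)
```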
Native VMware Thin Provisioning
- VMware ESX 4 allocates storage in 1 MB chunks as capacity is used
  - Similar support was enabled for virtual disks on NFS in VI 3
  - Thin provisioning existed for block storage in VI 3, but had to be enabled on the command line
  - Present in VMware desktop products
- vSphere 4 fully supports and integrates thin provisioning
  - Every version/license includes thin provisioning
  - Allows thick-to-thin conversion during Storage VMotion
  - In-array thin provisioning is also supported (we'll get to that…)

Four Types of VMware ESX Volumes
(Comparison table; notes: FT is not supported; what will your array do? VAAI helps…; friendly to on-array thin provisioning)

Storage Allocation and Thin Provisioning
- VMware tests show no performance impact from thin provisioning after zeroing (the sketch below shows the oversubscription math worth monitoring)
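The capacity math worth watching once thin provisioning is on; a sketch with invented figures:

```python
# Oversubscription check for a thin-provisioned datastore (figures are made up).
datastore_capacity_gb = 2000
vmdks = [
    {"provisioned_gb": 500, "written_gb": 120},
    {"provisioned_gb": 500, "written_gb": 80},
    {"provisioned_gb": 750, "written_gb": 300},
    {"provisioned_gb": 750, "written_gb": 150},
]

provisioned = sum(v["provisioned_gb"] for v in vmdks)
written = sum(v["written_gb"] for v in vmdks)

print(f"Provisioned: {provisioned} GB ({provisioned / datastore_capacity_gb:.0%} of capacity)")
print(f"Written:     {written} GB ({written / datastore_capacity_gb:.0%} of capacity)")
# Provisioned can exceed 100% long before written space does -- alert on both.
```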
Pluggable Storage Architecture: Native Multipathing
- VMware ESX includes multipathing built in
- Basic native multipathing (NMP) is round-robin fail-over only – it will not load balance I/O across multiple paths or make more intelligent decisions about which paths to use
(Diagram labels: Pluggable Storage Architecture (PSA), VMware NMP, third-party MPP, VMware SATP, third-party SATP, VMware PSP, third-party PSP)

Pluggable Storage Architecture: PSP and SATP
- The vSphere 4 Pluggable Storage Architecture allows third-party developers to replace ESX's storage I/O stack (ESX Enterprise+ only)
- There are two classes of third-party plug-ins:
  - Path selection plug-ins (PSPs) optimize the choice of which path to use, ideal for active/passive type arrays
  - Storage array type plug-ins (SATPs) allow load balancing across multiple paths in addition to path selection for active/active arrays
- EMC PowerPath/VE for vSphere does everything
Storage Array Type Plug-ins (SATP)
- ESX native approaches: active/passive, active/active, pseudo active
- Storage array type plug-ins:
  - VMW_SATP_LOCAL – generic local direct-attached storage
  - VMW_SATP_DEFAULT_AA – generic for active/active arrays
  - VMW_SATP_DEFAULT_AP – generic for active/passive arrays
  - VMW_SATP_LSI – LSI/NetApp arrays from Dell, HDS, IBM, Oracle, SGI
  - VMW_SATP_SVC – IBM SVC-based systems (SVC, V7000, Actifio)
  - VMW_SATP_ALUA – Asymmetric Logical Unit Access-compliant arrays
  - VMW_SATP_CX – EMC/Dell CLARiiON and Celerra (also VMW_SATP_ALUA_CX)
  - VMW_SATP_SYMM – EMC Symmetrix DMX-3/DMX-4/VMAX, Invista
  - VMW_SATP_INV – EMC Invista and VPLEX
  - VMW_SATP_EQL – Dell EqualLogic systems
- Also EMC PowerPath, HDS HDLM, and vendor-unique plug-ins not detailed in the HCL

Path Selection Plug-ins (PSP)
- VMW_PSP_MRU – Most Recently Used (MRU) – supports hundreds of storage arrays
- VMW_PSP_FIXED – Fixed – supports hundreds of storage arrays
- VMW_PSP_RR – Round Robin – supports dozens of storage arrays
- DELL_PSP_EQL_ROUTED – Dell EqualLogic iSCSI arrays
- Also EMC PowerPath and other vendor-unique plug-ins
(A simplified sketch of these behaviors follows.)
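A simplified Python sketch of the three native behaviors. The real plug-ins are kernel modules behind the PSA interface, so this is purely illustrative, and the path names are invented:

```python
# Toy versions of Fixed, MRU and Round Robin path selection.
import itertools

paths = ["vmhba1:C0:T0:L0", "vmhba2:C0:T0:L0"]

class FixedPSP:                 # always the preferred path while it is up
    def __init__(self, paths):
        self.preferred = paths[0]
    def select(self, alive):
        return self.preferred if self.preferred in alive else alive[0]

class MruPSP:                   # stay on the last working path after a failover
    def __init__(self, paths):
        self.current = paths[0]
    def select(self, alive):
        if self.current not in alive:
            self.current = alive[0]
        return self.current

class RoundRobinPSP:            # rotate I/O across all live paths
    def __init__(self, paths):
        self.cycle = itertools.cycle(paths)
    def select(self, alive):
        while True:
            path = next(self.cycle)
            if path in alive:
                return path

rr = RoundRobinPSP(paths)
print([rr.select(paths) for _ in range(4)])   # alternates between the two paths
```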
vStorage APIs for Array Integration (VAAI)
- VAAI integrates advanced storage features with VMware
- Basic requirements:
  - A capable storage array
  - ESX 4.1+
  - A software plug-in for ESX
- Not every implementation is equal
  - Block zeroing can be very demanding for some arrays
  - Zeroing might conflict with full copy
VAAI Support Matrix
vSphere 5: VAAI 2
- Block (FC/iSCSI): T10 compliance is improved – no plug-in needed for many arrays
- File (NFS): NAS plug-ins come from vendors, not VMware

vSphere 5: vSphere Storage APIs – Storage Awareness (VASA)
- VASA is the communication mechanism for vCenter to detect array capabilities
  - RAID level, thin provisioning state, replication state, etc.
- Two locations in vCenter Server:
  - "System-Defined Capabilities" – per-datastore descriptors
  - Storage views and SMS APIs

Storage I/O Control (SIOC)
- Storage I/O Control (SIOC) is all about fairness:
  - Prioritization and QoS for VMFS
  - Redistributes unused I/O resources
  - Minimizes "noisy neighbor" issues
- ESX can provide quality of service for storage access to virtual machines
  - Enabled per datastore
  - When a pre-defined latency level is exceeded, it begins to throttle VM I/O (default 30 ms)
  - Monitors queues on storage arrays and per-VM I/O latency
- But:
  - Requires vSphere 4.1 with Enterprise Plus
  - Disabled by default, but highly recommended!
  - Block storage only (FC or iSCSI)
  - Whole-LUN only (no extents)
  - No RDM
(A toy model of the throttling idea follows.)
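A toy model of the fairness idea: below the latency threshold nobody is constrained; above it, device queue slots are divided in proportion to disk shares. The share values, queue depth and scaling rule below are assumptions for the sketch, not VMware's actual algorithm.

```python
# Share-proportional throttling once observed latency crosses the threshold.
LATENCY_THRESHOLD_MS = 30     # vSphere's default congestion threshold
TOTAL_QUEUE_SLOTS = 64        # assumed device queue depth

vm_shares = {"sql-prod": 2000, "file-server": 1000, "test-vm": 500}

def queue_allocation(observed_latency_ms):
    if observed_latency_ms <= LATENCY_THRESHOLD_MS:
        return {vm: TOTAL_QUEUE_SLOTS for vm in vm_shares}   # no throttling
    total = sum(vm_shares.values())
    return {vm: max(1, TOTAL_QUEUE_SLOTS * shares // total)
            for vm, shares in vm_shares.items()}

print(queue_allocation(12))   # under the threshold: everyone unconstrained
print(queue_allocation(45))   # congested: slots handed out by share ratio
```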
Storage I/O Control in Action
Virtual Machine Mobility
- Moving virtual machines is the next big challenge
  - Physical servers are difficult to move around and between data centers
  - Pent-up desire to move virtual machines from host to host, and even to different physical locations
- VMware DRS would move live VMs around the data center
  - The "Holy Grail" for server managers
  - Requires networked storage (SAN/NAS)

vSphere 5: Storage DRS
- Datastore clusters aggregate multiple datastores
- VM and VMDK placement metrics:
  - Space – capacity utilization and availability (80% default)
  - Performance – I/O latency (15 ms default)
- When thresholds are crossed, vSphere will rebalance all VMs and VMDKs according to affinity rules (see the sketch below)
- Storage DRS works with either VMFS/block or NFS datastores
- Maintenance Mode evacuates a datastore
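A minimal sketch of the two triggers, using the default 80% space and 15 ms latency thresholds; the datastore figures are invented:

```python
# Which datastores in a cluster would trip the Storage DRS thresholds?
SPACE_THRESHOLD = 0.80
LATENCY_THRESHOLD_MS = 15

datastore_cluster = {
    "ds-01": {"used_gb": 1700, "capacity_gb": 2000, "latency_ms": 9},
    "ds-02": {"used_gb": 900,  "capacity_gb": 2000, "latency_ms": 22},
    "ds-03": {"used_gb": 600,  "capacity_gb": 2000, "latency_ms": 6},
}

def over_threshold(ds):
    return (ds["used_gb"] / ds["capacity_gb"] > SPACE_THRESHOLD
            or ds["latency_ms"] > LATENCY_THRESHOLD_MS)

for name, ds in datastore_cluster.items():
    print(name, "rebalance candidate" if over_threshold(ds) else "ok")
# New VMDKs would be placed on the datastore with the most headroom (ds-03 here).
```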
What's Next: Lunch

Expanding the Conversation
Converged I/O, storage virtualization and new storage architectures

This Hour's Focus: Non-Hypervisor Storage Features
- Converged networking
- Storage protocols (FC, iSCSI, NFS)
- Enhanced Ethernet (DCB, CNA, FCoE)
- I/O virtualization
- Storage for virtual servers
- Tiered storage and SSD/flash
- Specialized arrays
- Virtual storage appliances (VSA)
Introduction: Converging on Convergence
- Data centers rely more on standard ingredients
- What will connect these systems together?
- IP and Ethernet are logical choices
Drivers of Convergence
Which Storage Protocol to Use?
- Server admins don't know/care about storage protocols and will want whatever they are familiar with
- Storage admins have preconceived notions about the merits of the various options:
  - FC is fast, low-latency, low-CPU, expensive
  - NFS is slow, high-latency, high-CPU, cheap
  - iSCSI is medium, medium, medium, medium
vSphere Protocol Performance
vSphere CPU Utilization
vSphere Latency
Microsoft Hyper-V Performance
Which Storage Protocols Do People Use?
Source: VirtualGeek.typepad.com 2010 virtualization survey of 125 readers

The Upshot: It Doesn't Matter
- Use what you have and are familiar with!
  - FC, iSCSI, NFS all work well
  - Most enterprise production VM data is on FC; many smaller shops use iSCSI or NFS
  - Either/or? About 50% use a combination
- For IP storage
  - Network hardware and config matter more than protocol (NFS, iSCSI, FC)
  - Use a separate network or VLAN
  - Use a fast switch and consider jumbo frames
- For FC storage
  - 8 Gb FC/FCoE is awesome for VMs
  - Look into NPIV
  - Look for VAAI
The Storage Network Roadmap
Serious Performance
- 10 GbE is faster than most storage interconnects
- iSCSI and FCoE can both perform at wire rate

Latency is Critical Too
- Latency is even more critical in shared storage
- FCoE with 10 GbE can achieve well over 500,000 4K IOPS (if the array and client can handle it!)
(The worked numbers below show why queue depth and latency both matter.)
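Little's Law ties these numbers together: sustained IOPS equals outstanding I/Os divided by per-I/O latency. The figures below are illustrative:

```python
# IOPS = outstanding I/Os / latency (Little's Law), with example values.
def iops(outstanding_ios, latency_ms):
    return outstanding_ios / (latency_ms / 1000.0)

print(iops(32, 1.0))    # 32 outstanding I/Os at 1 ms  -> 32,000 IOPS
print(iops(256, 0.5))   # 256 outstanding at 0.5 ms    -> 512,000 IOPS
# Reaching ~500K 4K IOPS needs both deep queues and sub-millisecond latency.
```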
Benefits Beyond Speed
- 10 GbE takes performance off the table (for now…)
- But performance is only half the story:
  - Simplified connectivity
  - New network architecture
  - Virtual machine mobility
(Diagram: 1 GbE cluster, 4G FC storage and 1 GbE network links consolidated onto 10 GbE, plus 6 Gbps extra capacity)
Enhanced 10 Gb Ethernet
- Ethernet and SCSI were not made for each other
  - SCSI expects a lossless transport with guaranteed delivery
  - Ethernet expects higher-level protocols to take care of issues
- "Data Center Bridging" is a project to create lossless Ethernet
  - AKA Data Center Ethernet (DCE), Converged Enhanced Ethernet (CEE)
- iSCSI and NFS are happy with or without DCB
- DCB is a work in progress
  - FCoE requires PFC (Qbb or PAUSE) and DCBX (Qaz)
  - QCN (Qau) is still not ready
(Diagram labels: Priority Flow Control (PFC) 802.1Qbb, Congestion Management (QCN) 802.1Qau, Bandwidth Management (ETS) 802.1Qaz, PAUSE 802.3x, Data Center Bridging Exchange Protocol (DCBX), Traffic Classes 802.1p/Q)
FCoE CNAs for VMware ESX
- No Intel (OpenFCoE) or Broadcom support in vSphere 4…

vSphere 5: FCoE Software Initiator
- Dramatically expands the FCoE footprint from just a few CNAs
- Based on Intel OpenFCoE? – shows as "Intel Corporation FCoE Adapter"

I/O Virtualization: Virtual I/O
- Extends I/O capabilities beyond physical connections (PCIe slots, etc.)
- Increases flexibility and mobility of VMs and blades
- Reduces hardware, cabling, and cost for high-I/O machines
- Increases density of blades and VMs

I/O Virtualization: IOMMU (Intel VT-d)
- An IOMMU gives devices direct access to system memory
  - AMD IOMMU or Intel VT-d
  - Similar to AGP GART
- VMware VMDirectPath leverages the IOMMU
  - Allows VMs to access devices directly
  - May not improve real-world performance
(Diagram labels: System Memory, IOMMU, MMU, I/O Device, CPU)
Does SSD Change the Equation?
- RAM and flash promise high performance…
- But you have to use it right

Flash is Not a Disk
- Flash must be carefully engineered and integrated
  - Cache and intelligence to offset the write penalty
  - Automatic block-level data placement to maximize ROI (see the sketch below)
- If a system can do this, everything else improves
  - Overall system performance
  - Utilization of disk capacity
  - Space and power efficiency
  - Even system cost can improve!
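A toy sketch of block-level placement: count accesses per block and pin the hottest few on flash. The access pattern, block count and flash size are invented, and real systems work on far finer-grained statistics.

```python
# Keep the most frequently accessed blocks on a (tiny) flash tier.
from collections import Counter
import random

TOTAL_BLOCKS = 20
FLASH_BLOCKS = 4   # assumed flash capacity, in blocks

# Skewed workload: blocks 0-3 are ten times hotter than the rest
access_log = random.choices(range(TOTAL_BLOCKS),
                            weights=[10] * 4 + [1] * (TOTAL_BLOCKS - 4),
                            k=1000)

heat = Counter(access_log)
flash_tier = {block for block, _ in heat.most_common(FLASH_BLOCKS)}

hits = sum(1 for block in access_log if block in flash_tier)
print(f"Flash tier {sorted(flash_tier)} would have absorbed {hits / len(access_log):.0%} of I/O")
```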
The Tiered Storage Cliché
(Diagram labels: cost and performance; optimized for savings!)

Three Approaches to SSD for VMs
- EMC Project Lightning promises to deliver all three!

Storage for Virtual Servers (Only!)
- New breed of storage solutions just for virtual servers
  - Highly integrated (vCenter, VMkernel drivers, etc.)
  - High-performance (SSD cache)
  - Mostly from startups (for now)
- Tintri – NFS-based caching array
- Virsto+EvoStor – Hyper-V software, moving to VMware

Virtual Storage Appliances (VSA)
- What if the SAN was pulled inside the hypervisor?
- VSA = a virtual storage array as a guest VM
  - Great for lab or PoC; some are not for production
  - Can build a whole data center in a hypervisor, including LAN, SAN, clusters, etc.
(Diagram labels: physical server resources, hypervisor, VM guests, virtual storage appliance, virtual SAN, virtual LAN, CPU, RAM)

vSphere 5: vSphere Storage Appliance (VSA)
- Aimed at the SMB market
- Two deployment options (the capacity arithmetic is sketched below):
  - 2-node replicates storage 4:2
  - 3-node replicates round-robin 6:3
- Uses local (DAS) storage
- Enables HA and vMotion with no SAN or NAS
- Uses NFS for storage access
- Also manages IP addresses for HA
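The arithmetic behind the 4:2 and 6:3 ratios: mirroring local disk across nodes halves usable capacity regardless of node count. The per-node capacity below is an assumption for the example.

```python
# Usable vs. raw capacity for the 2-node and 3-node VSA layouts on the slide.
def vsa_capacity(nodes, raw_tb_per_node):
    raw = nodes * raw_tb_per_node
    return raw, raw / 2            # mirrored: 2 nodes -> 4:2, 3 nodes -> 6:3

for nodes in (2, 3):
    raw, usable = vsa_capacity(nodes, raw_tb_per_node=2)   # assume 2 TB per node
    print(f"{nodes} nodes: {raw:.0f} TB raw -> {usable:.0f} TB usable")
```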
Whew! Let's Sum Up
- Server virtualization changes everything
  - Throw your old assumptions about storage workloads and presentation out the window
- We (storage folks) have some work to do
  - New ways of presenting storage to the server
  - Converged I/O (Ethernet!)
  - New demand for storage virtualization features
  - New architectural assumptions

Editor's Notes

  • #30: Mirror Mode paper: http://www.usenix.org/events/atc11/tech/final_files/Mashtizadeh.pdf ; http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
  • #31: http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-1-vmfs-5.html
  • #32: Up to 256 FC or iSCSI LUNs. ESX multipathing: load balancing, failover, failover between FC and iSCSI.* Beware of block sizes greater than 256 KB! If you want virtual disks greater than 256 GB, you must use a VMFS block size larger than 1 MB. Align your virtual disk starting offset to your array (by booting the VM and using diskpart, Windows PE, or UNIX fdisk).*
  • #33: Link Aggregation Control Protocol (LACP) for trunking/EtherChannel - use the "fixed" path policy, not LRU. Up to 8 (or 32) NFS mount points. Turn off access time updates. Thin provisioning? Turn on AutoSize and watch out.
  • #41: http://www.techrepublic.com/blog/datacenter/stretch-your-storage-dollars-with-vsphere-thin-provisioning/2655 ; http://www.vmware.com/pdf/vsp_4_thinprov_perf.pdf
  • #48: http://virtualgeek.typepad.com/virtual_geek/2011/07/vstorage-apis-for-array-integration-vaai-vsphere-5-edition.html ; http://blogs.vmware.com/vsphere/2011/07/new-enhanced-vsphere-50-storage-features-part-3-vaai.html
  • #50: http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_SIOC.pdf - Recommended latency thresholds: FC storage 20-30 ms, SAS storage 20-30 ms, SATA storage 30-50 ms, SSD storage 15-20 ms. See also http://www.yellow-bricks.com/2010/10/19/storage-io-control-best-practices/
  • #51: Same references and latency threshold guidance as #50.
  • #53: http://www.slideshare.net/esloof/vsphere-5-whats-new-storage-drs ; http://blogs.vmware.com/vsphere/2011/07/vsphere-50-storage-features-part-5-storage-drs-initial-placement.html
  • #82: http://www.ntpro.nl/blog/archives/1804-vSphere-5-Whats-New-Storage-Appliance-VSA.html
  • #83: http://jpaul.me/?p=2072