4.1 New Features: Network
Network: Receive Side Scaling (RSS) Support Enhancements
Improvements to RSS support for guests via enhancements to VMXNET3.
Enhanced VM-to-VM Communication
Inter-VM throughput is improved when VMs communicate directly with one another over the same virtual switch on the same ESX/ESXi host (inter-VM traffic). This is achieved through an asynchronous TX processing architecture in the networking stack, which can leverage additional physical CPU cores for processing inter-VM traffic.
VM-to-VM throughput improved by 2x, up to 19 Gbps
Roughly 10% improvement when going out to the physical network
Other Improvements – Network Performance
NetQueue Support Extension
NetQueue support is extended to include hardware-based LRO (large receive offload), further improving CPU and throughput performance in 10 GE environments.
LRO support (Large Receive Offload)
Each received packet causes the CPU to react; lots of small packets received from the physical media result in high CPU load.
LRO merges packets and passes them up the stack at once.
Receive tests indicate a 5-30% improvement in throughput and a 40-60% decrease in CPU cost.
Enabled for pNICs: Broadcom's bnx2x and Intel's Niantic.
Enabled for the vmxnet2 and vmxnet3 vNICs, but only with recent Linux guest OSes.
IPv6—Progress towards full NIST “Host” Profile Compliance
VI 3 (ESX 3.5): IPv6 supported in guests
vSphere 4.0: IPv6 support for ESX 4, vSphere Client, vCenter, vMotion, IP Storage (iSCSI, NFS) — EXPERIMENTAL. Not supported for vSphere vCLI, HA, FT, Auto Deploy.
vSphere 4.1: NIST compliance with the “Host” Profile (http://www.antd.nist.gov/usgv6/usgv6-v1.pdf), including IPsec, IKEv2, etc. Not supported for vSphere vCLI, HA, FT.
Cisco Nexus 1000V—Planned Enhancements
Easier software upgrade: In Service Software Upgrade (ISSU) for VSM and VEM; binary compatibility
Weighted Fair Queuing (s/w scheduler)
Increased scalability, in line with vDS scalability
SPAN to and from Port Profile
VLAN pinning to pNIC
Installer app for VSM HA and L3 VEM/VSM communication
Start of EAL4 Common Criteria certification
4094 active VLANs
Scale Port Profiles > 512
Always check with Cisco for the latest info.
Network I/O Control
Network Traffic Management—Emergence of 10 GigE
With 1 GigE pNICs, traffic types (iSCSI, NFS, FT, vMotion, TCP/IP) are spread across many NICs, with some NICs dedicated to particular traffic types, e.g. vMotion and IP storage.
With 10 GigE pNICs, the same traffic types converge onto a few vmnics behind the vSwitch and compete: who gets what share of the vmnic?
Bandwidth assured by dedicated physical NICs
Traffic typically converged to two 10 GigE NICs
Some traffic types & flows could dominate others through oversubscription.
Traffic Shaping
Features in 4.0/4.1:
vSwitch or vSwitch Port Group: limit outbound traffic only (average bandwidth, peak bandwidth, burst size)
vDS dvPortGroup: ingress/egress traffic shaping (average bandwidth, peak bandwidth, burst size)
Not optimised for 10 GE.
Traffic Shaping Disadvantages
Limits are fixed: even if there is bandwidth available, it will not be used for other services.
Bandwidth cannot be guaranteed without limiting other traffic (like reservations).
VMware recommended separate pNICs for iSCSI / NFS / vMotion / COS to have enough bandwidth available for these traffic types.
Customers don’t want to waste 8-9 Gbit/s if a pNIC is dedicated to vMotion.
Instead of six 1 Gbit pNICs, customers might have two 10 Gbit pNICs sharing traffic.
Guaranteed bandwidth for vMotion limits bandwidth for other traffic even when no vMotion is active.
Traffic shaping is only a static way to control traffic.
Network I/O Control
Network I/O Control Goals
Isolation: one flow should not dominate others.
Flexible partitioning: allow isolation and overcommitment.
Guarantee service levels when flows compete.
Note: this feature is only available with vDS (Enterprise Plus).
Overall Design
Parameters: Limits and Shares
Limits specify the absolute maximum bandwidth for a flow over a team.
Specified in Mbps; traffic from a given flow will never exceed its specified limit (egress from the ESX host).
Shares specify the relative importance of an egress flow on a vmnic, i.e. a guaranteed minimum.
Specified in abstract units from 1-100, with presets for Low (25 shares), Normal (50 shares), High (100 shares), plus Custom.
Bandwidth is divided between flows based on their relative shares.
Controls apply to output from the ESX host: shares apply to a given vmnic, limits apply across the team.
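As a rough illustration (not VMware's implementation; the traffic classes, share values and limit below are made-up examples), this sketch divides a vmnic's egress bandwidth proportionally to shares and then caps any flow that has a limit:

```python
def divide_bandwidth(link_mbps, flows):
    """flows: dict name -> (shares, limit_mbps or None).
    Returns dict name -> allocated Mbps (share-proportional, capped at limits)."""
    alloc = {}
    remaining = dict(flows)          # flows still competing for bandwidth
    capacity = link_mbps
    while remaining:
        total_shares = sum(s for s, _ in remaining.values())
        # tentative proportional split of what is left
        tentative = {n: capacity * s / total_shares for n, (s, _) in remaining.items()}
        capped = {n: flows[n][1] for n in remaining
                  if flows[n][1] is not None and tentative[n] > flows[n][1]}
        if not capped:
            alloc.update(tentative)
            break
        # give capped flows exactly their limit, re-divide the rest next round
        for n, lim in capped.items():
            alloc[n] = lim
            capacity -= lim
            del remaining[n]
    return alloc

# Example: 10 Gbit/s uplink, preset share values as on the slide
flows = {"VM traffic": (100, 500),   # High shares but limited to 500 Mbps
         "vMotion":    (50, None),
         "iSCSI":      (50, None),
         "FT":         (50, None)}
print(divide_bandwidth(10000, flows))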
Configuration from vSphere Client
Limits: maximum bandwidth for a traffic class/type.
Shares: guaranteed minimum service level.
vDS-only feature!
Preconfigured traffic classes, e.g. VM traffic in this example is limited to a maximum of 500 Mbps (aggregate of all VMs), with a minimum of 50/400 of the pNIC bandwidth (50 / (100+100+50+50+50+50)).
Resource Management
Shares: Normal = 50, Low = 25, High = 100, Custom = any value between 1 and 100.
Default values: VM traffic = High (100); all others = Normal (50). No limit set.
Implementation
Each host calculates the shares independently.
One host might have only 1 Gbit/s NICs while another already has 10 Gbit/s ones, so the resulting guaranteed bandwidth is different.
Only outgoing traffic is controlled.
Inter-switch traffic is not controlled; only the pNICs are affected.
Limits are still valid even if the pNIC is opted out.
The scheduler uses a static “packets-in-flight” window.
inFlightPackets: packets that are actually in flight and in the transmit process in the pNIC.
Window size is 50 kB: no more than 50 kB are in flight (to the wire) at a given moment.
Excluding a physical NIC
Physical NICs can be excluded per host from Network Resource Management:
Host configuration -> Advanced Settings -> Net -> Net.ResMgmtPnicOptOut
This excludes the specified NICs from the shares calculation, not from limits!
Results
With QoS in place, performance is less impacted.
Load-Based Teaming
Current Teaming Policy
In vSphere 4.0 there are three policies: Port ID, IP hash, MAC hash.
Disadvantages:
Static mapping, no load balancing.
Could cause unbalanced load on pNICs.
Does not differentiate between pNIC bandwidths.
NIC Teaming Enhancements—Load Based Teaming (LBT)
Note: the adjacent physical switch configuration is the same as for other teaming types (except IP-hash), i.e. the same L2 domain.
LBT is invoked if saturation is detected on Tx or Rx (>75% mean utilization over a 30-second period).
The 30-second period is deliberately long to avoid MAC address flapping issues with adjacent physical switches.
Load Based Teaming
Initial mapping
Like Port ID: balanced mapping between ports and pNICs; the mapping is not based on load (as initially no load exists).
Adjusting the mapping
Based on time frames; the load on a pNIC during a time frame is taken into account.
If load is unbalanced, one VM (to be precise: the vSwitch port) is re-assigned to a different pNIC.
Parameters
Time frame and load threshold: default frame 30 seconds (minimum 10 seconds); default load threshold 75% (possible values 0-100).
Both are configurable through a command line tool (only for debug purposes, not for customers).
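A minimal sketch of the rebalancing decision described above, purely for illustration (the real vDS scheduler, its statistics and its tie-breaking are not public; the constants and names here are assumptions):

```python
THRESHOLD = 0.75      # 75% mean utilization
WINDOW_S  = 30        # observation window in seconds

def rebalance(pnics, ports):
    """pnics: dict pnic -> capacity_mbps; ports: dict vswitch_port -> (pnic, mean_load_mbps).
    If a pNIC is saturated over the window, move one port from it to the least-loaded pNIC."""
    load = {p: 0.0 for p in pnics}
    for _, (pnic, mbps) in ports.items():
        load[pnic] += mbps
    util = {p: load[p] / pnics[p] for p in pnics}   # percentage-based, so NIC speed matters

    saturated = [p for p, u in util.items() if u > THRESHOLD]
    if not saturated:
        return None                                  # nothing to do this window
    src = max(saturated, key=lambda p: util[p])
    dst = min(pnics, key=lambda p: util[p])
    if dst == src:
        return None                                  # everything is saturated; no better home
    # re-pin one vSwitch port from the saturated uplink to the least-loaded one
    for port, (pnic, mbps) in ports.items():
        if pnic == src:
            ports[port] = (dst, mbps)
            return port, src, dst
    return None
```

Because the decision is based on percentage utilization rather than absolute throughput, a mix of 1 Gbit and 10 Gbit uplinks is handled naturally, which is the point made on the next slide.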
Load Based Teaming
Advantages
Dynamic adjustment to load.
Different NIC speeds are taken into account, as this is based on % load; you can have a mix of 1 Gbit, 10 Gbit and even 100 Mbit NICs.
Dependencies
LBT works independently of other algorithms; it does not take limits or reservations from traffic shaping or Network I/O Control into account.
The algorithm is based on the local host only; DRS has to take care of cluster-wide balancing.
Implemented on the vNetwork Distributed Switch only; edit the dvPortGroup to change the setting.
4.1 New Features: Storage
NFS & HW iSCSI in vSphere 4.1
Improved NFS performance: up to 15% reduction in CPU cost for both read & write; up to 15% improvement in throughput for both read & write.
Broadcom iSCSI HW Offload Support: 89% improvement in CPU read cost! 83% improvement in CPU write cost!
VMware Data Recovery: New Capabilities
Backup and Recovery Appliance: support for up to 10 appliances per vCenter instance to allow protection of up to 1000 VMs; File Level Restore client for Linux VMs; improved VSS support for Windows 2008 and Windows 7 (application-level quiescing).
Destination Storage: expanded support for DAS, NFS, iSCSI or Fibre Channel storage, plus CIFS shares as destination; improved deduplication performance.
vSphere Client Plug-in: ability to switch seamlessly between multiple backup appliances; improved usability and user experience.
ParaVirtual SCSI (PVSCSI)
PVSCSI is now supported when used with these guest OSes: Windows XP (32-bit and 64-bit), Vista (32-bit and 64-bit), Windows 7 (32-bit and 64-bit).
The driver floppy images are in /vmimages/floppies. Point the VM floppy drive at the .FLP file, and press F6 during installation to read the floppy.
ParaVirtual SCSI
A VM configured with a PVSCSI adapter can be part of a Fault Tolerance cluster.
PVSCSI adapters already support hot-plugging and hot-unplugging of virtual devices, but the guest OS is not notified of any changes on the SCSI bus. Consequently, any addition or removal of devices needs to be followed by a manual rescan of the bus from within the guest.
Storage IO Control
The I/O Sharing Problem
A low priority VM can limit I/O bandwidth for high priority VMs.
Storage I/O allocation should be in line with VM priorities.
(Diagram: “what you see” vs. “what you want to see” for Microsoft Exchange, an online store and a data mining VM sharing one datastore.)
Solution: Storage I/O Control
(Diagram: per-VM CPU, memory and I/O shares for the Microsoft Exchange, online store and data mining VMs sharing Datastore A on a 32 GHz / 16 GB cluster.)
Setting I/O Controls
Enabling Storage I/O Control
Enabling Storage I/O Control
Click the Storage I/O Control ‘Enabled’ checkbox to turn the feature on for that volume.
Enabling Storage I/O Control
Clicking on the Advanced button allows you to change the congestion threshold.
If the latency rises above this value, Storage I/O Control will kick in and prioritize a VM’s I/O based on its shares value.
Viewing Configuration Settings
Allocate I/O Resources
Shares translate into ESX I/O queue slots.
VMs with more shares are allowed to send more I/Os at a time.
Slot assignment is dynamic, based on VM shares and current load.
The total number of slots available is dynamic, based on the level of congestion.
(Diagram: I/Os in flight from the data mining, Microsoft Exchange and online store VMs towards the storage array.)
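To make the slot idea concrete, here is a minimal sketch (an illustration only, not the ESX scheduler; the share values and slot counts are invented) of dividing a dynamic pool of queue slots among VMs in proportion to their shares:

```python
def allocate_slots(total_slots, vm_shares):
    """Divide total_slots among VMs proportionally to their shares,
    handing out any rounding leftovers to the highest-share VMs first."""
    total_shares = sum(vm_shares.values())
    alloc = {vm: (total_slots * s) // total_shares for vm, s in vm_shares.items()}
    leftover = total_slots - sum(alloc.values())
    for vm in sorted(vm_shares, key=vm_shares.get, reverse=True)[:leftover]:
        alloc[vm] += 1
    return alloc

# Example: the pool shrinks as congestion rises, but the split stays share-based
shares = {"exchange": 1000, "online_store": 1000, "data_mining": 250}
print(allocate_slots(64, shares))   # uncongested: bigger pool
print(allocate_slots(32, shares))   # congested: smaller pool, same proportions
```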
Experimental Setup
Performance without Storage I/O Control (default)
(Chart values: 14%, 21%, 42%, 15% across the competing workloads.)
Performance with Storage I/O Control (congestion threshold: 25 ms)
(Chart values: 14%, 22%, 8%, with shares of 500, 500, 750, 750 and 4000 assigned to the workloads.)
Storage I/O Control in Action: Example #2
Two Windows VMs running SQL Server on two hosts, each with a 250 GB data disk and a 50 GB log disk.
VM1: 500 shares; VM2: 2000 shares.
Result: VM2, with higher shares, gets more orders/min & lower latency!
Step 1: Detect Congestion
Beyond a certain datastore load (number of I/Os in flight), there is no benefit in additional throughput (IOPS or MB/s).
Congestion signal: ESX-to-array response time > threshold.
Default threshold: 35 ms. VMware will likely recommend different defaults for SSD and SATA.
Changing the default threshold (not usually recommended):
Low latency goal: set it lower if latency is critical for some VMs.
High throughput goal: set it close to the IOPS maximization point.
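The control loop can be pictured with the following sketch; it is not PARDA itself (the adjustment rule, constants and function names are invented for illustration), but it shows the idea of comparing average datastore latency against the congestion threshold and shrinking or growing the host's device queue depth accordingly:

```python
THRESHOLD_MS = 35.0            # default congestion threshold
MIN_QDEPTH, MAX_QDEPTH = 4, 64

def adjust_queue_depth(qdepth, avg_latency_ms):
    """One step of a simple latency-based control loop: back off the host's
    device queue depth under congestion, creep it back up otherwise."""
    if avg_latency_ms > THRESHOLD_MS:
        qdepth = max(int(qdepth * 0.75), MIN_QDEPTH)   # multiplicative decrease
    else:
        qdepth = min(qdepth + 1, MAX_QDEPTH)           # additive increase
    return qdepth

# Example: latency spikes above 35 ms, so the window shrinks until it recovers
q = 32
for latency in [20, 25, 40, 45, 50, 30, 28]:
    q = adjust_queue_depth(q, latency)
    print(latency, "ms ->", q, "slots")
```

In the real feature, the resulting per-host queue depth is also weighted by the sum of the VM shares on that host (see the PARDA description later in this section), which is how shares set on different hosts end up being honoured across the cluster.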
Storage I/O Control Internals
There are two I/O schedulers involved in Storage I/O Control.
The first is the local VM I/O scheduler, called SFQ (the start-time fair queuing scheduler). This scheduler ensures share-based allocation of I/O resources between VMs on a per-host basis.
The second is the distributed I/O scheduler for ESX hosts. This is called PARDA, the Proportional Allocation of Resources for Distributed Storage Access.
PARDA
carves out the array queue amongst all the VMs which are sending I/O to the datastore on the array.
adjusts the per host per datastore queue size (aka LUN queue/device queue) depending on the sum of the per VM shares on the host.
communicates this adjustment to each ESX via VSI nodes.
ESX servers also share cluster-wide statistics with each other via a stats file.
New VSI Nodes for Storage I/O Control
ESX 4.1 introduces a number of new VSI nodes for Storage I/O Control purposes:
A new VSI node per datastore to get/set the latency threshold.
A new VSI node per datastore to enable/disable PARDA.
A new maxQueueDepth VSI node under /storage/scsifw/devices/* has been introduced, which means that each device has a logical queue depth / slot size parameter that the PARDA scheduler enforces.
Storage I/O Control Architecture
(Diagram: per-host SFQ schedulers and PARDA feed host-level issue queues, whose lengths are varied dynamically; the hosts share the array queue on the storage array.)
Requirements
Storage I/O Control is supported on FC or iSCSI storage. NFS datastores are not supported.
It is not supported on datastores with multiple extents.
Arrays with Automated Storage Tiering capability:
Automated storage tiering is the ability of an array (or group of arrays) to automatically migrate LUNs/volumes or parts of LUNs/volumes to different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns.
Before using Storage I/O Control on datastores that are backed by arrays with automated storage tiering capabilities, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified to be compatible with Storage I/O Control.
No special certification is required for arrays that do not have any such automatic migration/tiering feature, including those that provide the ability to manually migrate data between different types of storage media.
Hardware-Assist Storage Operation
Formally known as vStorage API for Array Integration
vStorage APIs for Array Integration (VAAI)
Improves performance by leveraging efficient array-based operations as an alternative to host-based solutions.
Three primitives are included:
Full Copy: an XCOPY-like function to offload copy work to the array.
Write Same: speeds up zeroing out of blocks or writing repeated content.
Atomic Test and Set: an alternative to locking the entire LUN.
These help functions such as Storage vMotion and provisioning VMs from template, improve thin-provisioned disk performance, and improve VMFS shared storage pool scalability.
Notes: requires firmware from storage vendors (6 participating); supports block-based storage only, NFS is not yet supported in 4.1.
Array Integration Primitives: Introduction
Atomic Test & Set (ATS): a mechanism to modify a disk sector atomically, improving ESX performance when doing metadata updates.
Clone Blocks / Full Copy / XCOPY: full copy of blocks; ESX is guaranteed to have full space access to the blocks. Default offloaded clone size is 4 MB.
Zero Blocks / Write Same: writes zeroes. This addresses the issue of time falling behind in a VM when the guest operating system writes to previously unwritten regions of its virtual disk (http://kb.vmware.com/kb/1008284). This primitive also improves MSCS-in-virtualization solutions where we need to zero out the virtual disk. Default zeroing size is 1 MB.
Hardware Acceleration
All vStorage support is grouped into one attribute, called "Hardware Acceleration".
"Not Supported" implies one or more Hardware Acceleration primitives failed.
"Unknown" implies Hardware Acceleration primitives have not yet been attempted.
VM Provisioning from Template with Full Copy
Benefits: reduce installation time; standardize to ensure efficient management, protection & control.
Challenges: requires a full data copy. A 100 GB template (10 GB to copy) takes 5-20 minutes; FT requires additional zeroing of blocks.
Improved solution: use the array’s native copy/clone & zeroing functions for up to 10-20x speedup in provisioning time.
Storage vMotion with Array Full Copy Function
Benefits: zero-downtime migration; eases array maintenance, tiering, load balancing, upgrades and space management.
Challenges: performance impact on host, array and network; long migration time (0.5 - 2.5 hrs for a 100 GB VM); the best practice was to use it infrequently.
Improved solution: use the array’s native copy/clone functionality.
VAAI Speeds Up Storage vMotion - Example
Without VAAI: 42:27 - 39:12 = 2 min 21 sec (141 seconds).
With VAAI: 33:04 - 32:37 = 27 sec.
141 sec vs. 27 sec.
Copying Data – Optimized Cloning with VAAI
VMFS directs the storage to move the data directly.
Much less time!
Up to 95% reduction
Dramatic reduction in load on:
Servers
Network
Storage
Scalable Lock Management
A number of VMFS operations cause the LUN to be temporarily locked for exclusive write use by one of the ESX nodes, including:
Moving a VM with vMotion
Creating a new VM or deploying a VM from a template
Powering a VM on or off
Creating a template
Creating or deleting a file, including snapshots
A new VAAI feature, atomic_test_and_set (ATS), allows the ESX Server to offload the management of the required locks to the storage and avoids locking the entire VMFS file system.
Atomic Test & Set
Original file locking technique:
Acquire SCSI reservation
Acquire file lock
Release SCSI reservation
Do work on VMFS file/metadata
Release file lock
New file locking technique:
Acquire ATS lock
Acquire file lock
Release ATS lock
Do work on VMFS file/metadata
Release file lock
The main difference when using the ATS lock is that it does not affect the other ESX hosts sharing the datastore.
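Purely as a conceptual illustration of the two sequences above (this is not real VMFS code; the lock and reservation primitives are stand-ins):

```python
def update_metadata_with_scsi_reservation(lun, vmfs_file, work):
    """Old path: the whole LUN is reserved while the file lock is acquired,
    which stalls every other host sharing the datastore."""
    lun.scsi_reserve()               # blocks all other hosts on this LUN
    lock = vmfs_file.acquire_lock()
    lun.scsi_release()
    try:
        work()                       # metadata update
    finally:
        lock.release()

def update_metadata_with_ats(lun, vmfs_file, work):
    """New path: only the on-disk lock sector is updated atomically,
    so other hosts keep doing I/O to the rest of the LUN."""
    ats = lun.atomic_test_and_set(vmfs_file.lock_sector)
    lock = vmfs_file.acquire_lock()
    ats.release()
    try:
        work()
    finally:
        lock.release()
```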
VMFS Scalability with Atomic Test and Set (ATS)
Makes VMFS more scalable overall by offloading the block locking mechanism.
Using the Atomic Test and Set (ATS) capability provides an alternative to SCSI reservations for protecting VMFS metadata from being written to by two separate ESX servers at one time.
(Diagram: normal VMware locking (no ATS) vs. enhanced VMware locking (with ATS).)
For more details on VAAI
The vSphere 4.1 documentation also describes the use of this feature in the ESX Configuration Guide, Chapter 9 (pages 124 - 125), listed in the TOC as “Storage Hardware Acceleration”.
Three settings under Advanced Settings:
DataMover.HardwareAcceleratedMove - Full Copy
DataMover.HardwareAcceleratedInit - Write Same
VMFS3.HardwareAcceleratedLocking - Atomic Test & Set
Additional collateral planned for release after GA: frequently asked questions, datasheet or webpage content.
Partners include: Dell/EQL, EMC, HDS, HP, IBM and NetApp.
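As an illustration, these settings can be checked from the ESX service console with esxcfg-advcfg (vicfg-advcfg offers the remote vCLI equivalent); a value of 1 means the primitive is enabled:
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
# esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
# esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking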
Requirements
The VMFS data mover will not leverage hardware offloads, and will use software data movement instead, in the following cases:
The source and destination VMFS volumes have different block sizes; data movement falls back to the generic FSDM layer, which only does software data movement.
The source file type is RDM and the destination file type is non-RDM (a regular file).
The source VMDK type is eagerzeroedthick and the destination VMDK type is thin.
Either the source or destination VMDK is any sort of sparse or hosted format.
The logical address and/or transfer length in the requested operation are not aligned to the minimum alignment required by the storage device.
VMFS Data Movement Caveats
VMware supports VAAI primitives on VMFS with multiple LUNs/extents if they are all on the same array and the array supports offloading.
VMware does not support VAAI primitives on VMFS with multiple LUNs/extents if they are on different arrays, even if all arrays support offloading: HW cloning between arrays (even within the same VMFS volume) won’t work, so that falls back to software data movement.
vSphere 4.1 New Features: Management
Management related features
Management – New Features Summary
vCenter: 32-bit to 64-bit data migration, enhanced scalability, faster response time
Update Manager
Host Profile enhancements
Orchestrator
Active Directory support (host and vMA)
VMware Converter: Hyper-V import, Windows 2008 R2 and Windows 7 conversion
Virtual Serial Port Concentrator
Scripting & Automation
Host Profiles, Orchestrator, vMA, CLI, PowerCLI
Summary
Host Profiles, VMware Orchestrator, VMware vMA, PowerShell, esxtop, vscsiStats, VMware Tools
Host Profiles Enhancements
Host Profiles now cover: Cisco support, PCI device ordering (support for selecting NICs), iSCSI support, admin password (setting the root password), and logging on the host.
The log file is at C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs\PyVmomiServer.log
Configuration not covered by Host Profiles: licensing, vDS policy configuration (however you can do non-policy vDS configuration), iSCSI multipathing.
Host Profiles Enhancements
Additional services covered in vSphere 4.1 compared to vSphere 4.0: Lbtd, Lsassd (part of AD; see the AD preso), Lwiod (part of AD), Netlogond (part of AD).
Orchestrator Enhancements
Provides a client and server for 64-bit installations, with an optional 32-bit client.
Performance enhancements due to the 64-bit installation.
VMware Tools Command Line Utility
This feature provides an alternative to the VMware Tools control panel (the GUI dialog box).
The command-line toolbox allows administrators to automate the toolbox functionality by writing their own scripts.
vSphere Management Assistant (vMA)
A convenient place to perform administration.
A virtual appliance packaged as an OVF; distributed, maintained and supported by VMware. Not included with ESXi; it must be downloaded separately.
The environment has the following pre-installed: 64-bit Enterprise Linux OS, VMware Tools, Perl Toolkit, vSphere Command Line Interface (vCLI), JRE (to run applications built with the vSphere SDK), VI Fast Pass (authentication service for scripts), VI Logger (log aggregator).
vMA
Improvements in 4.1:
Improved authentication capability: Active Directory support.
Transition from RHEL to CentOS.
Security: the security hole that exposed clear-text passwords on ESX(i) or vCenter hosts when using vifpinit (vi-fastpass) is fixed.
vMA as a netdump server: you can configure an ESXi host to send its network core dump to a remote server in case of a crash or panic. Each ESXi host must be configured to write the core dump.
For Tech Partners: VMware CIM API
What it is: APIs for developers building management applications. With the VMware CIM APIs, developers can use standards-based, CIM-compliant applications to manage ESX/ESXi hosts.
The VMware Common Information Model (CIM) APIs allow you to:
View VMs and resources using profiles defined by the Storage Management Initiative Specification (SMI-S).
Manage hosts using the System Management Architecture for Server Hardware (SMASH) standard. SMASH profiles allow CIM clients to monitor the system health of a managed server.
What’s new in 4.1: www.vmware.com/support/developer/cim-sdk/4.1/cim_410_releasenotes.html
vCLI and PowerCLI: primary scripting interfaces
vCLI and PowerCLI are built on the same API as the vSphere Client, with the same authentication (e.g. Active Directory), roles and privileges, and event logging.
The API is secure, optimized for remote environments, firewall-friendly and standards-based.
(Diagram: vCLI, vSphere PowerCLI, other utility scripts and other languages all sit on the vSphere SDK and talk to the vSphere Web Services API, alongside the vSphere Client.)
vCLI for Administrative and Troubleshooting Tasks
Areas of functionality:
Host configuration: NTP, SNMP, remote syslog, ESX conf, kernel modules, local users.
Storage configuration: NAS, SAN, iSCSI, vmkfstools, storage pathing, VMFS volume management.
Network configuration: vSwitches (standard and distributed), physical NICs, VMkernel NICs, DNS, routing.
Miscellaneous: monitoring, file management, VM management, host backup, restore and update.
vCLI can point to an ESXi host or to vCenter; vMA is a convenient way of accessing vCLI.
Remote CLI commands run faster in 4.1 relative to 4.0.
Anatomy of a vCLI command
Run directly against an ESXi host:
vicfg-nics --server hostname --user username --password mypassword options
(hostname of the ESXi host; user defined locally on the ESXi host)
Run through vCenter:
vicfg-nics --server hostname --user username --password mypassword --vihost hostname options
(hostname of the vCenter host; user defined in vCenter (AD); --vihost names the target ESXi host)
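For example, to list the physical NICs of a host through vCenter (the hostnames and credentials here are hypothetical):
vicfg-nics --server vc01.example.com --user administrator --password 'secret' --vihost esx01.example.com -l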
Additional vCLI configuration commands in 4.1
Storage:
esxcli swiscsi session: manage iSCSI sessions
esxcli swiscsi nic: manage iSCSI NICs
esxcli swiscsi vmknic: list VMkernel NICs available for binding to a particular iSCSI adapter
esxcli swiscsi vmnic: list available uplink adapters for use with a specified iSCSI adapter
esxcli vaai device: display information about devices claimed by the VMware VAAI (vStorage APIs for Array Integration) Filter Plugin
esxcli corestorage device: list devices or plugins; used in conjunction with hardware acceleration
Additional vCLI commands
Network:
esxcli network: list active connections or list active ARP table entries.
vicfg-authconfig --server=<ESXi_IP_Address> --username=root --password '' --authscheme AD --joindomain <ad_domain_name> --adusername=<ad_user_name> --adpassword=<ad_user_password>
Storage:
NFS statistics available in resxtop.
VM:
esxcli vms: forcibly stop VMs that do not respond to normal stop operations, by using kill commands.
# esxcli vms vm kill --type <kill_type> --world-id <ID>
Note: designed to kill VMs in a reliable way (not dependent on a well-behaving system), eliminating one of the most common reasons for wanting to use Tech Support Mode (TSM).
esxcli - New Namespaces
esxcli has three new namespaces: network, vaai and vms.
[root@cs-tse-i132 ~]# esxcli
Usage: esxcli [disp options] <namespace> <object> <command>
For esxcli help please run esxcli --help
Available namespaces:
corestorage  VMware core storage commands.
network      VMware networking commands.
nmp          VMware Native Multipath Plugin (NMP). This is the VMware default implementation of the Pluggable Storage Architecture.
swiscsi      VMware iSCSI commands.
vaai         Vaai Namespace containing vaai code.
vms          Limited Operations on VMs.
Control VM Operations
# esxcli vms vm
Usage: esxcli [disp options] vms vm <command>
For esxcli help please run esxcli --help
Available commands:
kill  Used to forcibly kill VMs that are stuck and not responding to normal stop operations.
list  List the VMs on this system. This command currently will only list running VMs on the system.
[root@cs-tse-i132 ~]# esxcli vms vm list
vSphere Management Assistant (vMA)
    World ID: 5588
    Process ID: 27253
    VMX Cartel ID: 5587
    UUID: 42 01 a1 98 d6 65 6b e8-79 3b 2a 7c 9d 88 70 05
    Display Name: vSphere Management Assistant (vMA)
    Config File: /vmfs/volumes/4b1e10ed-8ce9ce16-f692-00215e364468/vSphere Management Assistant (vM/vSphere Management Assistant (vM.vmx
esxtop – Disk Devices View
Use the ‘u’ option to display ‘Disk Devices’.
NFS statistics can now be observed. Here we are looking at throughput and latency stats for the devices.
New VAAI Statistics in esxtop (1 of 2)
There are new fields in esxtop which look at VAAI statistics.
Each of the three primitives has its own unique set of statistics.
Toggle the VAAI fields (‘O’ and ‘P’) to on for VAAI-specific statistics.
New VAAI Statistics in esxtop (2 of 2)
Clone (Move) ops, VMFS lock (ATS) ops, zeroing (Init) ops, and their latencies.
The way to track failures is via esxtop or resxtop: here you’ll see CLONE_F, which is clone failures, and similarly ATS_F, ZERO_F and so on.
esxtop – VM View
esxtop also provides a mechanism to view VM I/O & latency statistics, even for VMs that reside on NFS.
The VM with GID 65 (SmallVMOnNAS) above resides on an NFS datastore.
VSI
NFS I/O statistics are also available via the VSI nodes:
# vsish
/> cat /vmkModules/nfsclient/mnt/isos/properties
mount point information {
   server name:rhtraining.vmware.com
   server IP:10.21.64.206
   server volume:/mnt/repo/isos
   UUID:4f125ca5-de4ee74d
   socketSendSize:270336
   socketReceiveSize:131072
   reads:7
   writes:0
   readBytes:92160
   writeBytes:0
   readTime:404366
   writeTime:0
   aborts:0
   active:0
   readOnly:1
   isMounted:1
   isAccessible:1
   unstableWrites:0
   unstableNoCommit:0
}
vm-support enhancements
vm-support now enables users to run 3rd-party scripts. To make vm-support run such scripts, add the scripts to the "/etc/vmware/vm-support/command-files.d" directory and run vm-support. The results will be added to the vm-support archive.
Each script that is run will have its own directory containing the output and log files for that script in the vm-support archive. These directories are stored in the top-level directory "vm-support-commands-output".
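For example (the script name here is hypothetical), dropping an executable collect-myagent.sh into /etc/vmware/vm-support/command-files.d that dumps your agent's logs would make its output appear under vm-support-commands-output/ in the generated archive.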
PowerCLI
Feature highlights:
Easier to customize and extend PowerCLI, especially for reporting; output objects can be customized by adding extra properties.
Better readability and less typing in scripts based on Get-View: each output object has its associated view as a nested property, and less typing is required to call Get-View and convert between PowerCLI object IDs and managed object IDs.
Basic vDS support: moving VMs to/from a vDS, adding/removing hosts to/from a vDS.
More reporting: new getter cmdlets, new properties added to existing output objects, improvements in Get-Stat.
Cmdlets for host HBAs.
The PowerCLI Cmdlet Reference now documents all output types.
Cmdlets to control host routing tables.
Faster Datastore provider.
http://blogs.vmware.com/vipowershell/2010/07/powercli-41-is-out.html
If you are really, really curious… additional commands (not supported):
http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
vCenter specific
vCenter improvements
Better load balancing with improved DRS/DPM algorithm effectiveness.
Improved performance at higher vCenter inventory limits: up to 7x higher throughput and up to 75% reduced latency.
Improved performance at higher cluster inventory limits: up to 3x higher throughput and up to 60% reduced latency.
Faster vCenter startup: around 5 minutes for the maximum vCenter inventory size.
Better vSphere Client responsiveness, quicker user interaction, and faster user login.
Faster host operations and VM operations on standalone hosts: up to 60% reduction in latency.
Lower resource usage by vCenter agents, by up to 40%.
Reduced VM group power-on latency by up to 25%.
Faster VM recovery with HA: up to 60% reduction in total recovery time for 1.6x more VMs.
Enhanced vCenter Scalability
vCenter 4.1 install
New option: managing the RAM of the JVM.
vCenter Server: Changing JVM Sizing
The same change should be visible by launching "Configure Tomcat" from the program menu (Start -> Programs -> VMware -> VMware Tomcat).
vCenter: Services in Windows
The following are not shown as services: Licence Reporting Manager.
New Alarms
Predefined Alarms
Remote Console to VM
Formally known as Virtual Serial Port Concentrator
Overview
Many customers rely on managing physical hosts by connecting to the target machine over the serial port; physical serial port concentrators are used by such admins to multiplex connections to multiple hosts. With VMs you lose this functionality and the ability to do remote management using scripted installs.
This feature provides a way to remote a VM’s serial port(s) over a network connection, and supports a “virtual serial port concentrator” utility.
Virtual Serial Port Concentrator:
Communicate between VMs and IP-enabled serial devices.
Connect to a VM's serial port over the network, using telnet/ssh.
Keep this connection uninterrupted during vMotion and other similar events.
Virtual Serial Port Concentrator
What it is
Redirects VM serial ports over a standard network link.
A vSPC aggregates traffic from multiple serial ports onto one management console; it behaves similarly to a physical serial port concentrator.
Benefits
Using a vSPC allows network connections to a VM's serial ports to migrate seamlessly when the VM is migrated using vMotion.
Management efficiencies and lower costs for multi-host management.
Enables 3rd-party concentrator integration if required.
Example (using Avocent)
The ACS 6000 Advanced Console Server runs as a vSPC. There is no physical or virtual serial port in the ACS 6000 console server itself.
The ACS 6000 console server has a telnet daemon (server) listening for connections coming from ESX. ESX makes one telnet connection for each virtual serial port configured to send data to the ACS 6000 console server. The serial daemon implements the telnet server with support for all telnet extensions implemented by VMware.
Configuring Virtual Ports on a VM
Configuring Virtual Ports on a VM
Enables two VMs, or a VM and a process on the host, to communicate as if they were physical machines connected by a serial cable. For example, this can be used for remote debugging on a VM.
The vSPC acts as a proxy.
Configuring Virtual Ports on a VM
Example (for Avocent):
Type ACSID://ttySxx in the Port URI, where xx is between 1 and 48. It defines which virtual serial port on the ACS 6000 console server this serial port will connect to. One VM per port; the ACS 6000 has 48 ports only.
Type telnet://<IP of Avocent VM>:8801
Configuring Virtual Ports on a VM
Configure VM to redirect Console Login
Check your system's serial support: check that the operating system recognizes the serial ports in your hardware.
Configure your /etc/inittab to support serial console logins by adding the following lines:
# Run agetty on COM1/ttyS0
s0:2345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100
Configure VM to redirect Console Login
Activate the changes that you made in /etc/inittab:
# init q
If you want to be able to log in via the serial console as the root user, you will need to edit the /etc/securetty configuration file. Add ttyS0 as an entry in /etc/securetty:
console
ttyS0
vc/1
vc/2
Configure the serial port as the system console
Use options in /etc/grub.conf to redirect console output to one of your serial ports. This enables you to see all of the boot-up and shutdown messages from your terminal.
The text to add to the config file:
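A typical configuration looks like the following (an illustrative example for GRUB legacy; the serial unit, speed and kernel parameters must match your VM's setup):
serial --unit=0 --speed=9600
terminal --timeout=10 serial console
Then append console=tty0 console=ttyS0,9600 to the kernel line of your boot entry.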
Accessing the Serial Port of the Virtual Machine
Open a web connection to the Avocent ACS 6000.
Click on the Ports folder and click Serial Ports.
Based on the serial port connection configured in the virtual machine, you should see signals of CTS | DSR | CD | RI.
Accessing the Serial Port of the Virtual Machine
Click the Serial Viewer link and a console will open.
Enter the password of avocent and hit the Enter key to establish the connection.
Performance Monitoring
UI > Performance > Advanced
Additional chart options in vSphere 4.1 (vs. vSphere 4.0) around storage performance statistics: Datastore, Power, Storage adapter & Storage path.
Performance Graphs
Additional performance graph views added to vSphere 4.1:
Host: Datastore, Management Agent, Power, Storage Adapter, Storage Path
VM: Datastore, Power, Virtual Disk
Storage Statistics: vCenter & esxtop
Not available in this timeframe: aggregation at the cluster level in vCenter (possible through APIs).
* Network-based storage (NFS, iSCSI) I/O breakdown is still being researched.
** Not applicable to NFS; the datastore is the equivalent. esxtop publishes throughput and latency per LUN; if the datastore has only one LUN, the LUN stats equal the datastore stats.
Volume Stats for NFS Device
Datastore Activity Per Host
Other Host Stats
Datastore Activity per VM
Virtual Disk Activity per VM
VMware Update Manager
Update Manager
Central, automated, actionable VI patch compliance management solution
Define, track, and enforce software update  compliance for ESX hosts/clusters, 3rd party ESX extensions, Virtual Appliances, VMTools/VM Hardware, online*/offline VMs, templates
Patch notification and recall
Cluster level pre-remediation check analysis and report
Framework to support 3rd-party IHV/ISV updates and customizations: mass install/update of EMC’s PowerPath module
Enhanced compatibility with DPM for cluster level patch operations
Performance and scalability enhancements to match vCenter
Overview
vCenter Update Manager enables centralized, automated patch and version management.
Define, track, and enforce software update compliance and support for:
ESX/ESXi hosts
VMs
Virtual  Appliances
3rd Party ESX Modules
Online/Offline VMs, Templates
Automate and generate reports using the Update Manager database views.
(Diagram: vCenter Update Manager managing ESX/ESXi hosts, VMs (VMTools / VM hardware, online/offline, templates), virtual appliances and 3rd-party extensions.)
Deployment Components
Update Manager components:
1. Update Manager Server + DB
2. Update Manager VI Client plug-in
3. Update Manager Download Service
(Diagram: vCenter Server, VI Client and Update Manager Server alongside the virtualized infrastructure and external patch feeds.)
New Features in 4.1
Update Manager now provides management of host upgrade packages.
Provisioning, patching, and upgrade support for third-party modules.
Offline bundles.
Recalled patches.
Enhanced cluster operation.
Better handling of low-bandwidth and high-latency networks.
PowerCLI support.
Better support for virtual vCenter.
Notifications
As we have already seen with the notification schedule, Update Manager 4.1 contacts VMware at regular intervals to download notifications about patch recalls, new fixes and alerts.
If patches with problems or potential issues are released, these patches are recalled in the metadata and VUM marks them as recalled. If you try to install a recalled patch, Update Manager notifies you that the patch is recalled and does not install it on the host. If you have already installed such a patch, VUM notifies you that the recalled patch is installed on certain hosts, but does not remove the recalled patch from the host.
Update Manager also deletes all recalled patches from the Update Manager patch repository.
When a patch fixing the problem is released, Update Manager 4.1 downloads the new patch and prompts you to install it.
Notifications
Notifications which Update Manager downloads are displayed on the Notifications tab of the Update Manager Administration view.
An alarm is generated and an email sent if the notification check schedule is configured.
Notifications - Patch Recall Details
Update Manager shows the patch as recalled.
Notifications
Alarms are posted for recalled and fixed patches.
Recalled patches are represented by a flag.
VUM 4.1 Feature - Notification Check Schedule
By default Update Manager checks for notifications about patch recalls, patch fixes and alerts at certain time intervals.
Edit Notifications to define the frequency (hourly, daily, weekly, monthly), the start time (minutes after the hour), the interval, and the email address of who to notify about recalled patches.
VUM 4.1 Feature - ESX Host/Cluster Settings
When remediating objects in a cluster with Distributed Power Management (DPM), High Availability (HA), or Fault Tolerance (FT), you should temporarily disable these features for the entire cluster. VUM does not remediate hosts on which these features are enabled. When the update completes, VUM restores these features.
These settings become the default failure response; you can specify different settings when you configure individual remediation tasks.
VUM 4.1 Feature - ESX Host/Cluster Settings
Update Manager cannot remediate hosts where VMs have connected CD/DVD drives.
CD/DVD drives that are connected to the VMs on a host might prevent the host from entering maintenance mode and interrupt remediation.
Select “Temporarily disable any CD-ROMs that may prevent a host from entering maintenance mode”.
Baselines and Groups
Baselines might be upgrade, extension or patch baselines. Baselines contain a collection of one or more patches, service packs and bug fixes, extensions or upgrades.
Baseline groups are assembled from existing baselines and might contain one upgrade baseline per type and one or more patch and extension baselines, or a combination of multiple patch and extension baselines.
Preconfigured baselines: Hosts have 2 baselines; VMs/VAs have 6 baselines.
Baselines and Groups
Update Manager 4.1 introduces a new Host Extension baseline.
Host Extension baselines contain additional software for ESX/ESXi hosts. This additional software might be VMware software or third-party software.

More Related Content

PPTX
VMware vSphere 4.1 deep dive - part 1
PPT
PDF
VMware vSphere Networking deep dive
PDF
VMware Horizon (view) 7 Lab Manual
PPTX
VMware Advance Troubleshooting Workshop - Day 4
PPTX
Esxi troubleshooting
PPT
Vsphere 4-partner-training180
PDF
Xrm xensummit
VMware vSphere 4.1 deep dive - part 1
VMware vSphere Networking deep dive
VMware Horizon (view) 7 Lab Manual
VMware Advance Troubleshooting Workshop - Day 4
Esxi troubleshooting
Vsphere 4-partner-training180
Xrm xensummit

What's hot (15)

PDF
Xen community update
PPT
C3 Citrix Cloud Center
PPTX
VMware vSphere Performance Troubleshooting
PPSX
Win2k8 cluster kaliyan
PDF
Advanced performance troubleshooting using esxtop
PDF
VMware Site Recovery Manager (SRM) 6.0 Lab Manual
PPTX
VMware Advance Troubleshooting Workshop - Day 2
PPTX
Realtime scheduling for virtual machines in SKT
PPTX
VMware Advance Troubleshooting Workshop - Day 3
PDF
VMworld 2013: Silent Killer: How Latency Destroys Performance...And What to D...
PDF
20 christian ferber xen_server_6_workshop
PDF
Hyper-V Best Practices & Tips and Tricks
PDF
Xen ATG case study
PPTX
VMware Advance Troubleshooting Workshop - Day 5
PPTX
Rearchitecting Storage for Server Virtualization
Xen community update
C3 Citrix Cloud Center
VMware vSphere Performance Troubleshooting
Win2k8 cluster kaliyan
Advanced performance troubleshooting using esxtop
VMware Site Recovery Manager (SRM) 6.0 Lab Manual
VMware Advance Troubleshooting Workshop - Day 2
Realtime scheduling for virtual machines in SKT
VMware Advance Troubleshooting Workshop - Day 3
VMworld 2013: Silent Killer: How Latency Destroys Performance...And What to D...
20 christian ferber xen_server_6_workshop
Hyper-V Best Practices & Tips and Tricks
Xen ATG case study
VMware Advance Troubleshooting Workshop - Day 5
Rearchitecting Storage for Server Virtualization
Ad

Similar to VMware vSphere 4.1 deep dive - part 2 (20)

PDF
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
PPTX
Inf net2227 heath
PPTX
VMworld 2016: vSphere 6.x Host Resource Deep Dive
PDF
VMworld 2013: vSphere Distributed Switch – Design and Best Practices
PPTX
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
PDF
The Unofficial VCAP / VCP VMware Study Guide
PDF
VMworld 2013: Extreme Performance Series: Network Speed Ahead
PPT
2011 q1-indy-vmug
PPT
ASBIS: Virtualization Aware Networking - Cisco Nexus 1000V
PDF
VMworld 2014: vSphere Distributed Switch
PPTX
Next-Generation Best Practices for VMware and Storage
PPTX
What is coming for VMware vSphere?
PPTX
What's New with vSphere 4
PPTX
Storage Changes in VMware vSphere 4.1
PDF
VMware vSphere Networking deep dive
PPTX
VMWARE Professionals - Security, Multitenancy and Flexibility
PDF
Christian ferver xen server_6.1_overview
PPTX
VMware Networking, CISCO Nexus 1000V, and CISCO UCS VM-FEX
PPT
Vsphere 4-partner-training180
PPT
IBM System Networking Easy Connect Mode
VMworld 2013: vSphere Networking and vCloud Networking Suite Best Practices a...
Inf net2227 heath
VMworld 2016: vSphere 6.x Host Resource Deep Dive
VMworld 2013: vSphere Distributed Switch – Design and Best Practices
VMworld 2015: vSphere Distributed Switch 6 –Technical Deep Dive
The Unofficial VCAP / VCP VMware Study Guide
VMworld 2013: Extreme Performance Series: Network Speed Ahead
2011 q1-indy-vmug
ASBIS: Virtualization Aware Networking - Cisco Nexus 1000V
VMworld 2014: vSphere Distributed Switch
Next-Generation Best Practices for VMware and Storage
What is coming for VMware vSphere?
What's New with vSphere 4
Storage Changes in VMware vSphere 4.1
VMware vSphere Networking deep dive
VMWARE Professionals - Security, Multitenancy and Flexibility
Christian ferver xen server_6.1_overview
VMware Networking, CISCO Nexus 1000V, and CISCO UCS VM-FEX
Vsphere 4-partner-training180
IBM System Networking Easy Connect Mode
Ad

More from Louis Göhl (19)

PPTX
Citrix vision and product highlights november 2011
PPTX
Citrix vision & strategy overview november 2011
PPTX
SVR402: DirectAccess Technical Drilldown, Part 2 of 2: Putting it all together.
PPTX
SVR401: DirectAccess Technical Drilldown, Part 1 of 2: IPv6 and transition te...
PPTX
Storage and hyper v - the choices you can make and the things you need to kno...
PPTX
Security best practices for hyper v and server virtualisation [svr307]
PPTX
Hyper v and live migration on cisco unified computing system - virtualized on...
PPT
HP Bladesystem Overview September 2009
PPTX
UNC309 - Getting the Most out of Microsoft Exchange Server 2010: Performance ...
PPTX
SVR208 Gaining Higher Availability with Windows Server 2008 R2 Failover Clust...
PPTX
SVR205 Introduction to Hyper-V and Windows Server 2008 R2 with Microsoft Syst...
PPTX
SIA319 What's Windows Server 2008 R2 Going to Do for Your Active Directory?
PPTX
SIA311 Better Together: Microsoft Exchange Server 2010 and Microsoft Forefron...
PPTX
MGT310 Reduce Support Costs and Improve Business Alignment with Microsoft Sys...
PPTX
MGT300 Using Microsoft System Center to Manage beyond the Trusted Domain
PPTX
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and...
PPTX
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
PPTX
Windows Virtual Enterprise Centralized Desktop
PPTX
Optimized Desktop, Mdop And Windows 7
Citrix vision and product highlights november 2011
Citrix vision & strategy overview november 2011
SVR402: DirectAccess Technical Drilldown, Part 2 of 2: Putting it all together.
SVR401: DirectAccess Technical Drilldown, Part 1 of 2: IPv6 and transition te...
Storage and hyper v - the choices you can make and the things you need to kno...
Security best practices for hyper v and server virtualisation [svr307]
Hyper v and live migration on cisco unified computing system - virtualized on...
HP Bladesystem Overview September 2009
UNC309 - Getting the Most out of Microsoft Exchange Server 2010: Performance ...
SVR208 Gaining Higher Availability with Windows Server 2008 R2 Failover Clust...
SVR205 Introduction to Hyper-V and Windows Server 2008 R2 with Microsoft Syst...
SIA319 What's Windows Server 2008 R2 Going to Do for Your Active Directory?
SIA311 Better Together: Microsoft Exchange Server 2010 and Microsoft Forefron...
MGT310 Reduce Support Costs and Improve Business Alignment with Microsoft Sys...
MGT300 Using Microsoft System Center to Manage beyond the Trusted Domain
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and...
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
Windows Virtual Enterprise Centralized Desktop
Optimized Desktop, Mdop And Windows 7

VMware vSphere 4.1 deep dive - part 2

  • 2. NetworkReceive Side Scaling (RSS) Support EnhancementsImprovements to RSS support for guests via enhancements to VMXNET3. Enhanced VM to VM CommunicationFurther, inter-VM throughput performance will be improved under conditions where VMs are communicating directly with one another over the same virtual switch on the same ESX/ESXi host (inter-VM traffic).This is achieved through networking asynchronous TX processing architecture which enables the leveraging of additional physical CPU cores for processing inter-VM traffic. VM – VM throughput improved by 2X, to up to 19 Gbps10% improvement when going out to physical network
  • 3. Other Improvements – Network PerformanceNetQueue Support ExtensionNetQueue support is extended to include support for hardware based LRO (large receive off-load) further improving CPU and throughput performance in 10 GE environments. LRO supportLarge Receive OffloadEach packets transmitted causes CPU to react Lots of small packets received from physical media result in high CPU loadLRO merges packets and transmits them at onceReceive tests indicate 5-30% improvement in throughput40 - 60% decrease in CPU costEnabled for pNICs Broadcoms bnx2x and Intels NianticEnabled for vNIC vmxnet2 and vmxnet3, but only recent Linux guestOS3
  • 4. IPv6—Progress towards full NIST “Host” Profile ComplianceVI 3 (ESX 3.5)IPv6 supported in guestsvSphere 4.0IPv6 support for ESX 4 vSphere Client vCentervMotionIP Storage (iSCSI, NFS) — EXPERIMENTALNot supported for vSphere vCLI, HA, FT, Auto DeployvSphere 4.1 NIST compliance with “Host” Profile (http://www.antd.nist.gov/usgv6/usgv6-v1.pdf)Including IPSEC, IKEv2, etc.Not supported for vSphere vCLI, HA, FT
  • 5. Cisco Nexus 1000V—Planned Enhancements Easier software upgradeIn Service Software Upgrade (ISSU) for VSM and VEMBinary compatibilityWeighted Fair Queuing (s/w scheduler)Increased Scalability, inline with vDS scalabilitySPAN to and from Port ProfileVLAN pinning to PNICInstaller app for VSM HA and L3 VEM/VSM communicationStart of EAL4 Common Criteria certification4094 active VLANsScale Port Profiles > 512Always check with Cisco for latest info.
  • 7. 1GigE pNICs10 GigE pNICsNetwork Traffic Management—Emergence of 10 GigEiSCSIiSCSIFTvMotionNFSFTvMotionNFSTCP/IPTCP/IPvSwitchvSwitch10 GigE1GigETraffic Types compete. Who gets what share of the vmnic?NICs dedicated for some traffic types e.g. vMotion, IP Storage
  • 8. Bandwidth assured by dedicated physical NICs
  • 9. Traffic typically converged to two 10 GigE NICs
  • 10. Some traffic types & flows could dominate others through oversubscriptionTraffic ShapingFeatures in 4.0/4.1vSwitch or vSwitch Port GroupLimit outbound trafficAverage bandwidthPeek bandwidthBurst SizevDS dvPortGroupIngress/ Egress Traffic ShapingAverage bandwidthPeak bandwidthBurst SizeNot optimised for 10 GEiSCSICOSvMotionVMs10 Gbit/s NIC
  • 11. Traffic ShapingTraffic Shaping DisadvantagesLimits are fixed- even if there is bandwidth available it will not be used for other servicesbandwidth cannot be guaranteed without limiting other traffic (like reservations)VMware recommended to have separate pNICs for iSCSI/ NFS/ vMotion/ COS to have enough bandwidth available for these traffic typesCustomers don’t want to waste 8-9Gbit/s if this pNIC is dedicated for vMotionInstead of 6 1Gbit pNICs customers might have two 10Gbit pNICs sharing trafficGuaranteed bandwidth for vMotion limits bandwidth for other traffic even in the case where there is no vMotion activeTraffic shaping is only a static way to control trafficiSCSIunusedCOSunusedvMotionVMs10Gbit/s NIC
  • 12. Network I/O ControlNetwork I/O Control GoalsIsolationOne flow should not dominate othersFlexible PartitioningAllow isolation and over commitmentGuarantee Service Levels when flows competeNote: This feature is only available with vDS (Enterprise Plus)
  • 14. ParametersLimits and Shares Limits specify the absolute maximumbandwidth for a flow over a TeamSpecified in MbpsTraffic from a given flow will never exceed its specified limitEgress from ESX hostShares specify the relative importance of an egress flow on a vmnic i.e. guaranteed minimumSpecified in abstract units, from 1-100Presets for Low (25 shares), Normal (50 shares), High (100 shares), plus CustomBandwidth divided between flows based on their relative sharesControls apply to output from ESX host Shares apply to a given vmnicLimits apply across the team
  • 15. Configuration from vSphere Client LimitsMaximum bandwidth for traffic class/typeSharesGuaranteed minimum service levelvDS only feature!Preconfigured Traffic Classese.g. VM traffic in this example: - limited to max of 500 Mbps (aggregate of all VMs) - with minimum of 50/400 of pNIC bandwidth (50/(100+100+50+50+50+50)
  • 16. Resource ManagementSharesNormal = 50Low = 25High = 100Custom = any values between 1 and 100Default valuesVM traffic = High (100)All others = Normal (50)No limit set
  • 17. ImplementationEach host calculates the shares separately or independantlyOne host might have only 1Gbit/s NICs while another one has already 10Gbit/s onesSo resulting guaranteed bandwidth is differentOnly outgoing traffic is controlledInter-switch traffic is not controlled, only the pNICs are affectedLimits are still valid even if the pNIC is opted outScheduler uses a static “Packets-In-Flight” windowinFlightPackets: Packets that are actually in flight and in transmit process in the pNICWindow size is 50 kBNo more than 50 kB are in flight (to the wire) at a given moment
  • 18. Excluding a physical NICPhysical NICs per hosts can be excluded from Network Resource ManagementHost configuration -> Advanced Settings -> Net -> Net.ResMgmtPnicOptOutWill exclude specified NICs from shares calculation, notfromlimits!
  • 19. ResultsWith QoS in place, performance is less impacted
  • 21. Current Teaming PolicyIn vSphere 4.0 three policiesPort IDIP hashMAC HashDisadvantagesStatic mappingNo load balancingCould cause unbalanced load on pNICsDid not differ between pNIC bandwidth
  • 22. NIC Teaming Enhancements—Load Based Teaming (LBT)Note: adjacent physical switch configuration is same as other teaming types (except IP-hash). i.e. same L2 domainLBT invoked if saturation detected on Tx or Rx (>75% mean utilization over 30s period)30 sec period—long period avoids MAC address flapping issues with adjacent physical switches
  • 23. Load Based TeamingInitial mappingLike PortIDBalanced mapping between ports and pNICsMapping not based on load (as initially no load existed)Adjusting the mappingBased on time frames; the load on a pNIC during a timeframe is taken into accountIn case load is unbalanced one VM (to be precise: the vSwitch port) will get re-assigned to a different pNICParametersTime frames and load thresholdDefault frame 30 seconds, minimum value 10 secondsDefault load threshold 75%, possible values 0-100Both Configurable through command line tool (only for debug purpose - not for customer)
  • 24. Load Based TeamingAdvantagesDynamic adjustments to loadDifferent NIC speeds are taken into account as this is based on % loadCan have a mix of 1 Gbit, 10 Gbit and even 100 Mbit NICsDependenciesLBT works independent from other algorithmsDoes not take limits or reservation from traffic shaping or Network I/O Management into accountAlgorithm based on the local host onlyDRS has to take care of cluster wide balancing Implemented on vNetwork Distributed Switch onlyEdit dvPortGroup to change setting
  • 26. NFS & HW iSCSI in vSphere 4.1 Improved NFS performanceUp to 15% reduction in CPU cost for both read & writeUp to 15% improvement in Throughput cost for both read & writeBroadcom iSCSI HW Offload Support89% improvement in CPU read cost!83% improvement in CPU write cost!
  • 27. VMware Data Recovery: New CapabilitiesBackup and Recovery ApplianceSupport for up to 10 appliances per vCenter instance to allow protection of up to 1000 VMs
  • 28. File Level Restore client for Linux VMsVMware vSphere 4.1Improved VSS support for Windows 2008 and Windows 7: application level quiescingDestination StorageExpanded support for DAS, NFS, iSCSI or Fibre Channel storage plus CIFS shares as destination
  • 29. Improved deduplication performancevSphere Client Plug-InAbility for seamless switch between multiple backup appliances
  • 30. Improved usability and user experienceVMware vCenter
  • 31. ParaVirtual SCSI (PVSCSI) We will now support PVSCSI when used with these guest OS: Windows XP (32bit and 64bit) Vista (32bit and 64bit) Windows 7 (32bit and 64bit) /vmimages/floppiesPoint the VM Floppy Driver at the .FLP fileWhen installing press F6 key to read the floppy
  • 32. ParaVirtual SCSI VM configured with a PVSCSI adapter can be part of an Fault Tolerant cluster.PVSCSI adapters already support hot-plugging or hot-unplugging of virtual devices, but the guest OS is not notified of any changes on the SCSI bus. Consequently, any addition/removal of devices need to be followed by a manual rescan of the bus from within the guest.
  • 34. The I/O Sharing ProblemLow priority VM can limit I/O bandwidth for high priority VMs Storage I/O allocation should be in line with VM prioritiesWhat you want to seeWhat you seeMicrosoftExchangeMicrosoftExchangeonline storeonline storedata miningdata miningdatastoredatastore
  • 35. Solution: Storage I/O ControlCPU shares: LowMemory shares: LowCPU shares: HighMemory shares: HighCPU shares: HighMemory shares: HighI/O shares: HighI/O shares: LowI/O shares: High32GHz 16GBMicrosoftExchangeonline storedata miningDatastore A
  • 38. Enabling Storage I/O ControlClick the Storage I/O Control ‘Enabled’ checkbox to turn the feature on for that volume.
  • 39. Enabling Storage I/O ControlClicking on the Advanced button allow you to change the congestion threshold.
  • 40. If the latency rises above this value, Storage I/O Control will kick in, and prioritize a VM’s I/O based on its shares value.Viewing Configuration Settings
  • 41. Allocate I/O ResourcesShares translate into ESX I/O queue slotsVMs with more shares are allowed to send more I/O’s at a timeSlot assignment is dynamic, based on VM shares and current loadTotal # of slots available is dynamic, based on level of congestiondata miningMicrosoftExchangeonline storeI/O’s in flightSTORAGE ARRAY
  • 43. 14%21%42%15%Without Storage I/O Control (Default)Performance without Storage IO Control
  • 44. 14%22%8%500 shares500 shares750 shares750 shares4000 sharesWith Storage I/O Control (Congestion Threshold: 25ms)Performance with Storage IO Control
  • 45. Storage I/O Control in Action: Example #2Two Windows VMs running SQL Server on two hosts250 GB data disk, 50 GB log diskVM1: 500 sharesVM2: 2000 sharesResult: VM2 with higher shares gets more orders/min & lower latency!
  • 46. Step 1: Detect CongestionNo benefit beyond certain loadThroughput(IOPS or MB/s)Congestion signal: ESX-array response time > thresholdDefault threshold: 35msWe will likely recommend different defaults for SSD and SATAChanging default threshold (not usually recommended)Low latency goal: set lower if latency is critical for some VMsHigh throughput goal: set close to IOPS maximization pointTotal Datastore Load (# of IO’s in flight)
  • 47. Storage I/O Control Internals. There are two I/O schedulers involved in Storage I/O Control.
  • 48. The first is the local VM I/O scheduler. This is SFQ, the start-time fair queuing scheduler. This scheduler ensures share-based allocation of I/O resources between VMs on a per-host basis.
  • 49. The second is the distributed I/O scheduler for ESX hosts. This is called PARDA, the Proportional Allocation of Resources for Distributed Storage Access.
  • 50. PARDA
  • 51. carves out the array queue amongst all the VMs which are sending I/O to the datastore on the array.
  • 52. adjusts the per host per datastore queue size (aka LUN queue/device queue) depending on the sum of the per VM shares on the host.
  • 53. communicates this adjustment to each ESX via VSI nodes.
  • 54. ESX servers also share cluster-wide statistics with each other via a stats file. New VSI Nodes for Storage I/O Control: ESX 4.1 introduces a number of new VSI nodes for Storage I/O Control purposes. A new VSI node per datastore gets/sets the latency threshold; a new VSI node per datastore enables/disables PARDA; and a new maxQueueDepth VSI node under /storage/scsifw/devices/* gives each device a logical queue depth / slot size parameter that the PARDA scheduler enforces.
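  If you want to poke at these nodes from the Tech Support Mode shell, vsish can read them non-interactively. A minimal sketch, assuming the /storage/scsifw/devices path above; the device identifier and the exact leaf placement are illustrative, so confirm them against the listing on your own host:

    # List the per-device nodes that the PARDA scheduler works against
    vsish -e ls /storage/scsifw/devices/

    # Read the logical queue depth / slot size for one device
    # (substitute a real device identifier taken from the listing above)
    vsish -e cat /storage/scsifw/devices/naa.600508b4000974fa0000800000890000/maxQueueDepth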
  • 55. Storage I/O Control Architecture. (Diagram: per-host SFQ schedulers feed host-level issue queues whose lengths are varied dynamically by PARDA; the host issue queues feed the array queue at the storage array.)
  • 56. Requirements. Storage I/O Control is supported on FC or iSCSI storage; NFS datastores are not supported, and it is not supported on datastores with multiple extents. Arrays with automated storage tiering capability: automated storage tiering is the ability of an array (or group of arrays) to automatically migrate LUNs/volumes, or parts of them, between different types of storage media (SSD, FC, SAS, SATA) based on user-set policies and current I/O patterns. Before using Storage I/O Control on datastores backed by such arrays, check the VMware Storage/SAN Compatibility Guide to verify whether your automated tiered storage array has been certified as compatible with Storage I/O Control. No special certification is required for arrays that do not have such an automatic migration/tiering feature, including those that only provide the ability to manually migrate data between different types of storage media.
  • 57. Hardware-Assist Storage Operation. Formally known as the vStorage APIs for Array Integration.
  • 58. vStorage APIs for Array Integration (VAAI). Improves performance by leveraging efficient array-based operations as an alternative to host-based solutions. The three primitives are: Full Copy - an XCOPY-like function to offload copy work to the array; Write Same - speeds up zeroing of blocks or writing repeated content; Atomic Test and Set - an alternative to locking the entire LUN. This helps functions such as Storage vMotion and provisioning VMs from templates, improves thin-provisioned disk performance, and improves VMFS shared storage pool scalability. Notes: requires firmware from storage vendors (6 participating); supports block-based storage only - NFS is not yet supported in 4.1.
  • 59. Array Integration Primitives: Introduction. Atomic Test & Set (ATS): a mechanism to modify a disk sector, improving ESX performance when doing metadata updates. Clone Blocks/Full Copy/XCOPY: a full copy of blocks, with ESX guaranteed full space access to the blocks; the default offloaded clone size is 4 MB. Zero Blocks/Write Same: writes zeroes, addressing the issue of time falling behind in a VM when the guest operating system writes to previously unwritten regions of its virtual disk (http://kb.vmware.com/kb/1008284); this primitive also improves MSCS solutions in virtualized environments, where the virtual disk must be zeroed out. The default zeroing size is 1 MB.
  • 60. Hardware Acceleration. All vStorage support is grouped into one attribute, called "Hardware Acceleration". "Not Supported" implies one or more Hardware Acceleration primitives failed; "Unknown" implies the Hardware Acceleration primitives have not yet been attempted.
  • 61. VM Provisioning from Template with Full Copy. Benefits: reduce installation time; standardize to ensure efficient management, protection & control. Challenges: requires a full data copy - a 100 GB template (10 GB to copy) takes 5-20 minutes, and FT requires additional zeroing of blocks. Improved solution: use the array's native copy/clone & zeroing functions for up to a 10-20x speedup in provisioning time.
  • 62. Storage vMotion with Array Full Copy Function. Benefits: zero-downtime migration; eases array maintenance, tiering, load balancing, upgrades and space management. Challenges: performance impact on host, array and network; long migration time (0.5 - 2.5 hrs for a 100 GB VM); best practice has been to use it infrequently. Improved solution: use the array's native copy/clone functionality.
  • 63. VAAI Speeds Up Storage vMotion - Example. Without VAAI: 42:27 - 39:12 = 2 min 21 sec (141 seconds). With VAAI: 33:04 - 32:37 = 27 sec. 141 sec vs. 27 sec.
  • 64. Copying Data - Optimized Cloning with VAAI. VMFS directs the storage to move the data directly.
  • 66. Up to 95% reduction
  • 70. Storage: Scalable Lock Management. A number of VMFS operations cause the LUN to temporarily become locked for exclusive write use by one of the ESX nodes, including:
  • 71. Moving a VM with vMotion
  • 72. Creating a new VM or deploying a VM from a template
  • 73. Powering a VM on or off
  • 75. Creating or deleting a file, including snapshots
  • 76. A new VAAI feature, atomic_test_and_set, allows the ESX Server to offload the management of the required locks to the storage array and avoids locking the entire VMFS file system. Atomic Test & Set. Original file locking technique: acquire SCSI reservation; acquire file lock; release SCSI reservation; do work on the VMFS file/metadata; release the file lock. New file locking technique: acquire ATS lock; acquire file lock; release ATS lock; do work on the VMFS file/metadata; release the file lock. The main difference with the ATS lock is that it does not affect the other ESX hosts sharing the datastore.
  • 77. VMFS Scalability with Atomic Test and Set (ATS). Makes VMFS more scalable overall by offloading the block locking mechanism. The Atomic Test and Set (ATS) capability provides an alternative to SCSI reservations for protecting the VMFS metadata from being written to by two separate ESX Servers at one time. (Diagrams: normal VMware locking without ATS vs. enhanced VMware locking with ATS.)
  • 78. For more details on VAAI, the vSphere 4.1 documentation describes the use of this feature in the ESX Configuration Guide, Chapter 9 (pages 124-125), listed in the TOC as "Storage Hardware Acceleration". There are three settings under advanced settings: DataMover.HardwareAcceleratedMove (Full Copy), DataMover.HardwareAcceleratedInit (Write Same) and VMFS3.HardwareAcceleratedLocking (Atomic Test and Set). Additional collateral is planned for release after GA: frequently asked questions, datasheet or webpage content. Partners include: Dell/EQL, EMC, HDS, HP, IBM and NetApp.
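  If you prefer the command line over the vSphere Client, the same advanced settings can be read and toggled with esxcfg-advcfg on the host (or vicfg-advcfg remotely). A minimal sketch, assuming the setting names above, where 1 = enabled and 0 = disabled:

    # Check whether each VAAI primitive is currently enabled (1) or disabled (0)
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -g /DataMover/HardwareAcceleratedInit
    esxcfg-advcfg -g /VMFS3/HardwareAcceleratedLocking

    # Temporarily disable Full Copy (for example while troubleshooting), then re-enable it
    esxcfg-advcfg -s 0 /DataMover/HardwareAcceleratedMove
    esxcfg-advcfg -s 1 /DataMover/HardwareAcceleratedMove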
  • 79. Requirements. The VMFS data mover will not leverage hardware offloads, and will use software data movement instead, in the following cases: if the source and destination VMFS volumes have different block sizes (data movement falls back to the generic FSDM layer, which only does software data movement); if the source file type is RDM and the destination file type is non-RDM (a regular file); if the source VMDK type is eagerzeroedthick and the destination VMDK type is thin; if either the source or destination VMDK is any sort of sparse or hosted format; or if the logical address and/or transfer length of the requested operation are not aligned to the minimum alignment required by the storage device.
  • 80. VMFS Data Movement Caveats. VMware supports VAAI primitives on a VMFS volume with multiple LUNs/extents if they are all on the same array and the array supports offloading. VMware does not support VAAI primitives on a VMFS volume with multiple LUNs/extents spread across different arrays, even if all of the arrays support offloading; hardware cloning between arrays (even within the same VMFS volume) won't work, so it falls back to software data movement.
  • 81. vSphere 4.1 New Features: Management. Management-related features.
  • 82. Management - New Features Summary. vCenter: 32-bit to 64-bit data migration, enhanced scalability, faster response time. Update Manager. Host Profile enhancements. Orchestrator. Active Directory support (host and vMA). VMware Converter: Hyper-V import, Win08 R2 and Win7 conversion. Virtual Serial Port Concentrator.
  • 83. Scripting & Automation. Host Profiles, Orchestrator, vMA, CLI, PowerCLI.
  • 84. Summary: Host Profiles, VMware Orchestrator, VMware vMA, PowerShell, esxtop, vscsiStats, VMware Tools.
  • 85. Host Profiles Enhancements. Host Profiles now cover: Cisco support, PCI device ordering (support for selecting NICs), iSCSI support, admin password (setting the root password), and logging on the host. The log file is at C:\Documents and Settings\All Users\Application Data\VMware\VMware VirtualCenter\Logs\PyVmomiServer.log. Configuration not covered by Host Profiles: licensing, vDS policy configuration (although you can do non-policy vDS configuration), and iSCSI multipathing.
  • 86. Host Profiles Enhancements. Services covered in vSphere 4.1 compared with vSphere 4.0: Lbtd, Lsassd (part of AD - see the AD preso), Lwiod (part of AD), Netlogond (part of AD).
  • 87. Orchestrator Enhancements. Provides a client and server for 64-bit installations, with an optional 32-bit client; performance enhancements result from the 64-bit installation.
  • 88. VMware Tools Command Line Utility. This feature provides an alternative to the VMware Tools control panel (the GUI dialog box). The command-line toolbox allows administrators to automate the toolbox functionality by writing their own scripts.
  • 89. vSphere Management Assistant (vMA). A convenient place to perform administration: a virtual appliance packaged as an OVF, distributed, maintained and supported by VMware. It is not included with ESXi and must be downloaded separately. The environment has the following pre-installed: 64-bit Enterprise Linux OS, VMware Tools, Perl Toolkit, vSphere Command Line Interface (vCLI), JRE (to run applications built with the vSphere SDK), VI Fast Pass (authentication service for scripts) and VI Logger (log aggregator).
  • 90. vMA Improvements in 4.1. Improved authentication capability - Active Directory support. Transition from RHEL to CentOS. Security: the security hole that exposed clear-text passwords on ESX(i) or vCenter hosts when using vifpinit (vi-fastpass) is fixed. vMA as a netdump server: you can configure an ESXi host to send its network core dump to a remote server in case of a crash or panic; each ESXi host must be configured to write the core dump.
  • 91. For Tech Partners: VMware CIM API. What it is: for developers building management applications. With the VMware CIM APIs, developers can use standards-based, CIM-compliant applications to manage ESX/ESXi hosts. The VMware Common Information Model (CIM) APIs allow you to view VMs and resources using profiles defined by the Storage Management Initiative Specification (SMI-S), and to manage hosts using the System Management Architecture for Server Hardware (SMASH) standard; SMASH profiles allow CIM clients to monitor the system health of a managed server. What's new in 4.1: www.vmware.com/support/developer/cim-sdk/4.1/cim_410_releasenotes.html
  • 92. vCLI and PowerCLI: primary scripting interfaces. vCLI and PowerCLI are built on the same API as the vSphere Client, with the same authentication (e.g. Active Directory), roles and privileges, and event logging. The API is secure, optimized for remote environments, firewall-friendly and standards-based. (Diagram: vCLI, vSphere PowerCLI, other utility scripts and other languages all go through the vSphere SDK, which, like the vSphere Client, talks to the vSphere Web Services API.)
  • 93. vCLI for Administrative and Troubleshooting Tasks. Areas of functionality - Host configuration: NTP, SNMP, remote syslog, ESX conf, kernel modules, local users. Storage configuration: NAS, SAN, iSCSI, vmkfstools, storage pathing, VMFS volume management. Network configuration: vSwitches (standard and distributed), physical NICs, VMkernel NICs, DNS, routing. Miscellaneous: monitoring, file management, VM management, host backup, restore and update. vCLI can point to an ESXi host or to vCenter; vMA is a convenient way of accessing vCLI. Remote CLI commands now run faster in 4.1 relative to 4.0.
  • 94. Anatomy of a vCLI command. Run directly on an ESXi host: vicfg-nics --server <hostname> --user <username> --password <mypassword> <options>, where the hostname is the ESXi host and the user is defined locally on the ESXi host. Run through vCenter: vicfg-nics --server <hostname> --user <username> --password <mypassword> --vihost <hostname> <options>, where --server is the vCenter host, the user is defined in vCenter (AD), and --vihost names the target ESXi host.
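  As a concrete illustration, listing the physical NICs of one host both ways; the host names, user names and password are placeholders, and --username is used here as the long form of the user option:

    # Directly against the ESXi host, authenticating as a local user
    vicfg-nics --server esx01.example.com --username root --password 'MyPass' -l

    # Through vCenter, authenticating as a vCenter (AD) user and naming the target host
    vicfg-nics --server vc01.example.com --username administrator --password 'MyPass' --vihost esx01.example.com -l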
  • 95. Additional vCLI configuration commands in 4.1. Storage: esxcli swiscsi session - manage iSCSI sessions; esxcli swiscsi nic - manage iSCSI NICs; esxcli swiscsi vmknic - list VMkernel NICs available for binding to a particular iSCSI adapter; esxcli swiscsi vmnic - list available uplink adapters for use with a specified iSCSI adapter; esxcli vaai device - display information about devices claimed by the VMware VAAI (vStorage APIs for Array Integration) filter plugin; esxcli corestorage device - list devices or plugins, used in conjunction with hardware acceleration.
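  For example, to see which devices the VAAI filter plugin has claimed and how core storage sees them (a sketch; the "list" sub-command spelling is assumed here):

    # Devices claimed by the VAAI filter plugin
    esxcli vaai device list

    # All devices and plugins known to core storage, useful alongside the Hardware Acceleration attribute
    esxcli corestorage device list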
  • 96. Additional vCLI commands. Network: esxcli network - list active connections or list active ARP table entries; vicfg-authconfig --server=<ESXi_IP_Address> --username=root --password '' --authscheme AD --joindomain <ad_domain_name> --adusername=<ad_user_name> --adpassword=<ad_user_password>. Storage: NFS statistics are available in resxtop. VM: esxcli vms - forcibly stop VMs that do not respond to normal stop operations, using kill commands: # esxcli vms vm kill --type <kill_type> --world-id <ID>. Note: this is designed to kill VMs in a reliable way (not dependent on a well-behaving system), eliminating one of the most common reasons for wanting to use TSM.
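  Putting the two VM commands together, a typical sequence for a hung VM might look like the sketch below; the world ID comes from the list output, and the kill types are assumed to escalate from gentle to forceful (soft, hard, force):

    # Find the world ID of the stuck VM
    esxcli vms vm list

    # Try a graceful kill first; only escalate if the VM is still running
    esxcli vms vm kill --type soft --world-id 5588
    # esxcli vms vm kill --type hard --world-id 5588
    # esxcli vms vm kill --type force --world-id 5588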
  • 97. esxcli - New Namespaces. esxcli has three new namespaces: network, vaai and vms. Running esxcli with no arguments shows: Usage: esxcli [disp options] <namespace> <object> <command>; for esxcli help please run esxcli --help. Available namespaces: corestorage - VMware core storage commands; network - VMware networking commands; nmp - VMware Native Multipath Plugin (NMP), the VMware default implementation of the Pluggable Storage Architecture; swiscsi - VMware iSCSI commands; vaai - namespace containing VAAI commands; vms - limited operations on VMs.
  • 98. Control VM Operations. # esxcli vms vm shows: Usage: esxcli [disp options] vms vm <command>. Available commands: kill - used to forcibly kill VMs that are stuck and not responding to normal stop operations; list - list the VMs on this system (currently only running VMs are listed). Example output of esxcli vms vm list for the vSphere Management Assistant (vMA): World ID 5588, Process ID 27253, VMX Cartel ID 5587, UUID 42 01 a1 98 d6 65 6b e8-79 3b 2a 7c 9d 88 70 05, Display Name "vSphere Management Assistant (vMA)", Config File /vmfs/volumes/4b1e10ed-8ce9ce16-f692-00215e364468/vSphere Management Assistant (vM/vSphere Management Assistant (vM.vmx
  • 99. esxtop - Disk Devices View. Use the 'u' option to display 'Disk Devices'. NFS statistics can now be observed; here we are looking at throughput and latency stats for the devices.
  • 100. New VAAI Statistics in esxtop (1 of 2). There are new fields in esxtop which show VAAI statistics.
  • 101. Each of the three primitives has its own unique set of statistics.
  • 102. Toggle the VAAI fields ('O' and 'P') on to see VAAI-specific statistics. New VAAI Statistics in esxtop (2 of 2): Clone (Move) ops, VMFS Lock (ATS) ops, Zeroing (Init) ops, and their latencies. The way to track failures is via esxtop or resxtop: you'll see CLONE_F for clone failures and, similarly, ATS_F, ZERO_F and so on. esxtop - VM View: esxtop also provides a mechanism to view VM I/O & latency statistics, even for VMs that reside on NFS; the VM with GID 65 (SmallVMOnNAS) above resides on an NFS datastore.
  • 103. VSI. NFS I/O statistics are also available via the VSI nodes. For example, running # vsish and then /> cat /vmkModules/nfsclient/mnt/isos/properties returns the mount point information: server name rhtraining.vmware.com, server IP 10.21.64.206, server volume /mnt/repo/isos, UUID 4f125ca5-de4ee74d, socketSendSize 270336, socketReceiveSize 131072, reads 7, writes 0, readBytes 92160, writeBytes 0, readTime 404366, writeTime 0, aborts 0, active 0, readOnly 1, isMounted 1, isAccessible 1, unstableWrites 0, unstableNoCommit 0.
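  The same data can be pulled without entering the interactive vsish shell, which is handy for quick checks or scripts; a small sketch using the mount point from the example above:

    # Print the NFS client statistics for the 'isos' mount non-interactively
    vsish -e cat /vmkModules/nfsclient/mnt/isos/properties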
  • 104. vm-support enhancements. vm-support now enables users to run third-party scripts: add the scripts to the /etc/vmware/vm-support/command-files.d directory and run vm-support, and the results will be added to the vm-support archive. Each script that is run gets its own directory in the archive containing that script's output and log files; these directories are stored under the top-level directory vm-support-commands-output.
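  A minimal sketch of wiring in your own collector, assuming the directory behaves exactly as described above; the script name and its contents are invented for illustration, so check the vm-support documentation for the expected file format:

    # Drop a small collector script where vm-support will pick it up
    cat > /etc/vmware/vm-support/command-files.d/nfs-stats.sh << 'EOF'
    #!/bin/sh
    # Dump the NFS client mounts so they end up in the support bundle
    vsish -e ls /vmkModules/nfsclient/mnt/
    EOF
    chmod +x /etc/vmware/vm-support/command-files.d/nfs-stats.sh

    # Generate the bundle; the script's output lands under vm-support-commands-output/
    vm-support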
  • 105. PowerCLI. Feature highlights: easier to customize and extend PowerCLI, especially for reporting - output objects can be customized by adding extra properties; better readability and less typing in scripts based on Get-View - each output object has its associated view as a nested property, and less typing is required to call Get-View and convert between PowerCLI object IDs and managed object IDs; basic vDS support - moving VMs from/to a vDS, adding/removing hosts from/to a vDS; more reporting - new getter cmdlets, new properties added to existing output objects, improvements in Get-Stat; cmdlets for host HBAs; the PowerCLI Cmdlet Reference now documents all output types; cmdlets to control host routing tables; faster Datastore provider. http://blogs.vmware.com/vipowershell/2010/07/powercli-41-is-out.html
  • 106. If you are really, really curious... additional commands (not supported): http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
  • 108. vCenter improvements. Better load balancing with improved DRS/DPM algorithm effectiveness. Improved performance at higher vCenter inventory limits - up to 7x higher throughput and up to 75% reduced latency. Improved performance at higher cluster inventory limits - up to 3x higher throughput and up to 60% reduced latency. Faster vCenter startup - around 5 minutes for the maximum vCenter inventory size. Better vSphere Client responsiveness, quicker user interaction and faster user login. Faster host operations and VM operations on standalone hosts - up to 60% reduction in latency. Lower resource usage by vCenter agents - by up to 40%. Reduced VM group power-on latency by up to 25%. Faster VM recovery with HA - up to 60% reduction in total recovery time for 1.6x more VMs.
  • 110. vCenter 4.1 install. New option: managing the RAM of the JVM.
  • 111. vCenter Server: Changing JVM SizingThe same change should be visible by launching "Configure Tomcat" from the program menu (Start->Programs->VMware->VMware Tomcat).
  • 112. vCenter: Services in Windows. The following are not shown as services: Licence Reporting Manager.
  • 115. Remote Console to VM. Formally known as the Virtual Serial Port Concentrator.
  • 116. Overview. Many customers rely on managing physical hosts by connecting to the target machine over the serial port, and physical serial port concentrators are used by such admins to multiplex connections to multiple hosts. With VMs you lose this functionality and the ability to do remote management using scripted installs and management. vSphere 4.1 provides a suitable way to redirect a VM's serial port(s) over a network connection and supports a "virtual serial port concentrator" utility. Virtual Serial Port Concentrator: communicate between VMs and IP-enabled serial devices; connect to a VM's serial port over the network using telnet/ssh; keep this connection uninterrupted during vMotion and other similar events.
  • 117. Virtual Serial Port Concentrator. What it is: redirects VM serial ports over a standard network link; a vSPC aggregates traffic from multiple serial ports onto one management console and behaves similarly to physical serial port concentrators. Benefits: using a vSPC allows network connections to a VM's serial ports to migrate seamlessly when the VM is migrated using vMotion; management efficiencies; lower costs for multi-host management; enables 3rd-party concentrator integration if required.
  • 118. Example (using Avocent). The ACS 6000 Advanced Console Server runs as a vSPC. There is no physical or virtual serial port in the ACS6000 console server itself: the ACS6000 runs a telnet daemon (server) listening for connections coming from ESX, and ESX makes one telnet connection for each virtual serial port configured to send data to the ACS6000. The serial daemon implements the telnet server with support for all telnet extensions implemented by VMware.
  • 121. Configuring Virtual Ports on a VM. Enables two VMs, or a VM and a process on the host, to communicate as if they were physical machines connected by a serial cable; for example, this can be used for remote debugging of a VM. The connection can also go through a vSPC, which acts as a proxy.
  • 122. Configuring Virtual Ports on a VM. Example (for Avocent): type ACSID://ttySxx in the Port URI, where xx is between 1 and 48; it defines which virtual serial port on the ACS6000 console server this serial port will connect to (one VM per port, and the ACS6000 has only 48 ports). Type telnet://<IP of Avocent VM>:8801.
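  Behind this dialog, the settings end up as virtual serial port entries in the VM's .vmx file. The sketch below shows roughly what such entries look like for a network-backed serial port pointed at a concentrator; the option names are from memory and the values are illustrative, so treat the whole snippet as an assumption and verify it against a VM you have configured through the UI:

    serial0.present = "TRUE"
    serial0.fileType = "network"
    # Port URI from the dialog above - which ACS6000 virtual port this serial port maps to
    serial0.fileName = "ACSID://ttyS42"
    # Concentrator address - mirrors telnet://<IP of Avocent VM>:8801 from the example
    serial0.vspc = "telnet://192.168.10.50:8801"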
  • 124. Configure the VM to redirect console login. Check your system's serial support and that the operating system recognizes the serial ports in your hardware. Configure your /etc/inittab to support serial console logins by adding the following lines: # Run agetty on COM1/ttyS0  s0:2345:respawn:/sbin/agetty -L -f /etc/issueserial 9600 ttyS0 vt100
  • 125. Configure the VM to redirect console login. Activate the changes that you made in /etc/inittab with: # init q. If you want to be able to log in via the serial console as the root user, you will need to edit the /etc/securetty configuration file and add ttyS0 as an entry alongside console, vc/1 and vc/2.
  • 126. Configure the serial port as the system console. Use options in /etc/grub.conf to redirect console output to one of your serial ports; this enables you to see all of the boot-up and shutdown messages from your terminal. The text to add to the config file is highlighted in the screenshot (a typical example is sketched below).
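  Since the highlighted screenshot text does not survive in this transcript, here is a typical legacy-GRUB grub.conf fragment for a 9600-baud console on ttyS0; the kernel version and root device are placeholders:

    # /etc/grub.conf - send GRUB and kernel output to ttyS0 at 9600 baud
    serial --unit=0 --speed=9600
    terminal --timeout=10 serial console

    title Red Hat Enterprise Linux
            root (hd0,0)
            # console=tty0 keeps the local console; the last console= entry becomes the primary console
            kernel /vmlinuz-2.6.18-194.el5 ro root=/dev/VolGroup00/LogVol00 console=tty0 console=ttyS0,9600n8
            initrd /initrd-2.6.18-194.el5.img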
  • 127. Accessing the Serial Port of the Virtual Machine. Open a web connection to the Avocent ACS6000, click on the Ports folder and click Serial Ports. Based on the serial port connection configured in the virtual machine, you should see signals of CTS|DSR|CD|RI.
  • 128. Accessing the Serial Port of the Virtual Machine. Click the Serial Viewer link and a console will open.
  • 129. Enter the password of avocent and hit the Enter key to establish the connection. Performance Monitoring.
  • 130. UI > Performance > Advanced. vSphere 4.1 vs. vSphere 4.0: additional chart options in vSphere 4.1 around storage performance statistics - Datastore, Power, Storage adapter & Storage path.
  • 131. Performance Graphs. Additional performance graph views added to vSphere 4.1. Host: Datastore, Management Agent, Power, Storage Adapter, Storage Path. VM: Datastore, Power, Virtual Disk.
  • 132. Storage Statistics: vCenter & esxtop. Not available in this timeframe: aggregation at cluster level in vCenter (possible through the APIs). Network-based storage (NFS, iSCSI) I/O breakdown is still being researched. Not applicable to NFS, where the datastore is the equivalent: esxtop publishes throughput and latency per LUN, so if a datastore has only one LUN the LUN statistics equal the datastore statistics.
  • 133. Volume Stats for NFS Device
  • 139. Update Manager. A central, automated, actionable VI patch compliance management solution.
  • 140. Define, track, and enforce software update compliance for ESX hosts/clusters, 3rd party ESX extensions, Virtual Appliances, VMTools/VM Hardware, online*/offline VMs, templates
  • 142. Cluster level pre-remediation check analysis and report
  • 143. Framework to support 3rd-party IHV/ISV updates and customizations: mass install/update of EMC's PowerPath module
  • 144. Enhanced compatibility with DPM for cluster level patch operations
  • 145. Performance and scalability enhancements to match vCenter. Overview: vCenter Update Manager enables centralized, automated patch and version management.
  • 146. Define, track, and enforce software update compliance and support for :
  • 148. VMs
  • 150. 3rd Party ESX Modules
  • 152. Automate and generate reports using the Update Manager database views. (Diagram: vCenter Update Manager covers ESX/ESXi hosts and 3rd-party extensions, plus VMs and virtual appliances - VMware Tools and VM hardware - whether online/offline or templates.)
  • 153. Deployment Components. Update Manager components: 1. Update Manager Server + DB; 2. Update Manager VI Client plug-in; 3. Update Manager Download Service. (Diagram: vCenter Server, VI Client and Update Manager Server connected to the virtualized infrastructure and to external patch feeds.)
  • 154. New Features in 4.1. Update Manager now provides management of host upgrade packages. Provisioning, patching and upgrade support for third-party modules. Offline bundles. Recalled patches. Enhanced cluster operation. Better handling of low-bandwidth and high-latency networks. PowerCLI. Better support for virtual vCenter.
  • 155. Notifications. As we have already seen with the notification schedule, Update Manager 4.1 contacts VMware at regular intervals to download notifications about patch recalls, new fixes and alerts. If patches with problems or potential issues are released, these patches are recalled in the metadata and VUM marks them as recalled. If you try to install a recalled patch, Update Manager notifies you that the patch is recalled and does not install it on the host. If you have already installed such a patch, VUM notifies you that the recalled patch is installed on certain hosts, but does not remove the recalled patch from the host. Update Manager also deletes all recalled patches from the Update Manager patch repository. When a patch fixing the problem is released, Update Manager 4.1 downloads the new patch and prompts you to install it.
  • 156. NotificationsNotifications which Update Manager downloads are displayed on the Notifications tab of the Update Manager Administration view.An Alarm is Generated and an email sent if the Notification Check Schedule is configured
  • 157. Notifications - Patch Recall Details. Update Manager shows the patch as recalled.
  • 158. Notifications. Alarms are posted for recalled and fixed patches; recalled patches are represented by a flag.
  • 159. VUM 4.1 Feature - Notification Check Schedule. By default, Update Manager checks for notifications about patch recalls, patch fixes and alerts at certain time intervals. Edit Notifications to define the frequency (hourly, daily, weekly, monthly), the start time (minutes after the hour), the interval, and the email address of who to notify about recalled patches.
  • 160. VUM 4.1 Feature - ESX Host/Cluster Settings. When remediating objects in a cluster with Distributed Power Management (DPM), High Availability (HA) or Fault Tolerance (FT), you should temporarily disable these features for the entire cluster; VUM does not remediate hosts on which these features are enabled. When the update completes, VUM restores these features. These settings become the default failure response; you can specify different settings when you configure individual remediation tasks.
  • 161. VUM 4.1 Feature - ESX Host/Cluster Settings. Update Manager cannot remediate hosts where VMs have connected CD/DVD drives: CD/DVD drives that are connected to the VMs on a host might prevent the host from entering maintenance mode and interrupt remediation. Select "Temporarily disable any CD-ROMs that may prevent a host from entering maintenance mode".
  • 162. Baselines and Groups. Baselines may be upgrade, extension or patch baselines; they contain a collection of one or more patches, service packs and bug fixes, extensions, or upgrades. Baseline groups are assembled from existing baselines and might contain one upgrade baseline per type and one or more patch and extension baselines, or a combination of multiple patch and extension baselines. Preconfigured baselines: Hosts - 2 baselines; VM/VA - 6 baselines.
  • 163. Baselines and Groups. Update Manager 4.1 introduces a new Host Extension baseline. Host Extension baselines contain additional software for ESX/ESXi hosts; this additional software might be VMware software or third-party software.
  • 164. Patch Download Settings. Update Manager can download patches and extensions either from the Internet (vmware.com) or from a shared repository. A new feature of Update Manager 4.1 lets you import both VMware and third-party patches manually from a ZIP file, called an offline bundle: you download these patches from the Internet or copy them from a media drive, save them as offline bundle ZIP files on a local drive, and use Import Patches to upload them to the Update Manager repository.
  • 165. Patch Download Settings. Click Import Patches at the bottom of the Patch Download Sources pane, then browse to locate the ZIP file containing the patches you want to import into the Update Manager patch repository.
  • 166. Patch Download Settings. The patches are successfully imported into the Update Manager patch repository. Use the Search box to filter, e.g. ThirdParty; right-click a patch and select Show Patch Detail.
  • 167. VUM 4.1 Feature - Host Upgrade Releases. You can upgrade the hosts in your environment using Host Upgrade Release baselines, a new feature of Update Manager 4.1. This feature facilitates faster remediation of hosts by having the upgrade release media already uploaded to the VUM repository; previously, the media had to be uploaded for each remediation. To create a Host Upgrade Release baseline, download the host upgrade files from vmware.com and then upload them to the Update Manager repository. Each upgrade file that you upload contains information about the target version to which it will upgrade the host; Update Manager distinguishes the target release versions and combines the uploaded host upgrade files into host upgrade releases. A host upgrade release is a combination of host upgrade files which allows you to upgrade hosts to a particular release.
  • 168. VUM 4.1 Feature - Host Upgrade Releases. You cannot delete a host upgrade release if it is included in a baseline; first delete any baselines that include it. Update Manager 4.1 supports upgrades from ESX 3.0.x and later, and from ESXi 3.5 and later, to ESX 4.0.x and ESX 4.1. The remediation from ESX 4.0 to ESX 4.0.x is a patching operation, while the remediation from ESX 4.0.x to ESX 4.1 is considered an upgrade.
  • 169. VUM 4.1 Feature - Host Upgrade Releases. The upgrade files that you upload are ISO or ZIP files; the file type depends on the host type, host version and the upgrade that you want to perform. The table on this slide lists the types of upgrade files that you must upload for upgrading the ESX/ESXi hosts in your environment.
  • 170. VUM 4.1 Feature - Host Upgrade Releases. Depending on the files that you upload, host upgrade releases can be partial or complete. Partial upgrade releases do not contain all of the upgrade files required for an upgrade of both ESX and ESXi hosts; complete upgrade releases do. To upgrade all of the ESX/ESXi hosts in your vSphere environment to version 4.1, you must upload all of the files required for this upgrade (three ZIP files and one ISO file): esx-DVD-4.1.0-build_number.iso for ESX 3.x hosts; upgrade-from-ESXi3.5-to-4.1.0.build_number.zip for ESXi 3.x hosts; upgrade-from-ESX-4.0-to-4.1.0-0.0.build_number-release.zip for ESX 4.0.x hosts; upgrade-from-ESXi4.0-to-4.1.0-0.0.build_number-release.zip for ESXi 4.0.x hosts.
  • 171. VUM 4.1 Feature - Host Upgrade Releases. You can upgrade multiple ESX/ESXi hosts of different versions simultaneously if you import a complete release bundle. You import and manage host upgrade files from the Host Upgrade Releases tab of the Update Manager Administration view.
  • 172. VUM 4.1 Feature - Host Upgrade Releases. Wait until the file upload completes; the uploaded host upgrade release files appear in the Imported Upgrade Releases pane as an upgrade release.
  • 173. VUM 4.1 Feature - Host Upgrade Releases. Host upgrade releases are stored in the <patchStore> location specified in the vci-integrity.xml file, under the host_upgrade_packages folder. You can use the Update Manager database view VUMV_HOST_UPGRADES to locate them.
  • 174. Patch Repository. Patch and extension metadata is kept in the Update Manager patch repository. You can use the repository to manage patches and extensions, check on new patches and extensions, view patch and extension details, view which baseline a patch or extension is included in, view recalled patches, and import patches.
  • 175. Import Offline Patch to Repository. From the patch repository you can include available, recently downloaded patches and extensions in a baseline you select. Instead of using a shared repository or the Internet as a patch download source, you can import patches manually by using an offline bundle.
  • 181. Converter 4.2 (not 4.1). Physical-to-VM conversion support for Linux sources, including: Red Hat Enterprise Linux 2.1, 3.0, 4.0 and 5.0; SUSE Linux Enterprise Server 8.0, 9.0, 10.0 and 11.0; Ubuntu 5.x, 6.x, 7.x and 8.x. Hot cloning improvements to clone any incremental changes to the physical machine during the P2V conversion process. Support for converting new third-party image formats, including Parallels Desktop VMs and newer versions of Symantec, Acronis and StorageCraft. Workflow automation enhancements: automatic source shutdown, automatic start-up of the destination VM, as well as shutting down one or more services at the source and starting up selected services at the destination. Destination disk selection and the ability to specify how the volumes are laid out in the new destination VM. Destination VM configuration, including CPU, memory and disk controller type. Support for importing powered-off Microsoft Hyper-V R1 and Hyper-V R2 VMs. Support for importing Windows 7 sources. Ability to throttle the data transfer from source to destination based on network bandwidth or CPU.
  • 182. Converter - Hyper-V Import. Microsoft Hyper-V import: Hyper-V can be compared to VMware Server - it runs on top of an operating system and by default is only manageable locally. Up to now, import went through a P2V inside the VM; Converter now imports VMs from Hyper-V as a V2V. It collects information about the VMs from the Hyper-V server without going through the Hyper-V administration tools, using default Windows methods to access the VM. Requirements: Converter needs administrator credentials to import a VM; Hyper-V must be able to create a network connection to the destination ESX host; the VM to be imported must be powered off; and the VM's OS must be a guest OS supported by vSphere.
  • 184. Support Info. VMware Converter plug-in: vSphere 4.1 and its updates/patches are the last releases for the VMware Converter plug-in for vSphere Client; we will continue to update and support the free Converter Standalone product. VMware Guided Consolidation: vSphere 4.1 and its update/patch are the last major releases for VMware Guided Consolidation. VMware Update Manager - guest OS patching: Update Manager 4.1 and its update are the last releases to support scanning and remediation of patches for Windows and Linux guest OS; the ability to perform VM operations such as upgrading VMware Tools and VM hardware will continue to be supported and enhanced. VMware Consolidated Backup 1.5 U2: VMware has extended the end-of-availability timeline for VCB and added VCB support for vSphere 4.1; VMware supports VCB 1.5 U2 for vSphere 4.1 and its update/patch through the end of their lifecycles. VMware Host Update utility: no longer used - use Update Manager or the CLI to patch ESX. vSphere Client is no longer bundled with ESX/ESXi, reducing size by around 160 MB.
  • 185. Support Info. VMI paravirtualized guest OS support: vSphere 4.1 is the last release to support the VMI guest OS paravirtualization interface; for information about migrating VMs that are enabled for VMI so that they can run on future vSphere releases, see Knowledge Base article 1013842. vSphere Web Access: support is now on a best-effort basis. Linux guest OS customization: vSphere 4.1 is the last release to support customization for these Linux guest OS: RedHat Enterprise Linux (AS/ES) 2.1, RedHat Desktop 3, RedHat Enterprise Linux (AS/ES) 3.0, SUSE Linux Enterprise Server 8, Ubuntu 8.04, Ubuntu 8.10, Debian 4.0, Debian 5.0. Microsoft Clustering with Windows 2000 is not supported in vSphere 4.1 (see the Microsoft website for additional information); likely due to MSCS with Win2K EOL - need to double confirm.
  • 186. vCenter - Migration to 64-bit. vCenter MUST be hosted on a 64-bit Windows OS; a 32-bit OS is NOT supported as a host OS for vCenter in vSphere 4.1. Why the change? Scalability is restricted by the x86 32-bit virtual address space, and moving to 64-bit eliminates this problem; it also reduces dev and QA cycles and resources (faster time to market). Two options: vCenter in a VM running a 64-bit Windows OS, or vCenter installed on a 64-bit Windows OS. Best practice: use option 1. http://kb.vmware.com/kb/1021635
  • 187. Data Migration Tool - What is backed up? vCenter: LDAP data; configuration; port settings (HTTP/S ports, heartbeat port, Web services HTTP/S ports, LDAP / LDAP SSL ports); certificates (SSL folder); database (bundled SQL Server Express only); install data (license folder).
  • 188. Data Migration Tool - Steps to Backup the ConfigurationExample of the start of the backup.bat command running
  • 189. Compatibility. vSphere Client compatibility: you can use the "same" client to access 4.1, 4.0 and 3.5. vCenter Linked Mode: vCenter 4.1 and 4.0 can co-exist in Linked Mode; after both versions of the vSphere Client are installed, you can access vCenter linked objects with either client. For Linked Mode environments with vCenter 4.0 and vCenter 4.1, you must have vSphere Client 4.0 Update 1 and vSphere Client 4.1. MS SQL Server: unchanged - 4.1, 4.0 U2, 4.0 U1 and 4.0 have identical support, and a 32-bit DB is also supported.
  • 190. Compatibility. vCenter 4.0 does not support ESX 4.1: upgrade vCenter before upgrading ESX. vCenter 4.1 does not support ESX 2.5 (ESX 2.5 has reached limited/non-support status). vCenter 4.1 adds support for ESX 3.0.3 U1. Storage: no change in the VMFS format. Network: Distributed Switch 4.1 needs ESX 4.1. Quiz: how to upgrade?
  • 191. Upgrading Distributed Switch. Source: manual - ESX Configuration Guide, see "Upgrade a vDS to a Newer Version".
  • 192. Compatibility. View: you need to upgrade to View 4.5; View 4.0 Composer is a 32-bit application, while vCenter 4.1 is 64-bit. SRM: you need to upgrade to SRM 4.1; SRM 4.1 supports vSphere 4.0 U1, 4.0 U2 and 3.5 U5, needs vCenter 4.1, needs a 64-bit OS, and adds support for Win08 R2. CapacityIQ: CapacityIQ 1.0.3 (the current shipping release) is not known to have any issues with vCenter 4.1, but you need to use the "-NoVersionCheck" flag when registering CIQ with it; CapacityIQ 1.0.4 will be released soon to address that.
  • 193. Compatibility: Win08 R2. This is for R2, not R1, and it is about running the VMware products on Windows, not about hosting Win08 as a guest OS (Win08 as a guest is supported on 4.0). Minimum vSphere product versions to run on Windows 2008 R2: vSphere Client 4.1; vCenter 4.1; Guest OS Customization for 4.0 and 4.1; vCenter Update Manager as its server (it is not yet supported for patching Win08 R2, and Update Manager also does not patch Win7); vCenter Converter; VMware Orchestrator (vCO) client and server 4.1; SRM 4.1.
  • 194. Known Issues. Full list: https://www.vmware.com/support/vsphere4/doc/vsp_esxi41_vc41_rel_notes.html#sdk. IPv6 is disabled by default when installing ESXi 4.1. Hardware iSCSI: Broadcom hardware iSCSI does not support jumbo frames or IPv6, and dependent hardware iSCSI does not support iSCSI access to the same LUN when a host uses dependent and independent hardware iSCSI adapters simultaneously. VM MAC address conflicts: each vCenter system has a vCenter instance ID, a number between 0 and 63 that is randomly generated at installation time but can be reconfigured after installation. vCenter uses the instance ID to generate MAC addresses and UUIDs for VMs; if two vCenter systems have the same instance ID, they might generate identical MAC addresses for VMs, which can cause conflicts if the VMs are on the same network, leading to packet loss and other problems.
  • 195. Thank You. I'm sure you are tired too 
  • 196. Useful references:
http://vsphere-land.com/news/tidbits-on-the-new-vsphere-41-release.html
http://www.petri.co.il/virtualization.htm
http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
http://www.petri.co.il/vmware-data-recovery-backup-and-restore.htm
http://www.delltechcenter.com/page/VMware+Tech
http://www.kendrickcoleman.com/index.php?/Tech-Blog/vm-advanced-iso-free-tools-for-advanced-tasks.html
http://www.ntpro.nl/blog/archives/1461-Storage-Protocol-Choices-Storage-Best-Practices-for-vSphere.html
http://www.ntpro.nl/blog/archives/1539-vSphere-4.1-Virtual-Serial-Port-Concentrator.html
http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html
http://www.virtuallyghetto.com/2010/07/script-automate-vaai-configurations-in.html
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1516821,00.html
http://vmware-land.com/esxcfg-help.html
http://virtualizationreview.com/blogs/everyday-virtualization/2010/07/esxi-hosts-ad-integrated-security-gotcha.aspx
http://www.MS.com/licensing/about-licensing/client-access-license.aspx#tab=2
http://www.MSvolumelicensing.com/userights/ProductPage.aspx?pid=348