vSphere 4.1: Delta to 4.0
Tech Sharing for Partners
Iwan ‘e1’ Rahabok, Senior Systems Consultant
e1@vmware.com | virtual-red-dot.blogspot.com | tinyurl.com/SGP-User-Group | facebook.com/e1ang
August 2010
Audience Assumption
This is a level 200 - 300 presentation. It assumes:
Good understanding of vCenter 4, ESX 4, ESXi 4. Preferably hands-on. We will only cover the delta between 4.1 and 4.0.
Overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShield Zones, etc.
Good understanding of related storage, server, network technology.
Target audience
VMware Specialist: SE + Delivery from partners
Agenda
New features: Server, Storage, Network, Management
Upgrade
4.1 New Feature (over 4.0, not 3.5): Server
4.1 New Feature (over 4.0, not 3.5): Server
4.1 New Feature (over 4.0, not 3.5): Storage
4.1 New Feature (over 4.0, not 3.5): Network
4.1 New Feature: Management
Builds:
ESX build 260247
VC build 258902
Some stats:
4000 development weeks were spent to get to FC
5100 QA weeks were spent to get to FC
872 beta customers downloaded and tried it out
2012 servers, 2277 storage arrays, and 2170 IO devices are already on the HCL
Consulting Services: Kit
The vSphere Fundamentals services kit includes core services enablement materials for vSphere Jumpstarts, Upgrades, Converter/P2V and PoCs. The update reflects what’s new in vSphere 4.1, including new resource limits, memory compression, Storage IO Control, vNetwork Traffic Management, and vSphere Active Directory Integration.
The kit is intended for use by PSO Consultants, TAMs, and SEs to help with delivering services engagements, PoCs, or knowledge transfer sessions with customers.
Located at Partner Central – Services IP Assets: https://na6.salesforce.com/sfc/#version?selectedDocumentId=069800000000SSi
For delivery partners: please download this.
4.1 New Features: Server
PXE Boot Retry
Virtual Machine -> Edit Settings -> Options -> Boot Options
Failed Boot Recovery is disabled by default.
Enable it and set the time after which the boot is automatically retried (X seconds).
Wide NUMA Support
Wide VM
A wide-VM is defined as a VM that has more vCPUs than the available cores on a NUMA node, e.g. a 5-vCPU VM in a quad-core server.
Only the cores count; hyperthreading threads don’t.
ESX 4.1 scheduler introduces wide-VM NUMA support
Improves memory locality for memory-intensive workloads. Based on testing with micro-benchmarks, the performance benefit can be up to 11–17%.
How it works
ESX 4.1 allows wide-VMs to take advantage of NUMA management. NUMA management means that a VM is assigned a home node where memory is allocated and vCPUs are scheduled. By scheduling vCPUs on a NUMA node where memory is allocated, the memory accesses become local, which is faster than remote accesses.
ESXi
Enhancements to ESXi. Not applicable to ESX.
Transitioning to ESXi
ESXi is our architecture going forward.
Moving toward ESXi
Permalink to: VMware ESX and ESXi 4.1 Comparison
(Diagram comparing “Classic” VMware ESX and VMware ESXi: the Service Console (COS) with management agents, hardware agents, infrastructure service agents, and commands for configuration and diagnostics is replaced by agentless vSphere API-based management, agentless CIM-based hardware monitoring, native agents for NTP/Syslog/SNMP, vCLI/PowerCLI, and a local support console.)
Software Inventory – Connected to ESXi/ESX
Before vs. from vSphere 4.1: enumerate instances of CIM_SoftwareIdentity.
The enhanced CIM provider now displays greater detail on installed software bundles.
Software Inventory – Connected to vCenter
Before vs. from vSphere 4.1: enumerate instances of CIM_SoftwareIdentity.
The enhanced CIM provider now displays greater detail on installed software bundles.
Additional Deployment Option
Boot From SAN
Fully supported in ESXi 4.1; was only experimentally supported in ESXi 4.0.
Boot from SAN is supported for FC, iSCSI, and FCoE.
ESX and ESXi have different requirements:
iBFT (iSCSI Boot Firmware Table) required for iSCSI boot. The host must have an iSCSI boot capable NIC that supports the iSCSI iBFT format. iBFT is a method of communicating parameters about the iSCSI boot device to an OS.
Additional Deployment Option
Scripted Installation
Numerous choices for installation:
Installer booted from: CD-ROM (default), Preboot Execution Environment (PXE)
ESXi installation image on: CD-ROM (default), HTTP/S, FTP, NFS
Script can be stored and accessed: within the ESXi installer ramdisk, on the installation CD-ROM, or via HTTP/HTTPS, FTP, NFS
Config script (“ks.cfg”) can include: preinstall, postinstall, and first-boot sections
Cannot use scripted installation to install to a USB device.
PXE Boot
Requirements
PXE-capable NIC.
DHCP Server (IPv4). Use an existing one.
Media depot + TFTP server + gPXE
A server hosting the entire content of the ESXi media. Protocol: HTTP/HTTPS, FTP, or NFS. OS: Windows/Linux server.
Info
We recommend the method that uses gPXE. Otherwise, you might experience issues while booting the ESXi installer on a heavily loaded network.
TFTP is a lightweight version of the FTP service, typically used only for network booting systems or loading firmware on network devices such as routers.
PXE Boot
PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over the network.
How it works
The host makes a DHCP request to configure its NIC, then downloads and executes a kernel and support files.
PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD.
Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media must be specified.
Additional Deployment Option
Sample ks.cfg file
# Accept the EULA (End User Licence Agreement)
vmaccepteula
# Set the root password to vmware123
rootpw vmware123
# Install the ESXi image from CD-ROM
install cdrom
# Auto-partition the first disk; if a VMFS exists it will be overwritten
autopart --firstdisk --overwritevmfs
# Create a partition called Foobar
# Partition the disk identified as mpx.vmhba1:C0:T1:L0 and let it grow to a max size of 4000
partition Foobar --ondisk=mpx.vmhba1:C0:T1:L0 --grow --maxsize=4000
# Set up the management network on vmnic0 using DHCP
network --bootproto=dhcp --device=vmnic0 --addvmportgroup=0
%firstboot --level=90.1 --unsupported --interpreter=busybox
# On this first boot, save the current date to a temporary file
date > /tmp/foo
# Mount an NFS share and put it at /vmfs/volumes/www
esxcfg-nas -add -host 10.20.118.5 -share /var/www www
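To actually use a script like this, the installer is pointed at it with a ks= boot option; a minimal sketch (the web server address and path are made-up examples, and the option forms are assumed from the ESXi 4.1 scripted-install documentation):
# appended to the ESXi installer boot options at the boot prompt
ks=http://192.168.1.10/ks.cfg
# other commonly documented forms
ks=cdrom:/ks.cfg
ks=usb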
Full Support of Tech Support Mode
2 types:
Remote: SSH
Local: Direct Console
Full Support of Tech Support Mode
Press Enter to toggle Disable/Enable. That’s it!
Timeout automatically disables TSM (local and remote); running sessions are not terminated.
All commands issued in Tech Support Mode are sent to syslog.
Full Support of Tech Support Mode
Recommended uses: support, troubleshooting, and break-fix; scripted deployment preinstall, postinstall, and first-boot scripts.
Discouraged uses: any other scripts; running commands/scripts periodically (cron jobs); leaving it open for routine access or a permanent SSH connection.
Admin will be notified when active.
Full Support of Tech Support Mode
We can also enable it via the GUI: Enable/Disable in vCenter or the DCUI.
Security Banner
A message that is displayed on the direct console Welcome screen.
Total Lockdown
Total Lockdown
Ability to totally control local access via vCenter:
DCUI
Lockdown Mode (disallows all access except root on the DCUI)
Tech Support Mode (local and remote)
If all are configured, then no local activity is possible (except pulling the plug).
Additional commands in Tech Support Mode
vscsiStats is now available in the console. Output is raw data for a histogram; use a spreadsheet to plot the histogram.
Some use cases:
Identify whether IOs are sequential or random
Optimizing for IO sizes
Checking for disk misalignment
Looking at storage latency in more detail
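A minimal sketch of a vscsiStats session in Tech Support Mode; the world-group ID 12345 is a made-up example, and the flags are assumed from the tool’s standard usage:
# list VMs and their world-group IDs
vscsiStats -l
# start collecting histogram data for one VM
vscsiStats -s -w 12345
# after the workload has run, print IO-length and latency histograms (paste into a spreadsheet)
vscsiStats -p ioLength -w 12345
vscsiStats -p latency -w 12345
# stop collection
vscsiStats -x -w 12345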
Additional commands in Tech Support Mode
Additional commands for troubleshooting:
nc (netcat) – http://en.wikipedia.org/wiki/Netcat
tcpdump-uw – http://en.wikipedia.org/wiki/Tcpdump
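A couple of illustrative invocations (the IP address, port and vmk0 interface are assumptions, not from the slide):
# check whether a TCP port on another host is reachable
nc -z 10.0.0.50 902
# capture management-network traffic to/from that host
tcpdump-uw -i vmk0 -n host 10.0.0.50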
More ESXi Services listed
More services are now shown in the GUI, for ease of control. For example, if SSH is not running, you can turn it on from the GUI.
(Screenshots: ESXi 4.0 vs ESXi 4.1)
ESXi Diagnostics and Troubleshooting
During normal operations (remote access): vCLI, vCenter, vSphere APIs
If things go wrong (local access): DCUI for misconfigs / restarting mgmt agents; TSM for advanced troubleshooting (GSS)
Common Enhancements for both ESX and ESXi
64-bit User World
Running VMs with very large memory footprints implies that we need a large address space for the VMX. 32-bit user worlds (VMX32) do not have sufficient address space for VMs with large memory; 64-bit user worlds overcome this limitation.
NFS
The number of NFS volumes supported is increased from 8 to 64.
Fibre Channel
End-to-end support for 8 Gb (HBA, switch & array).
VMFS
Version changed to 3.46. No customer-visible changes. Changes relate to algorithms in the vmfs3 driver to handle the new VMware APIs for Array Integration (VAAI).
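For reference, NFS volumes can be listed and mounted from the console with esxcfg-nas, the same command used in the sample ks.cfg earlier; the server and share below are taken from that sample:
# list currently mounted NFS volumes (up to 64 in vSphere 4.1)
esxcfg-nas -l
# mount an NFS export as a datastore labelled "www"
esxcfg-nas -a -o 10.20.118.5 -s /var/www www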
Common Enhancements for both ESX and ESXi
VMkernel TCP/IP Stack Upgrade
Upgraded to a version based on BSD 7.1. Result: improved FT logging, vMotion and NFS client performance.
Pluggable Storage Architecture (PSA)
New naming convention.
New filter plugins to support VAAI (vStorage APIs for Array Integration).
New PSPs (Path Selection Policies) for ALUA arrays.
New PSP from Dell for the EqualLogic arrays.
USB Pass-through
New Features for both ESX/ESXi
USB Devices
2 steps: Add USB Controller, then Add USB Devices
USB Devices
Only devices listed in the manual are supported: mostly ISV licence dongles and a few external USB drives. Limited list of devices for now.
Example 1
After vMotion, the VM will be on another (remote) ESXi host; inter-ESXi communication for the USB device uses the management network (ESXi has no SC network).
You cannot multi-select devices at this stage – add them one by one.
Source: http://vstorage.wordpress.com/2010/07/15/usb-passthrough-in-vsphere-4-1/
From the source: “I have tested numerous brands of USB mass storage devices (Kingston, Sandisk, Lexar, Imation) as well as a couple of security dongles and they all work well.”
Example 2: adding a UPS
Source: http://vninja.net/virtualization/using-usb-pass-through-in-vsphere-4-1/
USB Devices: Supported Devices
USB Devices
Up to 20 devices per VM. Up to 20 devices per ESX host.
1 device can only be owned by 1 VM at a given time. No sharing.
Supported: vMotion (communication via the management network), DRS.
Unsupported: DPM (DPM is not aware of the device and may power the host off, which may cause loss of data – so disable DRS for this VM so it stays on this host only), Fault Tolerance.
Design consideration: take note of the situation when the ESX host is not available (planned or unplanned downtime).
MS AD Integration
New Features for both ESX/ESXi
AD Service
Provides authentication for all local services: vSphere Client, other access based on the vSphere API, DCUI, Tech Support Mode (local and remote).
Has nominal AD group functionality: members of the “ESX Admins” AD group have Administrative privilege.
Administrative privilege includes: full Administrator role in vSphere Client and vSphere API clients, DCUI access, Tech Support Mode access (local and remote).
The Likewise Agent
ESX uses an agent from Likewise to connect to MS AD and to authenticate users with their domain credentials. The agent integrates with the VMkernel to implement the mapping for applications such as the logon process (/bin/login), which uses a pluggable authentication module (PAM). As such, the agent acts as an LDAP client for authorization (join domain) and as a Kerberos client for authentication (verify users).
The vMA appliance also uses an agent from Likewise.
ESX and vMA use different versions of the Likewise agent to connect to the Domain Controller: ESX uses version 5.3 whereas vMA uses version 5.1.
Joining AD: Step 1
Joining AD: Step 2
1. Select “AD”
2. Click “Join Domain”
3. Join the domain. Use the full domain name (the screenshot example uses 123.com).
AD Service
A third method for joining ESX/ESXi hosts and enabling Authentication Services to utilize AD is to configure it through Host Profiles.
AD Likewise Daemons on ESX
lwiod is the Likewise I/O Manager service - I/O services for communication. Launched from /etc/init.d/lwiod script.
netlogond is the Likewise Site Affinity service - detects optimal AD domain controller, global catalogue and data caches. Launched from /etc/init.d/netlogond script.
lsassd is the Likewise Identity & Authentication service. It does authentication, caching and idmap lookups. This daemon depends on the other two daemons running. Launched from /etc/init.d/lsassd script.
root     18015     1  0 Dec08 ?     00:00:00 /sbin/lsassd --start-as-daemon
root     31944     1  0 Dec08 ?     00:00:00 /sbin/lwiod --start-as-daemon
root     31982     1  0 Dec08 ?     00:00:02 /sbin/netlogond --start-as-daemon
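A quick way to check the three daemons from the service console – a minimal sketch using the init scripts named above and standard ps/grep:
# confirm all three Likewise daemons are running
ps -ef | grep -E 'lsassd|lwiod|netlogond' | grep -v grep
# restart the authentication daemon if needed (it depends on the other two)
/etc/init.d/lsassd restart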
ESX Firewall Requirements for AD
Certain ports in the SC are automatically opened in the Firewall Configuration to facilitate AD. Not applicable to ESXi.
(Screenshots: Before / After)
Time Sync Requirement for AD
Time must be in sync between the ESX/ESXi server and the AD server. For the Likewise agent to communicate over Kerberos with the domain controller, the clock of the client must be within the domain controller's maximum clock skew, which is 300 seconds (5 minutes) by default.
The recommendation is that they share the same NTP server.
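On classic ESX, where the service console is RHEL-based, keeping the host on the same time source as the domain controller can be sketched as below (the NTP server name is a placeholder; on ESXi, use the Time Configuration screen in the vSphere Client instead):
# /etc/ntp.conf on the ESX service console - same source as the AD domain controller
server ntp.corp.example.com
# restart and verify
service ntpd restart
ntpq -p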
vSphere Client
Now, when assigning permissions to users/groups, the list of users and groups managed by AD can be browsed by selecting the Domain.
Info in AD
The host should also be visible on the Domain Controller in the AD Computers objects listing.
Looking at the ESX Computer Properties shows a Name of RHEL (as that is the Service Console on the ESX) and a Service Pack of ‘Likewise Identity 5.3.0’.
Memory Compression
New Features for both ESX/ESXi
Memory Compression
VMkernel implements a per-VM compression cache to store compressed guest pages. When a guest page (4 KB) needs to be swapped, VMkernel will first try to compress the page. If the page can be compressed to 2 KB or less, it is stored in the per-VM compression cache; otherwise it is swapped out to disk. If a compressed page is accessed again by the guest, it is decompressed online.
Changing the value of cache size
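The cache size is controlled through the host’s advanced memory settings; a hedged sketch using esxcfg-advcfg, assuming the Mem.MemZipEnable / Mem.MemZipMaxPct options shown on the advanced-settings screen:
# check whether memory compression is enabled (1 = enabled)
esxcfg-advcfg -g /Mem/MemZipEnable
# cap the per-VM compression cache at 10% of VM memory (the 4.1 default)
esxcfg-advcfg -s 10 /Mem/MemZipMaxPct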
Virtual Machine Memory Compression
Virtual Machine -> Resource Allocation
Per-VM statistic showing compressed memory
Monitoring Compression
3 new counters introduced for monitoring. Host level, not VM level.
Power Management
Power consumption chart
Per ESX host, not per cluster. Needs hardware integration; different HW makers expose different info.
Performance Graphs – Power Consumption
We can now track the power consumption of VMs in real time.
Enabled through Software Settings -> Advanced Settings -> Power -> Power.ChargeVMs
Host power consumption
In some situations, you may need to edit /usr/share/sensors/vmware to get support for the host; different HW makers have different APIs.
VM power consumption
Experimental. Off by default.
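The per-VM power statistic is toggled through the same advanced-settings tree; a minimal sketch (the option path follows the Power.ChargeVMs name on the slide, and 1 is assumed to mean enabled):
# enable the experimental per-VM power accounting
esxcfg-advcfg -s 1 /Power/ChargeVMs
# confirm the current value
esxcfg-advcfg -g /Power/ChargeVMs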
ESX
Features only for ESX (not ESXi)
ESX: Service Console Firewall
Changes in ESX 4.1
ESX 4.1 introduces these additional configuration files, located in /etc/vmware/firewall/chains:
usercustom.xml
userdefault.xml
Relationship between the files: “user” overwrites. The default files custom.xml and default.xml are overridden by usercustom.xml and userdefault.xml.
All configuration is saved in usercustom.xml and userdefault.xml.
Copy the original custom.xml and default.xml files and use them as templates for usercustom.xml and userdefault.xml.
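A hedged sketch of seeding the user-editable files as suggested above (assuming custom.xml and default.xml live in the same chains directory), plus the classic esxcfg-firewall command for an ad-hoc change; the port and service name are made-up examples:
# seed the user files from the shipped templates
cp /etc/vmware/firewall/chains/custom.xml /etc/vmware/firewall/chains/usercustom.xml
cp /etc/vmware/firewall/chains/default.xml /etc/vmware/firewall/chains/userdefault.xml
# ad-hoc alternative: open an inbound TCP port in the Service Console firewall
esxcfg-firewall --openPort 8042,tcp,in,exampleAgent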
Cluster
HA, FT, DRS & DPM
Availability Feature Summary
HA and DRS Cluster Limitations
High Availability (HA) Diagnostic and Reliability Improvements
FT Enhancements
vMotion Enhancements (Performance, Usability, Enhanced Feature Compatibility)
VM-host Affinity (DRS)
DPM Enhancements
Data Recovery Enhancements
DRS: more HA-awareness
vSphere 4.1 adds logic to prevent an imbalance that may not be good from an HA point of view.
Example
20 small VMs and 2 very large VMs, on 2 ESXi hosts. The 2 large VMs collectively have the same workload as the 20 small VMs.
vSphere 4.0 may put the 20 small VMs on Host A and the 2 very large VMs on Host B; from an HA point of view, this creates risk when Host A fails.
vSphere 4.1 will try to balance the number of VMs.
HA and DRS Cluster Improvements
Increased cluster limits; cluster limits are now unified for HA and DRS clusters.
Increased limits for VMs/host and VMs/cluster
Cluster limits for HA and DRS:
32 hosts/cluster
320 VMs/host (regardless of # of hosts/cluster)
3000 VMs/cluster
Note that these limits also apply to post-failover scenarios. Be sure that these limits will not be violated even after the maximum configured number of host failovers.
HA and DRS Cluster Limit
5-host cluster, tolerating 1 host failure. vSphere 4.1 supports 320 VMs/host.
Supports 320x5 VMs/cluster?  NO
Cluster can only support 320x4 VMs.
5-host cluster, tolerating 2 host failures. Supports 320x5 VMs/cluster? NO
Cluster can only support 320x3 VMs.
HA Diagnostic and Reliability Improvements
HA Healthcheck Status
HA provides an ongoing healthcheck facility to ensure that the required cluster configuration is met at all times. Deviations result in an event or alarm on the cluster.
Improved HA-DRS interoperability during HA failover
DRS will perform vMotion to free up contiguous resources (i.e. on one host) so that HA can place a VM that needs to be restarted.
HA Operational Status
Displays more information about the current HA operational status, including the specific status and errors for each host in the HA cluster. It shows if the host is Primary or Secondary!
HA Operational StatusJust another example 
HA: Application Awareness
Application Monitoring can restart a VM if the heartbeats for an application it is running are not received.
APIs are exposed for 3rd-party application developers.
Application Monitoring works much the same way as VM Monitoring: if the heartbeats for an application are not received for a specified time via VMware Tools, its VM is restarted.
(Screenshots: ESXi 4.0 vs ESXi 4.1)
Fault Tolerance
FT Enhancements
DRS
FT is fully integrated with DRS: DRS load-balances FT Primary and Secondary VMs. EVC is required.
Versioning control lifts the requirement on ESX build consistency: the Primary VM can run on a host with a different build # than the Secondary VM.
Events for Primary VM vs. Secondary VM are differentiated: events are logged/stored differently.
(Diagram: FT Primary VM and FT Secondary VM in a Resource Pool)
No Data-Loss Guarantee
vLockstep: the Secondary runs 1 CPU step behind.
Primary/backup approach
A common approach to implementing fault-tolerant servers is the primary/backup approach: the execution of a primary server is replicated by a backup server. Given that the primary and backup servers execute identically, the backup server can take over serving client requests without any interruption or loss of state if the primary server fails.
New versioning feature
FT now has a version number to determine compatibility.
The restriction of having identical ESX build #s has been lifted; FT now checks its own version number to determine compatibility.
Future versions might be compatible with older ones, but possibly not vice versa.
Additional information in the vSphere Client
FT version displayed in the host Summary tab; # of FT-enabled VMs displayed there too.
For hosts prior to ESX/ESXi 4.1, this tab lists the host build number instead.
FT versions included in vm-support output
/etc/vmware/ft-vmk-version:
product-version = 4.1.0
build = 235786
ft-version = 2.0.0
FT logging improvements
FT traffic was bottlenecked at 2 Gbit/s even on 10 Gbit/s pNICs.
Improved by implementing the ZeroCopy feature for FT traffic Tx, too (for sending only).
Instead of copying from the FT buffer into the pNIC/socket buffer, just a link to the memory holding the data is transferred; the driver accesses the data directly – no copy needed.
FT: unsupported vSphere features
Snapshots. Snapshots must be removed or committed before FT can be enabled on a VM. It is not possible to take snapshots of VMs on which FT is enabled.
Storage vMotion. Cannot invoke Storage vMotion for an FT VM. To migrate the storage, temporarily turn off FT, do the Storage vMotion, then turn on FT.
Linked clones. Cannot enable FT on a VM that is a linked clone, nor can you create a linked clone from an FT-enabled VM.
Backup. Cannot back up an FT VM using VCB, vStorage APIs for Data Protection, VMware Data Recovery or similar backup products that require the use of a VM snapshot, as performed by ESXi. To back up the VM in this manner, first disable FT, then re-enable FT after the backup is done. Storage array-based snapshots do not affect FT.
Thin Provisioning, NPIV, IPv6, etc.
FT: performance sample – MS Exchange 2007
1 core handles a 2000-user Heavy Online profile.
VM CPU utilisation is only 45%; ESX host utilisation is only 8%.
Based on the previous “generation”: Xeon 5500 (not 5600), vSphere 4.0 (not 4.1).
Opportunity: higher uptime for the customer's email system.
Integration with HA
Improved FT host management: move host out of vCenter; DRS able to vMotion FT VMs.
Warning if HA gets disabled – the following operations will then be disabled: Turn on FT, Enable FT, Power on an FT VM, Test failover, Test secondary restart.
VM-to-Host Affinity
Background
Having different servers in a datacenter is a common scenario: differences in memory size, CPU generation, or # or type of pNICs.
Best practice up to now: separate different hosts into different clusters.
Workarounds: creating affinity/anti-affinity rules; pinning a VM to a single host by disabling DRS on the VM.
Disadvantage: too expensive, as each cluster needs to have HA failover capacity.
New feature: DRS Groups
Host and VM groups: organize ESX hosts and VMs into groups (similar memory, similar usage profile, …).
VM-host Affinity (DRS)
Rule enforcement: 2 options – required rules and preferential rules.
Required: DRS/HA will never violate the rule; an event is generated if it is violated manually. Only advised for enforcing host-based licensing of ISV apps.
Preferential: DRS/HA will violate the rule if necessary for failover or for maintaining availability.
Hard Rules
DRS will follow the hard rules; with DPM, hosts will get powered on to follow a rule.
If DRS can’t follow a rule, vCenter will display an alarm.
Cannot be overridden by the user; DRS will not generate any recommendations which would violate hard rules.
DRS Groups and hard rules with HA: hosts will be tagged as “incompatible” in case of “Must Not run…”, so HA will take care of these rules, too.
Soft Rules
DRS will follow a soft rule if possible, but will allow user-initiated, DRS-mandatory, and HA actions.
Rules are applied as long as their application does not impact satisfying current VM CPU or memory demand.
DRS will report a warning if the rule isn’t followed, but does not produce a move recommendation just to follow the rule.
Soft VM/host affinity rules are treated by DRS as "reasonable effort".
Grouping Hosts with different capabilities
DRS Groups Manager defines groups: VM groups and Host groups.
Managing ISV Licensing
Example
Customer has a 4-node cluster. Oracle DB and Oracle BEA are charged for every host that can run them.
vSphere 4.1 introduces “hard partitioning”: both DRS and HA will honour this boundary.
(Diagram: Oracle DB, Oracle BEA and a DMZ VM pinned to specific hosts; rest of the VMs elsewhere; DMZ LAN and Production LAN)
Managing ISV Licensing
Hard partitioning
If a host is in a VM-Host "must" affinity rule, it is considered a compatible host; all the others are tagged as incompatible hosts. DRS, DPM and HA are unable to place the VMs on incompatible hosts.
Due to the incompatible-host designation, the mandatory VM-Host rule is a feature that can (undeniably) be described as hard partitioning: you cannot place and run a VM on an incompatible host.
Oracle has not acknowledged this as hard partitioning.
Sources
http://frankdenneman.nl/2010/07/vm-to-hosts-affinity-rule/
http://www.latogalabs.com/2010/07/vsphere-41-hidden-gem-host-affinity-rules/
Example of setting up: Step 1
In this example, we are adding the “WinXPsp3” VM to the group. The group name is “Desktop VMs”.
Example of setting up: Step 2
Just as we can group VMs, we can also group ESX hosts.
Example of setting up: Step 3
We have grouped the VMs in the cluster into 2 groups, and the ESX hosts in the cluster into 2 groups.
Example of setting up: Step 4
This is the screen where we do the mapping: VM Group mapped to Host Group.
Example of setting up: Step 5
Mapping is done. The Cluster Settings dialog box now displays the new rule type.
HA / DRS
DRS lists the rules: switch them on or off, and expand to display rule details (DRS Groups, rule policy, involved groups).
Enhancement for Anti-affinity rules
Now more than 2 VMs can be in a rule; each rule can have several VMs.
Keep them all together, or separate them across the cluster.
For separation, at least 1 host is needed for each VM.
DPM Enhancements
Scheduling DPM
Turning DPM on/off is now a scheduled task; DPM can be turned off prior to business hours in anticipation of higher resource demands.
Disabling DPM brings hosts out of standby. This eliminates the risk of ESX hosts being stuck in standby mode while DPM is disabled, and ensures that when DPM is disabled, all hosts are powered on and ready to accommodate load increases.
DPM Enhancements
vMotion
vMotion Enhancements
Significantly decreased overall migration time (time will vary depending on workload).
Increased number of concurrent vMotions:
Per ESX host: 4 on a 1 Gbps network and 8 on a 10 Gbps network
Per datastore: 128 (both VMFS and NFS)
Maintenance-mode evacuation time is greatly decreased due to the above improvements.
vMotion
Re-write of the previous vMotion code
Sends memory pages bundled together instead of one after the other – less network/TCP/IP overhead.
Destination pre-allocates memory pages.
Multiple senders/receivers: no longer a single world responsible for each vMotion, so the limit is no longer based on host CPU.
Sends a list of changed pages instead of bitmaps.
Performance improvement
Throughput improved significantly for a single vMotion:
ESX 3.5 – ~1.0 Gbps
ESX 4.0 – ~2.6 Gbps
ESX 4.1 – max 8 Gbps
Elapsed time reduced by 50%+ in 10GigE tests. Mixing pNICs of different bandwidth is not supported.
vMotion
Aggressive Resume
The destination VM resumes earlier, once only the workload memory pages have been received; remaining pages are transferred in the background.
Disk-Backed Operation
The source host creates a circular buffer file on shared storage; the destination opens this file and reads out of it. Works only on VMFS storage.
In case of a network failure during transfer, vMotion falls back to disk-based transfer. Works together with the aggressive resume feature above.
Enhanced vMotion Compatibility Improvements
Preparation for AMD Next Generation without 3DNow!
Future AMD CPUs may not support 3DNow! To prevent vMotion incompatibilities, a new EVC mode is introduced.
EVC Improvements
Better handling of powered-on VMs
vCenter Server now uses a live VM's CPU feature set to determine whether it can be migrated into an EVC cluster; previously, it relied on the host's CPU features.
A VM could be running with a different vCPU feature set than the host it runs on, e.g. if it was initially started on an older ESX host and vMotioned to the current one. Such a VM is compatible with an older CPU and could possibly be migrated to the EVC cluster even if the ESX host the VM runs on is not compatible.
Enhanced vMotion Compatibility Improvements
Usability Improvements
VM's EVC capability: the VMs tab for hosts and clusters now displays the EVC mode corresponding to the features used by VMs.
VM Summary: the Summary tab for a VM lists the EVC mode corresponding to the features used by the VM.
EVC (3/3)
Earlier Add-Host error detection
Host-specific incompatibilities are now displayed prior to the Add-Host workflow when adding a host into an EVC cluster. Until now this error occurred after all the needed steps were done by the administrator; now it warns earlier.
Licencing
Host Affinity, Multi-core VM, Licence Reporting Manager
Multi-core CPU inside a VMClick this
Multi-core CPU inside a VM
2-core, 4-core, 8-core only. No 3-core, 5-core, 6-core, etc.
Type this manually.
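For context, the value typed on this screen ends up as VM configuration parameters; a hedged sketch of the equivalent .vmx entries, assuming the cpuid.coresPerSocket parameter used for multi-core vCPUs in vSphere 4.1 (the 4-vCPU / 2-cores-per-socket split is just an example):
# excerpt from the VM's .vmx file - 4 vCPUs presented as 2 sockets x 2 cores each
numvcpus = "4"
cpuid.coresPerSocket = "2"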

More Related Content

PPTX
VMware vSphere 4.1 deep dive - part 2
PPTX
Realtime scheduling for virtual machines in SKT
PPT
Vsphere 4-partner-training180
PPTX
Presentation power vm common 2012
PDF
VMworld 2013: Silent Killer: How Latency Destroys Performance...And What to D...
PDF
Xen community update
PDF
20 christian ferber xen_server_6_workshop
PPTX
Presentation power vm virtualization without limits
VMware vSphere 4.1 deep dive - part 2
Realtime scheduling for virtual machines in SKT
Vsphere 4-partner-training180
Presentation power vm common 2012
VMworld 2013: Silent Killer: How Latency Destroys Performance...And What to D...
Xen community update
20 christian ferber xen_server_6_workshop
Presentation power vm virtualization without limits

What's hot (20)

PPT
C3 Citrix Cloud Center
PDF
XS 2008 Boston Capacity Planning
PDF
Multiple Shared Processor Pools In Power Systems
PPTX
VIO LPAR Introduction | Basics | Demo
PDF
#IBMEdge: Brocade SAN Health Session
PPT
Simple Virtualization Overview
PDF
ARM Architecture-based System Virtualization: Xen ARM open source software pr...
PDF
Brocade: Storage Networking For the Virtual Enterprise
 
PDF
Openstack v4 0
PDF
Advanced performance troubleshooting using esxtop
PDF
XS Boston 2008 Project Status
PDF
XS Boston 2008 Self IO Emulation
PDF
XS 2008 Boston VTPM
PDF
XS Boston 2008 VT-D PCI
PPSX
Win2k8 cluster kaliyan
PDF
I/O Scalability in Xen
PDF
#IBMEdge: "Not all Networks are Equal"
PPTX
LAN v podání Brocade
PDF
XS Japan 2008 Oracle VM English
PDF
Xen RAS Status and Progress
C3 Citrix Cloud Center
XS 2008 Boston Capacity Planning
Multiple Shared Processor Pools In Power Systems
VIO LPAR Introduction | Basics | Demo
#IBMEdge: Brocade SAN Health Session
Simple Virtualization Overview
ARM Architecture-based System Virtualization: Xen ARM open source software pr...
Brocade: Storage Networking For the Virtual Enterprise
 
Openstack v4 0
Advanced performance troubleshooting using esxtop
XS Boston 2008 Project Status
XS Boston 2008 Self IO Emulation
XS 2008 Boston VTPM
XS Boston 2008 VT-D PCI
Win2k8 cluster kaliyan
I/O Scalability in Xen
#IBMEdge: "Not all Networks are Equal"
LAN v podání Brocade
XS Japan 2008 Oracle VM English
Xen RAS Status and Progress
Ad

Viewers also liked (20)

PDF
Albert Speer In umbra lui hitler vol.1
DOCX
Dirección de video
PPTX
Especies en peligro
PDF
Guía del trabajo de titulación
PDF
HLU Presentation.Email.Secure
PPT
Módulo atendimento emfils
PPT
Propuesta_Universidades_AIESEC
DOCX
Finance data model
PDF
Yachts Docks & Slips Management for Joomla By Latitude 26
PDF
Software para la Inteligencia Tecnológica de Patentes
PDF
Multiculturalismo eugenia gonzalez
PDF
Arnold Classic Europe 2011
PDF
Productividad colectiva en el Thyssen
PDF
2010 tema 01 patología de esófago [modo de compatibilidad]
PDF
Boe a-2012-3919
PPT
Boot Pass
PDF
Ordine degli Studi - Ambito di PSICOLOGIA - Università Europea di Roma
PDF
Experimentación beneficio mar2014-2
PDF
Chef-Hisham2
PDF
The extra mile magazine june 2013, Leadership, HR and Personal Development
Albert Speer In umbra lui hitler vol.1
Dirección de video
Especies en peligro
Guía del trabajo de titulación
HLU Presentation.Email.Secure
Módulo atendimento emfils
Propuesta_Universidades_AIESEC
Finance data model
Yachts Docks & Slips Management for Joomla By Latitude 26
Software para la Inteligencia Tecnológica de Patentes
Multiculturalismo eugenia gonzalez
Arnold Classic Europe 2011
Productividad colectiva en el Thyssen
2010 tema 01 patología de esófago [modo de compatibilidad]
Boe a-2012-3919
Boot Pass
Ordine degli Studi - Ambito di PSICOLOGIA - Università Europea di Roma
Experimentación beneficio mar2014-2
Chef-Hisham2
The extra mile magazine june 2013, Leadership, HR and Personal Development
Ad

Similar to VMware vSphere 4.1 deep dive - part 1 (20)

PPTX
General-and-complete_Training_Slide_v0.9-TGT.pptx
PDF
VMWare VSphere4 Documentation Notes
PPTX
Vdi pre req
DOCX
Vmware inter
PDF
Vmwareinterviewqa 100927111554-phpapp01
PDF
VMware Interview questions and answers
PDF
Migrating to ESXi: How To
PPTX
2015 02-10 xen server master class
PPTX
VMware Virtualization Basics - Part-1.pptx
PDF
Upgrading your Private Cloud to Windows Server 2012 R2
ODP
OpenQrm
PPTX
Rearchitecting Storage for Server Virtualization
ODP
Using openQRM to Manage Virtual Machines
PPS
Safe checkup - vmWare vSphere 5.0 22feb2012
PPTX
Oracle VM 3.4.1 Installation
PPT
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
PPTX
VMworld 2010 - Building an Affordable vSphere Environment for a Lab or Small ...
PDF
High availability virtualization with proxmox
PPT
Microsoft Hyper V Server 2008
PPTX
Storage and hyper v - the choices you can make and the things you need to kno...
General-and-complete_Training_Slide_v0.9-TGT.pptx
VMWare VSphere4 Documentation Notes
Vdi pre req
Vmware inter
Vmwareinterviewqa 100927111554-phpapp01
VMware Interview questions and answers
Migrating to ESXi: How To
2015 02-10 xen server master class
VMware Virtualization Basics - Part-1.pptx
Upgrading your Private Cloud to Windows Server 2012 R2
OpenQrm
Rearchitecting Storage for Server Virtualization
Using openQRM to Manage Virtual Machines
Safe checkup - vmWare vSphere 5.0 22feb2012
Oracle VM 3.4.1 Installation
LOAD BALANCING OF APPLICATIONS USING XEN HYPERVISOR
VMworld 2010 - Building an Affordable vSphere Environment for a Lab or Small ...
High availability virtualization with proxmox
Microsoft Hyper V Server 2008
Storage and hyper v - the choices you can make and the things you need to kno...

More from Louis Göhl (18)

PPTX
Citrix vision and product highlights november 2011
PPTX
Citrix vision & strategy overview november 2011
PPTX
SVR402: DirectAccess Technical Drilldown, Part 2 of 2: Putting it all together.
PPTX
SVR401: DirectAccess Technical Drilldown, Part 1 of 2: IPv6 and transition te...
PPTX
Security best practices for hyper v and server virtualisation [svr307]
PPTX
Hyper v and live migration on cisco unified computing system - virtualized on...
PPT
HP Bladesystem Overview September 2009
PPTX
UNC309 - Getting the Most out of Microsoft Exchange Server 2010: Performance ...
PPTX
SVR208 Gaining Higher Availability with Windows Server 2008 R2 Failover Clust...
PPTX
SVR205 Introduction to Hyper-V and Windows Server 2008 R2 with Microsoft Syst...
PPTX
SIA319 What's Windows Server 2008 R2 Going to Do for Your Active Directory?
PPTX
SIA311 Better Together: Microsoft Exchange Server 2010 and Microsoft Forefron...
PPTX
MGT310 Reduce Support Costs and Improve Business Alignment with Microsoft Sys...
PPTX
MGT300 Using Microsoft System Center to Manage beyond the Trusted Domain
PPTX
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and...
PPTX
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
PPTX
Windows Virtual Enterprise Centralized Desktop
PPTX
Optimized Desktop, Mdop And Windows 7
Citrix vision and product highlights november 2011
Citrix vision & strategy overview november 2011
SVR402: DirectAccess Technical Drilldown, Part 2 of 2: Putting it all together.
SVR401: DirectAccess Technical Drilldown, Part 1 of 2: IPv6 and transition te...
Security best practices for hyper v and server virtualisation [svr307]
Hyper v and live migration on cisco unified computing system - virtualized on...
HP Bladesystem Overview September 2009
UNC309 - Getting the Most out of Microsoft Exchange Server 2010: Performance ...
SVR208 Gaining Higher Availability with Windows Server 2008 R2 Failover Clust...
SVR205 Introduction to Hyper-V and Windows Server 2008 R2 with Microsoft Syst...
SIA319 What's Windows Server 2008 R2 Going to Do for Your Active Directory?
SIA311 Better Together: Microsoft Exchange Server 2010 and Microsoft Forefron...
MGT310 Reduce Support Costs and Improve Business Alignment with Microsoft Sys...
MGT300 Using Microsoft System Center to Manage beyond the Trusted Domain
MGT220 - Virtualisation 360: Microsoft Virtualisation Strategy, Products, and...
CLI319 Microsoft Desktop Optimization Pack: Planning the Deployment of Micros...
Windows Virtual Enterprise Centralized Desktop
Optimized Desktop, Mdop And Windows 7

Recently uploaded (20)

PDF
Spectral efficient network and resource selection model in 5G networks
PDF
Chapter 3 Spatial Domain Image Processing.pdf
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Empathic Computing: Creating Shared Understanding
PDF
The Rise and Fall of 3GPP – Time for a Sabbatical?
PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PDF
Dropbox Q2 2025 Financial Results & Investor Presentation
PPTX
Understanding_Digital_Forensics_Presentation.pptx
PPTX
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
PDF
KodekX | Application Modernization Development
PDF
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Approach and Philosophy of On baking technology
PDF
Advanced methodologies resolving dimensionality complications for autism neur...
PPTX
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
PDF
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PPTX
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
Spectral efficient network and resource selection model in 5G networks
Chapter 3 Spatial Domain Image Processing.pdf
20250228 LYD VKU AI Blended-Learning.pptx
Empathic Computing: Creating Shared Understanding
The Rise and Fall of 3GPP – Time for a Sabbatical?
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
Dropbox Q2 2025 Financial Results & Investor Presentation
Understanding_Digital_Forensics_Presentation.pptx
ACSFv1EN-58255 AWS Academy Cloud Security Foundations.pptx
KodekX | Application Modernization Development
How UI/UX Design Impacts User Retention in Mobile Apps.pdf
Diabetes mellitus diagnosis method based random forest with bat algorithm
Approach and Philosophy of On baking technology
Advanced methodologies resolving dimensionality complications for autism neur...
VMware vSphere Foundation How to Sell Presentation-Ver1.4-2-14-2024.pptx
Optimiser vos workloads AI/ML sur Amazon EC2 et AWS Graviton
Digital-Transformation-Roadmap-for-Companies.pptx
Agricultural_Statistics_at_a_Glance_2022_0.pdf
KOM of Painting work and Equipment Insulation REV00 update 25-dec.pptx
Building Integrated photovoltaic BIPV_UPV.pdf

VMware vSphere 4.1 deep dive - part 1

  • 1. vSphere 4.1: Delta to 4.0Tech Sharing for PartnersIwan ‘e1’ Rahabok, Senior Systems Consultante1@vmware.com | virtual-red-dot.blogspot.com | tinyurl.com/SGP-User-Group | facebook.com/e1angAugust 2010
  • 2. Audience AssumptionThis is a level 200 - 300 presentation.It assumes:Good understanding of vCenter 4, ESX 4, ESXi 4. Preferably hands-onWe will only cover the delta between 4.1 and 4.0Overview understanding of related products like VUM, Data Recovery, SRM, View, Nexus, Chargeback, CapacityIQ, vShieldZones, etcGood understanding of related storage, server, network technologyTarget audienceVMware Specialist: SE + Delivery from partners
  • 4. 4.1 New Feature (over 4.0, not 3.5): Server
  • 5. 4.1 New Feature (over 4.0, not 3.5): Server
  • 6. 4.1 New Feature (over 4.0, not 3.5): Storage
  • 7. 4.1 New Feature (over 4.0, not 3.5): Network
  • 8. 4.1 New Feature: Management
  • 9. Builds:ESX build 260247VC build 258902Some stats:4000 development weeks were spent to get to FC5100 QA weeks were spent to get to FC872 beta customers downloaded and tried it out2012 servers, 2277 storage arrays, and 2170 IO devices are already on the HCL 
  • 10. Consulting Services: KitThe vSphere Fundamentals services kitIncludes core services enablement materials for vSphere Jumpstarts, Upgrades, Converter/P2V and PoCs.  The update reflects what’s new in vSphere 4.1 - including new resource limits, memory compression, Storage IO Control, vNetwork Traffic Management, and vSphere Active Directory Integration. The kit is intended for use by PSO Consultants, TAMs, and SEs to help with delivering services engagements, PoCs, or knowledge transfer sessions with customers. Located at Partner Central – Services IP Assetshttps://na6.salesforce.com/sfc/#version?selectedDocumentId=069800000000SSiFor delivery partner: Please download this.
  • 12. PXE Boot RetryVirtual Machine -> Edit Settings -> Options -> Boot OptionsFailed Boot Recovery disabled by defaultEnable and set the automatically retry boot after X Seconds12
  • 13. Wide NUMA SupportWide VMWide-VM is defined as a VM that has more vCPUs than the available cores on a NUMA node. A 5-vCPU VM in a quad-core serverOnly the cores count, and hyperthreading threads don’tESX 4.1 scheduler introduces wide-VM NUMA supportImproves memory locality for memory-intensive workloads. Based on testing with micro benchmarks, the performance benefit can be up to 11–17%.How it worksESX 4.1 allows wide-VMs to take advantage of NUMA management. NUMA management means that a VM is assigned a home node where memory is allocated and vCPUs are scheduled. By scheduling vCPUs on a NUMA node where memory is allocated, the memory accesses become local, which is faster than remote accesses
  • 14. ESXiEnhancements to ESXi. Not applicable to ESX
  • 15. Transitioning to ESXiESXi is our architecturegoing forward
  • 16. Moving toward ESXiPermalink to: VMware ESX and ESXi 4.1 ComparisonService Console (COS)Agentless vAPI-basedManagement AgentsHardware AgentsAgentless CIM-basedCommands forconfiguration anddiagnosticsvCLI, PowerCLILocal Support ConsoleCIM APIvSphere APIInfrastructureService AgentsNative Agents:NTP, Syslog, SNMPVMware ESXi“Classic” VMware ESX
  • 17. Software Inventory - Connected to ESXi/ESXFrom vSphere 4.1BeforeEnumerate instance of CIM_SoftwareIdentityEnhanced CIM provider now displays great detail on installed software bundles.
  • 18. 18Software Inventory – Connected to vCenterBeforeFrom vSphere 4.1Enumerate instance of CIM_SoftwareIdentityEnhanced CIM provider now displays great detail on installed software bundles.Additional Deployment OptionBoot From SANFully supported in ESXi 4.1Was only experimentally supported in ESXi 4.0Boot from SAN supported for FC, iSCSI, and FCoEESX and ESXi have different requirement:iBFT (Boot Firmware Table) requiredThe host must have an iSCSI boot capable NIC that supports the iSCSI iBFT format. iBFT is a method of communicating parameters about the iSCSI boot device to an OS
  • 19. Additional Deployment OptionScripted InstallationNumerous choices for installationInstaller booted fromCD-ROM (default)Preboot Execution Environment (PXE)ESXi Installation image onCD-ROM (default), HTTP/S, FTP, NFSScript can be stored and accessedWithin the ESXi Installer ramdiskOn the installation CD-ROMHTTP / HTTPS, FTP, NFS Config script (“ks.cfg”) can includePreinstallPostinstallFirst bootCannot use scripted installation to install to a USB device
  • 20. PXE BootRequirementsPXE-capable NIC.DHCP Server (IPv4). Use existing one.Media depot + TFTP server + gPXEA server hosting the entire content of ESXi media. Protocal: HTTP/HTTPS, FTP, or NFS server.OS: Windows/Linux server.InfoWe recommend the method that uses gPXE. If not, you might experience issues while booting the ESXi installer on a heavily loaded Network.TFTP is a light-weight version of the FTP service, and is typically used only for network booting systems or loading firmware on network devices such as routers.
  • 21. PXE bootPXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an OS over network.How it worksA host makes a DHCP request to configure its NIC. A host downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD Once ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media must be specified.
  • 23. Sample ks.cfg file# Accept the EULA (End User Licence Agreement)vmaccepteula# Set the root password to vmware123rootpw vmware123# Install the ESXi image from CDROMinstall cdrom# Auto partition the first disk – if a VMFS exists it will overwrite it.autopart --firstdisk --overwritevmfs# Create a partition called Foobar# Partition the disk identified with vmhba1:c0:t1:l0 to grow to a maxsize of 4000partition Foobar --ondisk=mpx.vmhba1:C0:T1:L0 --grow –maxsize=4000# Set up the management network on the vmnic0 using DHCPnetwork –bootproto=dhcp --device=vmnic0 --addvmportgroup=0%firstboot --level=90.1 --unsupported --interpreter=busybox# On this first boot, save the current date to a temporary filedate > /tmp/foo# Mount an nfs share and put it at /vmfs/volumes/wwwesxcfg-nas -add -host 10.20.118.5 -share /var/www www
  • 24. Full Support of Tech Support ModeThere you go 2 typesRemote: SSHLocal: Direct Console
  • 25. Full Support of Tech Support ModeEnter to toggle. That’s it!Disable/Enable Timeout automatically disables TSM (local and remote)Running sessions are not terminated.All commands issued in Tech Support Mode are sent to syslog
  • 26. Full Support of Tech Support ModeRecommended usesSupport, troubleshooting, and break-fixScripted deployment preinstall, postinstall, and first boot scriptsDiscouraged usesAny other scriptsRunning commands/scripts periodically (cron jobs)Leaving open for routine access or permanent SSH connectionAdmin will benotified when active
  • 27. Full Support of Tech Support ModeWe can also enable it via GUICan enable in vCenter or DCUIEnable/Disable
  • 28. Security BannerA message that is displayed on the direct console Welcome screen.
  • 30. Total LockdownAbility to totally control local access via vCenterDCUILockdown Mode (disallows all access except root on DCUI)Tech Support Mode (local and remote)If all configured, then no local activity possible (except pull the plugs)
  • 31. Additional commands in Tech Support ModevscsciStats is now available in the console.Output is raw data for histogram.Use spreadsheet to plot the histogramSome use cases:Identify whether IO are sequential or randomOptimizing for IO SizesChecking for disk mis-alignmentLooking at storage latency in moredetails
  • 32. Additional commands in Tech Support ModeAdditional commands for troubleshootingnc (netcat)http://en.wikipedia.org/wiki/Netcattcpdump-uwhttp://en.wikipedia.org/wiki/Tcpdump
  • 33. More ESXi Services listedMore services are now shown in GUI.Ease of controlFor example, if SSH is not running, you can turn it on from GUI.ESXi 4.0ESXi 4.1
  • 34. ESXi Diagnostics and Troubleshooting If things go wrong:
  • 35. During normal operations:DCUI: misconfigs / restart mgmt agents vCLIvCenter vSphere APIsTSM: Advanced troubleshooting (GSS) ESXiRemote AccessLocal Access
  • 36. Common Enhancements for both ESX and ESXi64 bit User WorldRunning VMs with very large memory footprints implies that we need a large address space for the VMX. 32-bit user worlds (VMX32) do not have sufficient address space for VMs with large memory. 64-bit User worlds overcome this limitation.NFSThe number of NFS volumes supported is increased from 8 to 64.Fiber ChannelEnd-To-End Support for 8 GB (HBA, Switch & Array).VMFSVersion changed to 3.46. No customer visible changes. Changes related to algorithms in the vmfs3 driver to handle new VMware APIs for Array Integration (VAAI).
  • 37. Common Enhancements for both ESX and ESXiVMkernel TCP/IP Stack UpgradeUpgraded to version based on BSD 7.1. Result: improving FT logging, VMotion and NFS client performance.Pluggable Storage Architecture (PSA)New naming convention.New filter plugins to support VAAI (vStorage APIs for Array Integration).New PSPs (Path Selection Policies) for ALUA arrays.New PSP from DELL for the EqualLogic arrays.
  • 38. USB pass-throughNew Features for both ESX/ESXi
  • 39. USB Devices2 steps:Add USB ControllerAdd USB Devices
  • 40. USB DevicesOnly devices listed on the manual is supported.Mostly for ISV licence dongle.A few external USB drives.Limited list of device for now
  • 41. Example 1After vMotion, the VM will be on another (remote) ESXi.Communication inter-ESXi will use Mgmt Network (ESXi has no SC network)You cannot multi-select devices at this stage – add them one by one.Source: http://vstorage.wordpress.com/2010/07/15/usb-passthrough-in-vsphere-4-1/Example 1From the source“I have tested numerous brands of USB mass storage devices (Kingston, Sandisk, Lexar, Imation) as well a couple of of security dongles and they all work well.”
  • 42. Example 2: adding UPSSource: http://vninja.net/virtualization/using-usb-pass-through-in-vsphere-4-1/Example 2Source: http://vninja.net/virtualization/using-usb-pass-through-in-vsphere-4-1/USB Devices: Supported Devices
  • 43. USB DevicesUp to 20 devices per VM. Up to 20 devices per ESX host.1 device can only be owned by 1 VM at a given time. No sharing.SupportedvMotionCommunication via the management networkDRSUnsupportedDPM. DPM is not aware of the device and may turn it off. This may cause loss of data. So disable DRS for this VM so it stays in this host only.Fault ToleranceDesign considerationTake note of situation when the ESX host is not available (planned or unplanned downtime)
  • 44. MS AD integrationNew Features for both ESX/ESXi
  • 45. AD ServiceProvides authentication for all local servicesvSphere ClientOther access based on vSphere API DCUITech Support Mode (local and remote)Has nominal AD groups functionalityMembers of “ESX Admins” AD group have Administrative privilegeAdministrative privilege includes:Full Administrative role in vSphere Client and vSphere API clientsDCUI accessTech Support Mode access (local and remote)
  • 46. The Likewise AgentESX uses an agent from Likewise to connect to MS AD and to authenticate users with their domain credentials. The agent integrates with the VMkernel to implement the mapping for applications such as the logon process (/bin/login) which uses a pluggable authentication module (PAM). As such, the agent acts as an LDAP client for authorization (join domain) and as a Kerberos client for authentication (verify users).The vMA appliance also uses an agent from Likewise.ESX and vMA use different versions of the Likewise agent to connect to the Domain Controller. ESX uses version 5.3 whereas vMA uses version 5.1.49
  • 48. Joining AD: Step 21. Select “AD”2. Click “Join Domain”3. Join the domain. Full name.@123.com
  • 49. AD ServiceA third method for joining ESX/ESXi hosts and enabling Authentication Services to utilize AD is to configure it through Host Profiles
  • 50. AD Likewise Daemons on ESXlwiod is the Likewise I/O Manager service - I/O services for communication. Launched from /etc/init.d/lwiod script.
  • 51. netlogond is the Likewise Site Affinity service - detects optimal AD domain controller, global catalogue and data caches. Launched from /etc/init.d/netlogond script.
  • 52. lsassd is the Likewise Identity & Authentication service. It does authentication, caching and idmap lookups. This daemon depends on the other two daemons running. Launched from /etc/init.d/lsassd script.root 18015 1 0 Dec08 ? 00:00:00 /sbin/lsassd --start-as-daemonroot 31944 1 0 Dec08 ? 00:00:00 /sbin/lwiod --start-as-daemonroot 31982 1 0 Dec08 ? 00:00:02 /sbin/netlogond --start-as-daemon
  • 53. ESX Firewall Requirements for ADCertain ports in SC are automatically opened in the Firewall Configuration to facilitate AD. Not applicable to ESXiBeforeAfter
  • 54. Time Sync Requirement for ADTime must be in sync between the ESX/ESXi server and the AD server. For the Likewise agent to communicate over Kerberos with the domain controller, the clock of the client must be within the domain controller's maximum clock skew, which is 300 seconds, or 5 minutes, by default. The recommendation would be that they share the same NTP server.
  • 55. vSphere ClientNow when assigning permissions to users/groups, the list of users and groups managed by AD can be browsed by selecting the Domain.
  • 56. Info in ADThe host should also be visible on the Domain Controller in the AD Computers objects listing.Looking at the ESX Computer Properties shows a Name of RHEL(as it the Service Console on the ESX) & Service pack of ‘Likewise Identity 5.3.0’
  • 57. Memory CompressionNew Features for both ESX/ESXi
  • 58. Memory CompressionVMKernel implement a per-VM compression cache to store compressed guest pages. When a guest page (4 KB page) needs to swapped, VMKernel will first try to compress the page. If the page can be compressed to 2 KB or less, the page will be stored in the per-VM compression cache. Otherwise, the page will be swapped out to disk. If a compressed page is again accessed by the guest, the page will decompressed online.
  • 59. Changing the value of cache size
  • 60. Virtual Machine Memory CompressionVirtual Machine -> Resource AllocationPer-VM statistic showing compressed memory
  • 61. Monitoring Compression3 new counters introduced to monitorHost level, not VM level.
  • 63. Power consumption chartPer ESX, not per clusterNeed hardware integration.Difference HW makes have different info
  • 64. Performance Graphs – Power ConsumptionWe can now track the Power consumption of VMs in real-timeEnabled through Software Settings ->Advanced Settings -> Power -> Power.ChargeVMs65
  • 65. Host power consumptionIn some situation, may need to edit /usr/share/sensors/vmware to get support for the hostDifferent HW makers have different API.VM power consumptionExperimental. Off by default
  • 66. ESXFeatures only for ESX (not ESXi)
  • 67. ESX: Service Console firewallChanges in ESX 4.1ESX 4.1 introduces these additional configuration files located in /etc/vmware/firewall/chains:usercustom.xmluserdefault.xmlRelationship between the 2 files“user” overwrites.The default files custom.xml and default.xml are overridden by usercustom.xml and userdefault.xml.All configuration is saved in usercustom.xml and userdefault.xml.Copy the original custom.xml and default.xml files. Use them as a template for usercustom.xml and userdefault.xml.
  • 69. Availability Feature SummaryHA and DRS Cluster LimitationsHigh Availability (HA) Diagnostic and Reliability ImprovementsFT Enhancements vMotionEnhancementsPerformanceUsabilityEnhanced Feature CompatibilityVM-host Affinity (DRS)DPM EnhancementsData Recovery Enhancements
  • 70. DRS: more HA-awarenessvSphere 4.1 adds logic to prevent imbalance that may not be good from HA point of view.Example20 small VM and 2 very large VM.2 ESXi hosts. Same workload with the above 20 collectively.vSphere 4.0 may put 20 small VM on Host A and 2 very large VM on Host B.From HA point of view, this may result in risks when Host A fails.vSphere 4.1 will try to balance the number of VM.
  • 71. HA and DRS Cluster ImprovementsIncreased cluster limitationsCluster limits are now unified for HA and DRS clusters
  • 72. Increased limits for VMs/host and VMs/cluster
  • 73. Cluster limits for HA and DRS:
  • 75. 320 VMs/host (regardless of # of hosts/cluster)
  • 77. Note that these limits also apply to post-failover scenarios. Be sure that these limits will not be violated even after the maximum configured number of host failovers.HA and DRS Cluster Limit5-host cluster, tolerate 1 host failurevSphere 4.1 supports 320 VMs/host
  • 79. Cluster can only support 320x4 VMsX5-host cluster, tolerate 2 host failuresSupports 320x5 VMs/cluster? NO
  • 80. Cluster can only support 320x3 VMsXX
  • 81. HA Diagnostic and Reliability ImprovementsHA Healthcheck StatusHA provides an ongoing healthcheck facility to ensure that the required cluster configuration is met at all times. Deviations result in an event or alarm on the cluster.Improved HA-DRS interoperability during HA failoverDRS will perform vMotionto free up contiguous resources (i.e. on one host) so that HA can place a VM that needs to be restartedHA Diagnostic and Reliability ImprovementsHA Operational StatusDisplays more information about the current HA operational status, including the specific status and errors for each host in the HA cluster.It shows if the host is Primary or Secondary!
  • 82. HA Operational StatusJust another example 
  • 83. HA: Application AwarenessApplication Monitoring can restart a VM if the heartbeats for an application it is running are not receivedExpose APIs for 3rd party app developersApplication Monitoring works much the same way that VM Monitoring: If the heartbeats for an application are not received for a specified time via VMware Tools, its VM is restarted.ESXi 4.0ESXi 4.1
  • 85. FT EnhancementsDRSFT fully integrated with DRSDRS load balances FT Primary and Secondary VMs. EVC required.Versioning control lifts requirement on ESX build consistencyPrimary VM can run on host with a different build # as Secondary VM.Events for Primary VM vs. Secondary VM differentiatedEvents logged/stored differently.FT PrimaryVMFT SecondaryVMResource Pool
  • 86. No data-loss GuaranteevLockStep: 1 CPU step behindPrimary/backup approachA common approach to implementing fault-tolerant servers is the primary/backup approach. The execution of a primary server is replicated by a backup server. Given that the primary and backup servers execute identically, the backup server can take over serving client requests without any interruption or loss of state if the primary server fails
  • 87. New versioning feature. FT now has a version number to determine compatibility; the restriction of identical ESX build numbers has been lifted, and FT now checks its own version number to determine compatibility. Future versions might be compatible with older ones, but possibly not vice versa. Additional information in the vSphere Client: the FT version is displayed in the host Summary tab, along with the number of FT-enabled VMs. For hosts prior to ESX/ESXi 4.1, this tab lists the host build number instead. FT versions are included in vm-support output: /etc/vmware/ft-vmk-version: product-version = 4.1.0, build = 235786, ft-version = 2.0.0
  • 88. FT logging improvements. FT traffic was bottlenecked at about 2 Gbit/s even on 10 Gbit/s pNICs. This is improved by implementing a zero-copy feature for FT traffic on the transmit (Tx) path as well: instead of copying from the FT buffer into the pNIC/socket buffer, only a reference to the memory holding the data is passed, and the driver accesses the data directly, so no copy is needed.
  • 89. FT: unsupported vSphere featuresSnapshots. Snapshots must be removed or committed before FT can be enabled on a VM. It is not possible to take snapshots of VMs on which FT is enabled.Storage vMotion. Cannot invoke Storage vMotion for FT VM. To migrate the storage, temporarily turn off FT, do Storage vMotion, then turn on FT. Linked clones. Cannot enable FT on a VM that is a linked clone, nor can you create a linked clone from an FT-enabled VM.Back up. Cannot back up an FT VM using VCB, vStorage API for Data Protection, VMware Data Recovery or similar backup products that require the use of a VM snapshot, as performed by ESXi. To back up VM in this manner, first disable FT, then re-enable FT after backup is done. Storage array-based snapshots do not affect FT.Thin Provisioning, NPIV, IPv6, etc
  • 90. FT: performance sample, MS Exchange 2007. 1 core handles a 2000-user Heavy Online profile; VM CPU utilisation is only 45%, and ESX overhead is only 8%. Based on the previous generation: Xeon 5500 (not 5600) and vSphere 4.0 (not 4.1). Opportunity: higher uptime for the customer's email system.
  • 91. Integration with HA and improved FT host management: warning when moving a host out of vCenter; DRS is able to vMotion FT VMs; warning if HA gets disabled, in which case the following operations are disabled: Turn on FT, Enable FT, Power on an FT VM, Test failover, Test secondary restart.
  • 93. Background: servers with different capabilities in one datacenter are a common scenario, with differences in memory size, CPU generation, or the number or type of pNICs. Best practice up to now: separate different hosts into different clusters. Workarounds: creating affinity/anti-affinity rules, or pinning a VM to a single host by disabling DRS for that VM. Disadvantage: too expensive, as each cluster needs its own HA failover capacity. New feature: DRS groups. Organize ESX hosts and VMs into host groups and VM groups (similar memory, similar usage profile, ...).
  • 94. VM-Host Affinity (DRS): required rules and preferential rules. Rule enforcement has 2 options. Required: DRS/HA will never violate the rule; an event is generated if it is violated manually. Only advised for enforcing host-based licensing of ISV apps.
  • 95. Preferential: DRS/HA will violate the rule if necessary for failover or for maintaining availability. Hard rules: DRS will follow the hard rules; with DPM, hosts will get powered on to follow a rule. If DRS cannot follow a hard rule, vCenter will display an alarm. Hard rules cannot be overridden by the user, and DRS will not generate any recommendations that would violate them. DRS groups and hard rules with HA: hosts are tagged as "incompatible" in the case of a "Must not run on..." rule, so HA honours these rules too.
  • 96. Soft rules: DRS will follow a soft rule if possible, but will still allow user-initiated, DRS-mandatory, and HA actions. Rules are applied as long as doing so does not prevent satisfying current VM CPU or memory demand. DRS reports a warning if the rule is not followed, but does not produce a move recommendation purely to follow the rule. Soft VM/host affinity rules are treated by DRS as "reasonable effort".
  • 97. Grouping Hosts with different capabilitiesDRS Groups ManagerDefines GroupsVM groupsHost groups
  • 98. Managing ISV Licensing. Example: a customer has a 4-node cluster, and Oracle DB and Oracle BEA are charged for every host that can run them. vSphere 4.1 introduces a form of "hard partitioning", and both DRS and HA will honour this boundary. (Diagram: rest of VMs, Oracle DB, Oracle BEA, DMZ VM, DMZ LAN, Production LAN.)
  • 99. Managing ISV Licensing: hard partitioning. If a host is referenced in a mandatory ("must") VM-Host affinity rule, it is considered a compatible host; all other hosts are tagged as incompatible. DRS, DPM and HA are unable to place the VMs on incompatible hosts. Because of the incompatible-host designation, the mandatory VM-Host rule is a feature that can arguably be described as hard partitioning: you cannot place and run a VM on an incompatible host. Note that Oracle has not acknowledged this as hard partitioning. Sources: http://frankdenneman.nl/2010/07/vm-to-hosts-affinity-rule/ and http://www.latogalabs.com/2010/07/vsphere-41-hidden-gem-host-affinity-rules/
  • 100. Example of setting-up: Step 1In this example, we are adding the “WinXPsp3” VM to the group.The group name is “Desktop VMs”
  • 101. Example of setting-up: Step 2. Just as we can group VMs, we can also group ESX hosts.
  • 102. Example of setting-up: Step 3. We have grouped the VMs in the cluster into 2 groups, and the ESX hosts in the cluster into 2 groups.
  • 103. Example of setting-up: Step 4. This is the screen where we do the mapping: the VM group is mapped to the host group.
  • 104. Example of setting-up: Step 5. Mapping is done. The Cluster Settings dialog box now displays the new rule type.
  • 105. HA/DRS: the cluster rules list shows the new rules; switch them on or off and expand them to display the DRS groups. Rule details: rule policy and involved groups.
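  The same DRS groups and VM-Host mapping can also be created programmatically through the vSphere API, which is useful when repeating the GUI example above across many clusters. A minimal PowerCLI sketch follows; the vCenter, cluster, group, rule, VM, and host names are made up for illustration, and error handling is omitted:

    Connect-VIServer vcenter.example.com            # hypothetical vCenter name

    $clusterView = Get-Cluster "Cluster01" | Get-View
    $spec = New-Object VMware.Vim.ClusterConfigSpecEx

    # VM group ("Desktop VMs")
    $vmGroup = New-Object VMware.Vim.ClusterVmGroup
    $vmGroup.Name = "Desktop VMs"
    $vmGroup.Vm = @((Get-VM "WinXPsp3" | Get-View).MoRef)
    $vmGroupSpec = New-Object VMware.Vim.ClusterGroupSpec
    $vmGroupSpec.Operation = "add"
    $vmGroupSpec.Info = $vmGroup

    # Host group ("Desktop Hosts")
    $hostGroup = New-Object VMware.Vim.ClusterHostGroup
    $hostGroup.Name = "Desktop Hosts"
    $hostGroup.Host = @((Get-VMHost "esx01.example.com" | Get-View).MoRef)
    $hostGroupSpec = New-Object VMware.Vim.ClusterGroupSpec
    $hostGroupSpec.Operation = "add"
    $hostGroupSpec.Info = $hostGroup

    # "Should run on" rule mapping the VM group to the host group
    $rule = New-Object VMware.Vim.ClusterVmHostRuleInfo
    $rule.Name = "Desktop VMs on desktop hosts"
    $rule.Enabled = $true
    $rule.Mandatory = $false                        # $true would make this a required ("hard") rule
    $rule.VmGroupName = "Desktop VMs"
    $rule.AffineHostGroupName = "Desktop Hosts"
    $ruleSpec = New-Object VMware.Vim.ClusterRuleSpec
    $ruleSpec.Operation = "add"
    $ruleSpec.Info = $rule

    $spec.GroupSpec = @($vmGroupSpec, $hostGroupSpec)
    $spec.RulesSpec = @($ruleSpec)

    # modify = $true merges this change with the existing cluster configuration
    $clusterView.ReconfigureComputeResource($spec, $true)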
  • 107. Enhancement for anti-affinity rules: a rule can now contain more than 2 VMs. Each rule can reference several VMs and either keep them all together or separate them across the cluster; for a separation rule, at least 1 host is needed per VM. (A PowerCLI sketch follows.)
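  A hedged PowerCLI sketch of such a multi-VM anti-affinity rule; New-DrsRule is assumed to be available in your PowerCLI build, and the cluster, rule, and VM names are illustrative:

    $vms = Get-VM "web01","web02","web03","web04"
    # KeepTogether:$false makes this an anti-affinity ("separate virtual machines") rule;
    # with 4 VMs in the rule, at least 4 hosts are needed to satisfy it.
    New-DrsRule -Cluster (Get-Cluster "Cluster01") -Name "Separate web servers" -KeepTogether:$false -VM $vms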
  • 108. DPM Enhancements. Scheduling DPM: turning DPM on/off is now a schedulable task, so DPM can be turned off prior to business hours in anticipation of higher resource demands. Disabling DPM brings hosts out of standby, which eliminates the risk of ESX hosts being stuck in standby mode while DPM is disabled and ensures that when DPM is disabled, all hosts are powered on and ready to accommodate load increases.
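  Because DPM is now typically toggled on a schedule, here is a hedged sketch of turning it off for a cluster from a scheduled PowerCLI job; the cluster name is illustrative and the change is made through the vSphere API rather than a dedicated cmdlet:

    $clusterView = Get-Cluster "Cluster01" | Get-View
    $spec = New-Object VMware.Vim.ClusterConfigSpecEx
    $spec.DpmConfig = New-Object VMware.Vim.ClusterDpmConfigInfo
    $spec.DpmConfig.Enabled = $false     # disabling DPM also brings standby hosts back online
    $clusterView.ReconfigureComputeResource($spec, $true)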
  • 111. vMotion Enhancements. Significantly decreased overall migration time (the time will vary depending on workload). Increased number of concurrent vMotions: per ESX host, 4 on a 1 Gbps network and 8 on a 10 Gbps network; per datastore, 128 (both VMFS and NFS). Maintenance mode evacuation time is greatly decreased thanks to the above improvements.
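  A quick hedged PowerCLI example of where these gains show up in practice: putting a host into maintenance mode in a fully automated DRS cluster, where the higher number of concurrent vMotions in 4.1 shortens the evacuation (the host name is illustrative):

    Get-VMHost "esx01.example.com" | Set-VMHost -State Maintenance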
  • 112. vMotion: re-write of the previous vMotion code. Memory pages are sent bundled together instead of one after the other, giving less network/TCP/IP overhead; the destination pre-allocates memory pages; there are multiple senders/receivers, so a single world is no longer responsible for each vMotion and the limit is instead based on host CPU; lists of changed pages are sent instead of bitmaps. Performance improvement: throughput improved significantly for a single vMotion (ESX 3.5 ~1.0 Gbps, ESX 4.0 ~2.6 Gbps, ESX 4.1 up to ~8 Gbps), and elapsed time was reduced by 50%+ in 10GigE tests. Mixing pNICs of different bandwidths is not supported.
  • 113. vMotion: Aggressive Resume. The destination VM resumes earlier, once only the workload memory pages have been received; the remaining pages are transferred in the background. Disk-Backed Operation: the source host creates a circular buffer file on shared storage, and the destination opens this file and reads out of it. This works only on VMFS storage. In case of a network failure during the transfer, vMotion falls back to this disk-based transfer, which works together with the aggressive resume feature above.
  • 114. Enhanced vMotion Compatibility ImprovementsPreparation for AMD Next Generation without 3DNow!Future AMD CPUs may not support 3DNow!To prevent vMotion incompatibilities, a new EVC mode is introduced.
  • 115. EVC Improvements: better handling of powered-on VMs. vCenter Server now uses a live VM's CPU feature set to determine whether it can be migrated into an EVC cluster; previously, it relied on the host's CPU features. A VM can be running with an older vCPU feature set than the host it runs on, e.g. if it was initially powered on on an older ESX host and then vMotioned to the current one. Such a VM is compatible with the older CPU generation and can therefore be migrated into the EVC cluster even if the ESX host it currently runs on is not compatible.
  • 116. Enhanced vMotion Compatibility Improvements: usability improvements. VM's EVC capability: the VMs tab for hosts and clusters now displays the EVC mode corresponding to the features used by each VM. VM Summary: the Summary tab for a VM lists the EVC mode corresponding to the features used by the VM.
  • 117. EVC (3/3): earlier Add-Host error detection. Host-specific incompatibilities are now displayed before the Add-Host workflow when adding a host into an EVC cluster. Until now this error occurred only after all the steps had been completed by the administrator; now the warning appears earlier.
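  To check which EVC mode a cluster is currently running before adding hosts, here is a small hedged PowerCLI sketch that reads the cluster summary through the API (the cluster name is illustrative):

    (Get-Cluster "Cluster01" | Get-View).Summary.CurrentEVCModeKey   # e.g. "intel-merom"; empty if EVC is disabled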
  • 118. Licensing: Host Affinity, Multi-core VM, License Reporting Manager
  • 119. Multi-core CPU inside a VMClick this
  • 120. Multi-core CPU inside a VM: 2-core, 4-core, or 8-core. No 3-core, 5-core, 6-core, etc. Type this manually.
  • 121. Multi-core CPU inside a VM. How to enable (per VM, not batch): turn off the VM (this cannot be done online), click Configuration Parameters, click Add Row and type cpuid.coresPerSocket in the Name column, then type a value (2, 4, or 8) in the Value column. The number of virtual CPUs must be divisible by the number of cores per socket, and the coresPerSocket setting must be a power of two. Note: if enabled, CPU Hot Add is disabled.
  • 122. Multi-core CPU inside a VM. Once enabled, it is not readily shown to the administrator: the VM listing in the vSphere Client does not show the core count. It is possible to write scripts that iterate per VM (see the sketch below). Sample tools inside the guest: CPU-Z, MS Sysinternals.
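  A minimal PowerCLI sketch of such a per-VM report, reading the cpuid.coresPerSocket entry from each VM's ExtraConfig; the output format is our own choice:

    Get-VM | ForEach-Object {
        $setting = ($_ | Get-View).Config.ExtraConfig |
                   Where-Object { $_.Key -eq "cpuid.coresPerSocket" }
        if ($setting) {
            "{0}: {1} vCPUs, {2} cores per socket" -f $_.Name, $_.NumCpu, $setting.Value
        }
    }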
  • 123. Customers Can Self-Enforce Per-VM License Compliance. When customers use more than they bought, vCenter raises an alert, but they can continue managing the additional VMs, so overuse is possible. Customers are responsible for purchasing additional licenses and any back SnS; Support & Subscription must be back-dated. This is consistent with current vSphere pricing.
  • 124. Thank YouI’m sure you are tired too 
  • 125. Useful references
http://vsphere-land.com/news/tidbits-on-the-new-vsphere-41-release.html
http://www.petri.co.il/virtualization.htm
http://www.petri.co.il/vmware-esxi4-console-secret-commands.htm
http://www.petri.co.il/vmware-data-recovery-backup-and-restore.htm
http://www.delltechcenter.com/page/VMware+Tech
http://www.kendrickcoleman.com/index.php?/Tech-Blog/vm-advanced-iso-free-tools-for-advanced-tasks.html
http://www.ntpro.nl/blog/archives/1461-Storage-Protocol-Choices-Storage-Best-Practices-for-vSphere.html
http://www.virtuallyghetto.com/2010/07/script-automate-vaai-configurations-in.html
http://searchvmware.techtarget.com/tip/0,289483,sid179_gci1516821,00.html
http://vmware-land.com/esxcfg-help.html
http://virtualizationreview.com/blogs/everyday-virtualization/2010/07/esxi-hosts-ad-integrated-security-gotcha.aspx
http://www.MS.com/licensing/about-licensing/client-access-license.aspx#tab=2
http://www.MSvolumelicensing.com/userights/ProductPage.aspx?pid=348
http://www.virtuallyghetto.com/2010/07/vsphere-41-is-gift-that-keeps-on-giving.html
  • 126. vSphere Guest APIIt provides functions that management agents and other software can use to collect data about the state and performance of a VM. The API provides fast access to resource management information, without the need for authentication.The Guest API provides read‐only access. You can read data using the API, but you cannot send control commands. To issue control commands, use the vSphere Web Services SDK.Some information that you can retrieve through the API:Amount of memory reserved for the VM.Amount of memory being used by the VM.Upper limit of memory available to the VM.Number of memory shares assigned to the VM.Maximum speed to which the VM’s CPU is limited.Reserved rate at which the VM is allowed to execute. An idling VM might consume CPU cycles at a much lower rate.Number of CPU shares assigned to the VM.Elapsed time since the VM was last powered on or reset.CPU time consumed by a particular VM. When combined with other measurements, you can estimate how fast the VM’s CPUs are running compared to the host CPUs

Editor's Notes

  • #5: Isn’t cluster supported in 4.0.1? Compared the 2 manuals closely.Design here can mean better design, or you can fix/propose things that you can’t before, or give you more options to take on larger or more complex design.Cost here can mean lower Product cost, Services cost (e.g. reduce effort from partner) or less effort (if internal IT is doing it).Scalability means you can do more, like do more VM per ESX. Performance means can do the same thing but faster. For example, backing up a VM is faster.Memory Compression reduces cost: more VM per ESX means less ESX host, or smaller RAM expense.Scripted install improves security as it reduces risk of variance among installation.ESXi SAN boot improves security as ESXi config are not stored in a hundred places.vSphere 4.1 introduces an FT-specific versioning-control mechanism that allows the Primary and Secondary VMs to run on FT-compatible hosts at different but compatible patch levels. vSphere 4.1 differentiates between events that are logged for a Primary VM and those that are logged for its Secondary VM, and reports why a host might not support FT. In addition, you can disable VMware HA when FT-enabled VMs are deployed in a cluster, allowing for cluster maintenance operations without turning off FT.Compare with 4.0. The VMware HA dashboard in the vSphere Client provides a new detailed window called Cluster Operational Status. This window displays more information about the current VMware HA operational status, including the specific status and errors for each host in the VMware HA cluster.
  • #6: Hyper-V import: without it, it will be more complex and may require longer down time.ESX 4.1 takes advantage of deep sleep states to further reduce power consumption during idle periods. The vSphere Client has a simple user interface that allows you to choose one of four host power management policies. In addition, you can view the history of host power consumption and power cap information on the vSphere Client Performance tab on newer platforms with integrated power meters. Need screenshot and new machine.Faster vMotion improves management as you spend less time waiting for 10 VMs to complete vMotion as you prepare to do hardware maintenance.In some cases, you are given a fixed window to do your maintenance. And you want the 5 or 15 VMs in that host to vmotion as fast as possible.vSphere 4.1 reduces the amount of overhead memory required, especially when running large VMs on systems with CPUs that provide hardware MMU support (AMD RVI or Intel EPT).vSphere 4.1 includes an AMD Opteron Gen. 3 (no 3DNow!™) EVC mode that prepares clusters for vMotion compatibility with future AMD processors. EVC also provides numerous usability improvements, including the display of EVC modes for VMs, more timely error detection, better error messages, and the reduced need to restart VMsVmware Tools now have CLI, which
  • #7: VMware Data Recovery is actually available in 4.0.1 too, as it is compatible. VMFS enhancements: minor, transparent to users. There have been many algorithm changes between v3.33 and v3.46; the VMFS-3.46 driver uses hardware-accelerated locking and hardware-accelerated Storage vMotion, virtual machine provisioning, and cold migrate functions on such hardware, which improves the performance and scalability of workloads that require those functions. Personally, there are those who are not 100% convinced of the benefit of iSCSI boot, because it mixes storage and network and can make troubleshooting/support more complex. VADP: VSS on Win08. NFS performance improvement: quantified? NFS Performance Enhancements: networking performance for NFS has been optimized to improve throughput and reduce CPU usage.
  • #8: Nexus is not released yet.vDS: scalabilityvNIC enhancements: E1000 vNIC supports jumbo frames
  • #9: You can use Host Profiles to roll out administrator password changes in vSphere 4.1. Enhancements also include improved Cisco Nexus 1000V support and PCI device ordering configurationUnattended Authentication in vSphere Management Assistant (vMA). vMA 4.1 offers improved authentication capability, including integration with AD and commands to configure the connectionUpdate Manager 4.1 immediately sends critical notifications about recalled ESX and related patches. In addition, Update Manager prevents you from installing a recalled patch that you might have already downloaded. This feature also helps you identify hosts where recalled patches might already be installed.The License Reporting Manager provides a centralized interface for all license keys for vSphere 4.1 products in a virtual IT infrastructure and their respective usage. You can view and generate reports on license keys and usage for different time periods with the License Reporting Manager. A historical record of the utilization per license key is maintained in the vCenter database
  • #14: an 8-vCPU SMP VM is considered wide on an Intel Xeon 55xx system because the processor has only four cores per NUMA node
  • #15: ESXi was released around 2 years ago. Just sharing my experience as SE. In this short period of 2 years, the discussions that I have with customers or partners have progressed, from “what is ESXi” to “why should we use ESXi” to “we are using or planning to use ESXi”. For a platform software, it is doing very well since it needs to build its ecosystem.
  • #16: We can say that vSphere 4.1 is the release for ESXi. In this release ESXi takes center stage; 4.1 is our strongest message that we are moving toward ESXi as the sole hypervisor. A lot of customers, even some of the largest deployments, have decided to go with ESXi going forward. If your customers have not, 4.1 is a good opportunity for you to offer migration services or a hardware refresh. As SEs, we also know that there are some features we wished we had in the 4.0 release. For example, while the remote CLI helps, none of the Linux commands work as expected because the execution context is the vMA OS, not the ESXi kernel, and in some troubleshooting scenarios customers do need to issue Linux commands. Another thing: we could not do automated installation or boot from the network.
  • #20: One of the most popular requests among customers is to improve the deployment and management of ESXi. First in line: boot from SAN is now fully supported in ESXi 4.1; it was only experimentally supported in ESXi 4.0. Boot from SAN will be supported for FC, iSCSI, and FCoE. For iSCSI and FCoE, it will depend upon hardware qualification, so please check the HCL and Release Notes when vSphere 4.1 is released. Dependent hardware iSCSI means the card depends on VMware networking and on iSCSI configuration and management interfaces provided by VMware, so properties like IP, MAC, and other parameters used for the iSCSI sessions are configured from the VMware GUI/CLI. http://www.vmware.com/resources/compatibility/info.php?deviceCategory=san&mode=san_introduction For the ESXi text installer we have a screen that warns if the user is trying to install the image onto an existing datastore; it will not prevent the user from installing if he/she desires to do so. For scripted install, unless the user specifies an override VMFS flag, the scripted install will not proceed when a user tries to install on an existing datastore. We only support booting a host from a unique LUN; this LUN *cannot be* shared by other hosts. The user is expected to set proper LUN masking to avoid this scenario; if the LUNs were shared it could result in data corruption. ----------- copied from 3rd party site: iSCSI SW boot: the only currently supported network card is the Broadcom 57711 10GbE NIC. When booting from software iSCSI, the boot firmware on the network adapter logs into an iSCSI target. The firmware then saves the network and iSCSI boot parameters in the iBFT, which is stored in the host's memory. Before you can use iBFT you need to configure the boot order in your server's BIOS so the iBFT NIC comes before all other devices. You then need to configure the iSCSI settings and CHAP authentication in the BIOS of the NIC before you can use it to boot ESXi. The ESXi installation media has special iSCSI initialization scripts that use iBFT to connect to the iSCSI target and present it to the BIOS. Once you select the iSCSI target as your boot device, the installer copies the boot image to it. Once the media is removed and the host rebooted, the iSCSI target is used to boot, and the initialization script runs in first-boot mode, which configures the networking, which afterwards is persistent.
  • #21: Second features we have implemented is more choice during install. We can now do PXE boot, and we can script it too.Scripted Installation, the equivalent of Kickstart, is now available. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode or in Python. The Tech Support Mode shell is a highly stripped down version of bash.You can start the scripted installation with a CD-ROM drive or over the network by using PXE booting. You cannot use scripted installation to install ESXi to a USB device
  • #22: The media depot is a network-accessible location that contains the ESXi installation media. You can use HTTP/HTTPS, FTP, or NFS to access the depot. The depot must be populated with the entire contents of the ESXi installation DVD, preserving directory structure.If you are performing a scripted installation, you must point to the media depot in the script by including the install command with the nfs or url option.The following code snippet from an ESXi installation script demonstrates how to format the pointer to the media depot if you are using NFS:install nfs --server=example.com --dir=/nfs3/VMware/ESXi/41
  • #23: The preboot execution environment (PXE) is an environment to boot computers using a network interface, independently of available data storage devices or installed operating systems. These topics discuss the PXELINUX and gPXE methods of PXE booting the ESXi installer. PXE uses DHCP and Trivial File Transfer Protocol (TFTP) to bootstrap an operating system (OS) over a network. Network booting with PXE is similar to booting with a DVD, but it requires some network infrastructure and a machine with a PXE-capable network adapter. Once the ESXi installer is booted, it works like a DVD-based installation, except that the location of the ESXi installation media (the contents of the ESXi DVD) must be specified. A host first makes a DHCP request to configure its network adapter and then downloads and executes a kernel and support files. PXE booting the installer provides only the first step to installing ESXi. To complete the installation, you must provide the contents of the ESXi DVD either locally or on a networked server through HTTP/HTTPS, FTP, or NFS. TFTP is a light-weight version of the FTP service and is typically used only for network booting systems or loading firmware on network devices such as routers. If you do not use gPXE, you might experience issues while booting the ESXi installer on a heavily loaded network, because TFTP is not a robust protocol and is sometimes unreliable for transferring large amounts of data. If you use gPXE, only the gpxelinux.0 binary and configuration file are transferred via TFTP; gPXE enables you to use a Web server for transferring the kernel and ramdisk required to boot the ESXi installer. If you use PXELINUX without gPXE, the pxelinux.0 binary, the configuration file, and the kernel and ramdisk are transferred via TFTP. Setting up a new DHCP server is not recommended if your network already has one. If multiple DHCP servers respond to DHCP requests, machines can obtain incorrect or conflicting IP addresses, or can fail to receive the proper boot information. Seek the guidance of a network administrator in your organization before setting up a DHCP server.
  • #24: Scripted Installation, the equivalent of Kickstart, will be supported on ESXi 4.1. The installer can boot over the network, and at that point you can also do an interactive installation, or else set it up to do a scripted installation. Both the installed image and the config file (called “ks.cfg”) can be obtained over the network using a variety of protocols. There is also an ability to specify preinstall, postinstall, and first-boot scripts. For example, the postinstall script can configure all the host settings, and the first boot script could join the host to vCenter. These three types of scripts run either in the context of the Tech Support Mode shell (which is a highly stripped down version of bash) or in Python.
  • #25: The firstboot scripts are run as initscripts. All initscripts have a numerical part in their filenames. They are sorted by that numerical part to determine the order in which they are run. So a script with "90.1" would run after a script with "90.0" and before a script with "90.2"
  • #26: Finally, the Tech Support Mode is fully supported. We support both the local, when you are in front of the server, or remote, when you are using SSH.In ESXi 4.0, Tech Support Mode usage was ambiguous. We stated that you should only use it with guidance from VMware Support, but VMware also issued several KBs telling customers how to use it. Getting into Tech Support Mode was also not very user-friendly.The warning not to use TSM has been removed from the login screen. However, anytime TSM is enabled (either local or remote), a warning banner will appear in vSphere Client for that host. This is meant to reinforce the recommendation that TSM only be used for fixing problems, not on a routine basis.The SysAdminTools URL in the message above will take you to vMA, PowerCLI, CLI, etc.
  • #27: To enable or disable from the console, it’s pretty straight forward. By default, after you enable TSM (both local and remote), they will automatically become disabled after 10 minutes. This time is configurable, and the timeout can also be disabled entirely. When TSM times out, running sessions are not terminated, allowing you to continue a debugging session. All commands issued in TSM are logged by hostd and sent to syslog, allowing for an incontrovertible audit trail.When lockdown mode is enabled, DCUI access is restricted to the root user (so root can still go in), while access to Tech Support Mode is completely disabled for all users. With lockdown mode enabled, access to the host for management or monitoring using CIM is possible only through vCenter. Direct access to the host using the vSphere Client is not permitted.
  • #28: As you know, the tech support mode is not for day to day use. So anytime it is enabled, we will flag it.
  • #29: We can also enable it via the GUI. You select the ESXi you want to manage, then click on the “Configuration” tab. From here, click on the “Security Profile”. Clicking on the properties brings up this dialog box. From here, we can stop and start the relevant services.
  • #30: Procedure:1 Log in to the host from the vSphere Client.2 From the Configuration tab, select Advanced Settings.3 From the Advanced Settings window, select Annotations.4 Enter a security message.The message is displayed on the direct console Welcome screen.
  • #31: There is now an ability to totally lock down a host. Lockdown mode in ESXi 4.1 forces all remote access to go through vCenter. So Lockdown mode is only available on ESXi hosts that have been added to vCenter.
  • #32: The only local access is for root to access the DCUI – this could be used, for example, to turn off lockdown mode in case vCenter is down. However, there is an option to disable the DCUI in vCenter. In this case, with lockdown mode turned on, there is no possible way to manage the host directly; everything must be done through vCenter. If vCenter is down, the only recourse in this case is to reimage the box. Of course, lockdown mode can be selectively disabled for a host if there is a need to troubleshoot or fix it via TSM, and then enabled again.
  • #33: vscsiStats has also been ported and is now available directly in the ESXi console. It is an advanced command and can be used to identify IO patterns.
  • #34: Other useful utilities for troubleshooting have been added to TSM
  • #47: You can add multiple USB devices, such as security dongles and mass storage devices, to a VMthat resides on an ESX/ESXi host to which the devices are physically attached. Knowledge of devicecomponents and their behavior, VM requirements, feature support, and ways to avoid data losscan help make USB device passthrough from an ESX/ESXi host to a VM successful.When you attach a USB device to a physical host, the device is available only to VMs that resideon that host. Those VMs cannot connect to a device on another host in the datacenter.A USB device is available to only one VM at a time. When you remove a device from a virtualmachine, it becomes available to other VMs that reside on the host.USB Arbitrator Manages connection requests and routes USB device traffic. The arbitrator isinstalled and enabled by default on ESX/ESXi hosts. It scans the host for USBdevices and manages device connection among VMs that reside onthe host. It routes device traffic to the correct VM instance fordelivery to the guest OS. The arbitrator monitors the USB deviceand prevents other VMs from using it until you release it from theVM it is connected to.If vCenter polling is delayed, a device that is connected to one virtualmachine might appear as though it is available to add to another virtualmachine. In such cases, the arbitrator prevents the second VM fromaccessing the USB device.USB Controller The USB hardware chip that provides USB function to the USB ports that itmanages. The virtual USB Controller is the software virtualization of the USBhost controller function in the VM.USB controller hardware and modules that support USB 2.0 and USB 1.1devices must exist on the host. Only one virtual USB controller is available toeach VM. The controller supports multiple USB 2.0 and USB 1.1USB devices in the virtual computer. The controller must be present before youcan add USB devices to the virtual computer.The USB arbitrator can monitor a maximum of 15 USB controllers. Devicesconnected to controllers numbered 16 or greater are not available to the virtualMachineBefore you hot add memory, CPU, or PCI devices, you must remove any USB devices. Hot adding theseresources disconnects USB devices, which might result in data loss.n Before you suspend a VM, make sure that a data transfer is not in progress. During thesuspend/resume process, USB devices behave as if they have been disconnected, then reconnected. Also,if you use vMotion to migrate a VM away from the host that the USB device is attached to, itwon't be reconnected when the VM is resumedFor compound devices, the virtualization process filters out the USB hub so that it is not visible to the virtualmachine. The remaining USB devices in the compound appear to the VM as separate devices. Youcan add each device to the same VM or to different VMs if they reside on the samehost.
  • #49: Another feature that was requested a lot is integration with MS AD. This further simplifies the management of vSphere, as we can now be consistent with vCenter. AD integration provides authentication for all local services: access via the Admin Client, via the console, and via remote console are all based on AD. ESX and ESXi should integrate with MS AD for all user authentication. This effectively removes static information from the ESX host and enables the "plug and play" and "stateless appliance" concepts. Customers do not want to manage user accounts on ESX or ESXi because it is additional work compared to what they would do in a physical environment. It lowers the Opex of managing a VI environment and also competitively positions our platform against Hyper-V, which can do this today. Customers don't want to rely on VC for these functions due to HA of VC.
  • #51: So how do we do it? One way is to select the ESX that you want to add to AD, and choose the “Configuration” tab. From this page, choose the “authentication service” link. Click on the properties link, the dialog box shown on the next slide is shown.
  • #52: From the dialog box that pops up, select “AD” from the drop down.Then specify the Domain name.Then click “Join Domain”. The next dialog box will pop up to let you enter the ID which can join a domain. Click on Join Domain button to join the domain. If there is an error, an error message will be prompted. If not, ESXi will join the domain.
  • #53: I guess a question from customer will be how they can do this automatically, if they have a lot of ESXi and not enough Sys Admin to manage all these things.We have enhanced our host profile. Here is the screen where we can configure the same info in the host profiles.
  • #60: The idea of memory compression is very straightforward: if the swapped out pages can be compressed and stored in a compression cache located in the main memory, the next access to the page only causes a page decompression which can be an order of magnitude faster than the disk access. With memory compression, only a few uncompressible pages need to be swapped out if the compression cache is not full. This means the number of future synchronous swap-in operations will be reduced. Hence, it may improve application performance significantly when the host is in heavy memory pressure. In ESX 4.1, only the swap candidate pages will be compressed. This means ESX will not proactively compress guest pages when host swapping is not necessary. In other words, memory compression does not affect workload performance when host memory is undercommitted. 3.5.1 Reclaiming Memory Through Compression Figure 8 illustrates how memory compression reclaims host memory compared to host swapping. Assuming ESX needs to reclaim two 4KB physical pages from a VM through host swapping, page A and B are the selected pages (Figure 8a). With host swapping only, these two pages will be directly swapped to disk and two physical pages are reclaimed (Figure 8b). However, with memory compression, each swap candidate page will be compressed and stored using 2KB of space in a per-VM compression cache. Note that page compression would be much faster than the normal page swap out operation which involves a disk I/O. Page compression will fail if the compression ratio is less than 50% and the uncompressible pages will be swapped out. As a result, every successful page compression is accounted for reclaiming 2KB of physical memory. As illustrated in Figure 8c, pages A and B are compressed and stored as half-pages in the compression cache. Although both pages are removed from VM guest memory, the actual reclaimed memory size is one page. If any of the subsequent memory access misses in the VM guest memory, the compression cache will be checked first using the host physical page number. If the page is found in the compression cache, it will be decompressed and push back to the guest memory. This page is then removed from the compression cache. Otherwise, the memory request is sent to the host swap device and the VM is blocked. The per-VM compression cache is accounted for by the VM’s guest memory usage, which means ESX will not allocate additional host physical memory to store the compressed pages. The compression cache is transparent to the guest OS. Its size starts with zero when host memory is undercommitted and grows when VM memory starts to be swapped out. If the compression cache is full, one compressed page must be replaced in order to make room for a new compressed page. An age-based replacement policy is used to choose the target page. The target page will be decompressed and swapped out. ESX will not swap out compressed pages. If the pages belonging to compression cache need to be swapped out under severe memory pressure, the compression cache size is reduced and the affected compressed pages are decompressed and swapped out. The maximum compression cache size is important for maintaining good VM performance. If the upper bound is too small, a lot of replaced compressed pages must be decompressed and swapped out. Any following swap-ins of those pages will hurt VM performance. 
However, since compression cache is accounted for by the VM’s guest memory usage, a very large compression cache may waste VM memory and unnecessarily create VM memory pressure especially when most compressed pages would not be touched in the future. In ESX 4.1, the default maximum compression cache size is conservatively set to 10% http://www.vmware.com/files/pdf/techpaper/vsp_41_perf_memory_mgmt.pdfNote that this paper is written based on ESX4.0 memory management paper. Besides the new content introduced in ESX4.1, e.g., memory compression, quite a few places have been updated to represent the state of the art of ESX memory management.
  • #63: What does the counter _really_ mean as it’s an _average_ of a _rate_?
  • #66: Esxtop also has a power view “p”
  • #67: (2) The feature of displaying per-VM power consumption is experimental and off by default. It can be turned on with an advanced config option as the paragraph describes. The per-VM power consumption feature is dependent on the host power consumption feature.
  • #70: HA and DRS have always been the popular features among our customers. I have quite a number of customers who found that HA is good enough for their SLA and moved from MS clustering. In the 4.1, we have a couple of enhancements in these main features.
  • #71: Give tips on HATypes of cluster: prod, dmz, tier 2, IT cluster, non prod, desktop, why min host is 4This slides give a summary of the new enhancements. As customers adopt more and more virtualisation, we are entering the phase where mission critical workloads are virtualised. With all these enhancements in 4.1, customers may be tempted to create large clusters and put everything there. By large I mean either large no of nodes, or a lot of VMs in single cluster. Personally, I still prefer the traditional approach, where a cluster is really the building block. So we have multiple clusters, with distinct purpose.From the list above, something that I think customers will appreciate is the
  • #72: In the past, customers reported that they very occasionally saw DRS "get it wrong" in the sense that DRS would move VMs based on purely performance criteria with scant regard for the availability anxiety. What this means is, in the past it was possible (if somewhat unlikely) for DRS to place 20 VMs on an ESX host and only put 8 VMs on another. While that may have been a good idea from a performance standpoint, it could lead to scenarios where DRS itself created an "eggs in one basket" scenario, as DRS didn't distribute VMs to prevent one ESX host from becoming overpopulated (and with a bigger VM count) than another. In this scenario, DRS would have to carry out VMotions to free up resources so HA can power on a VM.
  • #78: For Application Monitoring, developers would develop application monitoring agents using the Application Monitoring SDK for specific applications running in the VM. There is support added in VMware Tools for an application to report its heartbeat/status. This gets communicated to vCenter as an "AppHeartbeatStatus" (similar to the "GuestHeartbeatStatus"). HA can respond to that by going red, indicating that the application has died. Thus, Application monitoring would work for those applications that use the new VMware Tools capability along with an application monitoring agent to report application status. To enable Application MonitoringObtain the SDK from VMware (this is for the ISV, not end customers)Use it to set up customized heartbeats for the applications you want to monitor.
  • #81: Since the hypervisor has full control over the execution of a VM, including delivery of all inputs, the hypervisor is able to capture all the necessary information about non-deterministic operations on the primary VM and to replay these operations correctly on the backup VM. The tagging scheme does not introduce any significant delay for the replaying VM, since the hypervisor of the recording (primary) VM guarantees that the last log entry of each single instruction emulation or device operation is marked as a go-live point. Since the backup VM cannot be significantly delayed, the primary VM is also not affected by the use of go-live points.
  • #82: Patches can cause host build numbers to vary between ESX and ESXi installations. To ensure that your hosts are FT compatible, do not mix ESX and ESXi hosts in an FT pair.
  • #84: FT with vSphere 4.1 still has some incompatibilities: thin provisioning and linked clones; hot-plug devices and USB passthrough; IPv6 (as HA does not support it); vSMP; N-Port ID Virtualization (NPIV); Storage vMotion; serial/parallel ports; physical and remote CD/floppy.
  • #85: Business opportunity: migrate customer from clustering (running 2 instance) to FT, where we have higher up time
  • #86: #1: If administrators wanted to move an ESX host from one vCenter instance to a new one (for whatever reason), they usually did not bring the ESX host into maintenance mode. But adding the host to the new vCenter Server without removing it from the previous one caused FT failures. Now the administrator gets a warning, which can be followed or ignored (yes/no), when trying to add an ESX host that is managed by a different vCenter. #2: DRS will vMotion FT-enabled VMs if needed and will place them according to DRS groups and other rules. Storage vMotion is not supported with FT, though. #3: Previously, if an administrator wanted to disable HA he was forced to disable FT first. Now he gets a warning and can decide to override, accepting that FT will not work as expected; following this decision, several FT-related operations are disabled while HA is off.
  • #95: So how do we do it? We can now create 2 types of groups: groups of VMs and groups of ESX hosts. We then map the VM group to the ESX host group.
  • #96: An ESX host can belong to multiple groups?
  • #102: The separate rules now include more than only two VMs. If you select a "Separate" rule and include 5 VMs, you'll need at least 5 ESX hosts to accommodate this rule, as each of them must run on a separate host.
  • #105: vMotion is not a cluster feature; we can vMotion across clusters. Can we vMotion between 2 clusters with different EVC modes? We can try this in the lab. We should be able to vMotion from 4.0 to 4.1, as we can from 3.5 to 4.1.
  • #110: This sounds quite complicated but is easy to understand. Assume a VM was powered on under an older EVC mode and migrated (without powering off) to a cluster with a newer mode (and newer features). In this case the VM is "part" of the new EVC mode but does not use the new features; it still uses the old ones. Previously, if you tried to vMotion this VM to an ESX host with the older EVC mode, vCenter complained about them not being compatible, as the ESX host was not compatible with the current EVC mode the VM was running in. Now it checks which mode the VM itself uses and accepts vMotioning to an older mode, as the VM does not care and is still not using the new features.
  • #112: Earlier Add-Host Error detection: Host-specific incompatibilities are now displayed prior to the Add-Host work-flow when adding a host into an EVC cluster.
  • #116: In the vSphere VM Administration Guide, page 92, VMware writes: "You can verify the CPU settings for the VM on the Resource Allocation tab." But in this menu there is no indication of the multi-core configuration; what do I have to look for? Is it already implemented in the vSphere 4.1 RC? When you configure multicore virtual CPUs for a VM, CPU hot add/remove is disabled. For more information about multicore CPUs, see the vSphere Resource Management Guide. You can also search the VMware KNOVA database for articles about multicore CPUs. http://www.cpuid.com/softwares/cpu-z.html provides a more detailed view within each guest OS. Need to see if we can use Orchestrator or PowerShell to check this.
  • #117: Need to see the PowerCLI and vSphere API to see if we can do this programmatically
  • #118: Note that “Average Capacity” in the report refers to the average capacity of all license keys for that product. Products (e.g. vSphere Enterprise) can have multiple keysEach key has a capacity and usage associated with it.In the screen above:Current capacity is total capacity for all the keysAverage capacity is the average capacity for the keys. For example…Product: vSphere Enterprisekey | capacity | usagexxxx-xxxx—xxxx | 1000 | 500yyyy-xxxx—xxxx | 2000 | 100 For the product, vSphere Enterprise we would report:Total Capacity  - 3000Total Usage – 600Average Usage – 300Average Capacity – 1500