Setup Guide
NOS 3.5
26-Sep-2013
Notice
Copyright
Copyright 2013 Nutanix, Inc.
Nutanix, Inc.
1740 Technology Drive, Suite 400
San Jose, CA 95110
All rights reserved. This product is protected by U.S. and international copyright and intellectual property
laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks
and names mentioned herein may be trademarks of their respective companies.
Conventions
Convention Description
variable_value The action depends on a value that is unique to your environment.
ncli> command The commands are executed in the Nutanix nCLI.
user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell.
root@host# command The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell.
output The information is displayed as output from a command or in a log file.
Default Cluster Credentials
Interface Target Username Password
Nutanix web console Nutanix Controller VM admin admin
vSphere client ESXi host root nutanix/4u
SSH client or console ESXi host root nutanix/4u
SSH client or console KVM host root nutanix/4u
SSH client Nutanix Controller VM nutanix nutanix/4u
IPMI web interface or ipmitool Nutanix node ADMIN ADMIN
IPMI web interface or ipmitool Nutanix node (NX-3000) admin admin
Version
Last modified: September 26, 2013 (2013-09-26-13:03 GMT-7)
Contents
Overview........................................................................................................................... 4
Setup Checklist....................................................................................................................................4
Nonconfigurable Components............................................................................................................. 4
Reserved IP Addresses...................................................................................................................... 5
Network Information............................................................................................................................ 5
Product Mixing Restrictions.................................................................................................................6
Three Node Cluster Considerations....................................................................................................7
1: IP Address Configuration.......................................................................8
To Configure the Cluster.....................................................................................................................8
To Configure the Cluster in a VLAN-Segmented Network............................................................... 11
To Assign VLAN Tags to Nutanix Nodes...............................................................................11
To Configure VLANs (KVM)...................................................................................................12
2: Storage Configuration.......................................................................... 14
To Create the Datastore................................................................................................................... 14
3: vCenter Configuration.......................................................................... 16
To Create a Nutanix Cluster in vCenter........................................................................................... 16
To Add a Nutanix Node to vCenter.................................................................................................. 19
vSphere Cluster Settings...................................................................................................................21
4: Final Configuration............................................................................... 23
To Set the Timezone of the Cluster................................................................................................. 23
To Make Optional Settings................................................................................................................23
Diagnostics VMs................................................................................................................................24
To Run a Test Using the Diagnostics VMs............................................................................24
Diagnostics Output................................................................................................................. 25
To Test Email Alerts......................................................................................................................... 26
To Check the Status of Cluster Services......................................................................................... 27
Appendix A: Manual IP Address Configuration..................................... 28
To Verify IPv6 Link-Local Connectivity............................................................................................. 28
To Configure the Cluster (Manual)................................................................................................... 29
Remote Console IP Address Configuration...................................................................................... 31
To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000).....................31
To Configure the Remote Console IP Address (NX-3000).................................................... 32
To Configure the Remote Console IP Address (NX-2000).................................................... 32
To Configure the Remote Console IP Address (command line)............................................ 33
To Configure Host Networking..........................................................................................................34
To Configure Host Networking (KVM).............................................................................................. 35
To Configure the Controller VM IP Address.....................................................................................36
Overview
This guide provides step-by-step instructions on the post-shipment configuration of a Nutanix Virtual
Computing Platform.
Nutanix support recommends that you review field advisories on the support portal before installing a
cluster.
Setup Checklist
Confirm network settings with customer.
Network Information on page 5
Unpack and rack cluster hardware.
Refer to the Physical Installation Guide for your hardware model
Connect network and power cables.
Refer to the Physical Installation Guide for your hardware model
Assign IP addresses to all nodes in the cluster.
IP Address Configuration on page 8
Configure storage for the cluster.
Storage Configuration on page 14
(vSphere only) Add the vSphere hosts to the customer vCenter.
vCenter Configuration on page 16
Set the timezone of the cluster.
To Set the Timezone of the Cluster on page 23
Make optional configurations.
To Make Optional Settings on page 23
Run a performance diagnostic.
Diagnostics VMs on page 24
Test email alerts.
To Test Email Alerts on page 26
Confirm that the cluster is running.
To Check the Status of Cluster Services on page 27
Nonconfigurable Components
The components listed here are configured by the Nutanix manufacturing process. Do not modify any of
these components except under the direction of Nutanix support.
Caution: Modifying any of the settings listed here may render your cluster inoperable.
In particular, do not under any circumstances use the Reset System Configuration option of
ESXi or delete the Nutanix Controller VM.
Nutanix Software
• Local datastore name
• Settings and contents of any Controller VM, including the name
Important: If you create vSphere resource pools, Nutanix Controller VMs must have the top
share.
ESXi Settings
• NFS settings
• VM swapfile location
• VM startup/shutdown order
• iSCSI software adapter settings
• vSwitchNutanix standard virtual switch
• vmk0 interface in port group "Management Network"
• SSH enabled
• Firewall disabled
KVM Settings
• iSCSI settings
• Open vSwitch settings
Reserved IP Addresses
The Nutanix cluster uses the following IP addresses for internal communication:
• 192.168.5.1
• 192.168.5.2
• 192.168.5.254
Important: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets
that overlap with subnet 192.168.5.0/24.
Network Information
Confirm the following network information with the customer before the new block or blocks are connected
to the customer network.
• 10 Gbps Ethernet ports [NX-3000, NX-3050, NX-6000: 2 per node/8 per block] [NX-2000: 1 per node/4
per block]
• (optional) 1 Gbps Ethernet ports [1-2 per node/4-8 per block]
• 10/100 Mbps Ethernet ports [1 per node/4 per block]
• Default Gateway
• Subnet mask
• (optional) VLAN ID
• NTP servers
• DNS domain
• DNS servers
• Host servers IP Addresses (remote console) [1 per node/4 per block]
• Host servers IP Addresses (hypervisor management) [1 per node/4 per block]
• Nutanix Controller VMs IP addresses [1 per node/4 per block]
• Reverse SSH port (outgoing connection to nsc.nutanix.com) [default 80]
• (optional) HTTP proxy for reverse SSH port
Product Mixing Restrictions
While a Nutanix cluster can include different products, there are some restrictions.
Caution: Do not configure a cluster that violates any of the following rules.
Compatibility Matrix
             NX-1000  NX-2000  NX-2050  NX-3000  NX-3050  NX-6000
NX-1000 (1)     •        •        •        •        •        •
NX-2000         •        •        •        •        •
NX-2050         •        •        •        •        •        •
NX-3000         •        •        •        •        •        •
NX-3050         •        •        •        •        •        •
NX-6000 (2)     •                 •        •        • (3)    •
1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10
GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using
the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface,
the cluster has no limits other than the maximum supported cluster size that applies to all products.
2. NX-6000 nodes cannot be mixed with NX-2000 nodes in the same cluster.
3. Because it has a larger flash tier, the NX-3050 is the recommended product to mix with the NX-6000.
• Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same
cluster.
• All nodes in a cluster must be the same hypervisor type (ESXi or KVM).
• All Controller VMs in a cluster must have the same NOS version.
• Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified
above. However, because the NX-2000 processor architecture differs from other models, vSphere does
not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion
Compatibility (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and
the following VMware knowledge base articles:
• Enhanced vMotion Compatibility (EVC) processor support [1003212]
• EVC and CPU Compatibility FAQ [1005764]
Three Node Cluster Considerations
A Nutanix cluster must have at least three nodes. Minimum configuration (three node) clusters provide the
same protections as larger clusters, and a three node cluster can continue normally after a node failure.
However, one condition applies to three node clusters only.
When a node failure occurs in a cluster containing four or more nodes, you can dynamically remove that
node to bring the cluster back into full health. The newly configured cluster will still have at least three
nodes, so the cluster is fully protected. You can then replace the failed hardware for that node as needed
and add the node back into the cluster as a new node. However, when the cluster has just three nodes,
the failed node cannot be dynamically removed, because removing it would leave only two healthy nodes.
The cluster continues running without interruption on the two healthy nodes and one failed node, but it is
not fully protected until you fix the problem (such as replacing a bad root disk) on the failed node.
1
IP Address Configuration
NOS includes a web-based configuration tool that automates assigning new IP addresses to the Controller
VMs and configures the cluster to use those addresses. Other cluster components must be configured
manually.
Requirements
The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is
not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based
configuration tool also requires that the Controller VMs be able to communicate with each other.
All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected,
Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts.
Guest VMs can be on a different subnet.
To Configure the Cluster
Before you begin.
• Confirm that the system you are using to configure the cluster meets the following requirements:
• IPv6 link-local enabled.
• Windows 7, Windows Vista, or Mac OS.
• (Windows only) Bonjour installed (included with iTunes or downloadable from http://support.apple.com/kb/DL999).
• Determine the IPv6 service of any Controller VM in the cluster.
IPv6 service names are uniquely generated at the factory and have the following form (note the final
period):
NTNX-block_serial_number-node_location-CVM.local.
On the right side of the block toward the front is a label that has the block_serial_number (for example,
12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/
NX-3050, or a letter A-B for NX-6000.
If you need to confirm whether IPv6 link-local is enabled on the network, or if you do not have access to the
block serial number, see the Nutanix support knowledge base for alternative methods.
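Before opening a browser, you can optionally confirm that your workstation resolves the IPv6 service name of a Controller VM. The following is a minimal, hedged check that assumes the example block serial number 12AM3K520060 and node position A; .local name resolution depends on Bonjour/mDNS working on the workstation, so a failed ping here is not conclusive by itself.
→ Windows
> ping -6 NTNX-12AM3K520060-A-CVM.local.
→ Mac OS
$ ping6 NTNX-12AM3K520060-A-CVM.local.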
1. Open a web browser.
Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS.
Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet
Options > Security, clear the Enable Protected Mode check box, and restart the browser.
2. Navigate to http://cvm_host_name:2100/cluster_init.html.
Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the
cluster.
Following is an example URL to access the cluster creation page on a Controller VM:
http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html
If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a
Controller VM that is not part of a cluster.
3. Type a meaningful value in the Cluster Name field.
This value is appended to all automated communication between the cluster and Nutanix support. It
should include the customer's name and if necessary a modifier that differentiates this cluster from any
other clusters that the customer might have.
Note: This entity has the following naming restrictions:
• The maximum length is 75 characters.
• Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z),
decimal digits (0-9), dots (.), hyphens (-), and underscores (_).
4. Type the appropriate DNS and NTP addresses in the respective fields.
5. Type the appropriate subnet masks in the Subnet Mask row.
6. Type the appropriate default gateway IP addresses in the Default Gateway row.
7. Select the check box next to each node that you want to add to the cluster.
All unconfigured nodes on the current network are presented on this web page. If you will be configuring
multiple clusters, be sure that you only select the nodes that should be part of the current cluster.
8. Provide an IP address for all components in the cluster.
Note: The unconfigured nodes are not listed according to their position in the block. Ensure
that you assign the intended IP address to each node.
9. Click Create.
Wait until the Log Messages section of the page reports that the cluster has been successfully
configured.
Output similar to the following indicates successful cluster configuration.
Configuring IP addresses on node 12AM2K420010/A...
Configuring IP addresses on node 12AM2K420010/B...
Configuring IP addresses on node 12AM2K420010/C...
Configuring IP addresses on node 12AM2K420010/D...
Configuring Zeus on node 12AM2K420010/A...
Configuring Zeus on node 12AM2K420010/B...
Configuring Zeus on node 12AM2K420010/C...
Configuring Zeus on node 12AM2K420010/D...
Initializing cluster...
Cluster successfully initialized!
Initializing the cluster DNS and NTP servers...
Successfully updated the cluster NTP and DNS server list
10. Log on to any Controller VM in the cluster with SSH.
11. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
To Configure the Cluster in a VLAN-Segmented Network
The automated IP address and cluster configuration utilities depend on Controller VMs being able to
communicate with each other. If the customer network is segmented using VLANs, that communication is
not possible until the Controller VMs are assigned to a valid VLAN.
Note: The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If
IPv6 link-local is not available, see To Configure the Cluster (Manual) on page 29.
1. Configure the IPMI IP addresses by following the procedure for your hardware model.
→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31
→ To Configure the Remote Console IP Address (NX-3000) on page 32
→ To Configure the Remote Console IP Address (NX-2000) on page 32
Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure
the Remote Console IP Address (command line) on page 33.
2. Configure the hypervisor host IP addresses.
→ vSphere: To Configure Host Networking on page 34
→ KVM: To Configure Host Networking (KVM) on page 35
3. Assign VLAN tags to the hypervisor hosts and Controller VMs by following the procedure for your hypervisor.
→ vSphere: To Assign VLAN Tags to Nutanix Nodes on page 11
→ KVM: To Configure VLANs (KVM) on page 12
4. Configure the Controller VM IP addresses and the cluster using the automated utilities by following To
Configure the Cluster on page 8 .
To Assign VLAN Tags to Nutanix Nodes
1. Assign the ESXi hosts to the pre-defined host VLAN.
a. Access the ESXi host console.
b. Press F2 and then provide the ESXi host logon credentials.
c. Press the down arrow key until Configure Management Network is highlighted and then press
Enter.
d. Select VLAN (optional) and press Enter.
e. Type the VLAN ID specified by the customer and press Enter.
f. Press Esc and then Y to apply all changes and restart the management network.
g. Repeat this process for all remaining ESXi hosts.
2. Assign the Controller VMs to the pre-defined virtual machine VLAN.
a. Log on to an ESXi host with the vSphere client.
b. Select the host and then click the Configuration tab.
c. Click Networking.
d. Click the Properties link above vSwitch0.
e. Select VM Network and then click Edit.
f. Type the VLAN ID specified by the customer and click OK.
g. Click Close.
h. Repeat this process for all remaining ESXi hosts.
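As an alternative to the vSphere client steps above, the same VLAN assignments can be made from the ESXi shell on each host. The following is a hedged sketch using the standard esxcli port group commands; host_vlan_tag and vm_vlan_tag are placeholders for the customer-specified VLAN IDs, and changing the Management Network VLAN over SSH may briefly interrupt your session.
root@esx# esxcli network vswitch standard portgroup set --portgroup-name="Management Network" --vlan-id=host_vlan_tag
root@esx# esxcli network vswitch standard portgroup set --portgroup-name="VM Network" --vlan-id=vm_vlan_tag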
To Configure VLANs (KVM)
In an environment with separate VLANs for hosts and guest VMs, VLAN tagging is configured differently
for each type of VLAN.
Perform these steps on every KVM host in the cluster.
1. Log on to the KVM host with SSH.
2. Configure VLAN tagging on the host interface.
a. Set the tag for the bridge.
root@kvm# ovs-vsctl set port br0 tag=host_vlan_tag
Replace host_vlan_tag with the VLAN tag for hosts.
b. Confirm VLAN tagging on the interface.
root@kvm# ovs-vsctl list port br0
Check the value of the tag parameter that is shown.
3. Configure VLAN tagging for guest VMs.
a. Copy the existing network configuration and open the configuration file.
root@kvm# virsh net-dumpxml VM-Network > /tmp/network.xml
root@kvm# vi /tmp/network.xml
b. Update the configuration file to describe the new network.
• Delete the uuid and mac parameters.
• Change the name and portgroup name parameters.
• Add a vlan tag element with the ID for the guest VM VLAN.
The resulting configuration file should look like this.
<network connections='1'>
<name>new_network_name</name>
<forward mode='bridge'/>
<bridge name='br0' />
<virtualport type='openvswitch'/>
<portgroup name='new_network_name' default='yes'>
</portgroup>
<vlan>
<tag id="vm_vlan_tag"/>
</vlan>
</network>
• Replace new_network_name with the desired name for the network. Ensure that both instances
of this parameter match.
• Replace vm_vlan_tag with the VLAN tag for guest VMs.
c. Create the new network.
root@kvm# virsh net-define /tmp/network.xml
d. Start the new network.
root@kvm# virsh net-start new_network_name
root@kvm# virsh net-autostart new_network_name
e. Confirm that the new network is running.
root@kvm# virsh net-list --all
To create a VM on this VLAN, specify new_network_name instead of VM-Network.
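For example, to attach an existing guest VM to the new network with virsh (a hedged illustration; example_vm is a hypothetical VM name):
root@kvm# virsh attach-interface --domain example_vm --type network --source new_network_name --model virtio --config --live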
2
Storage Configuration
By the conclusion of the setup process, you will have created the following entities:
• 1 storage pool that comprises all physical disks in the cluster.
• 1 container that uses all available storage capacity in the pool.
• 1 NFS datastore that is mounted from all hosts in the cluster.
A single datastore comprising all available cluster storage will suit the needs of most customers. If the
customer requests additional NFS datastores during setup, you can create the necessary containers,
and then mount them as datastores. For future datastore needs, refer the customer to the Nutanix
Administration Guide.
To Create the Datastore
1. Sign in to the Nutanix web console.
2. In the Storage dashboard, click the Storage Pool button.
The Create Storage Pool dialog box appears.
3. In the Name field, enter a name for the storage pool.
• For vSphere clusters, name the storage pool sp1.
• For KVM clusters, name the storage pool default.
4. In the Capacity field, check the box to use the available unallocated capacity for this storage pool.
5. When all the field entries are correct, click the Save button.
6. In the Storage dashboard, click the Container button.
The Create Container dialog box appears.
7. Create the container.
Do the following in the indicated fields:
a. Name: Enter a name for the container.
• For vSphere clusters, name the container nfs-ctr.
• For KVM clusters, name the container default.
b. Storage Pool: Select the sp1 (vSphere) or default (KVM) storage pool from the drop-down list.
The following field, Max Capacity, displays the amount of free space available in the selected
storage pool.
c. NFS Datastore: Select the Mount on all hosts button to mount the container on all hosts.
d. When all the field entries are correct, click the Save button.
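If you prefer the command line, the storage pool, container, and NFS datastore can also be created through the nCLI. The following is a hedged sketch for a vSphere cluster using the sp1 and nfs-ctr names above; the parameter names are from memory and should be verified against the Command Reference for your NOS version before use.
ncli> storagepool create name="sp1" add-all-free-disks="true"
ncli> container create name="nfs-ctr" sp-name="sp1"
ncli> datastore create name="nfs-ctr" ctr-name="nfs-ctr"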
3
vCenter Configuration
VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in
vCenter must be configured according to Nutanix best practices.
While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on
the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard
procedures for vSphere.
To Create a Nutanix Cluster in vCenter
1. Log on to vCenter with the vSphere client.
2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New >
Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the
next step.
You can also create the Nutanix cluster within an existing datacenter.
3. Right-click the datacenter node and select New Cluster.
4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster.
5. Select the Turn on vSphere HA check box and click Next.
6. Select Admission Control > Enable.
7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare
capacity, enter the percentage appropriate for the number of Nutanix nodes in the cluster (see the
following table), and then click Next.
Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage
1 N/A 9 23% 17 18% 25 16%
2 N/A 10 20% 18 17% 26 15%
3 33% 11 18% 19 16% 27 15%
4 25% 12 17% 20 15% 28 14%
5 20% 13 15% 21 14% 29 14%
6 18% 14 14% 22 14% 30 13%
7 15% 15 13% 23 13% 31 13%
8 13% 16 13% 24 13% 32 13%
8. Click Next on the following three pages to accept the default values.
• Virtual Machine Options
• VM monitoring
• VMware EVC
9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is
selected and click Next.
10. Review the settings and then click Finish.
11. Add all Nutanix nodes to the vCenter cluster inventory.
See To Add a Nutanix Node to vCenter on page 19.
12. Right-click the Nutanix cluster node and select Edit Settings.
13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise,
proceed to the next step.
Note: vSphere HA and DRS must be configured even if the customer does not plan to use
the features. The settings will be preserved within the vSphere cluster configuration, so if the
customer later decides to enable the feature, it will be pre-configured based on Nutanix best
practices.
14. Configure vSphere HA.
a. Select vSphere HA > Virtual Machine Options.
b. Change the VM restart priority of all Controller VMs to Disabled.
Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand
the Virtual Machine column to view the entire VM name.
c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.
d. Select vSphere HA > VM Monitoring
e. Change the VM Monitoring setting for all Controller VMs to Disabled.
f. Select vSphere HA > Datastore Heartbeating.
g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS).
h. If the cluster has only one datastore as recommended, click Advanced Options, add an Option
named das.ignoreInsufficientHbDatastore with Value of true, and click OK.
i. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
15. Configure vSphere DRS.
a. Select vSphere DRS > Virtual Machine Options.
b. Change the Automation Level setting of all Controller VMs to Disabled.
c. Select vSphere DRS > Power Management.
d. Confirm that Off is selected as the default power management for the cluster.
e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise,
proceed to the next step.
16. Click OK to close the cluster settings window.
To Add a Nutanix Node to vCenter
The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on
page 21.
Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster
components.
1. Log on to vCenter with the vSphere client.
2. Right-click the cluster and select Add Host.
3. Type the IP address of the ESXi host in the Host field.
4. Enter the ESXi host logon credentials in the Username and Password fields.
5. Click Next.
If a security or duplicate management alert appears, click Yes.
6. Review the Host Summary page and click Next.
7. Select a license to assign to the ESXi host and click Next.
8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next.
Lockdown mode is not supported.
9. Click Finish.
10. Select the ESXi host and click the Configuration tab.
11. Configure DNS servers.
a. Click DNS and Routing > Properties.
b. Select Use the following DNS server address.
c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and
click OK.
12. Configure NTP servers.
a. Click Time Configuration > Properties > Options > NTP Settings > Add.
b. Type the NTP server address.
Add multiple NTP servers if required.
c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
d. Click Time Configuration > Properties > Options > General.
e. Select Start automatically under Startup Policy.
f. Click Start.
g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows.
13. Click Storage and confirm that NFS datastores are mounted.
14. Set the Controller VM to start automatically when the ESXi host is powered on.
a. Click the Configuration tab.
b. Click Virtual Machine Startup/Shutdown in the Software frame.
c. Select the Controller VM and click Properties.
d. Ensure that the Allow virtual machines to start and stop automatically with the system check
box is selected.
e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the
Automatic Startup section.
f. Click OK.
15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local
datastore.
If it is not correct, click Properties to update the location.
vSphere Cluster Settings
Certain vSphere cluster settings are required for Nutanix clusters.
vSphere HA and DRS must be configured even if the customer does not plan to use these features. The
settings are preserved within the vSphere cluster configuration, so if the customer later decides to enable
a feature, it will be pre-configured based on Nutanix best practices.
vSphere HA Settings
Enable host monitoring.
Enable admission control and use the percentage-based policy with a value based on the
number of nodes in the cluster.
Set the VM Restart Priority of all Controller VMs to Disabled.
Set the Host Isolation Response of all Controller VMs to Leave Powered On.
Disable VM Monitoring for all Controller VMs.
Enable Datastore Heartbeating by clicking Select only from my preferred datastores and
choosing the Nutanix NFS datastore.
If the cluster has only one datastore, add an advanced option
das.ignoreInsufficientHbDatastore=true.
vSphere DRS Settings
Disable automation on all Controller VMs.
Leave power management disabled (set to Off).
Other Cluster Settings
Store VM swapfiles in the same directory as the virtual machine.
(NX-2000 only) Store host cache on the local datastore.
Failover Reservation Percentages
Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage
1 N/A 9 23% 17 18% 25 16%
2 N/A 10 20% 18 17% 26 15%
3 33% 11 18% 19 16% 27 15%
4 25% 12 17% 20 15% 28 14%
5 20% 13 15% 21 14% 29 14%
6 18% 14 14% 22 14% 30 13%
7 15% 15 13% 23 13% 31 13%
8 13% 16 13% 24 13% 32 13%
4
Final Configuration
The final steps in the Nutanix block setup are to set the timezone, make any optional settings, run a
performance diagnostic, test email alerts, and confirm that the cluster is running properly.
To Set the Timezone of the Cluster
1. Log on to any Controller VM in the cluster with SSH.
2. Locate the timezone template for the customer site.
nutanix@cvm$ ls /usr/share/zoneinfo/*
The timezone templates of some common timezones are shown below.
Location Timezone Template
US East Coast /usr/share/zoneinfo/US/Eastern
England /usr/share/zoneinfo/Europe/London
Japan /usr/share/zoneinfo/Asia/Tokyo
3. Copy the timezone template to all Controller VMs in the cluster.
nutanix@cvm$ for i in `svmips`; do echo $i; ssh $i "sudo cp template_path /etc/localtime; date"; done
Replace template_path with the location of the desired timezone template.
If a host authenticity warning is displayed, type yes to continue connecting.
The expected output is the IP address of each Controller VM and the current time in the desired
timezone, for example:
192.168.1.200
Fri Jan 25 19:43:32 GMT 2013
To Make Optional Settings
You can make one or more of the following settings if necessary to meet customer requirements.
• Add customer email addresses to alerts.
Web Console
> Alert Email Contacts
nCLI ncli> cluster add-to-email-contacts email-addresses="customer_email"
Replace customer_email with a comma-separated list of customer email addresses
to receive alert messages.
• Specify an outgoing SMTP server.
Web Console
> SMTP Server
nCLI ncli> cluster set-smtp-server address="smtp_address"
Replace smtp_address with the IP address or name of the SMTP server to use for
alert messages.
• If the site security policy allows the remote support tunnel, enable it.
Warning: Failing to enable remote support prevents Nutanix support from directly addressing
cluster issues. Nutanix recommends that all customers allow email alerts at minimum because
it allows proactive support of customer issues.
Web Console
> Remote Support Services > Enable
nCLI ncli> cluster start-remote-support
• If the site security policy does not allow email alerting, disable it.
Web Console
> Email Alert Services > Disable
nCLI ncli> cluster stop-email-alerts
Diagnostics VMs
Nutanix provides a diagnostics capability that allows partners and customers to run performance tests on
the cluster. It is a useful tool for pre-sales demonstrations of the cluster and for identifying the source of
performance issues in a production cluster. Diagnostics should also be run as part of setup to ensure that
the cluster is running properly before the customer takes ownership of the cluster.
The diagnostic utility deploys a VM on each node in the cluster. The Controller VMs control the diagnostic
VM on their hosts and report back the results to a single system.
The diagnostics tests provide the following data:
• Sequential write bandwidth
• Sequential read bandwidth
• Random read IOPS
• Random write IOPS
Because the test creates new cluster entities, it is necessary to run a cleanup script when you are finished.
To Run a Test Using the Diagnostics VMs
Before you begin. Use esxtop or vCenter to confirm that the 10 GbE ports are active on the ESXi hosts.
The tests run very slowly if the nodes are not using the 10 GbE ports. For more information about this
known issue with ESXi 5.0 update 1, see VMware KB article 2030006.
1. Log on to any Controller VM in the cluster with SSH.
2. Set up the diagnostics test.
→ vSphere
nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup
In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory,
and click Yes to confirm removal.
→ KVM
nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --force
3. Start the diagnostics test.
→ vSphere
nutanix@cvm$ ~/diagnostics/diagnostics.py run
→ KVM
nutanix@cvm$ ~/diagnostics/diagnostics.py --hypervisor kvm --skip_setup run
Include the parameter --default_ncli_password='admin_password' if the Nutanix admin user password
has been changed from the default.
If the command fails with ERROR:root:Zookeeper host port list is not set, refresh the environment
by running source /etc/profile or bash -l and run the command again.
The diagnostic may take up to 15 minutes to complete.
The script performs the following tasks:
1. Installs a diagnostic VM on each node.
2. Creates cluster entities to support the test, if necessary.
3. Runs four performance tests, using the Linux fio utility.
4. Reports the results.
4. Review the results.
5. Remove the entities from this diagnostic.
→ vSphere
nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup
In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory,
and click Yes to confirm removal.
→ KVM
nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --cleanup_ctr
Perform these steps for each KVM host.
a. Log on to the KVM host with SSH.
b. Get the diagnostics VM name.
root@kvm# virsh list | grep -i diagnostics
c. Destroy the diagnostics VM.
root@kvm# virsh destroy diagnostics_vm_name
Replace diagnostics_vm_name with the VM name found in the previous step.
Diagnostics Output
System output similar to the following indicates a successful test.
Checking if an existing storage pool can be used ...
Using storage pool sp1 for the tests.
Checking if the diagnostics container exists ... does not exist.
Creating a new container NTNX-diagnostics-ctr for the runs ... done.
Mounting NFS datastore 'NTNX-diagnostics-ctr' on each host ... done.
Deploying the diagnostics UVM on host 172.16.8.170 ... done.
Preparing the UVM on host 172.16.8.170 ... done.
Deploying the diagnostics UVM on host 172.16.8.171 ... done.
Preparing the UVM on host 172.16.8.171 ... done.
Deploying the diagnostics UVM on host 172.16.8.172 ... done.
Preparing the UVM on host 172.16.8.172 ... done.
Deploying the diagnostics UVM on host 172.16.8.173 ... done.
Preparing the UVM on host 172.16.8.173 ... done.
VM on host 172.16.8.170 has booted. 3 remaining.
VM on host 172.16.8.171 has booted. 2 remaining.
VM on host 172.16.8.172 has booted. 1 remaining.
VM on host 172.16.8.173 has booted. 0 remaining.
Waiting for the hot cache to flush ... done.
Running test 'Prepare disks' ... done.
Waiting for the hot cache to flush ... done.
Running test 'Sequential write bandwidth (using fio)' ... bandwidth MBps
Waiting for the hot cache to flush ... done.
Running test 'Sequential read bandwidth (using fio)' ... bandwidth MBps
Waiting for the hot cache to flush ... done.
Running test 'Random read IOPS (using fio)' ... operations IOPS
Waiting for the hot cache to flush ... done.
Running test 'Random write IOPS (using fio)' ... operations IOPS
Tests done.
Note:
• Expected results vary based on the specific NOS version and hardware model used. Refer to
the Release Notes for the values appropriate for your environment.
• The IOPS values reported by the diagnostics script will be higher than the values reported by
the Nutanix management interfaces. This difference is because the diagnostics script reports
physical disk I/O, and the management interfaces show IOPS reported by the hypervisor.
• If the reported values are lower than expected, the 10 GbE ports may not be active. For more
information about this known issue with ESXi 5.0 update 1, see VMware KB article 2030006.
To Test Email Alerts
1. Log on to any Controller VM in the cluster with SSH.
2. Send a test email.
nutanix@cvm$ ~/serviceability/bin/email-alerts \
 --to_addresses="support@nutanix.com, customer_email" \
 --subject="[alert test] `ncli cluster get-params`"
Replace customer_email with a customer email address that receives alerts.
3. Confirm with Nutanix support that the email was received.
To Check the Status of Cluster Services
Verify that all services are up on all Controller VMs.
nutanix@cvm$ cluster status
If the cluster is running properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
Appendix A
Manual IP Address Configuration
To Verify IPv6 Link-Local Connectivity
The automated IP address and cluster configuration utilities depend on IPv6 link-local addresses, which
are enabled on most networks. Use this procedure to verify that IPv6 link-local is enabled.
1. Connect two Windows, Linux, or Apple laptops to the switch to be used.
2. Disable any firewalls on the laptops.
3. Verify that each laptop has an IPv6 link-local address.
→ Windows (Control Panel)
Start > Control Panel > View network status and tasks > Change adapter settings > Local
Area Connection > Details
→ Windows (command-line interface)
> ipconfig
Ethernet adapter Local Area Connection:
Connection-specific DNS Suffix . : corp.example.com
Link-local IPv6 Address . . . . . : fe80::ed67:9a32:7fc4:3be1%12
IPv4 Address. . . . . . . . . . . : 172.16.21.11
Subnet Mask . . . . . . . . . . . : 255.240.0.0
Default Gateway . . . . . . . . . : 172.16.0.1
→ Linux
$ ifconfig eth0
eth0 Link encap:Ethernet HWaddr 00:0c:29:dd:e3:0b
inet addr:10.2.100.180 Bcast:10.2.103.255 Mask:255.255.252.0
inet6 addr: fe80::20c:29ff:fedd:e30b/64 Scope:Link
UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1
RX packets:2895385616 errors:0 dropped:0 overruns:0 frame:0
TX packets:3063794864 errors:0 dropped:0 overruns:0 carrier:0
collisions:0 txqueuelen:1000
RX bytes:2569454555254 (2.5 TB) TX bytes:2795005996728 (2.7 TB)
→ Mac OS
$ ifconfig en0
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500
ether 70:56:81:ae:a7:47
inet6 fe80::7256:81ff:feae:a747 en0 prefixlen 64 scopeid 0x4
inet 172.16.21.208 netmask 0xfff00000 broadcast 172.31.255.255
media: autoselect
status: active
Note the IPv6 link-local addresses, which always begin with fe80. Omit the / character and anything
following.
4. From one of the laptops, ping the other laptop.
→ Windows
> ping -6 ipv6_linklocal_addr%interface
→ Linux/Mac OS
$ ping6 ipv6_linklocal_addr%interface
• Replace ipv6_linklocal_addr with the IPv6 link-local address of the other laptop.
• Replace interface with the interface identifier on the other laptop (for example, 12 for Windows, eth0
for Linux, or en0 for Mac OS).
If the ping packets are answered by the remote host, IPv6 link-local is enabled on the subnet. If the
ping packets are not answered, ensure that firewalls are disabled on both laptops and try again before
concluding that IPv6 link-local is not enabled.
5. Reenable the firewalls on the laptops and disconnect them from the network.
Results.
• If IPv6 link-local is enabled on the subnet, you can use the automated IP address and cluster configuration
utilities.
• If IPv6 link-local is not enabled on the subnet, you have to manually create the cluster by following To
Configure the Cluster (Manual) on page 29, which includes manually setting IP addresses.
To Configure the Cluster (Manual)
Use this procedure if IPv6 link-local is not enabled on the subnet.
1. Configure the IPMI IP addresses by following the procedure for your hardware model.
→ To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31
→ To Configure the Remote Console IP Address (NX-3000) on page 32
→ To Configure the Remote Console IP Address (NX-2000) on page 32
Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure
the Remote Console IP Address (command line) on page 33.
2. Configure networking on the node by following the hypervisor-specific procedure.
→ vSphere: To Configure Host Networking on page 34
→ KVM: To Configure Host Networking (KVM) on page 35
3. Configure the Controller VM IP addresses by following To Configure the Controller VM IP Address on
page 36.
4. Log on to any Controller VM in the cluster with SSH.
5. Create the cluster.
nutanix@cvm$ cluster -s cvm_ip_addrs create
Replace cvm_ip_addrs with a comma-separated list of Controller VM IP addresses. Include all
Controller VMs that will be part of the cluster.
For example, if the new cluster should comprise all four nodes in a block, include all the IP addresses of
all four Controller VMs.
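For example, the following command (using hypothetical Controller VM addresses) creates a four-node cluster:
nutanix@cvm$ cluster -s 172.16.8.191,172.16.8.192,172.16.8.193,172.16.8.194 create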
6. Start the Nutanix cluster.
nutanix@cvm$ cluster start
If the cluster starts properly, output similar to the following is displayed for each node in the cluster:
CVM: 172.16.8.167 Up, ZeusLeader
Zeus UP [3148, 3161, 3162, 3163, 3170, 3180]
Scavenger UP [3333, 3345, 3346, 11997]
ConnectionSplicer UP [3379, 3392]
Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447]
Medusa UP [3488, 3501, 3502, 3523, 3569]
DynamicRingChanger UP [4592, 4609, 4610, 4640]
Pithos UP [4613, 4625, 4626, 4678]
Stargate UP [4628, 4647, 4648, 4709]
Cerebro UP [4890, 4903, 4904, 4979]
Chronos UP [4906, 4918, 4919, 4968]
Curator UP [4922, 4934, 4935, 5064]
Prism UP [4939, 4951, 4952, 4978]
AlertManager UP [4954, 4966, 4967, 5022]
StatsAggregator UP [5017, 5039, 5040, 5091]
SysStatCollector UP [5046, 5061, 5062, 5098]
7. Set the name of the cluster.
nutanix@cvm$ ncli cluster edit-params new-name=cluster_name
Replace cluster_name with a name for the cluster chosen by the customer.
8. Configure the DNS servers.
nutanix@cvm$ ncli cluster add-to-name-servers servers="dns_server"
Replace dns_server with the IP address of a single DNS server or with a comma-separated list of DNS
server IP addresses.
Manual IP Address Configuration | Setup Guide | NOS 3.5 | 31
9. Configure the NTP servers.
nutanix@cvm$ ncli cluster add-to-ntp-servers servers="ntp_server"
Replace ntp_server with the IP address or host name of a single NTP server or with a comma-separated
list of NTP server IP addresses or host names.
Remote Console IP Address Configuration
The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host
and monitor its operation. To enable remote access to the console of each host, you must configure the
IPMI settings within BIOS.
The Nutanix cluster provides a Java application to remotely view the console of each node, or host server.
You can use this console to configure additional IP addresses in the cluster.
The procedure for configuring the remote console IP address is slightly different for each hardware
platform.
To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the IPMI tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration Address source and press Enter.
6. Select Static and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup
utility.
The node restarts.
To Configure the Remote Console IP Address (NX-3000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Server Mgmt tab.
4. Press the down arrow key until BMC network configuration is highlighted and then press Enter.
5. Select Configuration source and press Enter.
6. Select Static on next reset and press Enter.
7. Assign the Station IP address, Subnet mask, and Router IP address.
8. Press F10 to save the configuration changes.
9. Review the settings and then press Enter.
The node restarts.
To Configure the Remote Console IP Address (NX-2000)
1. Connect a keyboard and monitor to a node in the Nutanix block.
2. Restart the node and press Delete to enter the BIOS setup utility.
You will have a limited amount of time to enter BIOS before the host completes the restart process.
3. Press the right arrow key to select the Advanced tab.
4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter.
5. Select Set LAN Configuration and press Enter.
6. Select Static to assign an IP address, subnet mask, and gateway address.
7. Press F10 to save the configuration changes.
8. Review the settings and then press Enter.
9. Restart the node.
To Configure the Remote Console IP Address (command line)
You can configure the management interface from the hypervisor host on the same node.
Perform these steps once from each hypervisor host in the cluster where the management network
configuration needs to be changed.
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Set the networking parameters.
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr
root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway
3. Show current settings.
root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1
root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1
Confirm that the parameters are set to the correct values.
To Configure Host Networking
You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node.
1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials.
2. Press the down arrow key until Configure Management Network is highlighted and then press Enter.
3. Select Network Adapters and press Enter.
4. Ensure that the connected network adapters are selected.
If they are not selected, press Space to select them and press Enter to return to the previous screen.
5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press
Enter. In the dialog box, provide the VLAN ID and press Enter.
6. Select IP Configuration and press Enter.
7. If necessary, highlight the Set static IP address and network configuration option and press Space
to update the setting.
8. Provide values for the IP Address, Subnet Mask, and Default Gateway fields based on your
environment and then press Enter.
9. Select DNS Configuration and press Enter.
10. If necessary, highlight the Use the following DNS server addresses and hostname option and press
Space to update the setting.
11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your
environment and then press Enter.
12. Press Esc and then Y to apply all changes and restart the management network.
13. Select Test Management Network and press Enter.
14. Press Enter to start the network ping test.
15. Verify that the default gateway and DNS servers reported by the ping test match those that you
specified earlier in the procedure and then press Enter.
Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP
addresses are configured.
Press Enter to close the test window.
16. Press Esc to log out.
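If the host is reachable over SSH or through the IPMI remote console shell, the same management network settings can instead be applied from the ESXi command line. The following is a hedged sketch using standard ESXi 5.x commands; replace the placeholders with values for your environment, and note that changing the vmk0 address over SSH drops the current session.
root@esx# esxcli network ip interface ipv4 set --interface-name=vmk0 --ipv4=host_ip_addr --netmask=subnet_mask --type=static
root@esx# esxcfg-route gateway_ip_addr
root@esx# esxcli network ip dns server add --server=dns_server_addr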
To Configure Host Networking (KVM)
You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to
the node.
1. Log on to the host as root.
2. Open the network interface configuration file.
root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0
3. Press A to edit values in the file.
4. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="host_ip_addr"
DEVICE="br0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace host_ip_addr with the IP address for the hypervisor host.
• Replace subnet_mask with the subnet mask for host_ip_addr.
• Replace gateway_ip_addr with the gateway address for host_ip_addr.
5. Press Esc.
6. Type :wq and press Enter to save your changes.
7. Open the name services configuration file.
root@kvm# vi /etc/resolv.conf
8. Update the values for the nameserver parameter, and then save and close the file.
9. Restart networking.
root@kvm# /etc/init.d/network restart
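After the network restarts, you can optionally confirm the new configuration before continuing; a minimal check, where gateway_ip_addr is the gateway you configured above:
root@kvm# ip addr show br0
root@kvm# ping -c 3 gateway_ip_addr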
To Configure the Controller VM IP Address
1. Log on to the hypervisor host with SSH or the IPMI remote console.
2. Log on to the Controller VM with SSH.
root@host# ssh nutanix@192.168.5.254
Enter the Controller VM nutanix password.
3. Change the network interface configuration.
a. Open the network interface configuration file.
nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0
Enter the nutanix password.
b. Press A to edit values in the file.
c. Update entries for netmask, gateway, and address.
The block should look like this:
ONBOOT="yes"
NM_CONTROLLED="no"
NETMASK="subnet_mask"
IPADDR="cvm_ip_addr"
DEVICE="eth0"
TYPE="ethernet"
GATEWAY="gateway_ip_addr"
BOOTPROTO="none"
• Replace cvm_ip_addr with the IP address for the Controller VM.
• Replace subnet_mask with the subnet mask for cvm_ip_addr.
• Replace gateway_ip_addr with the gateway address for cvm_ip_addr.
d. Press Esc.
e. Type :wq and press Enter to save your changes.
4. Restart the Controller VM.
nutanix@cvm$ sudo reboot
Enter the nutanix password if prompted. Wait to proceed until the Controller VM has finished starting,
which takes approximately 5 minutes.
PDF
Unlocking AI with Model Context Protocol (MCP)
PDF
Network Security Unit 5.pdf for BCA BBA.
PPT
“AI and Expert System Decision Support & Business Intelligence Systems”
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
Empathic Computing: Creating Shared Understanding
PPTX
MYSQL Presentation for SQL database connectivity
PDF
Spectral efficient network and resource selection model in 5G networks
PPTX
20250228 LYD VKU AI Blended-Learning.pptx
PDF
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
PDF
CIFDAQ's Market Insight: SEC Turns Pro Crypto
PPTX
Digital-Transformation-Roadmap-for-Companies.pptx
PDF
Per capita expenditure prediction using model stacking based on satellite ima...
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
Chapter 3 Spatial Domain Image Processing.pdf
Electronic commerce courselecture one. Pdf
cuic standard and advanced reporting.pdf
Peak of Data & AI Encore- AI for Metadata and Smarter Workflows
Detection-First SIEM: Rule Types, Dashboards, and Threat-Informed Strategy
Advanced methodologies resolving dimensionality complications for autism neur...
KodekX | Application Modernization Development
Unlocking AI with Model Context Protocol (MCP)
Network Security Unit 5.pdf for BCA BBA.
“AI and Expert System Decision Support & Business Intelligence Systems”
Building Integrated photovoltaic BIPV_UPV.pdf
Empathic Computing: Creating Shared Understanding
MYSQL Presentation for SQL database connectivity
Spectral efficient network and resource selection model in 5G networks
20250228 LYD VKU AI Blended-Learning.pptx
Bridging biosciences and deep learning for revolutionary discoveries: a compr...
CIFDAQ's Market Insight: SEC Turns Pro Crypto
Digital-Transformation-Roadmap-for-Companies.pptx
Per capita expenditure prediction using model stacking based on satellite ima...
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
Chapter 3 Spatial Domain Image Processing.pdf

Setup guide nos-v3_5

  • 2. Copyright | Setup Guide | NOS 3.5 | 2 Notice Copyright Copyright 2013 Nutanix, Inc. Nutanix, Inc. 1740 Technology Drive, Suite 400 San Jose, CA 95110 All rights reserved. This product is protected by U.S. and international copyright and intellectual property laws. Nutanix is a trademark of Nutanix, Inc. in the United States and/or other jurisdictions. All other marks and names mentioned herein may be trademarks of their respective companies. Conventions Convention Description variable_value The action depends on a value that is unique to your environment. ncli> command The commands are executed in the Nutanix nCLI. user@host$ command The commands are executed as a non-privileged user (such as nutanix) in the system shell. root@host# command The commands are executed as the root user in the hypervisor host (vSphere or KVM) shell. output The information is displayed as output from a command or in a log file. Default Cluster Credentials Interface Target Username Password Nutanix web console Nutanix Controller VM admin admin vSphere client ESXi host root nutanix/4u SSH client or console ESXi host root nutanix/4u SSH client or console KVM host root nutanix/4u SSH client Nutanix Controller VM nutanix nutanix/4u IPMI web interface or ipmitool Nutanix node ADMIN ADMIN IPMI web interface or ipmitool Nutanix node (NX-3000) admin admin Version Last modified: September 26, 2013 (2013-09-26-13:03 GMT-7)
  • 3. 3 Contents Overview........................................................................................................................... 4 Setup Checklist....................................................................................................................................4 Nonconfigurable Components............................................................................................................. 4 Reserved IP Addresses...................................................................................................................... 5 Network Information............................................................................................................................ 5 Product Mixing Restrictions.................................................................................................................6 Three Node Cluster Considerations....................................................................................................7 1: IP Address Configuration.......................................................................8 To Configure the Cluster.....................................................................................................................8 To Configure the Cluster in a VLAN-Segmented Network............................................................... 11 To Assign VLAN Tags to Nutanix Nodes...............................................................................11 To Configure VLANs (KVM)...................................................................................................12 2: Storage Configuration.......................................................................... 14 To Create the Datastore................................................................................................................... 14 3: vCenter Configuration.......................................................................... 16 To Create a Nutanix Cluster in vCenter........................................................................................... 16 To Add a Nutanix Node to vCenter.................................................................................................. 19 vSphere Cluster Settings...................................................................................................................21 4: Final Configuration............................................................................... 23 To Set the Timezone of the Cluster................................................................................................. 23 To Make Optional Settings................................................................................................................23 Diagnostics VMs................................................................................................................................24 To Run a Test Using the Diagnostics VMs............................................................................24 Diagnostics Output................................................................................................................. 25 To Test Email Alerts......................................................................................................................... 26 To Check the Status of Cluster Services......................................................................................... 27 Appendix A: Manual IP Address Configuration..................................... 
28 To Verify IPv6 Link-Local Connectivity............................................................................................. 28 To Configure the Cluster (Manual)................................................................................................... 29 Remote Console IP Address Configuration...................................................................................... 31 To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000).....................31 To Configure the Remote Console IP Address (NX-3000).................................................... 32 To Configure the Remote Console IP Address (NX-2000).................................................... 32 To Configure the Remote Console IP Address (command line)............................................ 33 To Configure Host Networking..........................................................................................................34 To Configure Host Networking (KVM).............................................................................................. 35 To Configure the Controller VM IP Address.....................................................................................36
  • 4. Overview | Setup Guide | NOS 3.5 | 4 Overview This guide provides step-by-step instructions on the post-shipment configuration of a Nutanix Virtual Computing Platform. Nutanix support recommends that you review field advisories on the support portal before installing a cluster. Setup Checklist Confirm network settings with customer. Network Information on page 5 Unpack and rack cluster hardware. Refer to the Physical Installation Guide for your hardware model Connect network and power cables. Refer to the Physical Installation Guide for your hardware model Assign IP addresses to all nodes in the cluster. IP Address Configuration on page 8 Configure storage for the cluster. Storage Configuration on page 14 (vSphere only) Add the vSphere hosts to the customer vCenter. vCenter Configuration on page 16 Set the timezone of the cluster. To Set the Timezone of the Cluster on page 23 Make optional configurations. To Make Optional Settings on page 23 Run a performance diagnostic. Diagnostics VMs on page 24 Test email alerts. To Test Email Alerts on page 26 Confirm that the cluster is running. To Check the Status of Cluster Services on page 27 Nonconfigurable Components The components listed here are configured by the Nutanix manufacturing process. Do not modify any of these components except under the direction of Nutanix support.
  • 5. Overview | Setup Guide | NOS 3.5 | 5 Caution: Modifying any of the settings listed here may render your cluster inoperable. In particular, do not under any circumstances use the Reset System Configuration option of ESXi or delete the Nutanix Controller VM. Nutanix Software • Local datastore name • Settings and contents of any Controller VM, including the name Important: If you create vSphere resource pools, Nutanix Controller VMs must have the top share. ESXi Settings • NFS settings • VM swapfile location • VM startup/shutdown order • iSCSI software adapter settings • vSwitchNutanix standard virtual switch • vmk0 interface in port group "Management Network" • SSH enabled • Firewall disabled KVM Settings • iSCSI settings • Open vSwitch settings Reserved IP Addresses The Nutanix cluster uses the following IP addresses for internal communication: • 192.168.5.1 • 192.168.5.2 • 192.168.5.254 Important: The ESXi and CVM interfaces on vSwitch0 cannot use IP addresses in any subnets that overlap with subnet 192.168.5.0/24. Network Information Confirm the following network information with the customer before the new block or blocks are connected to the customer network. • 10 Gbps Ethernet ports [NX-3000, NX-3050, NX-6000: 2 per node/8 per block] [NX-2000: 1 per node/4 per block] • (optional) 1 Gbps Ethernet ports [1-2 per node/4-8 per block] • 10/100 Mbps Ethernet ports [1 per node/4 per block] • Default Gateway • Subnet mask • (optional) VLAN ID • NTP servers • DNS domain • DNS servers
  • 6. Overview | Setup Guide | NOS 3.5 | 6 • Host servers IP Addresses (remote console) [1 per node/4 per block] • Host servers IP Addresses (hypervisor management) [1 per node/4 per block] • Nutanix Controller VMs IP addresses [1 per node/4 per block] • Reverse SSH port (outgoing connection to nsc.nutanix.com) [default 80] • (optional) HTTP proxy for reverse SSH port Product Mixing Restrictions While a Nutanix cluster can include different products, there are some restrictions. Caution: Do not configure a cluster that violates any of the following rules. Compatibility Matrix NX-1000 NX-2000 NX-2050 NX-3000 NX-3050 NX-6000 NX-1000 1 • • • • • • NX-2000 • • • • • NX-2050 • • • • • • NX-3000 • • • • • • NX-3050 • • • • • • NX-6000 2 • • • • 3 • 1. NX-1000 nodes can be mixed with other products in the same cluster only when they are running 10 GbE networking; they cannot be mixed when running 1 GbE networking. If NX-1000 nodes are using the 1 GbE interface, the maximum cluster size is 8 nodes. If the nodes are using the 10 GbE interface, the cluster has no limits other than the maximum supported cluster size that applies to all products. 2. NX-6000 nodes cannot be mixed NX-2000 nodes in the same cluster. 3. Because it has a larger Flash tier, NX-3050 is recommended to be mixed with NX-6000 over other products. • Any combination of NX-2000, NX-2050, NX-3000, and NX-3050 nodes can be mixed in the same cluster. • All nodes in a cluster must be the same hypervisor type (ESXi or KVM). • All Controller VMs in a cluster must have the same NOS version. • Mixed Nutanix clusters comprising NX-2000 nodes and other products are supported as specified above. However, because the NX-2000 processor architecture differs from other models, vSphere does not support enhanced/live vMotion of VMs from one type of node to another unless Enhanced vMotion Capability (EVC) is enabled. For more information about EVC, see the vSphere 5 documentation and the following VMware knowledge base articles: • Enhanced vMotion Compatibility (EVC) processor support [1003212] • EVC and CPU Compatibility FAQ [1005764]
  • 7. Overview | Setup Guide | NOS 3.5 | 7 Three Node Cluster Considerations A Nutanix cluster must have at least three nodes. Minimum configuration (three node) clusters provide the same protections as larger clusters, and a three node cluster can continue normally after a node failure. However, one condition applies to three node clusters only. When a node failure occurs in a cluster containing four or more nodes, you can dynamically remove that node to bring the cluster back into full health. The newly configured cluster will still have at least three nodes, so the cluster is fully protected. You can then replace the failed hardware for that node as needed and add the node back into the cluster as a new node. However, when the cluster has just three nodes, the failed node cannot be dynamically removed from the cluster. The cluster continues running without interruption on two healthy nodes and one failed node, but the failed node cannot be removed when there are only two healthy nodes. Therefore, the cluster is not fully protected until you fix the problem (such as replacing a bad root disk) for the existing node.
  • 8. IP Address Configuration | Setup Guide | NOS 3.5 | 8 1 IP Address Configuration NOS includes a web-based configuration tool that automates the modification of Controller VMs and configures the cluster to use these new IP addresses. Other cluster components must be modified manually. Requirements The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, you must configure the Controller VM IP addresses and the cluster manually. The web-based configuration tool also requires that the Controller VMs be able to communicate with each other. All Controller VMs and hypervisor hosts must be on the same subnet. If the IPMI interfaces are connected, Nutanix recommends that they be on the same subnet as the Controller VMs and hypervisor hosts. Guest VMs can be on a different subnet. To Configure the Cluster Before you begin. • Confirm that the system you are using to configure the cluster meets the following requirements: • IPv6 link-local enabled. • Windows 7, Vista, or MacOS. • (Windows only) Bonjour installed (included with iTunes or downloadable from http:// support.apple.com/kb/DL999). • Determine the IPv6 service of any Controller VM in the cluster. IPv6 service names are uniquely generated at the factory and have the following form (note the final period): NTNX-block_serial_number-node_location-CVM.local. On the right side of the block toward the front is a label that has the block_serial_number (for example, 12AM3K520060). The node_location is a number 1-4 for NX-3000, a letter A-D for NX-1000/NX-2000/ NX-3050, or a letter A-B for NX-6000.
  • 9. IP Address Configuration | Setup Guide | NOS 3.5 | 9 If you need to confirm if IPv6 link-local is enabled on the network or if you do not have access to get the node serial number, see the Nutanix support knowledge base for alternative methods. 1. Open a web browser. Nutanix recommends using Internet Explorer 9 for Windows and Safari for Mac OS. Note: Internet Explorer requires protected mode to be disabled. Go to Tools > Internet Options > Security, clear the Enable Protected Mode check box, and restart the browser. 2. Navigate to http://cvm_host_name:2100/cluster_init.html. Replace cvm_host_name with the IPv6 service name of any Controller VM that will be added to the cluster.
  • 10. IP Address Configuration | Setup Guide | NOS 3.5 | 10 Following is an example URL to access the cluster creation page on a Controller VM: http://NTNX-12AM3K520060-1-CVM.local.:2100/cluster_init.html If the cluster_init.html page is blank, then the Controller VM is already part of a cluster. Connect to a Controller VM that is not part of a cluster. 3. Type a meaningful value in the Cluster Name field. This value is appended to all automated communication between the cluster and Nutanix support. It should include the customer's name and if necessary a modifier that differentiates this cluster from any other clusters that the customer might have. Note: This entity has the following naming restrictions: • The maximum length is 75 characters. • Allowed characters are uppercase and lowercase standard Latin letters (A-Z and a-z), decimal digits (0-9), dots (.), hyphens (-), and underscores (_). 4. Type the appropriate DNS and NTP addresses in the respective fields. 5. Type the appropriate subnet masks in the Subnet Mask row. 6. Type the appropriate default gateway IP addresses in the Default Gateway row. 7. Select the check box next to each node that you want to add to the cluster. All unconfigured nodes on the current network are presented on this web page. If you will be configuring multiple clusters, be sure that you only select the nodes that should be part of the current cluster. 8. Provide an IP address for all components in the cluster. Note: The unconfigured nodes are not listed according to their position in the block. Ensure that you assign the intended IP address to each node. 9. Click Create. Wait until the Log Messages section of the page reports that the cluster has been successfully configured. Output similar to the following indicates successful cluster configuration. Configuring IP addresses on node 12AM2K420010/A... Configuring IP addresses on node 12AM2K420010/B... Configuring IP addresses on node 12AM2K420010/C... Configuring IP addresses on node 12AM2K420010/D... Configuring Zeus on node 12AM2K420010/A... Configuring Zeus on node 12AM2K420010/B... Configuring Zeus on node 12AM2K420010/C... Configuring Zeus on node 12AM2K420010/D... Initializing cluster... Cluster successfully initialized! Initializing the cluster DNS and NTP servers... Successfully updated the cluster NTP and DNS server list 10. Log on to any Controller VM in the cluster with SSH. 11. Start the Nutanix cluster. nutanix@cvm$ cluster start If the cluster starts properly, output similar to the following is displayed for each node in the cluster: CVM: 172.16.8.167 Up, ZeusLeader Zeus UP [3148, 3161, 3162, 3163, 3170, 3180] Scavenger UP [3333, 3345, 3346, 11997]
  • 11. IP Address Configuration | Setup Guide | NOS 3.5 | 11 ConnectionSplicer UP [3379, 3392] Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447] Medusa UP [3488, 3501, 3502, 3523, 3569] DynamicRingChanger UP [4592, 4609, 4610, 4640] Pithos UP [4613, 4625, 4626, 4678] Stargate UP [4628, 4647, 4648, 4709] Cerebro UP [4890, 4903, 4904, 4979] Chronos UP [4906, 4918, 4919, 4968] Curator UP [4922, 4934, 4935, 5064] Prism UP [4939, 4951, 4952, 4978] AlertManager UP [4954, 4966, 4967, 5022] StatsAggregator UP [5017, 5039, 5040, 5091] SysStatCollector UP [5046, 5061, 5062, 5098] To Configure the Cluster in a VLAN-Segmented Network The automated IP address and cluster configuration utilities depend on Controller VMs being able to communicate with each other. If the customer network is segmented using VLANs, that communication is not possible until the Controller VMs are assigned to a valid VLAN. Note: The web-based configuration tool requires that IPv6 link-local be enabled on the subnet. If IPv6 link-local is not available, see To Configure the Cluster (Manual) on page 29. 1. Configure the IPMI IP addresses by following the procedure for your hardware model. → To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31 → To Configure the Remote Console IP Address (NX-3000) on page 32 → To Configure the Remote Console IP Address (NX-2000) on page 32 Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 33. 2. Configure the hypervisor host IP addresses. → vSphere: To Configure Host Networking on page 34 → KVM: To Configure Host Networking (KVM) on page 35 3. Assign VLAN tags to the hypervisor hosts and Controller VMs by following . → vSphere: To Assign VLAN Tags to Nutanix Nodes on page 11 → KVM: To Configure VLANs (KVM) on page 12 4. Configure the Controller VM IP addresses and the cluster using the automated utilities by following To Configure the Cluster on page 8 . To Assign VLAN Tags to Nutanix Nodes 1. Assign the ESXi hosts to the pre-defined host VLAN. a. Access the ESXi host console. b. Press F2 and then provide the ESXi host logon credentials. c. Press the down arrow key until Configure Management Network is highlighted and then press Enter. d. Select VLAN (optional) and press Enter.
  • 12. IP Address Configuration | Setup Guide | NOS 3.5 | 12 e. Type the VLAN ID specified by the customer and press Enter. f. Press Esc and then Y to apply all changes and restart the management network. g. Repeat this process for all remaining ESXi hosts. 2. Assign the Controller VMs to the pre-defined virtual machine VLAN. a. Log on to an ESXi host with the vSphere client. b. Select the host and then click the Configuration tab. c. Click Networking. d. Click the Properties link above vSwitch0. e. Select VM Network and then click Edit. f. Type the VLAN ID specified by the customer and click OK. g. Click Close. h. Repeat this process for all remaining ESXi hosts. To Configure VLANs (KVM) In an environment with separate VLANs for hosts and guest VMs, VLAN tagging is configured differently for each type of VLAN. Perform these steps on every KVM host in the cluster. 1. Log on to the KVM host with SSH. 2. Configure VLAN tagging on the host interface. a. Set the tag for the bridge. root@kvm# ovs-vsctl set port br0 tag=host_vlan_tag Replace host_vlan_tag with the VLAN tag for hosts. b. Confirm VLAN tagging on the interface. root@kvm# ovs-vsctl list port br0 Check the value of the tag parameter that is shown. 3. Configure VLAN tagging for guest VMs. a. Copy the existing network configuration and open the configuration file. root@kvm# virsh net-dumpxml VM-Network > /tmp/network.xml root@kvm# vi /tmp/network.xml b. Update the configuration file to describe the new network. • Delete the uuid and mac parameters. • Change the name and portgroup name parameters. • Add a vlan tag element with the ID for the guest VM VLAN.
  • 13. IP Address Configuration | Setup Guide | NOS 3.5 | 13 The resulting configuration file should look like this. <network connections='1'> <name>new_network_name</name> <forward mode='bridge'/> <bridge name='br0' /> <virtualport type='openvswitch'/> <portgroup name='new_network_name' default='yes'> </portgroup> <vlan> <tag id="vm_vlan_tag"> </vlan> </network> • Replace new_network_name with the desired name for the network. Ensure that both instances of this parameter match. • Replace vm_vlan_tag with the VLAN tag for guest VMs. c. Create the new network. root@kvm# virsh net-define /tmp/network.xml d. Start the new network. root@kvm# virsh net-start new_network_name root@kvm# virsh net-autostart new_network_name e. Confirm that the new network is running. root@kvm# virsh net-list --all To create a VM on this VLAN, specify new_network_name instead of VM-Network.
  • 14. Storage Configuration | Setup Guide | NOS 3.5 | 14 2 Storage Configuration At the conclusion of the setup process, you will need to create the following entities: • 1 storage pool that comprises all physical disks in the cluster. • 1 container that uses all available storage capacity in the pool. • 1 NFS datastore that is mounted from all hosts in the cluster. A single datastore comprising all available cluster storage will suit the needs of most customers. If the customer requests additional NFS datastores during setup, you can create the necessary containers, and then mount them as datastores. For future datastore needs, refer the customer to the Nutanix Administration Guide. To Create the Datastore 1. Sign in to the Nutanix web console. 2. In the Storage dashboard, click the Storage Pool button. The Create Storage Pool dialog box appears. 3. In the Name field, enter a name for the storage pool. • For vSphere clusters, name the storage pool sp1. • For KVM clusters, name the storage pool default. 4. In the Capacity field, check the box to use the available unallocated capacity for this storage pool. 5. When all the field entries are correct, click the Save button.
  • 15. Storage Configuration | Setup Guide | NOS 3.5 | 15 6. In the Storage dashboard, click the Container button. The Create Container dialog box appears. 7. Create the container. Do the following in the indicated fields: a. Name: Enter a name for the container. • For vSphere clusters, name the container nfs-ctr. • For KVM clusters, name the container default. b. Storage Pool: Select the sp1 (vSphere) or default (KVM) storage pool from the drop-down list. The following field, Max Capacity, displays the amount of free space available in the selected storage pool. c. NFS Datastore: Select the Mount on all hosts button to mount the container on all hosts. d. When all the field entries are correct, click the Save button.
  • 16. vCenter Configuration | Setup Guide | NOS 3.5 | 16 3 vCenter Configuration VMware vCenter enables the centralized management of multiple ESXi hosts. The Nutanix cluster in vCenter must be configured according to Nutanix best practices. While most customers prefer to use an existing vCenter, Nutanix provides a vCenter OVF, which is on the Controller VMs in /home/nutanix/data/images/vcenter. You can deploy the OVF using the standard procedures for vSphere. To Create a Nutanix Cluster in vCenter 1. Log on to vCenter with the vSphere client. 2. If you want the Nutanix cluster to be in its own datacenter or if there is no datacenter, click File > New > Datacenter and type a meaningful name for the datacenter, such as NTNX-DC. Otherwise, proceed to the next step. You can also create the Nutanix cluster within an existing datacenter. 3. Right-click the datacenter node and select New Cluster. 4. Type a meaningful name for the cluster in the Name field, such as NTNX-Cluster. 5. Select the Turn on vSphere HA check box and click Next. 6. Select Admission Control > Enable. 7. Select Admission Control Policy > Percentage of cluster resources reserved as failover spare capacity and enter the percentage appropriate for the number of Nutanix nodes in the cluster the click Next. Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage 1 N/A 9 23% 17 18% 25 16% 2 N/A 10 20% 18 17% 26 15% 3 33% 11 18% 19 16% 27 15% 4 25% 12 17% 20 15% 28 14% 5 20% 13 15% 21 14% 29 14% 6 18% 14 14% 22 14% 30 13% 7 15% 15 13% 23 13% 31 13% 8 13% 16 13% 24 13% 32 13% 8. Click Next on the following three pages to accept the default values. • Virtual Machine Options • VM monitoring • VMware EVC
  • 17. vCenter Configuration | Setup Guide | NOS 3.5 | 17 9. Verify that Store the swapfile in the same directory as the virtual machine (recommended) is selected and click Next. 10. Review the settings and then click Finish. 11. Add all Nutanix nodes to the vCenter cluster inventory. See To Add a Nutanix Node to vCenter on page 19. 12. Right-click the Nutanix cluster node and select Edit Settings. 13. If vSphere HA and DRS are not enabled, select them on the Cluster Features page. Otherwise, proceed to the next step. Note: vSphere HA and DRS must be configured even if the customer does not plan to use the features. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices. 14. Configure vSphere HA. a. Select vSphere HA > Virtual Machine Options. b. Change the VM restart priority of all Controller VMs to Disabled. Tip: Controller VMs include the phrase CVM in their names. It may be necessary to expand the Virtual Machine column to view the entire VM name. c. Change the Host Isolation Response setting of all Controller VMs to Leave Powered On.
  • 18. vCenter Configuration | Setup Guide | NOS 3.5 | 18 d. Select vSphere HA > VM Monitoring e. Change the VM Monitoring setting for all Controller VMs to Disabled. f. Select vSphere HA > Datastore Heartbeating. g. Click Select only from my preferred datastores and select the Nutanix datastore (NTNX-NFS). h. If the cluster has only one datastore as recommended, click Advanced Options, add an Option named das.ignoreInsufficientHbDatastore with Value of true, and click OK. i. If the cluster does not use vSphere HA, disable it on the Cluster Features page. Otherwise, proceed to the next step. 15. Configure vSphere DRS. a. Select vSphere DRS > Virtual Machine Options. b. Change the Automation Level setting of all Controller VMs to Disabled.
  • 19. vCenter Configuration | Setup Guide | NOS 3.5 | 19 c. Select vSphere DRS > Power Management. d. Confirm that Off is selected as the default power management for the cluster. e. If the cluster does not use vSphere DRS, disable it on the Cluster Features page. Otherwise, proceed to the next step. 16. Click OK to close the cluster settings window. To Add a Nutanix Node to vCenter The cluster must be configured according to Nutanix specifications given in vSphere Cluster Settings on page 21. Tip: Refer to Default Cluster Credentials on page 2 for the default credentials of all cluster components. 1. Log on to vCenter with the vSphere client. 2. Right-click the cluster and select Add Host. 3. Type the IP address of the ESXi host in the Host field. 4. Enter the ESXi host logon credentials in the Username and Password fields. 5. Click Next. If a security or duplicate management alert appears, click Yes. 6. Review the Host Summary page and click Next. 7. Select a license to assign to the ESXi host and click Next. 8. Ensure that the Enable Lockdown Mode check box is left unselected and click Next. Lockdown mode is not supported. 9. Click Finish. 10. Select the ESXi host and click the Configuration tab. 11. Configure DNS servers. a. Click DNS and Routing > Properties. b. Select Use the following DNS server address.
  • 20. vCenter Configuration | Setup Guide | NOS 3.5 | 20 c. Type DNS server addresses in the Preferred DNS Server and Alternate DNS Server fields and click OK. 12. Configure NTP servers. a. Click Time Configuration > Properties > Options > NTP Settings > Add. b. Type the NTP server address. Add multiple NTP servers if required. c. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows. d. Click Time Configuration > Properties > Options > General. e. Select Start automatically under Startup Policy. f. Click Start g. Click OK in the NTP Daemon (ntpd) Options and Time Configuration windows. 13. Click Storage and confirm that NFS datastores are mounted. 14. Set the Controller VM to start automatically when the ESXi host is powered on. a. Click the Configuration tab. b. Click Virtual Machine Startup/Shutdown in the Software frame. c. Select the Controller VM and click Properties. d. Ensure that the Allow virtual machines to start and stop automatically with the system check box is selected. e. If the Controller VM is listed in Manual Startup, click Move Up to move the Controller VM into the Automatic Startup section.
  • 21. vCenter Configuration | Setup Guide | NOS 3.5 | 21 f. Click OK. 15. (NX-2000 only) Click Host Cache Configuration and confirm that the host cache is stored on the local datastore. If it is not correct, click Properties to update the location. vSphere Cluster Settings Certain vSphere cluster settings are required for Nutanix clusters. vSphere HA and DRS must be configured even if the customer does not plan to use the feature. The settings will be preserved within the vSphere cluster configuration, so if the customer later decides to enable the feature, it will be pre-configured based on Nutanix best practices. vSphere HA Settings Enable host monitoring Enable admission control and use the percentage-based policy with a value based on the number of nodes in the cluster. Set the VM Restart Priority of all Controller VMs to Disabled. Set the Host Isolation Response of all Controller VMs to Leave Powered On. Disable VM Monitoring for all Controller VMs. Enable Datastore Heartbeating by clicking Select only from my preferred datastores and choosing the Nutanix NFS datastore. If the cluster has only one datastore, add an advanced option das.ignoreInsufficientHbDatastore=true.
  • 22. vCenter Configuration | Setup Guide | NOS 3.5 | 22 vSphere DRS Settings Disable automation on all Controller VMs. Leave power management disabled (set to Off). Other Cluster Settings Store VM swapfiles in the same directory as the virtual machine. (NX-2000 only) Store host cache on the local datastore. Failover Reservation Percentages Hosts (N+1) Percentage Hosts (N+2) Percentage Hosts (N+3) Percentage Hosts (N+4) Percentage 1 N/A 9 23% 17 18% 25 16% 2 N/A 10 20% 18 17% 26 15% 3 33% 11 18% 19 16% 27 15% 4 25% 12 17% 20 15% 28 14% 5 20% 13 15% 21 14% 29 14% 6 18% 14 14% 22 14% 30 13% 7 15% 15 13% 23 13% 31 13% 8 13% 16 13% 24 13% 32 13%
  • 23. Final Configuration | Setup Guide | NOS 3.5 | 23 4 Final Configuration The final steps in the Nutanix block setup are to confirm email alerts, set the timezone, and confirm that it is running. To Set the Timezone of the Cluster 1. Log on to any Controller VM in the cluster with SSH. 2. Locate the timezone template for the customer site. nutanix@cvm$ ls /usr/share/zoneinfo/* The timezone templates of some common timezones are shown below. Location Timezone Template US East Coast /usr/share/zoneinfo/US/Eastern England /usr/share/zoneinfo/Europe/London Japan /usr/share/zoneinfo/Asia/Tokyo 3. Copy the timezone template to all Controller VMs in the cluster. nutanix@cvm$ for i in `svmips`; do echo $i; ssh $i "sudo cp template_path /etc/localtime; date"; done Replace template_path with the location of the desired timezone template. If a host authenticity warning is displayed, type yes to continue connecting. The expected output is the IP address of each Controller VM and the current time in the desired timezone, for example: 192.168.1.200 Fri Jan 25 19:43:32 GMT 2013 To Make Optional Settings You can make one or more of the following settings if necessary to meet customer requirements. • Add customer email addresses to alerts. Web Console > Alert Email Contacts nCLI ncli> cluster add-to-email-contacts email-addresses="customer_email" Replace customer_email with a comma-separated list of customer email addresses to receive alert messages.
  • 24. Final Configuration | Setup Guide | NOS 3.5 | 24 • Specify an outgoing SMTP server. Web Console > SMTP Server nCLI ncli> cluster set-smtp-server address="smtp_address" Replace smtp_address with the IP address or name of the SMTP server to use for alert messages. • If the site security policy allows the remote support tunnel, enable it. Warning: Failing to enable remote support prevents Nutanix support from directly addressing cluster issues. Nutanix recommends that all customers allow email alerts at minimum because it allows proactive support of customer issues. Web Console > Remote Support Services > Enable nCLI ncli> cluster start-remote-support • If the site security policy does not allow email alerting, disable it. Web Console > Email Alert Services > Disable nCLI ncli> cluster stop-email-alerts Diagnostics VMs Nutanix provides a diagnostics capability to allow partners and customers run performance tests on the cluster. This is a useful tool in pre-sales demonstrations of the cluster and while identifying the source of performance issues in a production cluster. Diagnostics should also be run as part of setup to ensure that the cluster is running properly before the customer takes ownership of the cluster. The diagnostic utility deploys a VM on each node in the cluster. The Controller VMs control the diagnostic VM on their hosts and report back the results to a single system. The diagnostics test provide the following data: • Sequential write bandwidth • Sequential read bandwidth • Random read IOPS • Random write IOPS Because the test creates new cluster entities, it is necessary to run a cleanup script when you are finished. To Run a Test Using the Diagnostics VMs Before you begin. Ensure that 10 GbE ports are active on the ESXi hosts using esxtop or vCenter. The tests will run very slow if the nodes are not using the 10 GbE ports. For more information about this known issue with ESXi 5.0 update 1, see VMware KB article 2030006. 1. Log on to any Controller VM in the cluster with SSH. 2. Set up the diagnostics test. → vSphere nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup
  • 25. Final Configuration | Setup Guide | NOS 3.5 | 25 In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory, and click Yes to confirm removal. → KVM nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --force 3. Start the diagnostics test. → vSphere nutanix@cvm$ ~/diagnostics/diagnostics.py run → KVM nutanix@cvm$ ~/diagnostics/diagnostics.py --hypervisor kvm --skip_setup run Include the parameter --default_ncli_password='admin_password' if the Nutanix admin user password has been changed from the default. If the command fails with ERROR:root:Zookeeper host port list is not set, refresh the environment by running source /etc/profile or bash -l and run the command again. The diagnostic may take up to 15 minutes to complete. The script performs the following tasks: 1. Installs a diagnostic VM on each node. 2. Creates cluster entities to support the test, if necessary. 3. Runs four performance tests, using the Linux fio utility. 4. Reports the results. 4. Review the results. 5. Remove the entities from this diagnostic. → vSphere nutanix@cvm$ ~/diagnostics/diagnostics.py cleanup In vCenter, right-click any diagnostic VMs labeled as "orphaned", select Remove from Inventory, and click Yes to confirm removal. → KVM nutanix@cvm$ ~/diagnostics/setup_diagnostics_kvm.py --cleanup_ctr Perform these steps for each KVM host. a. Log on to the KVM host with SSH. b. Get the diagnostics VM name. root@kvm# virsh list | grep -i diagnostics c. Destroy the diagnostics VM. root@kvm# virsh destroy diagnostics_vm_name Replace diagnostics_vm_name with the VM name found in the previous step. Diagnostics Output System output similar to the following indicates a successful test.
  • 26. Final Configuration | Setup Guide | NOS 3.5 | 26 Checking if an existing storage pool can be used ... Using storage pool sp1 for the tests. Checking if the diagnostics container exists ... does not exist. Creating a new container NTNX-diagnostics-ctr for the runs ... done. Mounting NFS datastore 'NTNX-diagnostics-ctr' on each host ... done. Deploying the diagnostics UVM on host 172.16.8.170 ... done. Preparing the UVM on host 172.16.8.170 ... done. Deploying the diagnostics UVM on host 172.16.8.171 ... done. Preparing the UVM on host 172.16.8.171 ... done. Deploying the diagnostics UVM on host 172.16.8.172 ... done. Preparing the UVM on host 172.16.8.172 ... done. Deploying the diagnostics UVM on host 172.16.8.173 ... done. Preparing the UVM on host 172.16.8.173 ... done. VM on host 172.16.8.170 has booted. 3 remaining. VM on host 172.16.8.171 has booted. 2 remaining. VM on host 172.16.8.172 has booted. 1 remaining. VM on host 172.16.8.173 has booted. 0 remaining. Waiting for the hot cache to flush ... done. Running test 'Prepare disks' ... done. Waiting for the hot cache to flush ... done. Running test 'Sequential write bandwidth (using fio)' ... bandwidth MBps Waiting for the hot cache to flush ... done. Running test 'Sequential read bandwidth (using fio)' ... bandwidth MBps Waiting for the hot cache to flush ... done. Running test 'Random read IOPS (using fio)' ... operations IOPS Waiting for the hot cache to flush ... done. Running test 'Random write IOPS (using fio)' ... operations IOPS Tests done. Note: • Expected results vary based on the specific NOS version and hardware model used. Refer to the Release Notes for the values appropriate for your environment. • The IOPS values reported by the diagnostics script will be higher than the values reported by the Nutanix management interfaces. This difference is because the diagnostics script reports physical disk I/O, and the management interfaces show IOPS reported by the hypervisor. • If the reported values are lower than expected, the 10 GbE ports may not be active. For more information about this known issue with ESXi 5.0 update 1, see VMware KB article 2030006. To Test Email Alerts 1. Log on to any Controller VM in the cluster with SSH. 2. Send a test email. nutanix@cvm$ ~/serviceability/bin/email-alerts --to_addresses="support@nutanix.com, customer_email" --subject="[alert test] `ncli cluster get-params`" Replace customer_email with a customer email address that receives alerts. 3. Confirm with Nutanix support that the email was received.
  • 27. Final Configuration | Setup Guide | NOS 3.5 | 27 To Check the Status of Cluster Services Verify that all services are up on all Controller VMs. nutanix@cvm$ cluster status If the cluster is running properly, output similar to the following is displayed for each node in the cluster: CVM: 172.16.8.167 Up, ZeusLeader Zeus UP [3148, 3161, 3162, 3163, 3170, 3180] Scavenger UP [3333, 3345, 3346, 11997] ConnectionSplicer UP [3379, 3392] Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447] Medusa UP [3488, 3501, 3502, 3523, 3569] DynamicRingChanger UP [4592, 4609, 4610, 4640] Pithos UP [4613, 4625, 4626, 4678] Stargate UP [4628, 4647, 4648, 4709] Cerebro UP [4890, 4903, 4904, 4979] Chronos UP [4906, 4918, 4919, 4968] Curator UP [4922, 4934, 4935, 5064] Prism UP [4939, 4951, 4952, 4978] AlertManager UP [4954, 4966, 4967, 5022] StatsAggregator UP [5017, 5039, 5040, 5091] SysStatCollector UP [5046, 5061, 5062, 5098]
  • 28. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 28 Appendix A Manual IP Address Configuration To Verify IPv6 Link-Local Connectivity The automated IP address and cluster configuration utilities depend on IPv6 link-local addresses, which are enabled on most networks. Use this procedure to verify that IPv6 link-local is enabled. 1. Connect two Windows, Linux, or Apple laptops to the switch to be used. 2. Disable any firewalls on the laptops. 3. Verify that each laptop has an IPv6 link-local address. → Windows (Control Panel) Start > Control Panel > View network status and tasks > Change adapter settings > Local Area Connection > Details → Windows (command-line interface) > ipconfig Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : corp.example.com Link-local IPv6 Address . . . . . : fe80::ed67:9a32:7fc4:3be1%12 IPv4 Address. . . . . . . . . . . : 172.16.21.11 Subnet Mask . . . . . . . . . . . : 255.240.0.0
  • 29. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 29 Default Gateway . . . . . . . . . : 172.16.0.1 → Linux $ ifconfig eth0 eth0 Link encap:Ethernet HWaddr 00:0c:29:dd:e3:0b inet addr:10.2.100.180 Bcast:10.2.103.255 Mask:255.255.252.0 inet6 addr: fe80::20c:29ff:fedd:e30b/64 Scope:Link UP BROADCAST RUNNING MULTICAST MTU:1500 Metric:1 RX packets:2895385616 errors:0 dropped:0 overruns:0 frame:0 TX packets:3063794864 errors:0 dropped:0 overruns:0 carrier:0 collisions:0 txqueuelen:1000 RX bytes:2569454555254 (2.5 TB) TX bytes:2795005996728 (2.7 TB) → Mac OS $ ifconfig en0 en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether 70:56:81:ae:a7:47 inet6 fe80::7256:81ff:feae:a747 en0 prefixlen 64 scopeid 0x4 inet 172.16.21.208 netmask 0xfff00000 broadcast 172.31.255.255 media: autoselect status: active Note the IPv6 link-local addresses, which always begin with fe80. Omit the / character and anything following. 4. From one of the laptops, ping the other laptop. → Windows > ping -6 ipv6_linklocal_addr%interface → Linux/Mac OS $ ping6 ipv6_linklocal_addr%interface • Replace ipv6_linklocal_addr with the IPv6 link-local address of the other laptop. • Replace interface with the interface identifier on the other laptop (for example, 12 for Windows, eth0 for Linux, or en0 for Mac OS). If the ping packets are answered by the remote host, IPv6 link-local is enabled on the subnet. If the ping packets are not answered, ensure that firewalls are disabled on both laptops and try again before concluding that IPv6 link-local is not enabled. 5. Reenable the firewalls on the laptops and disconnect them from the network. Results. • If IPv6 link-local is enabled on the subnet, you can use automated IP address and cluster configuration utility. • If IPv6 link-local is not enabled on the subnet, you have to manually create the cluster by following To Configure the Cluster (Manual) on page 29, which includes manually setting IP addresses. To Configure the Cluster (Manual) Use this procedure if IPv6 link-local is not enabled on the subnet. 1. Configure the IPMI IP addresses by following the procedure for your hardware model.
  • 30. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 30 → To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) on page 31 → To Configure the Remote Console IP Address (NX-3000) on page 32 → To Configure the Remote Console IP Address (NX-2000) on page 32 Alternatively, you can set the IPMI IP address using a command-line utility by following To Configure the Remote Console IP Address (command line) on page 33. 2. Configure networking on node the by following the hypervisor-specific procedure. → vSphere: To Configure Host Networking on page 34 → KVM: To Configure Host Networking (KVM) on page 35 3. Configure the Controller VM IP addresses by following To Configure the Controller VM IP Address on page 36. 4. Log on to any Controller VM in the cluster with SSH. 5. Create the cluster. nutanix@cvm$ cluster -s cvm_ip_addrs create Replace cvm_ip_addrs with a comma-separated list of Controller VM IP addresses. Include all Controller VMs that will be part of the cluster. For example, if the new cluster should comprise all four nodes in a block, include all the IP addresses of all four Controller VMs. 6. Start the Nutanix cluster. nutanix@cvm$ cluster start If the cluster starts properly, output similar to the following is displayed for each node in the cluster: CVM: 172.16.8.167 Up, ZeusLeader Zeus UP [3148, 3161, 3162, 3163, 3170, 3180] Scavenger UP [3333, 3345, 3346, 11997] ConnectionSplicer UP [3379, 3392] Hyperint UP [3394, 3407, 3408, 3429, 3440, 3447] Medusa UP [3488, 3501, 3502, 3523, 3569] DynamicRingChanger UP [4592, 4609, 4610, 4640] Pithos UP [4613, 4625, 4626, 4678] Stargate UP [4628, 4647, 4648, 4709] Cerebro UP [4890, 4903, 4904, 4979] Chronos UP [4906, 4918, 4919, 4968] Curator UP [4922, 4934, 4935, 5064] Prism UP [4939, 4951, 4952, 4978] AlertManager UP [4954, 4966, 4967, 5022] StatsAggregator UP [5017, 5039, 5040, 5091] SysStatCollector UP [5046, 5061, 5062, 5098] 7. Set the name of the cluster. nutanix@cvm$ ncli cluster edit-params new-name=cluster_name Replace cluster_name with a name for the cluster chosen by the customer. 8. Configure the DNS servers. nutanix@cvm$ ncli cluster add-to-name-servers servers="dns_server" Replace dns_server with the IP address of a single DNS server or with a comma-separated list of DNS server IP addresses.
  • 31. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 31 9. Configure the NTP servers. nutanix@cvm$ ncli cluster add-to-ntp-servers servers="ntp_server" Replace ntp_server with the IP address or host name of a single NTP server or a with a comma- separated list of NTP server IP addresses or host names. Remote Console IP Address Configuration The Intelligent Platform Management Interface (IPMI) is a standardized interface used to manage a host and monitor its operation. To enable remote access to the console of each host, you must configure the IPMI settings within BIOS. The Nutanix cluster provides a Java application to remotely view the console of each node, or host server. You can use this console to configure additional IP addresses in the cluster. The procedure for configuring the remote console IP address is slightly different for each hardware platform. To Configure the Remote Console IP Address (NX-1000, NX-3050, NX-6000) 1. Connect a keyboard and monitor to a node in the Nutanix block. 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the IPMI tab. 4. Press the down arrow key until BMC network configuration is highlighted and then press Enter. 5. Select Configuration Address source and press Enter. 6. Select Static and press Enter. 7. Assign the Station IP address, Subnet mask, and Router IP address.
  • 32. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 32 8. Review the BIOS settings and press F4 to save the configuration changes and exit the BIOS setup utility. The node restarts. To Configure the Remote Console IP Address (NX-3000) 1. Connect a keyboard and monitor to a node in the Nutanix block. 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the Server Mgmt tab. 4. Press the down arrow key until BMC network configuration is highlighted and then press Enter. 5. Select Configuration source and press Enter. 6. Select Static on next reset and press Enter. 7. Assign the Station IP address, Subnet mask, and Router IP address. 8. Press F10 to save the configuration changes. 9. Review the settings and then press Enter. The node restarts. To Configure the Remote Console IP Address (NX-2000) 1. Connect a keyboard and monitor to a node in the Nutanix block. 2. Restart the node and press Delete to enter the BIOS setup utility. You will have a limited amount of time to enter BIOS before the host completes the restart process. 3. Press the right arrow key to select the Advanced tab.
  • 33. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 33 4. Press the down arrow key until IPMI Configuration is highlighted and then press Enter. 5. Select Set LAN Configuration and press Enter. 6. Select Static to assign an IP address, subnet mask, and gateway address. 7. Press F10 to save the configuration changes. 8. Review the settings and then press Enter. 9. Restart the node. To Configure the Remote Console IP Address (command line) You can configure the management interface from the hypervisor host on the same node. Perform these steps once from each hypervisor host in the cluster where the management network configuration need to be changed. 1. Log on to the hypervisor host with SSH or the IPMI remote console. 2. Set the networking parameters. root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr root@esx# /ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipsrc static root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 ipaddr mgmt_interface_ip_addr root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 netmask mgmt_interface_subnet_addr root@kvm# ipmitool -U ADMIN -P ADMIN lan set 1 defgw ipaddr mgmt_interface_gateway 3. Show current settings. root@esx# /ipmitool -v -U ADMIN -P ADMIN lan print 1 root@kvm# ipmitool -v -U ADMIN -P ADMIN lan print 1
  • 34. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 34 Confirm that the parameters are set to the correct values. To Configure Host Networking You can access the ESXi console either through IPMI or by attaching a keyboard and monitor to the node. 1. On the ESXi host console, press F2 and then provide the ESXi host logon credentials. 2. Press the down arrow key until Configure Management Network is highlighted and then press Enter. 3. Select Network Adapters and press Enter. 4. Ensure that the connected network adapters are selected. If they are not selected, press Space to select them and press Enter to return to the previous screen. 5. If a VLAN ID needs to be configured on the Management Network, select VLAN (optional) and press Enter. In the dialog box, provide the VLAN ID and press Enter. 6. Select IP Configuration and press Enter. 7. If necessary, highlight the Set static IP address and network configuration option and press Space to update the setting. 8. Provide values for the following: IP Address, Subnet Mask, and Default Gateway fields based on your environment and then press Enter . 9. Select DNS Configuration and press Enter. 10. If necessary, highlight the Use the following DNS server addresses and hostname option and press Space to update the setting. 11. Provide values for the Primary DNS Server and Alternate DNS Server fields based on your environment and then press Enter. 12. Press Esc and then Y to apply all changes and restart the management network. 13. Select Test Management Network and press Enter. 14. Press Enter to start the network ping test.
  • 35. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 35 15. Verify that the default gateway and DNS servers reported by the ping test match those that you specified earlier in the procedure and then press Enter. Ensure that the tested addresses pass the ping test. If they do not, confirm that the correct IP addresses are configured. Press Enter to close the test window. 16. Press Esc to log out. To Configure Host Networking (KVM) You can access the hypervisor host console either through IPMI or by attaching a keyboard and monitor to the node. 1. Log on to the host as root. 2. Open the network interface configuration file. root@kvm# vi /etc/sysconfig/network-scripts/ifcfg-br0 3. Press A to edit values in the file. 4. Update entries for netmask, gateway, and address. The block should look like this: ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="host_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none" • Replace host_ip_addr with the IP address for the hypervisor host. • Replace subnet_mask with the subnet mask for host_ip_addr. • Replace gateway_ip_addr with the gateway address for host_ip_addr. 5. Press Esc. 6. Type :wq and press Enter to save your changes. 7. Open the name services configuration file. root@kvm# vi /etc/resolv.conf 8. Update the values for the nameserver parameter then save and close the file.
  • 36. Manual IP Address Configuration | Setup Guide | NOS 3.5 | 36 9. Restart networking. root@kvm# /etc/init.d/network restart To Configure the Controller VM IP Address 1. Log on to the hypervisor host with SSH or the IPMI remote console. 2. Log on to the Controller VM with SSH. root@host# ssh nutanix@192.168.5.254 Enter the Controller VM nutanix password. 3. Change the network interface configuration. a. Open the network interface configuration file. nutanix@cvm$ sudo vi /etc/sysconfig/network-scripts/ifcfg-eth0 Enter the nutanix password. b. Press A to edit values in the file. c. Update entries for netmask, gateway, and address. The block should look like this: ONBOOT="yes" NM_CONTROLLED="no" NETMASK="subnet_mask" IPADDR="cvm_ip_addr" DEVICE="eth0" TYPE="ethernet" GATEWAY="gateway_ip_addr" BOOTPROTO="none" • Replace cvm_ip_addr with the IP address for the Controller VM. • Replace subnet_mask with the subnet mask for cvm_ip_addr. • Replace gateway_ip_addr with the gateway address for cvm_ip_addr. d. Press Esc. e. Type :wq and press Enter to save your changes. 4. Restart the Controller VM. nutanix@cvm$ sudo reboot Enter the nutanix password if prompted. Wait to proceed until the Controller VM has finished starting, which takes approximately 5 minutes.