Cisco HyperFlex
For more information, visit http://www.cisco.com/go/designzone.
THE DESIGNS ARE SUBJECT TO CHANGE WITHOUT NOTICE. USERS ARE SOLELY RESPONSIBLE FOR THEIR APPLICATION
OF THE DESIGNS. THE DESIGNS DO NOT CONSTITUTE THE TECHNICAL OR OTHER PROFESSIONAL ADVICE OF CISCO,
ITS SUPPLIERS OR PARTNERS. USERS SHOULD CONSULT THEIR OWN TECHNICAL ADVISORS BEFORE IMPLEMENTING
THE DESIGNS. RESULTS MAY VARY DEPENDING ON FACTORS NOT TESTED BY CISCO.
CCDE, CCENT, Cisco Eos, Cisco Lumin, Cisco Nexus, Cisco StadiumVision, Cisco TelePresence, Cisco WebEx, the Cisco
logo, DCE, and Welcome to the Human Network are trademarks; Changing the Way We Work, Live, Play, and Learn
and Cisco Store are service marks; and Access Registrar, Aironet, AsyncOS, Bringing the Meeting To You, Catalyst,
CCDA, CCDP, CCIE, CCIP, CCNA, CCNP, CCSP, CCVP, Cisco, the Cisco Certified Internetwork Expert logo, Cisco IOS, Cis-
co Press, Cisco Systems, Cisco Systems Capital, the Cisco Systems logo, Cisco Unity, Collaboration Without Limitation,
EtherFast, EtherSwitch, Event Center, Fast Step, Follow Me Browsing, FormShare, GigaDrive, HomeLink, Internet
Quotient, IOS, iPhone, iQuick Study, IronPort, the IronPort logo, LightStream, Linksys, MediaTone, MeetingPlace,
MeetingPlace Chime Sound, MGX, Networkers, Networking Academy, Network Registrar, PCNow, PIX, PowerPanels,
ProConnect, ScriptShare, SenderBase, SMARTnet, Spectrum Expert, StackWise, The Fastest Way to Increase Your In-
ternet Quotient, TransPath, WebEx, and the WebEx logo are registered trademarks of Cisco Systems, Inc. and/or its
affiliates in the United States and certain other countries.
All other trademarks mentioned in this document or website are the property of their respective owners. The use of
the word partner does not imply a partnership relationship between Cisco and any other company. (0809R)
Introduction
Audience
Requirements
Considerations
    Scale
    Capacity
    Multicast
Installation
    Prerequisites
    IP Addressing
    DHCP
    DNS
    NTP
    VLANs
    Physical Installation
    Cabling
    NTP
    Server Ports
Resources
Cisco HyperFlex systems are built on the Cisco UCS platform, combining Cisco HX-Series x86 servers and integrated networking technologies via Cisco UCS Fabric Interconnects into a single management domain, along with industry leading virtualization hypervisor software from VMware and new software defined storage technology. The combination creates a virtualization platform that also provides the network connectivity for the guest virtual machine (VM) connections, and the distributed storage to house the VMs, using Cisco UCS x86 servers instead of specialized components. The unique storage features of the newly developed log based filesystem enable rapid cloning of VMs, snapshots without the traditional performance penalties, and data deduplication and compression, all without requiring the purchase of all-flash storage systems. All configuration, deployment, management, and monitoring tasks of the solution can be done with the existing tools for Cisco UCS and VMware, such as Cisco UCS Manager and VMware vCenter. This powerful linking of advanced technology stacks into a single, simple, rapidly deployable solution makes Cisco HyperFlex a true second generation hyperconverged platform for the modern data center.
Solution Overview
Introduction
The Cisco HyperFlex System provides an all-purpose virtualized server platform, with hypervisor hosts, network connectivity, and virtual server storage across a set of Cisco UCS HX-Series x86 rack-mount servers. Legacy data center deployments relied on a disparate set of technologies, each performing a distinct and specialized function, such as network switches connecting endpoints and transferring Ethernet network traffic, and Fibre Channel (FC) storage arrays providing block based storage devices via a dedicated storage area network (SAN). Each of these systems had unique requirements for hardware, connectivity, management tools, operational knowledge, monitoring, and ongoing support. A legacy virtual server environment operated in silos, within which only a single technology operated, along with its associated software tools and support staff. Silos were often divided between the x86 computing hardware, the networking connectivity of those x86 servers, SAN connectivity and storage device presentation, the hypervisors, virtual platform management, and the guest VMs themselves along with their operating systems and applications. This model proves to be inflexible, difficult to navigate, and susceptible to numerous operational inefficiencies.
To meet the needs of the modern and agile data center, a new model called converged architecture gained wide acceptance. A converged architecture attempts to collapse the traditional siloed architecture by combining various technologies into a single environment, which has been designed to operate together in
pre-defined, tested, and validated designs. A key component of the converged architecture was the
revolutionary combination of x86 rack and blade servers, along with converged Ethernet and Fibre Channel
networking offered by the Cisco UCS platform. Converged architectures leverage Cisco UCS, plus new
deployment tools, management software suites, automation processes, and orchestration tools to overcome
the difficulties deploying traditional environments, and do so in a much more rapid fashion. These new tools
place the ongoing management and operation of the system into the hands of fewer staff, with faster
deployment of workloads based on business needs, while still remaining at the forefront in providing
flexibility to adapt to changing workload needs, and offering the highest possible performance. Cisco has proved to be incredibly successful in these areas with its partners, developing leading solutions such as the FlexPod, SmartStack, VersaStack, and Vblock architectures. Despite these advancements, because these
converged architectures incorporate legacy technology stacks, particularly in the storage subsystems, there
often remained a division of responsibility amongst multiple teams of administrators. Alongside the tremendous advantages of the converged infrastructure approach, there is also a downside: these architectures use a complex combination of components where a simpler system would often suffice to serve the required workloads.
Significant changes in the storage marketplace have given rise to the software defined storage (SDS)
system. Legacy FC storage arrays continued to utilize a specialized subset of hardware, such as Fibre
Channel Arbitrated Loop (FC-AL) based controllers and disk shelves along with optimized Application
Specific Integrated Circuits (ASIC), read/write data caching modules and cards, plus highly customized
software to operate the arrays. With the rise in the Serial Attached SCSI (SAS) bus technology and its
inherent benefits, storage array vendors began to transition their internal architectures to SAS, and with
dramatic increases in processing power in the recent x86 processor architectures, fewer or no custom
ASICs are used. As physical disk sizes shrank, servers began to offer the same density of storage per rack unit (RU) as the arrays themselves, and with the proliferation of NAND based flash memory solid state disks (SSDs), they also gained access to input/output (IO) devices whose speed rivaled that of dedicated caching devices. As servers now contained storage devices and technology to rival many
dedicated arrays in the market, the remaining major differentiator between them was the software providing
allocation, presentation and management of the storage, plus the advanced features many vendors offered.
This led to the increased adoption of software defined storage, in which x86 servers containing storage devices run software that effectively turns one or more of them into a storage array, much like the traditional arrays they replace. In a somewhat unexpected turn of events, some of the major storage array vendors themselves were pioneers in this field, recognizing the shift in the market and attempting to profit from their unique software features, rather than the specialized hardware they had relied on in the past.
Some early uses of SDS systems simply replaced the traditional storage array in the converged architectures
as described earlier. This infrastructure approach still used a separate storage system from the virtual server
hypervisor platform, and depending on the solution provider, also still used separate network devices. If the same model of server could host the virtual servers and also provide the SDS environment, could it not simply do both things at once and collapse the two functions into one? This idea and combination of resources is what the industry has given the moniker of hyperconverged infrastructure.
Hyperconverged infrastructures combine the computing, memory, hypervisor, and storage devices of
servers into a single monolithic platform for virtual servers. There is no longer a separate storage system, as
the servers running the hypervisors also provide the software defined storage resources to store the virtual
servers, effectively storing the virtual machines on themselves. A hyperconverged infrastructure is far more
self-contained, simpler to use, faster to deploy, easier to consume, yet flexible and with high performance.
By combining the convergence of compute and network resources provided by Cisco UCS, along with the
new hyperconverged storage software, the Cisco HyperFlex system uniquely provides the compute
resources, network connectivity, storage, and hypervisor platform to run an entire virtual environment, all
contained in a single system.
Audience
The intended audience for this document includes, but is not limited to, sales engineers, field consultants,
professional services, IT managers, partner engineering, and customers deploying the Cisco HyperFlex
System. External references are provided wherever applicable, but readers are expected to be familiar with
VMware specific technologies, infrastructure concepts, networking connectivity, and security policies of the
customer installation.
Solution Summary
The Cisco HyperFlex system provides a fully contained virtual server platform, with compute and memory
resources, integrated networking connectivity, a distributed high performance log-based filesystem for VM
storage, and the hypervisor software for running the virtualized servers, all within a single Cisco UCS
management domain.
Figure 1 HyperFlex System Overview
Computing: The system is based on an entirely new class of computing system that incorporates rack-mount and blade servers based on Intel Xeon processors.
Network: The system is integrated onto a low-latency, lossless, 10-Gbps unified network fabric. This network foundation consolidates LANs, SANs, and high-performance computing networks, which are separate networks today. The unified fabric lowers costs by reducing the number of network adapters, switches, and cables, and by decreasing the power and cooling requirements.
Virtualization: The system unleashes the full potential of virtualization by enhancing the scalability, performance, and operational control of virtual environments. Cisco security, policy enforcement, and diagnostic features are now extended into virtualized environments to better support changing business and IT requirements.
Storage access: The system provides consolidated access to both SAN storage and Network Attached Storage (NAS) over the unified fabric. By unifying storage access, the Cisco Unified Computing System can access storage over Ethernet, Fibre Channel, Fibre Channel over Ethernet (FCoE), and iSCSI. This provides customers with choice for storage access and investment protection. In addition, server administrators can pre-assign storage-access policies for system connectivity to storage resources, simplifying storage connectivity and management for increased productivity.
Management: The system uniquely integrates all system components, enabling the entire solution to be managed as a single entity by Cisco UCS Manager (UCSM). Cisco UCS Manager has an intuitive graphical user interface (GUI), a command-line interface (CLI), and a robust application programming interface (API) to manage all system configuration and operations.
The Cisco Unified Computing System is designed to deliver:
A cohesive, integrated system which unifies the technology in the data center. The system is managed, serviced, and tested as a whole.
Scalability through a design for hundreds of discrete servers and thousands of virtual machines, and the capability to scale I/O bandwidth to match demand.
The Cisco UCS 6200 Series provides the management and communication backbone for the Cisco UCS C-
Series and HX-Series rack-mount servers, Cisco UCS B-Series Blade Servers and Cisco UCS 5100 Series
Blade Server Chassis. All servers and chassis, and therefore all blades, attached to the Cisco UCS 6200
Series Fabric Interconnects become part of a single, highly available management domain. In addition, by
supporting unified fabric, the Cisco UCS 6200 Series provides both the LAN and SAN connectivity for all
blades within its domain.
From a networking perspective, the Cisco UCS 6200 Series uses a cut-through architecture, supporting deterministic, low-latency, line-rate 10 Gigabit Ethernet on all ports, 1 Tb of switching capacity, and 160 Gbps of bandwidth per chassis, independent of packet size and enabled services. The product family supports
Cisco low-latency, lossless 10 Gigabit Ethernet unified network fabric capabilities, which increase the
reliability, efficiency, and scalability of Ethernet networks. The Fabric Interconnect supports multiple traffic
classes over a lossless Ethernet fabric from a server through an interconnect. Significant TCO savings come
from an FCoE-optimized server design in which network interface cards (NICs), host bus adapters (HBAs),
cables, and switches can be consolidated.
Four single-phase, hot-swappable power supplies are accessible from the front of the chassis. These power
supplies are 92 percent efficient and can be configured to support non-redundant, N+1 redundant, and grid
redundant configurations. The rear of the chassis contains eight hot-swappable fans, four power connectors
(one per power supply), and two I/O bays for Cisco UCS Fabric Extenders. A passive mid-plane provides up
to 40 Gbps of I/O bandwidth per server slot from each Fabric Extender. The chassis is capable of supporting
40 Gigabit Ethernet standards.
Figure 8 Cisco UCS 5108 Blade Chassis Front and Rear Views
The Cisco UCS 2204XP Fabric Extender has four 10 Gigabit Ethernet, FCoE-capable, SFP+ ports that
connect the blade chassis to the fabric interconnect. Each Cisco UCS 2204XP has sixteen 10 Gigabit
Ethernet ports connected through the midplane to each half-width slot in the chassis. Typically configured in
pairs for redundancy, two fabric extenders provide up to 80 Gbps of I/O to the chassis.
Figure 9 Cisco UCS 2204XP Fabric Extender
Replication: Data is replicated across the cluster so that data availability is not affected if single or multiple components fail (depending on the replication factor configured).
Deduplication: Deduplication is always on, helping reduce storage requirements in virtualization clusters in which multiple operating system instances in client virtual machines result in large amounts of replicated data.
Compression: Compression further reduces storage requirements, lowering costs, and the log-structured file system is designed to store variable-sized blocks, minimizing internal fragmentation.
Thin provisioning: Large volumes can be created without requiring storage to support them until the need arises, simplifying data volume growth and making storage a "pay as you grow" proposition.
Fast, space-efficient clones: Clones, called HyperFlex ReadyClones, rapidly replicate storage volumes so that virtual machines can be replicated simply through a few small metadata operations, with actual data copied only for new write operations.
Snapshots: Snapshots help facilitate backup and remote-replication operations, needed in enterprises that require always-on data availability.
IO Visor: This VIB provides a network file system (NFS) mount point so that the ESXi hypervisor can access the virtual disks that are attached to individual virtual machines. From the hypervisor's perspective, it is simply attached to a network file system.
VMware API for Array Integration (VAAI): This storage offload API allows vSphere to request advanced file system operations such as snapshots and cloning. The controller implements these operations through manipulation of metadata rather than actual data copying, providing rapid response, and thus rapid deployment of new environments.
Replication Factor
The policy for the number of duplicate copies of each storage block is chosen during cluster setup, and is
referred to as the replication factor (RF). The default setting for the Cisco HyperFlex HX Data Platform is
replication factor 3 (RF=3).
Replication Factor 3: For every I/O write committed to the storage layer, 2 additional copies of the
blocks written will be created and stored in separate locations, for a total of 3 copies of the blocks.
Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the
same disks, nor on the same nodes of the cluster. This setting can tolerate simultaneous failures of 2
disks, or 2 entire nodes without losing data and without resorting to restore from backup or other
recovery processes.
Replication Factor 2: For every I/O write committed to the storage layer, 1 additional copy of the
blocks written will be created and stored in separate locations, for a total of 2 copies of the blocks.
Blocks are distributed in such a way as to ensure multiple copies of the blocks are not stored on the
same disks, nor on the same nodes of the cluster. This setting can tolerate a failure of 1 disk, or 1
entire node without losing data and without resorting to restore from backup or other recovery
processes.
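The effect of the replication factor on failure tolerance and on the ratio of raw to usable space can be illustrated with a short calculation. The sketch below is illustrative only; the function name and the assumption of ideal conditions (no metadata or filesystem overhead) are not part of the HyperFlex software.

```python
# Illustrative sketch: usable capacity and failure tolerance per replication factor.
# Assumes ideal conditions and ignores metadata/filesystem overhead (hypothetical helper).

def rf_summary(raw_capacity_tib: float, replication_factor: int) -> dict:
    """Return a rough estimate of usable space and tolerated failures for a given RF."""
    if replication_factor not in (2, 3):
        raise ValueError("HyperFlex supports replication factor 2 or 3")
    return {
        "copies_per_block": replication_factor,
        "usable_tib_estimate": raw_capacity_tib / replication_factor,
        "tolerated_disk_or_node_failures": replication_factor - 1,
    }

if __name__ == "__main__":
    # Example: 100 TiB of raw capacity-layer space in the cluster.
    for rf in (2, 3):
        print(rf, rf_summary(100.0, rf))
```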
Requirements
The following sections detail the physical hardware, software revisions, and firmware versions required to
install the Cisco HyperFlex system. The components described are for a single 8 node Cisco HX cluster, or for a single 4+4 hybrid cluster combining four converged nodes with four compute-only blade servers.
Physical Components
Table 1 HyperFlex System Components
Fabric Interconnects: Two Cisco UCS 6248UP Fabric Interconnects
Servers: Eight Cisco HX-Series HX220c M4S servers, or eight Cisco HX-Series HX240c M4SX servers, or four Cisco HX-Series HX240c M4SX servers plus four Cisco UCS B200 M4 blade servers
Chassis: Cisco UCS 5108 Blade Chassis (only if using the B200 M4 servers)
Fabric Extenders: Cisco UCS 2204XP Fabric Extenders (required for the 5108 blade chassis and B200 M4 blades)
The following table lists the hardware component options for the HX220c M4S server model:
For full specifications for the HX220c M4S, download the following document:
http://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/hyperflex-hx-
series/datasheet-c78-736817.pdf
The following table lists the hardware component options for the HX240c-M4SX server model:
SSD: One 120 GB 2.5 inch Enterprise Value 6G SATA SSD (in the rear internal device bay)
For full specifications for the HX240c M4SX, download the following document:
http://www.cisco.com/c/dam/en/us/products/collateral/hyperconverged-infrastructure/hyperflex-hx-
series/datasheet-c78-736818.pdf
The following table lists the hardware component options for the B200 M4 server model:
Processors: Choose any Intel E5-26xx v3 or v4 processor model. Use of all v3 or all v4 processors within the same HyperFlex cluster is recommended.
SSD: None
HDD: None
HX-VSI-HX220M4-C1XKA1 1x HX220c M4S server, with E5-2680 v3 CPUs and 384 GB RAM to expand
an existing cluster
HX-VSI-HX220M4-C1VXKA1 1x HX220c M4S server, with E5-2680 v3 CPUs and 384 GB RAM to expand
an existing cluster
HX-VSI-HX240M4-C2SKA1 2x Cisco UCS 6248UP Fabric Interconnects
4x HX240c M4S servers, with E5-2680 v3 CPUs, 384 GB RAM and 15 1.2TB
capacity disks
4x HX240c M4S servers, with E5-2680 v3 CPUs, 384 GB RAM and 15 1.2TB
capacity disks
HX-VSI-HX240M4-C2XKA1 1x HX240c M4S server, with E5-2680 v3 CPUs, 384 GB RAM and 15 1.2TB
capacity disks to expand an existing cluster
HX-VSI-HX240M4-C2VXKA1 1x HX240c M4S server, with E5-2680 v3 CPUs, 384 GB RAM and 15 1.2TB
capacity disks to expand an existing cluster
4x HX240c M4S servers, with E5-2680 v3 CPUs, 384 GB RAM and 23 1.2TB
capacity disks
4x HX240c M4S servers, with E5-2680 v3 CPUs, 384 GB RAM and 23 1.2TB
capacity disks
HX-VSI-HX240M4-C3VXKA1 1x HX240c M4S server, with E5-2680 v3 CPUs, 384 GB RAM and 23 1.2TB
capacity disks to expand an existing cluster
Software Components
The following table lists the software components and the versions required for the Cisco HyperFlex system:
Hypervisor: VMware ESXi 5.5 Update 3b Patch 1, or VMware ESXi 6.0 Update 1b or Update 2
VMware vSphere Standard, Essentials Plus, and ROBO editions are also supported, but only for vSphere 6.0 versions. Currently for these editions, upgrades of HX Data Platform software would need to occur in an offline maintenance window.
Management Server: VMware vCenter Server for Windows or vCenter Server Appliance 5.5 Update 3b or later.
Cisco UCS Firmware: Cisco UCS Infrastructure software, B-Series and C-Series bundles, revision 2.2(6f).
To support Intel E5-26xx v4 processors, the Cisco UCS Infrastructure software, B-Series and C-Series bundles must be upgraded to revision 2.2(7c).
Considerations
Version Control
Note that the software revisions listed in Table 6 are the only valid and supported configuration at the time
of the publishing of this validated design. Special care must be taken not to alter the revision of the
hypervisor, vCenter server, Cisco HX platform software, or the Cisco UCS firmware without first consulting
the appropriate release notes and compatibility matrixes to ensure that the system is not being modified into
an unsupported configuration.
vCenter Server
This document does not cover the installation and configuration of VMware vCenter Server for Windows, or
the vCenter Server Appliance. The vCenter Server must be installed and operational prior to the installation
of the Cisco HyperFlex HX Data Platform software. The following best practice guidance applies to
installations of HyperFlex 1.7.1:
Do not modify the default TCP port settings of the vCenter installation. Using non-standard ports can
lead to failures during the installation.
Building the vCenter server as a virtual machine inside the HyperFlex cluster environment on a HX-
Series node is not allowed. There are no valid locations within the HX-Series servers that will contain
enough usable space in a VMFS datastore to house a vCenter server, as once the software is
installed, all of the available disks and space will be fully used by the HyperFlex cluster. Building any
virtual machines on the HyperFlex servers prior to installing the HyperFlex HX Data Platform software
is therefore not possible, as it will lead to installation failures. Please build the vCenter server on a
physical server or in a virtual environment outside of the HyperFlex cluster.
Scale
Cisco HyperFlex clusters currently scale up from a minimum of 3 to a maximum of 8 converged nodes per cluster, i.e. 8 nodes providing storage resources to the HX Distributed Filesystem. For compute intensive workloads, a hybrid cluster configuration with 3-8 Cisco HX240c M4SX model servers can be combined with up to 4 Cisco B200 M4 blades, called compute-only nodes. The number of compute-only nodes cannot exceed the number of converged nodes.
Beyond a single cluster, the environment can be scaled out by connecting additional servers to the Cisco UCS domain, installing an additional HyperFlex cluster on them, and controlling them via the same vCenter server. A maximum of 4 HyperFlex clusters can be managed by a single vCenter server, therefore the maximum size of a single HyperFlex environment is 32 converged nodes, plus up to 16 additional compute-only blades.
Capacity
Overall usable cluster capacity is based on a number of factors. The number of nodes in the cluster must be
considered, plus the number and size of the capacity layer disks. Caching disk sizes are not calculated as
part of the cluster capacity. The replication factor of the HyperFlex HX Data Platform also affects the cluster
capacity as it defines the number of copies of each block of data written.
Disk drive manufacturers have adopted a size reporting methodology using calculation by powers of 10, also
known as decimal prefix. As an example, a 120 GB disk is listed with a minimum of 120 x 10^9 bytes of
usable addressable capacity, or 120 billion bytes. However, many operating systems and filesystems report
their space based on standard computer binary exponentiation, or calculation by powers of 2, also called
binary prefix. In this example, 2^10 or 1024 bytes make up a kilobyte, 2^10 kilobytes make up a megabyte,
2^10 megabytes make up a gigabyte, and 2^10 gigabytes make up a terabyte. As the values increase, the disparity between the two systems of measurement and notation gets worse; at the terabyte level, the deviation between a decimal prefix value and a binary prefix value is nearly 10%.
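The size of that deviation can be checked with simple arithmetic; the snippet below is a quick illustration, not part of any HyperFlex tooling.

```python
# Compare decimal (powers of 10) and binary (powers of 2) prefix values.
decimal_tb = 10 ** 12          # 1 TB as reported by a drive manufacturer
binary_tib = 2 ** 40           # 1 TiB as reported by most operating systems

deviation = 1 - (decimal_tb / binary_tib)
print(f"1 TB (decimal) = {decimal_tb / binary_tib:.4f} TiB (binary)")
print(f"Deviation at the terabyte level: {deviation:.1%}")   # roughly 9-10%
```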
The International System of Units (SI) defines values and decimal prefixes by powers of 10 as follows:
Value          Symbol    Name
1000 bytes     kB        kilobyte
1000 kB        MB        megabyte
1000 MB        GB        gigabyte
1000 GB        TB        terabyte
The International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) define values and binary prefixes by powers of 2 in ISO/IEC 80000-13:2008 Clause 4 as follows:
Value          Symbol    Name
1024 bytes     KiB       kibibyte
1024 KiB       MiB       mebibyte
1024 MiB       GiB       gibibyte
1024 GiB       TiB       tebibyte
For the purpose of this document, the decimal prefix numbers are used only for raw disk capacity as listed
by the respective manufacturers. For all calculations where raw or usable capacities are shown from the
perspective of the HyperFlex software, filesystems or operating systems, the binary prefix numbers are
used. This is done primarily to show a consistent set of values as seen by the end user from within the
HyperFlex vCenter Web Plugin when viewing cluster capacity, allocation and consumption, and also within
most operating systems.
The following table lists a set of HyperFlex HX Data Platform cluster usable capacity values, using binary
prefix, for an array of cluster configurations. These values are useful for determining the appropriate size of
HX cluster to initially purchase, and how much capacity can be gained by adding capacity disks. The
calculations for these values are listed in appendix A.
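As a rough sizing aid, the general form of such a calculation is sketched below. It assumes identical capacity disks, converts the manufacturer's decimal capacity to binary prefix values, and divides by the replication factor, while ignoring filesystem metadata and other overhead; the values in the table and the method in appendix A remain authoritative.

```python
# Simplified usable-capacity estimate for a HyperFlex cluster (illustrative only).
# Ignores metadata and filesystem overhead; see appendix A for the authoritative method.

def estimate_usable_tib(nodes: int, capacity_disks_per_node: int,
                        disk_size_tb_decimal: float, replication_factor: int) -> float:
    disk_size_bytes = disk_size_tb_decimal * 10 ** 12      # decimal TB from the manufacturer
    disk_size_tib = disk_size_bytes / 2 ** 40              # convert to binary TiB
    raw_tib = nodes * capacity_disks_per_node * disk_size_tib
    return raw_tib / replication_factor

# Example: 8 HX240c nodes, 15 x 1.2 TB capacity disks each, replication factor 3.
print(f"{estimate_usable_tib(8, 15, 1.2, 3):.1f} TiB usable (estimate)")
```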
Physical Topology
Topology Overview
The Cisco HyperFlex system is composed of a pair of Cisco UCS 6248UP Fabric Interconnects, along with
up to 8 HX-Series rack mount servers per cluster, with the option of adding Cisco UCS 5108 Blade chassis
to house Cisco UCS B200 M4 blade servers for additional compute resources in a hybrid cluster design. Up
to 4 separate HX clusters can be installed under a single pair of Fabric Interconnects. The Fabric Interconnects both connect to every HX-Series rack mount server, and both connect to every Cisco UCS 5108 blade chassis. Upstream network connections, also referred to as northbound network connections, are made from the Fabric Interconnects to the customer data center network at the time of installation.
Figure 12 HyperFlex Standard Topology
Figure 13 HyperFlex Hybrid Topology
Fabric Interconnects
Fabric Interconnects (FI) are deployed in pairs, wherein the two units operate as a management cluster,
while forming two separate network fabrics, referred to as the A side and B side fabrics. Therefore, many
design elements will refer to FI A or FI B, alternatively called fabric A or fabric B. Both Fabric Interconnects
are active at all times, passing data on both network fabrics for a redundant and highly available
configuration. Management services, including Cisco UCS Manager, are also provided by the two FIs but in a
clustered manner, where one FI is the primary, and one is secondary, with a roaming clustered IP address.
This primary/secondary relationship is only for the management cluster, and has no effect on data
transmission.
Fabric Interconnects have the following ports, which must be connected for proper management of the
Cisco UCS domain:
Mgmt: A 10/100/1000 Mbps port for managing the Fabric Interconnect and the Cisco UCS domain
via GUI and CLI tools. Also used by remote KVM, IPMI and SoL sessions to the managed servers
within the domain. This is typically connected to the customer management network.
L1: A cross connect port for forming the Cisco UCS management cluster. This is connected directly
to the L1 port of the paired Fabric Interconnect using a standard CAT5 or CAT6 Ethernet cable with
RJ45 plugs. It is not necessary to connect this to a switch or hub.
L2: A cross connect port for forming the Cisco UCS management cluster. This is connected directly
to the L2 port of the paired Fabric Interconnect using a standard CAT5 or CAT6 Ethernet cable with
RJ45 plugs. It is not necessary to connect this to a switch or hub.
Console: An RJ45 serial port for direct console access to the Fabric Interconnect. Typically used
during the initial FI setup process with the included serial to RJ45 adapter cable. This can also be
plugged into a terminal aggregator or remote console server device.
Logical Topology
The Cisco HyperFlex system has communication pathways that fall into four defined zones:
Management Zone: This zone comprises the connections needed to manage the physical hardware,
the hypervisor hosts, and the storage platform controller virtual machines (SCVM). These interfaces
and IP addresses need to be available to all staff who will administer the HX system, throughout the
LAN/WAN. This zone must provide access to Domain Name System (DNS) and Network Time
Protocol (NTP) services, and allow Secure Shell (SSH) communication. The VLAN used for
management traffic must be able to traverse the network uplinks from the Cisco UCS domain,
reaching both FI A and FI B. In this zone are multiple physical and virtual components:
— Cisco UCS external management interfaces used by the servers and blades, which answer via the FI
management ports.
VM Zone: This zone comprises the connections needed to service network IO to the guest VMs that
will run inside the HyperFlex hyperconverged system. This zone typically contains multiple VLANs,
that are trunked to the Cisco UCS Fabric Interconnects via the network uplinks, and tagged with
802.1Q VLAN IDs. These interfaces and IP addresses need to be available to all staff and other
computer endpoints which need to communicate with the guest VMs in the HX system, throughout
the LAN/WAN.
Storage Zone: This zone comprises the connections used by the Cisco HX Data Platform software,
ESXi hosts, and the storage controller VMs to service the HX Distributed Filesystem. These interfaces
and IP addresses need to be able to communicate with each other at all times for proper operation.
During normal operation, this traffic all occurs within the Cisco UCS domain, however there are
hardware failure scenarios where this traffic would need to traverse the network northbound of the
Cisco UCS domain. For that reason, the VLAN used for HX storage traffic must be able to traverse
the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice-versa. This zone
is primarily jumbo frame traffic therefore jumbo frames must be enabled on the Cisco UCS uplinks. In
this zone are multiple components:
— A vmkernel interface on each ESXi host in the HX cluster, used for storage traffic.
VMotion Zone: This zone comprises the connections used by the ESXi hosts to enable vMotion of
the guest VMs from host to host. During normal operation, this traffic all occurs within the Cisco UCS
domain, however there are hardware failure scenarios where this traffic would need to traverse the
network northbound of the Cisco UCS domain. For that reason, the VLAN used for HX vMotion traffic must be able to traverse the network uplinks from the Cisco UCS domain, reaching FI A from FI B, and vice-versa.
Refer to the following figure for an illustration of the logical network design:
Figure 16 Logical Network Design
Design Elements
Installation of the HyperFlex system is primarily done via a deployable HyperFlex installer virtual machine,
available for download at cisco.com as an OVA file. The installer VM performs most of the Cisco UCS configuration work, can be leveraged to simplify the installation of ESXi on the HyperFlex hosts, and also performs significant portions of the ESXi configuration. Finally, the installer VM is used to install the
HyperFlex HX Data Platform software and create the HyperFlex cluster. Because this simplified installation
method has been developed by Cisco, this CVD will not give detailed manual steps for the configuration of
all the elements that are handled by the installer. Instead, the elements configured will be described and
documented in this section, and the subsequent sections will guide you through the manual steps needed for
installation, and how to utilize the HyperFlex Installer for the remaining configuration steps.
Network Design
Cisco UCS Fabric Interconnects appear on the network as a collection of endpoints versus another network
switch. Internally, the Fabric Interconnects do not participate in spanning-tree protocol (STP) domains, and
the Fabric Interconnects cannot form a network loop, as they are not connected to each other with a layer 2
Ethernet link. All link up/down decisions via STP will be made by the upstream root bridges.
Uplinks need to be connected and active from both of the Fabric Interconnects. For redundancy, multiple
uplinks can be used on each FI, either as 802.3ad Link Aggregation Control Protocol (LACP) port-channels,
or using individual links. For the best level of performance and redundancy, uplinks can be made as LACP
port-channels to a pair of upstream Cisco switches using the virtual port channel (vPC) feature. Using vPC
uplinks allows all uplinks to be active passing data, plus protects against any individual link failure, and the
failure of an upstream switch. Other uplink configurations can be redundant, but spanning-tree protocol loop
avoidance may disable links if vPC is not available.
All uplink connectivity methods must allow for traffic to pass from one Fabric Interconnect to the other, or
from fabric A to fabric B. There are scenarios where cable, port or link failures would require traffic that
normally does not leave the Cisco UCS domain, to now be forced over the Cisco UCS uplinks. Additionally,
this traffic flow pattern can be seen briefly during maintenance procedures, such as updating firmware on
the Fabric Interconnects, which requires them to be rebooted. The following sections and figures detail
several uplink connectivity options.
Example configurations for Cisco Nexus 9372 switches using vPC are provided in appendix C: Example Nexus 9372 Switch Configurations.
Table 10 VLANs
VLAN Name / VLAN ID / Purpose
hx-inband-mgmt: HX Storage Controller VM management interfaces
A dedicated network or subnet for physical device management is often used in data centers. In this scenario, the mgmt0 interfaces of the Fabric Interconnects would be connected to that dedicated network or subnet. This is a valid configuration for HyperFlex installations with the following caveat: wherever the HyperFlex installer is deployed, it must have IP connectivity to the subnet of the mgmt0 interfaces of the Fabric Interconnects, and also have IP connectivity to the subnets used by the hx-inband-mgmt and hx-storage-data VLANs listed above.
Jumbo Frames
All HyperFlex storage traffic traversing the hx-storage-data VLAN and subnet is configured to use jumbo
frames, or to be precise all communication is configured to send IP packets with a Maximum Transmission
Unit (MTU) size of 9000 bytes. Using a larger MTU value means that each IP packet sent carries a larger
payload, therefore transmitting more data per packet, and consequently sending and receiving data faster.
This requirement also means that the Cisco UCS uplinks must be configured to pass jumbo frames. Failure to
configure the Cisco UCS uplink switches to allow jumbo frames can lead to service interruptions during
some failure scenarios, particularly when cable or port failures would cause storage traffic to traverse the
northbound Cisco UCS uplink switches.
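End-to-end jumbo frame connectivity can be spot-checked from the ESXi shell with vmkping, sending a payload sized to the 9000-byte MTU with the don't-fragment flag set. The wrapper below is only a hedged sketch; the vmkernel interface name (vmk1) and the target address are assumptions to be replaced with values from the actual environment.

```python
# Hedged sketch: spot-check jumbo frame connectivity on the storage vmkernel interface
# by shelling out to vmkping from the ESXi shell. The interface name (vmk1) and the
# target IP address are assumptions and must be adjusted for the actual environment.
import subprocess

def check_jumbo(target_ip, vmk_interface="vmk1"):
    # 8972 bytes = 9000-byte MTU minus the 20-byte IP header and 8-byte ICMP header
    cmd = ["vmkping", "-I", vmk_interface, "-d", "-s", "8972", "-c", "3", target_ip]
    result = subprocess.run(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.STDOUT, universal_newlines=True)
    print(result.stdout)
    return result.returncode == 0

if __name__ == "__main__":
    print("Jumbo frames OK" if check_jumbo("192.168.100.12") else "Jumbo frame check failed")
```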
Multicast
The HyperFlex Storage Platform Controller VMs configure a roaming management virtual IP address using
UCARP, which relies on IPv4 multicast to function properly. The controller VMs advertise to one another
using the standard VRRP IPv4 multicast address of 224.0.0.18, which is a link-local address, and is therefore
not forwarded by routers or inspected by IGMP snooping. This requirement means the Cisco UCS uplinks
and uplink switches must allow IPv4 multicast traffic for the HyperFlex management VLAN. Failure to
configure the Cisco UCS uplink switches to allow IPv4 multicast traffic can lead to service interruptions
during some failure scenarios, particularly when cable or port failures would cause multicast management
traffic to traverse the northbound Cisco UCS uplink switches.
Cisco Nexus switches allow multicast traffic and enable IGMP snooping by default.
The Silver QoS system class has the multicast optimized setting enabled because the Silver class is used by
the Silver QoS policy, which is in turn used by the management network vNICs. They transmit IPv4 multicast
packets to determine the master server for the roaming cluster management IP address, configured by
UCARP.
QoS Policies
In order to apply the settings defined in the Cisco UCS QoS System Classes, specific QoS Policies must be
created, and then assigned to the vNICs, or vNIC templates used in Cisco UCS Service Profiles. The
following table details the QoS Policies configured for HyperFlex, and their default assignment to the vNIC
templates created:
VLANs
VLANs are created by the HyperFlex installer to support a base HyperFlex system, with a single VLAN
defined for guest VM traffic, and a VLAN for vMotion. Names and IDs for the VLANs are defined in the Cisco
UCS configuration page of the HyperFlex installer web interface. The VLANs listed in Cisco UCS must
already be present on the upstream network, and the Cisco UCS FIs do not participate in VLAN Trunk
Protocol (VTP). The following table and figure detail the VLANs configured for HyperFlex:
Best practices mandate that MAC addresses used for Cisco UCS domains use 00:25:B5 as the first three
bytes, which is one of the Organizationally Unique Identifiers (OUI) registered to Cisco Systems, Inc. The
fourth byte (for example, 00:25:B5:xx) is specified during the HyperFlex installation. The fifth byte is set
automatically by the HyperFlex installer, to correlate to the Cisco UCS fabric and the vNIC placement order.
Finally, the last byte is incremented according to the number of MAC addresses created in the pool. To avoid
overlaps, you must ensure that the first four bytes of the MAC address pools are unique for each HyperFlex
system installed in the same layer 2 network, and also different from other Cisco UCS domains which may
exist.
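The structure of these pools can be illustrated with a short sketch. The fourth byte ("A0"), the placement bytes, and the pool size shown below are hypothetical examples, not the values chosen by the installer.

```python
# Illustrative sketch of the HyperFlex MAC address pool structure.
# 00:25:B5 is the Cisco OUI; the fourth byte ("A0") is a hypothetical user-supplied value,
# the fifth byte is set per fabric/vNIC placement, and the last byte increments per address.

def build_mac_pool(user_byte: str, placement_byte: str, size: int = 100) -> list:
    return [f"00:25:B5:{user_byte}:{placement_byte}:{i:02X}" for i in range(1, size + 1)]

# Hypothetical example pools for the fabric A and fabric B management vNICs.
hv_mgmt_a = build_mac_pool("A0", "A1")
hv_mgmt_b = build_mac_pool("A0", "B2")
print(hv_mgmt_a[0], "...", hv_mgmt_a[-1])
```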
The following table details the MAC Address Pools configured for HyperFlex, and their default assignment to
the vNIC templates created:
A MAC address pool is created for each vNIC template: hv-mgmt-a, hv-mgmt-b, hv-vmotion-a, hv-vmotion-b, storage-data-a, storage-data-b, vm-network-a, and vm-network-b, with each pool assigned to the vNIC template of the same name.
Network Control Policies
The HyperFlex-vm Network Control Policy is configured with CDP enabled, MAC Register Mode set to Only Native VLAN, Action on Uplink Fail set to Link-down, and forged MAC addresses allowed; it is assigned to the vm-network-a and vm-network-b vNIC templates.
Figure 28 Network Control Policy
vNIC Templates
Cisco UCS Manager has a feature to configure vNIC templates, which can be used to simplify and speed up
configuration efforts. vNIC templates are referenced in service profiles and LAN connectivity policies, versus configuring the same vNICs individually in each service profile, or service profile template. vNIC templates
contain all the configuration elements that make up a vNIC, including VLAN assignment, MAC address pool
selection, fabric A or B assignment, fabric failover, MTU, QoS policy, Network Control Policy, and more.
Templates are created as either initial templates, or updating templates. Updating templates retain a link
between the parent template and the child object, therefore when changes are made to the template, the
changes are propagated to all remaining linked child objects. The following table details the settings in each of the vNIC templates created by the HyperFlex installer:
vNIC Template     Fabric ID    Target     MTU     MAC Pool
hv-mgmt-a         A            Adapter    1500    hv-mgmt-a
hv-mgmt-b         B            Adapter    1500    hv-mgmt-b
hv-vmotion-a      A            Adapter    9000    hv-vmotion-a
hv-vmotion-b      B            Adapter    9000    hv-vmotion-b
storage-data-a    A            Adapter    9000    storage-data-a
storage-data-b    B            Adapter    9000    storage-data-b
vm-network-a      A            Adapter    1500    vm-network-a
vm-network-b      B            Adapter    1500    vm-network-b
Adapter Policies
Cisco UCS Adapter Policies are used to configure various settings of the Converged Network Adapter (CNA)
installed in the Cisco UCS blade or rack-mount servers. Various advanced hardware features can be enabled
or disabled depending on the software or operating system being used. The following figures detail the Adapter Policy configured for HyperFlex:
Figure 29 Cisco UCS Adapter Policy Resources
Figure 30 Cisco UCS Adapter Policy Options
BIOS Policies
Cisco HX-Series servers have a set of pre-defined BIOS setting defaults defined in Cisco UCS Manager.
These settings have been optimized for the Cisco HX-Series servers running HyperFlex. The HyperFlex installer creates a BIOS policy named "HyperFlex" with all settings set to the defaults, except for enabling the Serial Port A for Serial over LAN (SoL) functionality. This policy allows for future flexibility in case
situations arise where the settings need to be modified from the default configuration.
Boot Policies
Cisco UCS Boot Policies define the boot devices used by blade and rack-mount servers, and the order that
they are attempted to boot from. Cisco HX-Series rack-mount servers and compute-only B200 M4 blade
servers have their VMware ESXi hypervisors installed to an internal pair of mirrored Cisco FlexFlash SD
cards, therefore they require a boot policy defining that the servers should boot from that location. The HyperFlex installer configures a boot policy named "HyperFlex" which specifies boot from the SD card. The
following figure details the HyperFlex Boot Policy configured to boot from SD card:
Figure 31 Cisco UCS Boot Policy
A compute-only B200 M4 blade server can optionally be configured to boot from an FC based SAN LUN. A
custom boot policy must be made to support this configuration, and used in the service profiles and templates
assigned to the B200 M4 blades.
Host Firmware Packages
Cisco UCS Host Firmware Packages define the firmware revisions applied to the managed servers, applying firmware revisions to all components that match a specific Cisco UCS firmware bundle, versus
defining the firmware revisions part by part. The following figure details the Host Firmware Package
configured by the HyperFlex installer:
Figure 32 Cisco UCS Host Firmware Package
Maintenance Policies
Cisco UCS Maintenance Policies define the behavior of the attached blades and rack-mount servers when
changes are made to the associated service profiles. The default Cisco UCS Maintenance Policy setting is "Immediate", meaning that any change requiring a reboot will result in an immediate reboot of that server. The Cisco best practice is to use a Maintenance Policy set to "user-ack", which requires acknowledgement by a user with the appropriate rights within Cisco UCS, before the server is rebooted to apply the changes. The HyperFlex installer creates a Maintenance Policy named "HyperFlex" with this "user-ack" setting. The following figure details the Maintenance Policy configured by the HyperFlex installer:
Figure 34 Cisco UCS Maintenance Policy
Power Control Policies
The HyperFlex installer creates a Power Control Policy named "HyperFlex" with power capping disabled, and fans allowed to run at full speed when necessary. The following figure details the
Power Control Policy configured by the HyperFlex installer:
Figure 35 Cisco UCS Power Control Policy
Scrub Policies
Cisco UCS Scrub Policies are used to scrub, or erase data from local disks, BIOS settings and FlexFlash SD
cards. If the policy settings are enabled, the information is wiped when the service profile using the policy is disassociated from the server. The HyperFlex installer creates a Scrub Policy named "HyperFlex" with all settings disabled, therefore all data on local disks, SD cards and BIOS settings will be preserved if a
service profile is disassociated. The following figure details the Scrub Policy configured by the HyperFlex
installer:
Figure 36 Cisco UCS Scrub Policy
vMedia Policies
Cisco UCS Virtual Media (vMedia) Policies automate the connection of virtual media files to the remote KVM
session of the Cisco UCS blades and rack-mount servers. Using a vMedia policy can speed up installation
time by automatically attaching an installation ISO file to the server, without having to manually launch the
remote KVM console and connect them one-by-one. The HyperFlex installer creates a vMedia Policy named "HyperFlex", although no virtual media mappings are defined. The following figure details the vMedia Policy
configured by the HyperFlex installer:
Figure 38 Cisco UCS vMedia Policy
Service Profile Templates
Cisco UCS service profile templates are used to create the service profiles assigned to the HyperFlex nodes, and the configuration of the compute-only nodes needs to differ from the configuration of the HyperFlex converged storage nodes. The following
table details the service profile templates configured by the HyperFlex installer:
vNIC/vHBA Placement
In order to control the order of detection of the vNICs and vHBAs defined in service profiles, Cisco UCS
allows for the definition of the placement of the vNICs and vHBAs across the cards in a blade or rack-mount
server, and the order they are seen. Since HX-series servers are configured with a single Cisco UCS VIC
1227 mLOM card, the only valid placement is on card number 1. In certain hardware configurations, the
physical mapping of cards and port extenders to their logical order is not linear, therefore each card is
referred to as a virtual connection, or vCon. Because of this, the interface where the placement and order is
defined does not refer to physical cards, but instead refers to vCons. Therefore, all the vNICs defined in the service profile templates for HX-series servers are placed on vCon 1, and their order is then defined.
Through the combination of the vNIC templates created (vNIC Templates), the LAN Connectivity Policy (LAN
Connectivity Policies), and the vNIC placement, every VMware ESXi server will detect the network interfaces
in the same order, and they will always be connected to the same VLANs via the same network fabrics. The
following table outlines the vNICs, their placement, their order, the fabric they are connected to, their default
VLAN, and how they are enumerated by the ESXi hypervisor:
ESXi VMDirectPath relies on a fixed PCI address for the pass-through devices. If the vNIC configuration is changed (add/remove vNICs), then the order of the devices seen in the PCI tree will change. The administrator will have to reconfigure the ESXi VMDirectPath configuration to select the 12 Gbps SAS HBA card, and reconfigure the storage controller settings of the controller VM.
vswitch-hx-inband-mgmt: This is the default vSwitch0 which is renamed by the ESXi kickstart file
as part of the automated installation. The default vmkernel port, vmk0, is configured in the standard
Management Network port group. The switch has two uplinks, active on fabric A and standby on
fabric B, without jumbo frames. A second port group is created for the Storage Platform Controller
VMs to connect to with their individual management interfaces. By default, all traffic is untagged.
vswitch-hx-vm-network: This vSwitch is created as part of the automated installation. The switch
has two uplinks, active on both fabrics A and B, and without jumbo frames. The HyperFlex installer
does not configure any port groups by default, as it is expected to be done as part of a post-install
step. By default, all traffic to this vSwitch is tagged.
vmotion: This vSwitch is created as part of the automated installation. A vmkernel port, used for
vMotion is not created, as it is expected to be done as part of a post-install step. The switch has two
uplinks, active on fabric A and standby on fabric B, with jumbo frames required. By default, all traffic
is untagged.
The following table and figures provide more detail on the ESXi virtual networking design as built by the HyperFlex installer. Among the port groups created are the Storage Controller Management Network and the Storage Hypervisor Data Network.
Controller VM Locations
The physical storage location of the controller VMs differs between the Cisco HX220c-M4S and HX240c-
M4SX model servers, due to differences with the physical disk location and connections on the two models
of servers. The storage controller VM is operationally no different from any other typical virtual machine in
an ESXi environment. The VM must have a virtual disk with the bootable root filesystem available in a location
separate from the SAS HBA that the VM is controlling via VMDirectPath I/O. The configuration details of the
models are as follows:
HX220c: The HX220c-M4S server stores the controller VM's root filesystem on a virtual disk placed on a 3.5 GB VMFS datastore, and that datastore is provisioned from the internal mirrored SD cards. The controller VM has full control of all the front facing hot-swappable disks via PCI pass-through control of the SAS HBA. The controller VM operating system sees the 120 GB SSD, also commonly referred to as the housekeeping disk, and places the HyperFlex binaries and logs on this disk. The remaining disks seen by the controller VM OS are used by the HX Distributed
filesystem for caching and capacity layers.
HX240c: The HX240c-M4SX server has a built-in SATA controller provided by the Intel Wellsburg
Platform Controller Hub (PCH) chip, and the 120 GB housekeeping disk is connected to it, placed in
an internal drive carrier. Since this model does not connect the 120 GB housekeeping disk to the
SAS HBA, the ESXi hypervisor remains in control of this disk, and a VMFS datastore is provisioned
there, using the entire disk. On this VMFS datastore, a 2.2 GB virtual disk is created and used by the
controller VM as /dev/sda for the root filesystem, and an 87 GB virtual disk is created and used by
the controller VM as /dev/sdb, placing the HyperFlex binaries and logs on this disk. The front-facing
hot swappable disks, seen by the controller VM OS via PCI pass-through control of the SAS HBA,
are used by the HX Distributed filesystem for caching and capacity layers.
The following figures detail the Storage Platform Controller VM placement on the ESXi hypervisor hosts:
Figure 41 HX220c Controller VM Placement
B200 M4 compute-only blades also place a lightweight storage controller VM on a 3.5 GB VMFS datastore,
provisioned from the boot drive.
Figure 42 HX240c Controller VM Placement
The following table details the controller VM memory reservations per server model:
Server Model     Memory Reservation    Reserve All Guest Memory
HX220c-M4S       48 GB                 Yes
HX240c-M4SX      72 GB                 Yes
B200 M4 compute-only blades have a lightweight storage controller VM; it is configured with only 1 vCPU and 512 MB of memory reservation.
Installation
Cisco HyperFlex systems are normally ordered with a factory pre-installation process having been done prior
to the hardware delivery. This factory integration work will deliver the HyperFlex servers with the proper
firmware revisions pre-set, a copy of the VMware ESXi hypervisor software pre-installed, and some
components of the Cisco HyperFlex software already installed. Once on site, the final steps to be performed are greatly reduced and simplified due to the previous factory work. For the purpose of this document, the entire setup process is described as
though no factory pre-installation work was done, yet still leveraging the tools and processes developed by
Cisco to simplify the installation and dramatically reduce the deployment time.
Installation of the Cisco HyperFlex system is primarily done via a deployable HyperFlex installer virtual
machine, available for download at cisco.com as an OVA file. The installer VM performs most of the Cisco
UCS configuration work, and it is used to simplify the installation of ESXi on the HyperFlex hosts. The
HyperFlex installer VM also installs the HyperFlex HX Data Platform software and creates the HyperFlex
cluster, while concurrently performing many of the ESXi configuration tasks automatically. Because this
simplified installation method has been developed by Cisco, this CVD will not give detailed manual steps for
the configuration of all the elements that are handled by the installer. The following sections will guide you
through the prerequisites and manual steps needed prior to using the HyperFlex installer, how to utilize the
HyperFlex Installer, and finally how to perform the remaining post-installation tasks.
Prerequisites
Prior to beginning the installation activities, it is important to gather the following information:
IP Addressing
IP addresses for the Cisco HyperFlex system need to be allocated from the appropriate subnets and VLANs
to be used. IP addresses that are used by the system fall into the following groups:
Cisco UCS Manager: These addresses are used and assigned by Cisco UCS manager. Three IP
addresses are used by Cisco UCS Manager, one address is assigned to each Cisco UCS Fabric
Interconnect, and the third IP address is a roaming address for managing the active FI of the Cisco
UCS cluster. In addition, at least one IP address per Cisco UCS blade or HX-series rack-mount
server is required for the default ext-mgmt IP address pool, which is assigned to the CIMC interface
of the physical servers. Since these management addresses are assigned from a pool, they need to
be provided in a contiguous block of addresses. These addresses must all be in the same subnet.
HyperFlex and ESXi Management: These addresses are used to manage the ESXi hypervisor hosts,
and the HyperFlex Storage Platform Controller VMs. Two IP addresses per node in the HyperFlex
cluster are required, and a single additional IP address is needed as the roaming HyperFlex cluster
management interface from the same subnet. These addresses can be assigned from the same
subnet as the Cisco UCS Manager addresses, or they may be separate.
HyperFlex Storage: These addresses are used by the HyperFlex Storage Platform Controller VMs, as
vmkernel interfaces on the ESXi hypervisor hosts, for sending and receiving data to/from the HX
Distributed Data Platform Filesystem. Two IP addresses per node in the HyperFlex cluster are
required, and a single additional IP address is needed as the roaming HyperFlex cluster storage
interface from the same subnet. It is recommended to provision a subnet that is not used in the
network for other purposes, and it is also possible to use non-routable IP address ranges for these
interfaces. Finally, if the Cisco UCS domain is going to contain multiple HyperFlex clusters, it is
possible to use a different subnet and VLAN ID for the HyperFlex storage traffic for each cluster. This
is a safer method, guaranteeing that storage traffic from multiple clusters cannot intermix.
VMotion: These IP addresses are used by the ESXi hypervisor hosts as vmkernel interfaces to enable
vMotion capabilities. One or more IP addresses per node in the HyperFlex cluster are required from
the same subnet. Multiple addresses and vmkernel interfaces can be used if you wish to enable
multi-NIC vMotion.
The following tables will assist with gathering the required IP addresses for the installation of an 8 node
standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the addresses required, and an example
configuration that will be used during the setup steps in this CVD:
Table 31 HyperFlex Cluster IP Addressing
Address Group: UCS Management | HyperFlex and ESXi Management | HyperFlex Storage | VMotion
VLAN ID:
Subnet:
Subnet Mask:
Gateway:
Fabric Interconnect A
Fabric Interconnect B
UCS Manager
HyperFlex Cluster
HyperFlex Node #1
HyperFlex Node #2
HyperFlex Node #3
HyperFlex Node #4
HyperFlex Node #5
HyperFlex Node #6
HyperFlex Node #7
HyperFlex Node #8
Table 32 HyperFlex Hybrid Cluster IP Addressing
Address Group: UCS Management | HyperFlex and ESXi Management | HyperFlex Storage | VMotion
VLAN ID:
Subnet:
Subnet Mask:
Gateway:
Fabric Interconnect A
Fabric Interconnect B
UCS Manager
HyperFlex Cluster
HyperFlex Node #1
HyperFlex Node #2
HyperFlex Node #3
HyperFlex Node #4
Blade #1
Blade #2
Blade #3
Blade #4
Example management addresses used during the setup steps in this CVD: Fabric Interconnect A 10.29.133.104, Fabric Interconnect B 10.29.133.105.
The Cisco UCS Management, and HyperFlex and ESXi Management IP addresses can come from the same
subnet, or be separate, as long as the HyperFlex installer can reach them both.
DHCP
By default, the ESXi hypervisor installation is configured to use Dynamic Host Configuration Protocol (DHCP)
for automatic IP address assignment to the default ESXi management interfaces. In many environments,
these management interfaces are configured manually, and this CVD will document that procedure.
However, if it is preferred to use DHCP, you may do so following these guidelines:
Configure a DHCP server with the appropriate scope for the subnet containing the ESXi hypervisor
management interfaces.
Configure a listening interface of the DHCP server in the same subnet as the ESXi hypervisor
management interfaces, or configure a DHCP relay agent on the intermediate router or switch to forward the DHCP
requests to the remote server.
Configure a DHCP reservation for each ESXi hypervisor host, so that each host will always be offered
the same IP address lease. The MAC addresses for the ESXi hypervisor management vNICs can be
found in Cisco UCS Manager, and exported to a file to ease the task of creating the reservations.
Figure 43 Export MAC Addresses
An example PowerShell script for creating the DHCP reservations from the Cisco UCS Manager CSV export
file on a Windows DHCP server is included in the appendix: Create DHCP Reservations
DNS
DNS servers must be available to query in the HyperFlex and ESXi Management group. DNS records need to
be created prior to beginning the installation. At a minimum, it is highly recommended to create records for
the ESXi hypervisor management interfaces. Additional records can be created for the Storage
Controller Management interfaces, ESXi Hypervisor Storage interfaces, and the Storage Controller Storage
interfaces if desired.
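If the zone is hosted on a Windows DNS server, the records can also be created in bulk with PowerShell. The following is a minimal sketch, assuming the DnsServer module is available on the DNS server; the host names and addresses shown are examples only and must be adjusted to match the environment.
# Create DNS A records (and matching PTR records) for the ESXi management interfaces
# Host names and IP addresses below are examples; adjust to the addresses gathered above
$zone = "hx.lab.cisco.com"
$records = @{ "hx220-01" = "192.168.10.11"; "hx220-02" = "192.168.10.12" }
foreach ($name in $records.Keys) {
    Add-DnsServerResourceRecordA -ZoneName $zone -Name $name -IPv4Address $records[$name] -CreatePtr
}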
The following tables will assist with gathering the required DNS information for the installation of an 8 node
standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example
configuration that will be used during the setup steps in this CVD:
DNS Server #1
DNS Server #2
DNS Domain
HX Server #1 Name
HX Server #2 Name
HX Server #3 Name
HX Server #4 Name
HX Server #5 Name
HX Server #6 Name
HX Server #7 Name
HX Server #8 Name
NTP
Consistent time is required across the components of the HyperFlex system, provided by reliable NTP
servers that are accessible for synchronization from the Cisco UCS Management network group and the HyperFlex and
ESXi Management group. NTP is used by Cisco UCS Manager, the ESXi hypervisor hosts, and the HyperFlex
Storage Platform Controller VMs.
The following tables will assist with gathering the required NTP information for the installation of an 8 node
standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example
configuration that will be used during the setup steps in this CVD:
NTP Server #1
NTP Server #2
Timezone
Table 37 NTP Server Example Information
VLANs
Prior to the installation, the required VLAN IDs need to be documented, and created in the upstream network
if necessary. At a minimum there are 4 VLANs that need to be trunked to the Cisco UCS Fabric Interconnects
that comprise the HyperFlex system; a VLAN for the HyperFlex and ESXi Management group, a VLAN for the
HyperFlex Storage group, a VLAN for the VMotion group, and at least one VLAN for the guest VM traffic. The
VLAN IDs must be supplied during the HyperFlex Cisco UCS configuration step, and the VLAN names can
optionally be customized.
The following tables will assist with gathering the required VLAN information for the installation of an 8 node
standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an example
configuration that will be used during the setup steps in this CVD:
Table 38 VLAN Information
VLAN Name | VLAN ID
<<hx-inband-mgmt>> |
<<hx-storage-data>> |
<<vm-network>> |
<<hx-vmotion>> |
Table 39 VLAN Example Information
VLAN Name | VLAN ID
hx-inband-mgmt | 10
hx-storage-data | 51
vm-prod-100 | 100
hx-vmotion | 200
Network Uplinks
The Cisco UCS uplink connectivity design needs to be finalized prior to beginning the installation. One of the
early manual tasks to be completed is to configure the Cisco UCS network uplinks and verify their operation,
prior to beginning the HyperFlex installation steps. Refer to the network uplink design possibilities in the
Network Design section.
The following tables will assist with gathering the required network uplink information for the installation of
an 8 node standard HyperFlex cluster, or a 4+4 hybrid cluster, by listing the information required, and an
example configuration that will be used during the setup steps in this CVD:
Table 40 Network Uplink Configuration
Table 41 Network Uplink Example Configuration
Physical Installation
Install the Fabric Interconnects, the HX-Series rack-mount servers, the Cisco UCS 5108 chassis, the Cisco
UCS 2204XP Fabric Extenders, and the Cisco UCS B200 M4 blades according to their corresponding
hardware installation guides:
HX220c-M4S Server:
http://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HX_series/HX220c_M4/HX220c.html
HX240c-M4SX Server:
http://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HX_series/HX240c_M4/HX240c.html
Cabling
The physical layout of the HyperFlex system was previously described in section Physical Topology. The
Fabric Interconnects, HX-series rack-mount servers, Cisco UCS chassis and blades need to be cabled
properly before beginning the installation activities.
The following table provides a cabling example for installation of a Cisco HyperFlex system, with eight
HX240c-M4SX servers, and one Cisco UCS 5108 chassis.
1. Ensure the Fabric Interconnect cabling is properly connected, including the L1 and L2 cluster links,
and power the Fabric Interconnects on by inserting the power cords.
2. Connect to the console port on the first Fabric Interconnect, which will be designated as the A fabric
device. Use the supplied Cisco console cable (CAB-CONSOLE-RJ45=), and connect it to a built-in
DB9 serial port, or use a USB to DB9 serial port adapter.
3. Open a terminal emulator program and create a serial connection to the appropriate COM port. Set
the terminal emulation to VT100, and the settings to 9600 baud, 8 data bits, no parity, and 1 stop bit.
5. Open the connection just created. You may have to press ENTER to see the first prompt.
6. Configure the first Fabric Interconnect, using the following example as a guideline:
This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these
steps.

Type Ctrl-C at any time to abort configuration and reboot system.
To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.
Enter the setup mode; setup newly or restore from backup. (setup/restore) ?
setup
You have chosen to setup a new Fabric interconnect. Continue? (y/n): y
Switch Fabric=A
System Name=SJC02-151-AAK29
Default Gateway=10.29.133.1
Ipv6 value=0
DNS Server=171.70.168.183
Domain Name=hx.lab.cisco.com
Cluster Enabled=yes
Cluster IP Address=10.29.133.106
NOTE: Cluster IP will be configured only after both Fabric Interconnects are
initialized
Apply and save the configuration (select 'no' if you want to re-enter)?
(yes/no): yes
Configuration file - Ok
1. Connect to the console port on the second Fabric Interconnect, which will be designated as the B fabric
device. Use the supplied Cisco console cable (CAB-CONSOLE-RJ45=), and connect it to a built-in
DB9 serial port, or use a USB to DB9 serial port adapter.
2. Open a terminal emulator program and create a serial connection to the appropriate COM port. Set
the terminal emulation to VT100, and the settings to 9600 baud, 8 data bits, no parity, and 1 stop bit.
4. Open the connection just created. You may have to press ENTER to see the first prompt.
5. Configure the second Fabric Interconnect, using the following example as a guideline:
This setup utility will guide you through the basic configuration of
the system. Only minimal configuration including IP connectivity to
the Fabric interconnect and its clustering mode is performed through these
steps.

Type Ctrl-C at any time to abort configuration and reboot system.
To back track or make modifications to already entered values,
complete input till end of section and answer no when prompted
to apply configuration.
Installer has detected the presence of a peer Fabric interconnect. This Fabric
interconnect will be added to the cluster. Continue (y/n) ? y
Apply and save the configuration (select 'no' if you want to re-enter)?
(yes/no): yes
Configuration file - Ok
1. Open a web browser and navigate to the Cisco UCS Manager Cluster IP address, For example,
https://10.29.133.106
2. Click the Launch UCS Manager button to download and run the Cisco UCS Manager Java web ap-
plet.
3. If any security prompts appear, click Continue to download the Cisco UCS Manager JNLP file.
6. At the login prompt, enter the username admin and the administrative password that was
entered during the initial console configuration.
7. Click No when prompted to enable Cisco Smart Call Home; this feature can be enabled at a later
time.
Figure 44 Cisco UCS Manager
To support Intel E5-26xx v4 processors, the Cisco UCS Infrastructure software, B-Series and C-Series bun-
dles must be upgraded to revision 2.2(7c).
NTP
To synchronize the Cisco UCS environment time to the NTP server, complete the following steps:
1. In Cisco UCS Manager, click the Admin tab in the navigation pane.
3. In the Properties pane, select the appropriate time zone in the Timezone menu.
6. Click OK.
7. Click Save Changes, and then click OK.
Figure 45 NTP Settings
Uplink Ports
The Ethernet ports of a Cisco UCS 6248UP Fabric Interconnect are all capable of performing several
functions, such as network uplinks or server ports, and more. By default, all ports are unconfigured, and their
function must be defined by the administrator. To define the specified ports to be used as network uplinks to
the upstream network, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Fabric Interconnects > Fabric Interconnect A > Fixed Module or Expansion Module 2 as
appropriate > Ethernet Ports
3. Select the ports that are to be uplink ports, right click them, and click Configure as Uplink Port.
5. Select Fabric Interconnects > Fabric Interconnect B > Fixed Module or Expansion Module 2 as
appropriate > Ethernet Ports
6. Select the ports that are to be uplink ports, right click them, and click Configure as Uplink Port.
8. Verify all the necessary ports are now configured as uplink ports; their role will be listed as network.
Figure 46 Cisco UCS Uplink Ports
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Under LAN > LAN Cloud, click to expand the Fabric A tree.
3. Right-click Port Channels underneath Fabric A and click Create Port Channel.
4. Enter the port channel ID number as the unique ID of the port channel.
6. Click Next.
7. Click on each port from Fabric Interconnect A that will participate in the port channel, and click the
>> button to add them to the port channel.
8. Click Finish.
9. Click OK.
10. Under LAN > LAN Cloud, click to expand the Fabric B tree.
11. Right-click Port Channels underneath Fabric B and click Create Port Channel.
12. Enter the port channel ID number as the unique ID of the port channel.
15. Click on each port from Fabric Interconnect B that will participate in the port channel, and click the
>> button to add them to the port channel.
18. Verify the necessary port channels have been created. It can take a few minutes for the newly
formed port channels to converge and come online.
Figure 47 Cisco UCS Uplink Port Channels
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane, and click on Equipment in
the top of the navigation tree on the left.
3. Click the Global Policies sub-tab, and set the Chassis/FEX Discovery Policy to match the number of uplink
ports that are cabled per side, between the chassis and the Fabric Interconnects.
6. Click OK.
Figure 48 Cisco UCS Chassis Discovery Policy
Server Ports
The Ethernet ports of a Cisco UCS Fabric Interconnect connected to the rack-mount servers, or to the blade
chassis must be defined as server ports. Once a server port is activated, the connected server or chassis will
begin the discovery process shortly afterwards. Rack-mount servers and blade chassis are automatically
numbered in the order in which they are first discovered. For this reason, it is important to configure the server
ports sequentially in the order you wish the physical servers and/or chassis to appear within Cisco UCS
Manager. For example, if you installed your servers in a cabinet or rack with server #1 on the bottom,
counting up as you go higher in the cabinet or rack, then you need to enable the server ports to the bottom-
most server first, and enable them one-by-one as you move upward. You must wait until the server appears
in the Equipment tab of Cisco UCS Manager before configuring the ports for the next server. The same
numbering procedure applies to blade server chassis.
To define the specified ports to be used as server ports, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
2. Select Fabric Interconnects > Fabric Interconnect A > Fixed Module or Expansion Module 2 as
appropriate > Ethernet Ports.
3. Select the first port that is to be a server port, right click it, and click Configure as Server Port.
5. Select Fabric Interconnects > Fabric Interconnect B > Fixed Module or Expansion Module 2 as
appropriate > Ethernet Ports.
6. Select the matching port as chosen for Fabric Interconnect A that is to be a server port, right click it,
and click Configure as Server Port.
8. Wait for a brief period, until the rack-mount server appears in the Equipment tab underneath Equip-
ment > Rack Mounts > Servers, or the chassis appears underneath Equipment > Chassis.
9. Repeat Steps 2-8 for each server port, until all rack-mount servers and chassis appear in the order
desired in the Equipment tab.
Figure 49 Cisco UCS Server Ports
Server Discovery
As previously described, once the server ports of the Fabric Interconnects are configured and active, the
servers connected to those ports will begin a discovery process. During discovery, the servers'
hardware inventories are collected, along with their current firmware revisions. Before continuing with the
HyperFlex installation processes, which will create the service profiles and associate them with the servers,
wait for all of the servers to finish their discovery process and to show as unassociated servers that are
powered off, with no errors. To view the server discovery status, complete the following steps:
1. In Cisco UCS Manager, click the Equipment tab in the navigation pane, and click on Equipment in
the top of the navigation tree on the left.
HyperFlex Installer Deployment
This document is based on the Cisco HyperFlex 1.7.1 release filename: Cisco-HX-Data-Platform-
Installer-v1.7.1-14786.ova
The HyperFlex installer OVA file can be deployed as a virtual machine in an existing VMware vSphere
environment, VMware Workstation, VMware Fusion, or other virtualization environment which supports the
import of OVA format files. For the purpose of this document, the process described uses an existing ESXi
server to run the HyperFlex installer OVA, deploying it via the VMware vSphere (thick) client.
Installer Connectivity
The Cisco HyperFlex Installer VM must be deployed in a location that has connectivity to the following
network locations and services:
Connectivity to the vCenter Server or vCenter Appliance which will manage the HyperFlex cluster(s)
to be installed.
Connectivity to the management interfaces of the Fabric Interconnects that contain the HyperFlex
cluster(s) to be installed.
Connectivity to the management interface of the ESXi hypervisor hosts which will host the HyperFlex
cluster(s) to be installed.
Connectivity to the DNS server(s) which will resolve host names used by the HyperFlex cluster(s) to
be installed.
Connectivity to the NTP server(s) which will synchronize time for the HyperFlex cluster(s) to be
installed.
Connectivity from the staff operating the installer to the webpage hosted by the installer, and to log
in to the installer via SSH.
If the network where the HyperFlex installer VM is deployed has DHCP services available to assign the
proper IP address, subnet mask, default gateway, and DNS servers, the HyperFlex installer can be deployed
using DHCP. If a static address must be defined, use the following table to document the settings to be used
for the HyperFlex installer VM:
Hostname
IP Address
Subnet Mask
Default Gateway
DNS Server #1
DNS Server #2
1. Open the vSphere (thick) client, connect and log in to an ESXi host or vCenter server where the in-
staller OVA will be deployed.
5. Modify the name of the virtual machine to be created if desired, and click on a folder location to
place the virtual machine, and then click Next.
6. Click on a specific host or cluster to locate the virtual machine and click Next.
8. Modify the network port group selection from the drop down list in the Destination Networks column,
choosing the network the installer VM will communicate on, and click Next.
9. If DHCP is to be used for the installer VM, leave the fields blank and click Next. If static address set-
tings are to be used, fill in the fields for the installer VM hostname, default gateway, DNS server, IP
address, and subnet mask, then click Next. (Figure 51)
10. Check the box to power on after deployment, and click Finish. (Figure 52)
The installer VM will take a few minutes to deploy. Once it has deployed and the virtual machine has started,
proceed to the next step.
Figure 51 HyperFlex OVA Deployment Settings
Figure 52 HyperFlex OVA Deployment Summary
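The installer OVA can alternatively be deployed with PowerCLI instead of the vSphere thick client. The following is a minimal sketch assuming DHCP addressing for the installer VM; the ESXi host, datastore, and port group names are placeholders and must be changed to match the environment.
# Deploy the HyperFlex installer OVA to an existing ESXi host (names below are placeholders)
Connect-VIServer -Server "esxi-mgmt.hx.lab.cisco.com" -User root -Password "password"
$vmhost = Get-VMHost "esxi-mgmt.hx.lab.cisco.com"
$ds = Get-Datastore -Name "datastore1"
Import-VApp -Source "C:\Downloads\Cisco-HX-Data-Platform-Installer-v1.7.1-14786.ova" -VMHost $vmhost -Datastore $ds -Name "HX-Installer" -DiskStorageFormat Thin
# Connect the installer VM network adapter to the desired port group, then power it on
Get-VM "HX-Installer" | Get-NetworkAdapter | Set-NetworkAdapter -NetworkName "VM Network" -Confirm:$false
Start-VM -VM "HX-Installer"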
*******************************************
http://192.168.10.210
*******************************************
Cisco-HX-Data-Platform-Installer login:
5. Verify the version of the installer in the lower right-hand corner of the Welcome page is the correct
version, and click Continue.
6. Read and accept the End User Licensing Agreement, and click Continue.
Figure 53 HyperFlex Installer Welcome Page
Create the required Cisco UCS policies, pools, and templates within the hx-cluster sub-organization.
Create the service profile templates and service profiles for the HX-Series servers.
To configure the Cisco UCS settings, policies, and templates for HyperFlex, complete the following steps:
3. Enter the Cisco UCS manager DNS hostname or IP address, the admin username, and the password,
then click Continue. (Figure 55)
4. On the following page, click the boxes next to the discovered and unassociated HX-Series servers
which will be used to build the HyperFlex cluster(s). (Figure 56)
5. Enter a custom 4th byte for the MAC address pools. MAC addresses can only be entered in hexadec-
imal format (00 to FF). (Figure 57)
6. Click to expand the LAN Configuration section. Enter the desired VLAN names and VLAN IDs for the
four required VLANs. (Figure 58)
7. Click to expand the Management IP Pool section. Enter the IP address range to be used for the external
management IP address pool. (Figure 59)
8. Click to expand the Advanced Configuration section. Customize the HyperFlex Cluster Name value
as desired. This text is entered as the User Label in Cisco UCS manager for the service profiles.
(Figure 60)
9. Click Configure.
10. On the following page, monitor the configuration process until it completes. Click Show Details to
see detailed information on the steps being performed. The configuration will take a few minutes.
(Figure 61)
Figure 54 HyperFlex Installer Main Page
Figure 55 HyperFlex Cisco UCS Manager Configuration
Figure 56 Cisco UCS Server Selection
Figure 57 Cisco UCS MAC Address Pool Configuration
Figure 58 Cisco UCS VLAN Configuration
Figure 59 Cisco UCS Management IP Address Configuration
Figure 60 Cisco UCS Advanced Configuration
Figure 61 HyperFlex Cisco UCS Configuration Completed
1. Click Launch UCS Manager from the Cisco UCS Manager Configuration summary page of the Hy-
perFlex installer webpage, or open Cisco UCS manager as documented in Cisco UCS Manager.
3. Expand Servers > Service Profiles > root > Sub-Organizations > hx-cluster and verify the service
profiles were configured for the servers you selected in the configuration page during the previous
task. (Figure 62)
4. In Cisco UCS Manager, click the LAN tab in the navigation pane.
5. Click VLANs in the navigation tree, and verify in the configuration pane the VLANs have been created
with the names and IDs specified in the previous task. (Figure 63)
Figure 62 Cisco UCS Service Profiles
Figure 63 Cisco UCS VLANs
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
2. Expand LAN > Policies > root > Sub-Organizations > hx-cluster > vNIC-Templates.
5. In the Modify VLANs window, click the radio button in the Native VLAN column that is already
selected to clear it, and click OK. (Figure 64)
6. Click OK.
7. Repeat steps 3-6 for each vNIC Template that is required to carry tagged traffic. For example, to
have all vNICs carry tagged traffic, the changes must be made to the following vNIC templates:
a. hv-mgmt-a
b. hv-mgmt-b
c. hv-vmotion-a
d. hv-vmotion-b
e. storage-data-a
f. storage-data-b
Figure 64 Modify Cisco UCS Native VLANs
This document is based on the Cisco custom ESXi 6.0 Update 1B ISO release filename: Vmware-ESXi-
6.0.0-3380124-Custom-Cisco-6.0.1.2.iso
The kickstart process will automatically perform the following tasks with no user interaction required:
Set the default management network to use vmnic0, and obtain an IP address via DHCP.
Enable serial port com1 console access to facilitate Serial over LAN access to the host.
Configure ESXi to always use the current hardware MAC address of the network
interfaces, even if they change.
2. Login to the HyperFlex installer VM via SSH, using Putty or a similar tool. Username is: root Pass-
word is: Cisco123
3. Enter the following commands to integrate the kickstart file with the base ISO file, and create a new
ISO file from them:
# cd /opt/springpath/install-esxi
# ./makeiso.sh /opt/springpath/install-esxi/cisco-hx-esxi-6.0.ks.cfg \
/var/hyperflex/source-images/Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso \
/var/www/localhost/images/HX-Vmware-ESXi-6.0.0-3380124-Custom-Cisco-6.0.1.2.iso
To configure the Cisco UCS vMedia and Boot Policies, complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Expand Servers > Policies > root > Sub-Organizations > hx-cluster > vMedia Policies.
8. Enter the IP address of the HyperFlex installer VM. For example, 192.168.10.210
13. Select Servers > Service Profile Templates > root > Sub-Organizations > hx-cluster.
18. Select Servers > Policies > root > Sub-Organizations > hx-cluster > Boot Policies.
20. In the navigation pane, click to expand the section titled CIMC Mounted vMedia.
22. Select the CIMC Mounted CD/DVD entry in the Boot Order list, and click the Move Up button until
the CIMC Mounted CD/DVD entry is listed first. (Figure 68)
Install ESXi
To begin the installation after modifying the vMedia policy, Boot policy and service profile template, the
servers need to be rebooted. To monitor the progress of one or more servers, it is advisable to open a
remote KVM console session to watch the installation. To open the KVM console and reboot the servers,
complete the following steps:
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Expand Servers > Service Profiles > root > Sub-Organizations > hx-cluster.
5. Click Continue to any security alerts that appear. The remote KVM Console window will appear
6. Repeat Steps 2-4 for any additional servers whose console you wish to monitor during the installa-
tion.
7. In Cisco UCS Manager, click the Equipment tab in the navigation pane.
9. In the configuration pane, click the first server to be rebooted, then shift+click the last server to be
rebooted, selecting them all.
1. Select Servers > Policies > root > Sub-Organizations > hx-cluster > Boot Policies.
3. Select the CIMC Mounted CD/DVD entry in the Boot Order list, and click Delete.
The changes made to the vMedia policy and service profile template may also be undone once the ESXi
installations have all completed fully, or they may be left in place for future installation work.
1. In Cisco UCS Manager, open the server KVM console of the first ESXi host.
2. Press F2 to login to the ESXi Direct Console User Interface (DCUI), with username: root and pass-
word: Cisco123. (Figure 71)
3. Press the down arrow to select Configure Management Network and press ENTER.
4. If the management interfaces have been configured to carry VLAN tagged traffic, as documented in
Tagged VLANs Alternate Configuration, press the down arrow to select VLAN (optional) and press
ENTER, then continue to step 5. If traffic is untagged, continue to step 6.
6. Press the down arrow to select IPv4 Configuration and press ENTER.
7. Press the down arrow to highlight Set Static IPv4 Address and Network Configuration, and then
press space bar to select it.
8. Press the down arrow, and enter the IP address, Subnet Mask and Default Gateway information in
the fields, and then press OK. (Figure 72)
9. Press the down arrow to select DNS Configuration and press ENTER.
10. Enter the primary DNS server address, secondary DNS server address (if applicable), and the ESXi
host's hostname in the appropriate fields, then press ENTER. (Figure 73)
11. Press the down arrow to select Custom DNS Suffixes and press ENTER.
12. Enter the preferred DNS suffix, and then press ENTER.
13. Press ESC to exit the network configuration settings, and press Y to accept the changes and restart
the management network.
14. Press the down arrow to select Test Management Network and press ENTER.
15. Press ENTER to test using the values you supplied earlier, all addresses should ping and the DNS
server should resolve the hostname of the ESXi host. If any tests do not work properly, re-check your
values and correct as necessary.
18. Repeat steps 1-17 for each remaining server that will be a part of the HyperFlex cluster.
A sample script exists in the HyperFlex installer VM to configure the ESXi hosts’ management interfaces via
SoL. To use the script, log in to the installer VM via SSH and run /bin/setstaticip_ucs.py
Figure 71 ESXi DCUI
Figure 72 ESXi IP Address Configuration
Figure 73 ESXi DNS Configuration
Figure 74 ESXi Network Test
1. Open the vCenter Web Client webpage and log in. For example,
https://vcenter1.hx.lab.cisco.com/vsphere-client/
3. In the Navigator pane, right-click on the vCenter server name, and click New Datacenter.
5. In the navigator pane, right-click the datacenter that was just created, and click New Cluster.
6. Enter the cluster name, leave all other settings disabled or turned off, and click OK. (Figure 77)
Figure 75 vCenter Web Client Home Page
Figure 76 vCenter New Datacenter
Figure 77 vCenter New Cluster
1. In the vSphere Web Client, right-click the cluster object where the HyperFlex nodes will reside and
click Add Host.
2. Enter the name of the host to be added and click Next. (Figure 78)
3. Enter the root username and password, and click Next. (Figure 79)
6. Click on a license key to apply to the server, if applicable, and click Next.
7. Leave the lockdown mode option set to disabled, which is the default, and click Next.
9. Repeat steps 1-8 for each additional host in the HyperFlex cluster.
Figure 78 vCenter Add Host
Figure 79 vCenter Host Password
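The same vCenter preparation, creating the datacenter and cluster objects and adding the ESXi hosts, can also be scripted with PowerCLI. The following is a minimal sketch using the example vCenter, cluster, and host names from this CVD; the datacenter name and the number of hosts are placeholders.
# Create the datacenter and cluster objects, then add the ESXi hosts to the cluster
Connect-VIServer -Server "vcenter1.hx.lab.cisco.com" -User "hx\1" -Password "1"
$dc = New-Datacenter -Location (Get-Folder -NoRecursion) -Name "Datacenter"
$cluster = New-Cluster -Location $dc -Name "Cluster1"
$servers = "hx220-01.hx.lab.cisco.com","hx220-02.hx.lab.cisco.com"
foreach ($server in $servers) {
    # the root password set during the ESXi kickstart installation is used here
    Add-VMHost -Name $server -Location $cluster -User root -Password "Cisco123" -Force | Out-Null
}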
Configure NTP
NTP is required for proper operation of the HyperFlex environment. To configure NTP settings on the ESXi
hosts, complete the following steps:
1. In the vSphere Web Client, right-click the ESXi host to be configured, and click Settings.
4. Change the NTP Service Startup Policy to Start and Stop with the Host.
7. Click OK.
8. Repeat steps 1-7 for each additional host in the HyperFlex cluster.
Figure 80 vCenter NTP Settings
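The same NTP settings can be applied with PowerCLI instead of the Web Client, similar to the example script referenced in the appendix. A minimal sketch follows, assuming an existing PowerCLI connection to vCenter and the example NTP server address used elsewhere in this CVD.
# Add the NTP server to each host and set the ntpd service to start and stop with the host
foreach ($vmhost in Get-VMHost) {
    Add-VMHostNtpServer -VMHost $vmhost -NtpServer "192.168.10.4" -ErrorAction SilentlyContinue
    Get-VMHostService -VMHost $vmhost | Where-Object { $_.Key -eq "ntpd" } |
      Set-VMHostService -Policy On | Start-VMHostService
}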
Maintenance Mode
The ESXi hosts must be placed into maintenance mode prior to beginning the HyperFlex installation. To
place the ESXi hosts into maintenance mode, complete the following steps:
1. In the vSphere Web Client, right-click the ESXi host to be configured, click Maintenance Mode, and
click Enter Maintenance Mode.
2. Click OK.
3. Repeat steps 1-2 for each additional host in the HyperFlex cluster.
Figure 81 vCenter Enter Maintenance Mode
An example script to configure NTP, place the ESXi hosts into maintenance mode, and add them to vCenter is
provided in the appendix: Configure ESXi Hosts
HyperFlex Installation
The installation of the HyperFlex platform software is done via the HyperFlex installer webpage, installed as
documented in section HyperFlex Installer Deployment. To deploy a HyperFlex cluster, complete the
following steps:
3. Enter the IP address values or DNS names in the first section. (Figure 82)
a. Click the Add button if more than 4 hosts are being installed, until there are enough lines to ac-
commodate the number of nodes being installed.
b. If IP addresses are entered into the first line, which has a grey background, the lines underneath
it will auto-populate with incremented IP addresses.
The first line with the grey background is not a node that is being configured.
4. Enter the subnet mask and gateway address for the Hypervisor and ESXi management group, and
the HyperFlex storage group.
It is necessary to enter the gateway addresses for both fields, even though they are marked as option-
al.
10. Enter the values in the Auto Support section. (Figure 85)
13. (Optional) If the management and/or storage interfaces have been configured to carry VLAN tagged
traffic, as documented in Tagged VLANs Alternate Configuration, check the box to override the de-
fault network settings, then continue to step 14. If traffic is untagged, continue to step 16.
14. Enter the VLAN IDs for the management and storage networks if you are using tagged traffic, and
optionally change the names of the vSwitches that are created.
16. If the validation checks return any errors, re-check the values entered and correct as necessary, oth-
erwise click Deploy. (Figure 86)
The cluster configuration values entered can be saved in a JSON file format for re-use at a later time by click-
ing the link Save Configuration File.
The HyperFlex installer combines many steps, configuring the ESXi hosts, rebooting them, deploying storage
controller VMs, installing software and creating the cluster. The installation can take over 30 minutes to com-
plete.
Figure 82 HyperFlex Installer IP Addresses
Figure 83 HyperFlex Installer IP Addressing Hints
Figure 84 HyperFlex Installer Configuration
Figure 85 HyperFlex Installer Advanced Configuration
Figure 86 HyperFlex Installer Validation
Figure 87 HyperFlex Installer Deployment Progress
Figure 88 HyperFlex Installer Creating Cluster
Figure 89 HyperFlex Installation Complete
When the HyperFlex installation has completed, continue with the post-installation tasks as documented in
HyperFlex Post Installation Configuration. If a hybrid cluster is being built, continue with the tasks
documented in Hybrid Cluster Expansion.
Prior to expanding an existing HyperFlex cluster, verify that the following prerequisites are met:
Verify the storage cluster state is healthy in the HyperFlex Web Client Plugin Summary.
The new node is a server that meets the compute node system requirements listed in Requirements,
including network and disk requirements.
The software and firmware versions on the node match the Cisco HX Data Platform software version
and the vSphere software version Requirements, and match the existing servers.
The new node must be physically installed and discovered as described in Server Discovery.
The new node uses the same network configuration as the other nodes in the storage cluster,
including the VLAN IDs and VLAN tagging. This is accomplished via the use of Cisco UCS service
profiles.
To prevent errors during cluster expansions, it is recommended to disable vSphere High Availability
(HA) during the expansion, and then re-enable the feature after the expansion completes.
Standard Cluster Expansion
Prior to expanding the cluster, the new nodes must be prepared and configured similar to a cluster
installation. There are some specific differences in the processes which will be outlined below. To expand an
existing standard HyperFlex cluster, complete the following steps:
3. Enter the Cisco UCS manager DNS hostname or IP address, the admin username, and the password,
and click Continue.
4. On the following page, click the boxes next to the discovered and unassociated HX-Series servers
which will be used to expand the HyperFlex cluster(s). (Figure 90)
5. Click Configure.
The appropriate service profile(s) will be created and associated with the new server that will expand the
cluster.
Figure 90 HyperFlex Installer Select Expansion Server
Install ESXi
Install and configure the ESXi hypervisor as documented in the section ESXi Hypervisor Installation. The node
must meet the following requirements before continuing with the HyperFlex cluster expansion:
The new node must be in maintenance mode while configuring the storage cluster.
The new node must be a member in the same vCenter cluster and vCenter datacenter as the existing
nodes.
The new node has at least one valid DNS and NTP server configured.
b. If IP addresses are entered into the first line, which has a grey background, the lines underneath
it will auto-populate with incremented IP addresses.
The first line with the grey background is not a node that is being configured.
7. Enter the values in the vCenter Configuration section for the existing vCenter server.
11. If the validation checks return any errors, re-check the values entered and correct as necessary, oth-
erwise click Deploy.
Figure 91 HyperFlex Installer Expand Cluster
After the cluster expansion has completed, continue with the HyperFlex Post Installation Configuration steps.
Hybrid Cluster Expansion
1. In Cisco UCS Manager, click the Servers tab in the navigation pane.
2. Expand Servers > Service Profile Templates > root > Sub-Organizations > hx-cluster.
3. Right-click the Service Profile Template compute-nodes and click Create Service Profiles
from Template.
4. Enter the naming prefix of the service profiles that will be created from the template.
5. Enter the starting number of the service profiles being created and the number of service profiles to
be created. (Figure 92)
6. Click OK twice.
7. Expand Servers > Service Profiles > root > Sub-Organizations > hx-cluster.
8. Right-click the first service profile created for the additional B200 M4 compute-only nodes.
10. In the Server Assignment dropdown list, change the selection to select Existing Server.
12. In the list of available servers, choose the B200 M4 blade server to assign the service profile to and
click OK. (Figure 93)
13. Click Yes to accept that the server will reboot to apply the service profile.
15. Repeat steps 7-14 for each additional compute-only B200 M4 blade that will be added to the clus-
ter.
Figure 92 Cisco UCS Manager Create Service Profiles from Template
Figure 93 Cisco UCS Manager Associate Service Profile
Install ESXi
Install and configure the ESXi hypervisor as documented in the section ESXi Hypervisor Installation. The node
must meet the following requirements before continuing with the HyperFlex cluster expansion:
The new node must be in maintenance mode while configuring the storage cluster.
The new node must be a member in the same vCenter cluster and vCenter datacenter as the existing
nodes.
The new node has at least one valid DNS and NTP server configured.
3. Click the black X next to the first line of IP addresses to delete it.
4. Click the downward arrow on the side of the Add button, and then click Add Compute Node to add
a line for a compute-only node to be added to the cluster.
a. Click the downward arrow on the side of the Add button, and then click Add Compute Node if
more than 1 new host is being added, until there are enough lines to accommodate the number
of nodes being added.
5. Enter the IP address values or DNS names for the new node(s).
9. Enter the values in the vCenter Configuration section for the existing vCenter server.
11. If the validation checks return any errors, re-check the values entered and correct as necessary, oth-
erwise click Deploy.
Figure 94 HyperFlex Installer Expand Hybrid Cluster
After the cluster expansion has completed, continue with the HyperFlex Post Installation Configuration steps.
HyperFlex Post Installation Configuration
1. After the HyperFlex installation is complete, close the vCenter Web Client, and then re-open it, so
that the newly installed HyperFlex Web Client plugin is loaded.
4. In the center pane, a new section titled Cisco HyperFlex Systems now exists. Click on the link with
the HyperFlex cluster name. (Figure 96)
6. Click Datastores.
8. Enter the datastore name and size desired, and click OK.
1. Log in to the management IP address of the first storage platform controller VM via SSH, or open the
interactive CLI web interface of each controller VM.
2. Enter the commands as shown in the example below to set the recipient address, and then confirm
the configuration:
recipientList:
enabled: True
smtpServer: 192.168.0.4
fromAddress: [email protected]
# sendasup -t
3. Repeat steps 1-2 for each remaining storage platform controller VM in the HyperFlex cluster.
Enable vCenter Features
After the HyperFlex cluster has been installed, additional features available in vCenter should be enabled for
the cluster, specifically vSphere High Availability (HA) and Distributed Resource Scheduler (DRS). To
configure the HA and DRS features, complete the following steps:
3. Right-click on the cluster containing the HyperFlex nodes and click Settings.
5. In the Edit Cluster Settings window, check the box to Turn on vSphere DRS, and leave the remaining
settings at their defaults. (Figure 99)
8. Click on the section Datastore for Heartbeating to expand the section and view the options.
9. Click the option Automatically select datastores accessible from the host. (Figure 100)
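The DRS and HA settings can also be applied with PowerCLI. The following is a minimal sketch, assuming an existing connection to vCenter and the example cluster name Cluster1; the remaining HA options, such as the heartbeat datastore selection in the step above, are left at their defaults here and should still be reviewed in the Web Client.
# Enable vSphere DRS and HA on the cluster containing the HyperFlex nodes
Set-Cluster -Cluster "Cluster1" -DrsEnabled $true -HAEnabled $true -Confirm:$false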
1. In Cisco UCS Manager, click the LAN tab in the navigation pane.
9. Click OK twice.
10. Repeat steps 3-9 for each additional VLAN required.
Figure 101 Cisco UCS Create VLAN
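If many guest VM VLANs must be added, the same task can be scripted with the Cisco UCS PowerTool Suite instead of repeating the GUI steps. The following is a minimal sketch, assuming the PowerTool module is installed; the VLAN names and IDs are examples only, and any new VLAN typically must also be added to the guest VM vNIC templates so that it is trunked to the hosts.
# Create additional guest VM VLANs in the Cisco UCS LAN cloud
# (the UCS Manager address is the example cluster IP from this CVD; VLAN names/IDs are examples)
Connect-Ucs -Name 10.29.133.106 -Credential (Get-Credential)
$vlans = @{ "vm-prod-101" = 101; "vm-prod-102" = 102 }
foreach ($name in $vlans.Keys) {
    Get-UcsLanCloud | Add-UcsVlan -Name $name -Id $vlans[$name]
}
Disconnect-Ucs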
Configure VM Networking
The HyperFlex installer does not configure any ESXi port groups to connect the guest virtual machines. The
configuration of virtual machine port groups used by the guest VMs must be completed after the HyperFlex
installation, and configured according to the guest VM requirements. To configure guest VM networking,
complete the following steps:
5. Click the name of the virtual switch for guest VM traffic; by default this is named vswitch-hx-vm-
network.
7. In the Add Networking window, select Virtual Machine Port Group for a Standard Switch, and click
Next. (Figure 103)
9. Enter the desired name for the port group, and enter the VLAN ID number that matches the VLAN ID
for the guest VM traffic, then click Next. (Figure 104)
Repeat the steps in this section for each additional port group required for guest VM traffic, each of them par-
ticipating in a different VLAN as defined in Cisco UCS Manager.
Figure 102 ESXi Host Networking
Figure 103 ESXi Add Networking
Figure 104 ESXi Port Group Settings
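Because a matching port group must be created on every host for every guest VM VLAN, this task is a natural candidate for PowerCLI. The following minimal sketch assumes an existing connection to vCenter, the default vswitch-hx-vm-network virtual switch, and the example vm-prod-100 VLAN from this CVD; host names are examples.
# Create a guest VM port group on each host, tagged with the guest VM VLAN ID
$servers = "hx220-01.hx.lab.cisco.com","hx220-02.hx.lab.cisco.com"
foreach ($server in $servers) {
    $vswitch = Get-VirtualSwitch -VMHost $server -Name "vswitch-hx-vm-network"
    New-VirtualPortGroup -VirtualSwitch $vswitch -Name "vm-prod-100" -VLanId 100
}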
5. Click the name of the virtual switch for vMotion traffic; by default this is named vMotion.
7. In the Add Networking window, select VMkernel Networking Adapter, and click Next. (Figure 105)
10. If the vMotion interfaces have been configured to carry VLAN tagged traffic, as documented in
Tagged VLANs Alternate Configuration, then modify the VLAN ID to match the value of the vMotion
VLAN defined in Cisco UCS Manager.
11. Check the box for vMotion traffic, and click Next.
12. Select Use Static IPv4 Settings, enter the desired IP address and subnet mask, and click Next.
(Figure 107)
14. In the center pane, click to expand the VMkernel port that was just created, and click on the port.
(Figure 108)
17. Change the MTU value to 9000, and click OK. (Figure 109)
18. Repeat steps 3-17 for each additional host in the HyperFlex cluster.
Figure 105 ESXi Add VMkernel Interface
Figure 106 ESXi VMkernel Settings
Figure 107 ESXi VMkernel IP Settings
Figure 108 ESXi Edit VMkernel Settings
Figure 109 ESXi VMkernel Jumbo Frames
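The vMotion VMkernel interfaces can also be created per host with PowerCLI. A minimal sketch follows, assuming an existing connection to vCenter, untagged vMotion traffic, and the default vMotion virtual switch created by the installer; the port group name and IP address are examples and must be unique per host.
# Create a vMotion VMkernel port on the vMotion vSwitch with jumbo frames enabled
$vmhost = Get-VMHost "hx220-01.hx.lab.cisco.com"
$vswitch = Get-VirtualSwitch -VMHost $vmhost -Name "vmotion"
New-VMHostNetworkAdapter -VMHost $vmhost -VirtualSwitch $vswitch -PortGroup "vmotion-pg" -IP "192.168.200.11" -SubnetMask "255.255.255.0" -VMotionEnabled $true -Mtu 9000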
4. In the center pane, under the System section, click Advanced System Settings.
5. From the list on the right, select Syslog.global.logHost, then click Edit Settings button.
6. Enter the IP address of the vCenter server or vCenter appliance which will receive the syslog mes-
sages, followed by the TCP port, which is 514 by default. For example, 192.168.10.101:514
7. Click OK.
8. Repeat steps 3-7 for each additional host in the HyperFlex cluster.
Figure 110 ESXi Configure Syslog
An example script to configure ESXi guest VM networking, vMotion networking, and syslog settings is included
in the appendix: Configure ESXi Post Install
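A PowerCLI equivalent of the syslog steps, similar in scope to the appendix script, is sketched below; it assumes an existing connection to vCenter and uses the example vCenter syslog target from this section.
# Point each host's syslog output at the vCenter server on TCP port 514
foreach ($vmhost in Get-VMHost) {
    Get-AdvancedSetting -Entity $vmhost -Name "Syslog.global.logHost" |
      Set-AdvancedSetting -Value "tcp://192.168.10.101:514" -Confirm:$false
}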
Management
4. In the center pane, a section titled Cisco HyperFlex Systems exists. Click on the link with the Hyper-
Flex cluster name.
Summary
From the Web Client Plugin Summary screen, several elements are presented:
Overall cluster usable capacity, used capacity, free capacity, datastore capacity provisioned, and the
amount of datastore capacity provisioned beyond the actual cluster capacity.
Deduplication and compression savings percentages calculated against the data stored in the
cluster.
The cluster health state, and the number of node failures that can occur before the cluster goes into
read-only or offline mode.
A snapshot of performance over the previous hour, showing IOPS, throughput, and latencies.
Monitor
From the Web Client Plugin Monitor tab, several elements are presented:
Clicking on the Performance button gives a larger view of the performance charts. If a full webpage
screen view is desired, click the Preview Interactive Performance charts hyperlink.
Clicking on the Events button displays a HyperFlex event log, which can be used to diagnose errors
and view system activity events.
Figure 112 HyperFlex Plugin Performance Monitor
Figure 113 HyperFlex Plugin Events
Manage
From the Web Client Plugin Manage tab, several elements are presented:
Clicking on the Cluster button gives an inventory of the HyperFlex cluster and the physical assets of
the cluster hardware.
Clicking on the Datastores button allows datastores to be created, edited, deleted, mounted and
unmounted, along with space summaries and performance snapshots of that datastore.
Figure 114 HyperFlex Plugin Manage Cluster
Figure 115 HyperFlex Plugin Manage Datastores
ReadyClones
For the best possible performance and functionality of the virtual machines that will be created using the
HyperFlex ReadyClone feature, the following guidelines for preparation of the base VMs to be cloned should
be followed:
All virtual disks of the base VM must be stored in the same HyperFlex datastore.
Base VMs can only have HyperFlex native snapshots, no VMware redo-log based snapshots can be
present.
Create a copy of the base VM to be cloned on each of the nodes in the HyperFlex cluster. To follow
this recommendation, temporarily disable the vSphere DRS feature, afterwards the base VM can be
copied into the HyperFlex system multiple times, targeting each node in turn. Alternatively, the base
VM can be created in the HyperFlex cluster on one node, and then cloned via the standard vSphere
clone feature to the other nodes.
Create ReadyClones from the multiple base VMs evenly. For example, on a 4 node cluster where 80
clones are required, create one base VM located on each of the 4 nodes, then create 20
ReadyClones of each base VM.
Snapshots
HyperFlex native snapshots are high performance snapshots that are space-efficient, crash-consistent, and
application consistent, taken by the HyperFlex Distributed Filesystem, rather than using VMware native redo-
log based snapshots. For the best possible performance and functionality of HyperFlex native snapshots, the
following guidelines should be followed:
Ensure that the first snapshot taken of a guest VM is a HyperFlex native snapshot, by using the
Cisco HX Data Platform menu in the vSphere Web Client and choosing Snapshot Now or
Schedule Snapshot. Failure to do so reverts to VMware redo-log based snapshots. (Figure 116)
Take the initial HyperFlex native snapshot with the virtual machine powered off. This creates what is
called the Sentinel snapshot. The Sentinel snapshot becomes a base snapshot that all future
snapshots are added to, and prevents the VM from reverting to VMware redo-log based snapshots.
Failure to do so can cause performance degradation when taking snapshots later, while the VM is
performing large amounts of storage IO.
Additional snapshots can be taken from the Cisco HX Data Platform menu, or from the standard vSphere
client snapshot menu. As long as the initial snapshot was a HyperFlex native snapshot, each
additional snapshot is also considered to be a HyperFlex native snapshot.
Do not delete the Sentinel snapshot unless you are deleting all the snapshots entirely.
If large numbers of scheduled snapshots need to be taken, distribute the time of the snapshots taken
by placing the VMs into multiple folders or resource pools. For example, schedule two resource
groups, each with several VMs, to take snapshots separated by 15 minute intervals in the scheduler
window. Snapshots will be processed in batches of 8 at a time, until the scheduled task is
completed. (Figure 118)
Figure 116 vSphere Web Client HX Snapshot Now
Figure 117 HyperFlex Sentinel Snapshot
Figure 118 vSphere Client HX Schedule Snapshots
Storage vMotion
The Cisco HyperFlex Distributed Filesystem can create multiple datastores for storage of virtual machines.
While there can be multiple datastores for logical separation, all of the files are located within a single
distributed filesystem. As such, performing storage vMotions of virtual machine disk files has little value in
the HyperFlex system. Furthermore, storage vMotions create additional filesystem consumption and
generate additional unnecessary metadata within the filesystem, which must later be cleaned up via the
filesystem's internal cleaner process.
It is recommended to not perform storage vMotions of the guest VMs between datastores within the same Hy-
perFlex cluster. Storage vMotions between different HyperFlex clusters, or between HyperFlex and non-
HyperFlex datastores are permitted.
HX Maintenance Mode
A new entry in the right-click menu of the ESXi hosts, named HX Maintenance Mode, has been installed by the
HyperFlex plugin. This option directs the storage platform controller on the node to shutdown gracefully,
redistributing storage IO to the other nodes with minimal impact. The standard Maintenance Mode
menu in the vSphere Web Client, or the vSphere (thick) Client, can also be used, but graceful failover of storage
IO and shutdown of the controller VM is not guaranteed.
In order to minimize the performance impact of placing a HyperFlex converged storage node into maintenance
mode, it is recommended to use the HX Maintenance Mode menu selection to enter or exit maintenance mode
whenever possible.
Figure 119 vSphere Web Client HX Maintenance Mode
Failure Scenarios
This section provides a description of several failure scenarios within a HyperFlex system, the response
behavior by the HyperFlex software, ESXi and the VMs, and recovery behavior for each scenario. The
following scenarios apply to the default HyperFlex cluster settings of replication factor set to 3, and access
policy set to strict:
Network Failures
Response
Within ESXi, all vmnic interfaces connected to the failed link will go offline.
Traffic that was previously transmitted and received on that link, based on the explicit vSwitch uplink
failover order, will now traverse the surviving link.
All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.
Recovery
Once the link failure is repaired, all VMnic interfaces will return to an online state, and all network
traffic will resume their normal pathways.
Response
Traffic that was previously transmitted and received on that fabric, based on the explicit vSwitch
uplink failover order, will now traverse the opposite fabric for all nodes.
All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.
Recovery
Once the uplink failure is repaired, all vNICs will return to an online state, and all network traffic will
resume their normal pathways.
Additional east-west traffic going from fabric A to fabric B, and vice-versa, across the Cisco UCS
network uplinks will occur briefly, as the vNICs are not always brought online at the exact same
times.
Response
All of the vNICs serviced by the failed FI fabric go offline and the corresponding ESXi vmnic
interfaces will show link down.
Traffic that was previously transmitted and received on that fabric, based on the explicit vSwitch
uplink failover order, will now traverse the opposite fabric for all nodes.
All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.
Recovery
Once the FI is returned to service, after the network uplinks automatically reconnect and come
online, all vNICs will return to an online state, and all network traffic will resume their normal
pathways.
Additional east-west traffic going from fabric A to fabric B, and vice-versa, across the Cisco UCS
network uplinks will occur briefly, as the vNICs are not always brought online at the exact same
times.
Disk Failures
Response
The HyperFlex cluster state changes to unhealthy. Alerts are raised in vCenter for the disk failure and
cluster health.
An immediate rebalance job is triggered to return the system to the specified replication factor,
replicating the missing data on the disk from the remaining copies.
When the rebalance job finishes, the cluster state returns to healthy.
All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.
Recovery
Once the failed disk is replaced, the HX software will automatically begin using the replaced disk for
new data storage.
A default nightly automatic rebalance job will evenly distribute data across the cluster, including the
new disk, ensuring that there are no hotspots or cold spots where space consumption is
unnecessarily high or low.
Response
The HyperFlex cluster state changes to unhealthy. Alerts are raised in vCenter for the disk failure and
cluster health.
All VMs remain online and fully functional, no migrations occur and no storage interruptions are seen.
Recovery
Once the failed disk is replaced, the HX software will not automatically begin using the replaced disk
for storage, a manual rebalance job must be started from the command line of one of the controller
VMs:
Response
HX220c-M4S server: the storage controller VM remains online, although services within the VM will
fail. All guest VMs remain online and fully functional, no migrations occur and no storage interruptions
are seen.
Recovery
Once the failed disk is replaced, specific recovery steps must be taken to return the node to full
service. Please contact Cisco TAC for assistance.
Maintenance Mode
An HX-Series converged node must be taken offline for planned maintenance.
Response
With VMware DRS enabled: placing the node into HX Maintenance Mode will evacuate the guest VMs
via vMotion, the controller VM will shut down, and the node is ready to shut down or reboot.
Without VMware DRS enabled: placing the node into HX Maintenance Mode will not succeed until the
guest VMs are manually moved via vMotion or powered off, then the controller VM will shut down,
and the node is ready to shut down or reboot.
The cluster state changes to unhealthy. Alerts are raised in vCenter for cluster health and node
removal.
All VMs remain online and fully functional, and no storage interruptions are seen.
If the node remains offline for more than 2 hours, a rebalance job will be triggered to redistribute
data across the cluster and bring the number of data copies back in line with the configured
replication factor. The cluster state would then return to healthy.
Recovery
Once the node is back online, exiting from maintenance mode will trigger the storage controller VM
to start.
After a brief period for services on the controller VM to start, the cluster state will return to healthy.
The node is available to host VMs, by manually moving them or allowing DRS to automatically
balance the load.
If the node was offline for more than 2 hours and the auto rebalance job ran, the node is considered
to be empty when it re-enters the cluster; even if the disks still have their old data, the data will be
discarded. To redistribute the data across the node, either wait for the default automatic rebalance
job to run, or manually run a rebalance from the command line of one of the controller VMs:
Node Failure
An HX-Series converged node goes offline due to an unexpected failure.
Response
With VMware HA enabled: the guest VMs will be restarted on the remaining nodes of the cluster after
a brief period of time.
Without VMware HA enabled: the guest VMs will not be restarted on the remaining nodes of the
cluster and will remain offline without manual intervention.
The cluster state changes to unhealthy. Alerts are raised in vCenter for cluster health and node
removal.
If the node remains offline for more than 2 hours, a rebalance job will be triggered to redistribute
data across the cluster and bring the number of data copies back in line with the configured
replication factor. The cluster state would then return to healthy.
Recovery
Once the node is back online the storage controller VM will start.
After a brief period for services on the controller VM to start, the cluster state will return to healthy.
The node is available to host VMs, by manually moving them or allowing DRS to automatically
balance the load.
If the node was offline for more than 2 hours and the auto rebalance job ran, the node is considered
to be empty when it re-enters the cluster; even if the disks still have their old data, the data will be
discarded. To redistribute the data across the node, either wait for the default automatic rebalance
job to run, or manually run a rebalance from the command line of one of the controller VMs:
Verify the expected number of converged storage nodes and compute-only nodes are members of
the HyperFlex cluster in the vSphere Web Client plugin manage cluster screen.
Verify the expected cluster capacity is seen in the vSphere Web Client plugin summary screen.
Create a test virtual machine that accesses the HyperFlex datastore and is able to perform read/write
operations.
Perform the virtual machine migration (vMotion) of the test virtual machine to a different host in the
cluster.
During the vMotion of the virtual machine, make sure the test virtual machine can perform a
continuous ping to its default gateway, to check that network connectivity is maintained during and
after the migration.
Verify Redundancy
The following redundancy checks can be performed to verify the robustness of the system. Network traffic,
such as a continuous ping from VM to VM, or from vCenter to the ESXi hosts should not show significant
failures (one or two ping drops might be observed at times). Also, all of the HyperFlex datastores must
remain mounted and accessible from all the hosts at all times.
Administratively disable one of the server ports on Fabric Interconnect A which is connected to one
of the HyperFlex converged storage hosts. The ESXi virtual switch uplinks for fabric A should now
show as failed, and the standby uplinks on fabric B will be in use for the management and vMotion
virtual switches. Upon administratively re-enabling the port, the uplinks in use should return to
normal.
Administratively disable one of the server ports on Fabric Interconnect B which is connected to one
of the HyperFlex converged storage hosts. The ESXi virtual switch uplinks for fabric B should now
show as failed, and the standby uplinks on fabric A will be in use for the storage virtual switch. Upon
administratively re-enabling the port, the uplinks in use should return to normal.
Place a representative load of guest virtual machines on the system. Put one of the ESXi hosts in
maintenance mode, using the HyperFlex HX maintenance mode option. All the VMs running on that
host should be migrated via vMotion to other active hosts via vSphere DRS, except for the storage
platform controller VM, which will be powered off. No guest VMs should lose any network or storage
accessibility during or after the migration. This test assumes that enough RAM is available on the
remaining ESXi hosts to accommodate the VMs from the host placed in maintenance mode. The HyperFlex
cluster will show as unhealthy during this test.
Reboot the host that is in maintenance mode, and exit it from maintenance mode after the reboot.
The storage platform controller will automatically start when the host exits maintenance mode. The
HyperFlex cluster will show as healthy after a brief time to restart the services on that node. vSphere
DRS should rebalance the VM distribution across the cluster over time.
vCenter alerts do not automatically clear, even when the fault has been resolved. Once the cluster
health is verified, the alerts must be manually cleared.
Reboot one of the Cisco UCS Fabric Interconnects while traffic is being sent and received on the
storage datastores and the network. The reboot should not affect the proper operation of storage
access and network traffic generated by the VMs. Numerous faults and errors will be noted in Cisco
UCS Manager, but all will be cleared after the FI comes back online.
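For the Fabric Interconnect server port tests above, the state of the physical uplinks can also be confirmed from the command line. The PowerCLI sketch below assumes an existing connection to vCenter and the $servers list defined as in the example scripts of Appendix B; a vmnic whose reported bit rate drops to zero has lost its link, and its standby uplink should carry the traffic until the port is re-enabled.
# Sketch: list the physical NIC link state on each ESXi host to confirm which fabric uplinks
# are up or down during the Fabric Interconnect port-disable tests. A PowerCLI connection to
# vCenter and the $servers variable (as in Appendix B) are assumed.
Foreach ($server in $servers) {
    Get-VMHostNetworkAdapter -VMHost $server -Physical |
        Select-Object @{Name="Host";Expression={$server}}, Name, Mac, BitRatePerSec, FullDuplex
}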
Appendix
A: Cluster Capacity Calculations
(((<capacity disk size in GB> X 10^9) / 1024^3) X <number of capacity disks per node> X <number of
HyperFlex nodes> X 0.92) / replication factor
The replication factor value is 3 if the HX cluster is set to RF=3, and the value is 2 if the HX cluster is set to
RF=2.
The 0.92 multiplier accounts for an 8% reservation set aside on each disk by the HX Data Platform software
for various internal filesystem functions.
Calculation example:
<number of capacity disks per node> = 15 for an HX240c M4SX model server
replication factor = 3
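Building on the example above, and as an illustration only (the 1.2 TB disk size and the eight node count are assumed values that are not part of the example), the formula can be evaluated with a short PowerShell snippet:
# Sketch: evaluate the usable capacity formula for an assumed cluster of eight HX240c M4SX
# nodes, each with fifteen 1.2 TB (1200 GB) capacity disks, at replication factor 3.
$diskSizeGB = 1200
$disksPerNode = 15
$nodes = 8
$replicationFactor = 3
$usableGiB = ((($diskSizeGB * [math]::Pow(10,9)) / [math]::Pow(1024,3)) * $disksPerNode * $nodes * 0.92) / $replicationFactor
"{0:N1} GiB usable (~{1:N1} TiB)" -f $usableGiB, ($usableGiB / 1024)
With these assumed values the result is approximately 41,127 GiB, or roughly 40 TiB of usable cluster capacity.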
B: Example Scripts
Example PowerShell scripts are provided to assist with the configuration of the HyperFlex system. The
scripts must be modified to match the settings required by the system being installed. Additional
PowerShell modules, such as VMware vSphere PowerCLI or the Cisco UCS PowerTool Suite, may need to be
installed on the system where the scripts will be run.
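If the required modules are not already present, they can typically be installed from the PowerShell Gallery; the module names below are the commonly published names and should be verified against the versions supported in your environment.
# Sketch: install and load VMware vSphere PowerCLI and Cisco UCS PowerTool. The gallery module
# names are assumptions; confirm the correct packages and versions for your deployment.
Install-Module -Name VMware.PowerCLI -Scope CurrentUser
Install-Module -Name Cisco.UCSManager -Scope CurrentUser
Import-Module VMware.PowerCLI
Import-Module Cisco.UCSManager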
# Description: Imports a UCS MAC address pool CSV export file and creates DHCP reservations
# Usage: Modify the $csv variable to point to the UCS CSV export file. Modify the $ip
# variable to set the starting IP address of the last octet. Modify the $address variable to
# contain the first three octets of the IP addresses. Modify the -ScopeId value to the name
# of the DHCP scope where the reservations are being added. Run the script from the Windows
# DHCP server.
# NOTE: the loop below is a representative reconstruction; the CSV column containing the MAC
# address (ID) and the scope ID shown are assumptions and must match the environment.
$csv="mgmt-a.csv"
$ip=11
Import-Csv $csv | ForEach-Object {
    $mac = $_.ID -replace ":","-"
    $address="192.168.10."+$ip
    Add-DhcpServerv4Reservation -ScopeId 192.168.10.0 -IPAddress $address -ClientId $mac -Description "HyperFlex ESXi management"
    $ip++
}
# Description: Connects to each ESXi host to configure base settings and NTP, then adds the
# hosts to the vCenter cluster.
# Usage: Modify the variables to specify the vCenter server address, user, password, ESXi
# host root password, vCenter cluster name, and the servers to be configured.
# NOTE: the loop bodies below are a representative reconstruction and should be reviewed and
# adjusted to match the environment before use.
$vc = "vcenter1.hx.lab.cisco.com"
$user = "hx\1"
$password = "1"
$rootpw = "Cisco123"
$cluster = "Cluster1"
$servers="hx220-01.hx.lab.cisco.com","hx220-02.hx.lab.cisco.com","hx220-03.hx.lab.cisco.com","hx220-04.hx.lab.cisco.com","hx220-05.hx.lab.cisco.com","hx220-06.hx.lab.cisco.com","hx220-07.hx.lab.cisco.com","hx220-08.hx.lab.cisco.com"
$myntpserver1="192.168.10.4"
# connect to each ESXi host and configure their base settings
Foreach ($server in $servers) {
    Connect-VIServer -Server $server -User root -Password $rootpw | Out-Null
    # configure NTP
    Add-VMHostNtpServer -VMHost $server -NtpServer $myntpserver1 | Out-Null
    Get-VMHostService -VMHost $server | Where-Object {$_.Key -eq "ntpd"} | Set-VMHostService -Policy On | Start-VMHostService | Out-Null
    Disconnect-VIServer -Server $server -Confirm:$false
}
# Connect to vCenter and add each host to the cluster
Connect-VIServer -Server $vc -User $user -Password $password | Out-Null
Foreach ($server in $servers) {
    Add-VMHost -Name $server -Location $cluster -User root -Password $rootpw -Force | Out-Null
}
Disconnect-VIServer -Server $vc -Confirm:$false
# Description: Adds a guest VM network port group and a vMotion vmkernel interface to each ESXi host
# Usage: Modify the variables to specify the ESXi root password, the servers to be
# configured, the guest VLAN ID, and the IP addresses used for the vMotion vmkernel
# interfaces.
# NOTE: the loop below is a representative reconstruction; the vSwitch names, port group
# names, and vMotion subnet are assumptions and must match the deployed HyperFlex system.
$rootpw = "Cisco123"
$servers="hx220-01.hx.lab.cisco.com","hx220-02.hx.lab.cisco.com","hx220-03.hx.lab.cisco.com","hx220-04.hx.lab.cisco.com","hx220-05.hx.lab.cisco.com","hx220-06.hx.lab.cisco.com","hx220-07.hx.lab.cisco.com","hx220-08.hx.lab.cisco.com"
$vlanid=100
$vmotionsubnet="192.168.200."
$ip=11
Foreach ($server in $servers) {
    Connect-VIServer -Server $server -User root -Password $rootpw | Out-Null
    # create the guest VM port group on the HX VM network vSwitch
    Get-VirtualSwitch -VMHost $server -Name "vswitch-hx-vm-network" | New-VirtualPortGroup -Name ("VM-Prod-" + $vlanid) -VLanId $vlanid | Out-Null
    # create the vMotion vmkernel interface with jumbo frames enabled
    New-VMHostNetworkAdapter -VMHost $server -VirtualSwitch "vmotion" -PortGroup "vmotion-pg" -IP ($vmotionsubnet + $ip) -SubnetMask "255.255.255.0" -Mtu 9000 -VMotionEnabled:$true | Out-Null
    Disconnect-VIServer -Server $server -Confirm:$false
    $ip=$ip+1
}
Switch A
hostname HX-9K-A
no feature telnet
feature interface-vlan
feature lacp
feature vpc
ip domain-lookup
ip domain-list cisco.com
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
vlan 1
vlan 10
name Management
vlan 51
name HXCluster1
vlan 100
name VM-Prod-100
vlan 200
name VMotion
cdp enable
vpc domain 50
role priority 10
auto-recovery
interface Vlan1
interface port-channel50
description VPC-Peer
vpc peer-link
interface port-channel10
mtu 9216
vpc 10
interface port-channel20
mtu 9216
vpc 20
interface Ethernet1/1
description uplink
interface Ethernet1/2
description NX9372-A_P1/2--UCS6248-A_2/1
interface Ethernet1/4
description NX9372-A_P1/4--UCS6248-B_2/1
interface Ethernet1/47
description NX9372-A_P1/47--NX9372-B_P1/47
interface Ethernet1/48
description NX9372-A_P1/48--NX9372-B_P1/48
interface mgmt0
ip address 10.29.133.101/24
Switch B
hostname HX-9K-B
no feature telnet
feature interface-vlan
feature lacp
feature vpc
ip domain-lookup
ip domain-list cisco.com
policy-map type network-qos jumbo
  class type network-qos class-default
    mtu 9216
system qos
  service-policy type network-qos jumbo
vlan 1
vlan 10
name Management
vlan 51
name HXCluster1
vlan 100
name VM-Prod-100
vlan 200
name VMotion
cdp enable
vpc domain 50
role priority 10
auto-recovery
interface Vlan1
interface port-channel50
description VPC-Peer
vpc peer-link
interface port-channel10
mtu 9216
vpc 10
interface port-channel20
mtu 9216
vpc 20
interface Ethernet1/1
description uplink
interface Ethernet1/2
description NX9372-B_P1/2--UCS6248-A_2/3
interface Ethernet1/4
description NX9372-B_P1/4--UCS6248-B_2/3
interface Ethernet1/47
description NX9372-B_P1/47--NX9372-A_P1/47
interface Ethernet1/48
description NX9372-B_P1/48--NX9372-A_P1/48
interface mgmt0
ip address 10.29.133.102/24
About the Author
Brian Everitt is a Technical Marketing Engineer with the Cisco UCS Data Center Engineering Solutions group.
He is an IT industry veteran with over 18 years of experience deploying server, network, and storage
infrastructures for companies around the world. During his tenure at Cisco, he has previously been a lead
Advanced Services Solutions Architect for Microsoft solutions, virtualization, and SAP HANA on UCS.
Currently his focus is on Cisco's hyperconverged infrastructure solutions.
Acknowledgements
Jeffrey Fultz, Technical Marketing Engineer, Cisco Systems, Inc.
Resources
Glossary
API Application Programming Interface. A set of remote calls, verbs, queries or actions
exposed to external users for automation of various application tasks.
ASIC Application Specific Integrated Circuit. An integrated circuit specially designed for a
specific use versus a general purpose processor.
CDP Cisco Discovery Protocol. A Cisco developed protocol to share information about
directly connected networking devices.
CLI Command Line Interface. A text based method of entering commands one-by-one in
successive lines.
DHCP Dynamic Host Configuration Protocol. A protocol for dynamically assigning and
distributing network settings to clients on demand.
DNS Domain Name System. A hierarchical service primarily used for mapping of host and
domain names to the IP addresses of the computers configured to use those names
and addresses.
FC Fibre Channel. A high speed, lossless networking protocol, using fixed packet sizes
and a worldwide unique identification technique, primarily used to connect computing
systems to storage devices.
FC-AL Fibre Channel Arbitrated Loop. A Fibre Channel protocol topology where the devices
are connected via a high speed, one-way ring network.
FCoE Fibre Channel over Ethernet. Fibre Channel traffic encapsulated in Ethernet frames,
carried over Ethernet networks.
FI Fabric Interconnect. The primary component of the Cisco UCS system, connecting all
end devices and networks, and providing management services.
GbE Gigabit Ethernet. Ethernet network transmission at a rate of one gigabit, or one billion
bits, per second.
GUI Graphical User Interface. In contrast to a CLI, a user interface where the primary
interaction between the user and the computer is via a series of graphical windows and
buttons, typically using an on screen pointer controlled by a computer mouse.
HBA Host Bus Adapter. A hardware device that connects the computer host to an external
device, such as a storage system or other network device.
HDD Hard Disk Drive. A computer storage device which stores data on rotating magnetic
discs or platters, read and written to by a moving magnetic head.
IOM IO Module. A common name for Cisco Fabric Extender modules installed in the rear of
a Cisco UCS 5108 blade chassis.
LACP Link Aggregation Control Protocol. A specification for bundling multiple physical
network interfaces into a single logical interface, providing additional bandwidth and
failure tolerance.
LAN Local Area Network. Interconnection of computer devices within a limited area, in
contrast to a WAN.
MTU Maximum Transmission Unit. The maximum size of the data unit transmittable by the
network protocol.
NAND Negative-AND logic gate. A Boolean function circuit design used in SSD devices,
emulating the bit state of traditional magnetic media as used in hard disk drives.
NAS Network Attached Storage. A storage system providing file-level access to resources
via a network protocol, in contrast to a block-level storage system.
NFS Network File System. A file system protocol for accessing files via a server across a
network.
NTP Network Time Protocol. A protocol for synchronizing computer system time clocks over
data networks.
OVA Open Virtual Appliance. A compressed open virtualization format (OVF) file containing
all the files necessary to deploy and run a pre-created virtual machine.
PCI Peripheral Component Interconnect. A computer bus interface for connecting internal
hardware components.
PCIe PCI Express. A revised standard replacing PCI, that uses a serial architecture, higher
speeds and fewer pins on the devices.
QoS Quality of Service. The overall performance of the network as seen by the end users
and computers. Various standards and methods have been developed to provide
different levels of control and prioritization of network traffic used by specific devices
and applications.
RBAC Role-based Access Control. Restricting system access to authorized users, and
granting them privileges in the system based on a defined functional or job role.
RF Replication Factor. The number of times a written block of data is replicated across
independent nodes in a HyperFlex storage cluster.
RU Rack Unit. The EIA-310 specification defines each unit of height in a 19 inch equipment rack as 1.75 inches.
SAN Storage Area Network. A data network, typically using Fibre Channel protocol, to
provide computer hosts access to centralized storage devices.
SAS Serial Attached SCSI. A serial computer bus for connecting storage devices using SCSI
commands.
SCSI Small Computer Systems Interface. A set of standards for connections between
computers and storage devices and peripherals, using a common command set.
SD Secure Digital. A portable and non-volatile format for memory card storage devices.
SDS Software Defined Storage. A method for providing access to and provisioning of
computer storage resources that are independent of the underlying physical storage
resources.
SoL Serial over LAN. Transmission of serial port output via a LAN versus the physical serial
port.
SSD Solid State Disk. A computer data storage device typically using NAND based flash
memory for persistent data storage, versus spinning magnetic media as used in a hard
disk.
SSH Secure Shell. An encrypted protocol allowing remote login across unsecured networks.
Cisco UCS Cisco Unified Computing System. The Cisco product line combining rack-mount
servers, blade servers, and Fabric Interconnects into a single domain, with integrated
networking and management.
UCSM Cisco UCS Manager. The management GUI and CLI software for control of a Cisco UCS
domain.
VIC Virtual Interface Card. A Cisco product offering the ability to create multiple virtual
network or storage interfaces on a single physical hardware card.
VLAN Virtual LAN. A partitioned and isolated LAN segment, breaking a flat network into
multiple subdivided networks.
VMDK Virtual Machine Disk. A file format for storing the contents of a typical hard disk as a
file, used by virtual machines as their hard disk contents.
vNIC Virtual NIC. The virtualized definition of a traditional NIC running in the Cisco UCS
system. The vNIC is defined in a service profile, and dynamically programmed to a VIC.
VPC Virtual Port Channel. A Cisco technology for connecting network devices to multiple
partner devices without creating loops and without using STP.
WAN Wide Area Network. Interconnection of computer devices across long distances and
geographically large areas, in contrast to a LAN.
References
1. Cisco HyperFlex HX220c M4 Node Installation Guide:
http://www.cisco.com/c/en/us/td/docs/hyperconverged_systems/HX_series/HX220c_M4/HX220c.html