Design Proposal - Challenge 2
Prepared for: VirtualDesignMaster.com
Prepared by: Joel Gibson
July 21, 2014
Proposal number: Challenge 2


EXECUTIVE SUMMARY
Objective
The objective of this design is to support the mission critical infrastructure on the moon base. Based on the
description provided, the moon base will be used to temporarily house humans until the Mars colony has been
completed.
Assumptions
It is assumed that the moon base is not a manufacturing depot that produces space ships, as was the case on
Earth. The documentation provided would indicate that the moon base is a piece of supporting infrastructure;
therefore, for the purpose of this design, it is assumed that the main purpose of these systems will be of an
operational nature to ensure availability of administrative, security, logistics, and other services.
Goals
This solution should be reliable, easily deployable, and be conscious of power, cooling, and space limitations.
Downtime is to be minimized.
Constraints
- Due to severe power, cooling, and space limitations, the infrastructure must fit into a 21U rack space.
- The solution must be built utilizing components from one or more of the following vendors: VMware, Cisco,
NetApp, Synology, RedHat, and Puppet.
- The design must utilize IPv6 only.
Solution
With the above goals in mind, the solution has been designed around the following key elements:
- All supporting elements must be highly available.
- The physical and logical infrastructure must be able to scale easily, and quickly.
- The application framework must be able to scale automatically.
The physical infrastructure has been kept simple, and relies heavily on converged systems. It is easy to rack,
stack, and cable this design.
DESIGN OVERVIEW
Background
The Cape Canaveral Space Port is the first of at least four critical production facilities. The facilities in the
Netherlands, Australia, and New Zealand will be ready soon. In addition, there is a base located on the moon
which will be used as a temporary colony until the human race can be moved to Mars.
The infrastructure design for the moon base has been configured in a highly available manner. It is the goal of this
design to eliminate any potential single points of failure, such that it is able to support the mission critical systems.
The physical design of the infrastructure has been kept relatively simple, so that scarce resources can be best
utilized. Due to severe power, cooling, and space limitations, the infrastructure must fit into a 21U rack space. In
addition, only IPv6 infrastructure is available on the moon base; therefore, all supporting infrastructure must be
compatible.
Objective
The objective of this design is to support the mission critical infrastructure on the moon base. Based on the
description provided, the moon base will be used to temporarily house humans until the Mars colony has been
completed.
Constraints
- Due to severe power, cooling, and space limitations, the infrastructure must fit into a 21U rack space.
- The solution must be built utilizing components from one or more of the following vendors: VMware, Cisco,
NetApp, Synology, RedHat, and Puppet. (based on the challenge framework provided)
- The design must utilize IPv6 only.
Assumptions
It is assumed that the moon base is not a manufacturing depot that produces space ships, as was the case on
Earth. The documentation provided would indicate that the moon base is a piece of supporting infrastructure;
therefore, for the purpose of this design, it is assumed that the main purpose of these systems will be of an
operational nature to ensure availability of administrative, security, logistics, and other services. It is also assumed
that, while the design must stick to the vendors noted above, VMware is not a requirement.
Goals
This solution should be reliable, easily deployable, and be conscious of power, cooling, and space limitations.
Downtime is to be minimized.
PHYSICAL DESIGN
Infrastructure
As noted earlier, the physical infrastructure has been kept simple, and relies heavily on converged systems. It is
easy to rack and stack this design.
The physical systems will reside in a single 21U rack space, supplied by two UPS systems. The rack will contain
the required network gear to provide moon-spaceport MPLS connectivity, top-of-rack network, compute, and
storage.
Figure 1 - Physical Rack Layout
Network
The Cisco Nexus 5672UP was chosen as the top-of-rack switch, which provides ample bandwidth and ports for
the existing infrastructure, along with room to grow. It also supports network overlay technologies such as
VXLAN, should the future need arise.
Moon-to-spaceport connectivity will be established using a specialized high-latency, high-bandwidth connection to
the MPLS network via third-party Earth stations.
The design will utilize IPv6 only.
Internet
No direct connectivity to the Internet will be available from the moon base. If traffic to the IPv6 Internet is required,
traffic will be routed from the moon base to the Internet via the Cape Canaveral Space Port. If a connection to the
IPv4 Internet is required from the moon base, and future revisions of the design requirements permit it, a tunnel
will be created between two compatible routers to encapsulate IPv4 traffic within IPv6 packets.
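As a conceptual illustration of the encapsulation itself (not the router configuration), the Python sketch below uses the scapy library to wrap an IPv4 packet inside an IPv6 packet. The addresses are hypothetical placeholders drawn from the documentation and ULA ranges.

```python
# Conceptual sketch of IPv4-in-IPv6 encapsulation (4in6); the actual design
# would terminate the tunnel on two compatible routers, not a host script.
# All addresses below are hypothetical placeholders.
from scapy.all import IP, IPv6, ICMP, raw

# Original IPv4 packet that needs to cross the IPv6-only moon network.
inner = IP(src="192.0.2.10", dst="198.51.100.20") / ICMP()

# Encapsulate: outer IPv6 header with next-header 4 (IPv4 payload).
outer = IPv6(src="fd00:0:0:1::2", dst="fd00:0:0:2::2", nh=4) / inner

# The far tunnel endpoint strips the IPv6 header to recover the IPv4 packet.
decapsulated = IP(raw(outer.payload))
assert decapsulated.dst == "198.51.100.20"
```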
Figure 2 - High Level Overview of Inter-Facility Connectivity
Compute
The Cisco UCS 5108 blade system was chosen for scalable compute, instead of the C460 M4 rack mount servers
that are utilized in the space ports. The main reason for this was to increase the density of the compute nodes. The
C460 M4s contain several disk trays that were not being utilized at the space ports and were seen as consuming
excessive space relative to the amount of compute capability per rack unit (RU).
Based on the estimates noted in tables one and two below, it is believed that by utilizing the UCS 5108 blade system
the following efficiencies can be gained:
- 50% reduction in physical rack space
- roughly 87% more compute
- approximately 3-5% gain in power and cooling efficiency based on the compute density
In addition to the efficiencies gained through the increased density, this design enables a certain amount of flexibility
to scale the compute infrastructure up or down. In the event that power availability becomes constrained, one or more
blades can be shut down while maintaining core management and application availability. If more compute resources
are required, and the power is available, scaling up can be accomplished by replacing the CPUs, and/or adding
memory.
These statements and the physical characteristics described are estimates, for learning purposes only.
Table 1 - Estimation of Physical Characteristics (Cisco C460 M4 Rack Mount), 75% Utilization

| Description | CPU | Clock Rate (GHz) | Cores | Memory (GB) | QTY | Power Consumption (W) | Cooling (BTU) | Rack Space (RU) |
|---|---|---|---|---|---|---|---|---|
| C460 M4 Rack Mount Server | 4x E7-4809 | 1.9 | 24 | 128 | 4 | 532 | 1814 | 4 |
| Total M4 | | 182.4 | 96 | 512 | 4 | 2128 | 7256 | 16 |
Table 2 - Estimation of Physical Characteristics (Cisco 5108 with B230 M2 Blades), 75% Utilization

| Description | CPU | Clock Rate (GHz) | Cores | Memory (GB) | QTY | Power Consumption (W) | Cooling (BTU) | Rack Space (RU) |
|---|---|---|---|---|---|---|---|---|
| B230 M2 blade (Cisco 5108 chassis) | 2x E7-8867L | 2.13 | 20 | 128 | 8 | 410 | 1397 | 0.75 |
| 6248UP Fabric Interconnect | | | | | 2 | 256 | 875 | 1 |
| Subtotal M2 | | 340.8 | 160 | 1024 | 8 | 3280 | 11176 | 6 |
| Subtotal FI | | | | | | 512 | 1750 | 2 |
| Total | | | | | | 3792 | 12926 | 8 |
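As a sanity check, the efficiency claims above can be reproduced from the totals in Tables 1 and 2. The Python sketch below performs the arithmetic; the inputs are the estimated totals from the tables, not measured values.

```python
# Back-of-the-envelope check of the compute efficiency claims,
# using the estimated totals from Tables 1 and 2.
c460 = {"ghz": 182.4, "watts": 2128, "btu": 7256, "ru": 16}   # 4x C460 M4
b230 = {"ghz": 340.8, "watts": 3792, "btu": 12926, "ru": 8}   # 5108 + 8x B230 M2 + 2x FI

rack_reduction = 1 - b230["ru"] / c460["ru"]        # 0.50 -> 50% less rack space
compute_gain = b230["ghz"] / c460["ghz"] - 1        # ~0.87 -> ~87% more aggregate GHz

# Power and cooling per aggregate GHz (lower is better).
power_eff = 1 - (b230["watts"] / b230["ghz"]) / (c460["watts"] / c460["ghz"])  # ~4.6%
cooling_eff = 1 - (b230["btu"] / b230["ghz"]) / (c460["btu"] / c460["ghz"])    # ~4.7%

print(f"rack space: -{rack_reduction:.0%}, compute: +{compute_gain:.0%}, "
      f"power/GHz: -{power_eff:.1%}, cooling/GHz: -{cooling_eff:.1%}")
```

The power and cooling results fall within the 3-5% range claimed above.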
Storage
The NetApp EF550 flash storage array was chosen instead of the E2624, which is utilized in the space ports. The
main reason for this was to increase power and cooling efficiency.
Based on the estimates noted in tables three and four below, it is believed that by utilizing the flash storage array
the following efficiencies can be gained:
- approximate reduction in power and cooling requirements by 33-34%
- approximate increase in available raw disk capacity by 33%
These statements and the physical characteristics described are estimates, for learning purposes only.
In addition, some of the savings in power and cooling could be utilized to maintain a higher density of compute
nodes for longer periods.
Table 3 - Estimation of Physical Characteristics (NetApp E2624 Storage Array Config), 75% Utilization

| Description | QTY | Power Consumption (Max W) | Cooling (Max BTU) | Rack Space (RU) | Raw Capacity (TB) |
|---|---|---|---|---|---|
| E2624 NetApp Storage Array populated with 120 x 1.2 TB SAS 10k drives | 2 | 482 | 1644 | 2 | |
| Expansion | 2 | 371 | 1267 | 2 | |
| Total | | 1706 | 5822 | 8 | 144 |
Table 4 - Estimation of Physical Characteristics (NetApp EF550 Flash Storage Array Config), 75% Utilization

| Description | QTY | Power Consumption (Max W) | Cooling (Max BTU) | Rack Space (RU) | Raw Capacity (TB) |
|---|---|---|---|---|---|
| EF550 NetApp Flash Array populated with 120 x 1.6 TB SSD drives | 1 | 498 | 1630 | 2 | 38 |
| Expansion | 3 | 216 | 738 | 2 | |
| Total | | 1146 | 3844 | 8 | 192 |
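The same check applies to the storage figures in Tables 3 and 4. The RAID 6 usable-capacity calculation below assumes, purely for illustration, a single parity group spanning all 120 SSDs with no hot spares; it lands close to the ~188,880 GB figure quoted later under Logical Configuration Maximums.

```python
# Check of the storage efficiency claims from Tables 3 and 4.
e2624 = {"watts": 1706, "btu": 5822, "raw_tb": 144}   # 120 x 1.2 TB SAS 10k
ef550 = {"watts": 1146, "btu": 3844, "raw_tb": 192}   # 120 x 1.6 TB SSD

power_reduction = 1 - ef550["watts"] / e2624["watts"]   # ~32.8%
cooling_reduction = 1 - ef550["btu"] / e2624["btu"]     # ~34.0%
capacity_gain = ef550["raw_tb"] / e2624["raw_tb"] - 1   # ~33.3%

# Usable capacity under RAID 6: two drives' worth of parity per group.
# Simplifying assumption: one group across all 120 drives, no hot spares.
drives, drive_tb = 120, 1.6
usable_gb = (drives - 2) * drive_tb * 1000              # 188,800 GB
print(f"power -{power_reduction:.1%}, cooling -{cooling_reduction:.1%}, "
      f"capacity +{capacity_gain:.1%}, RAID 6 usable ~{usable_gb:,.0f} GB")
```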
Physical Configuration Maximums
The following configuration maximums are based on the physical constraint of 21U of available rack space.
- Cisco UCS 5108 Chassis (1)
- Cisco B230 M2 Blades (8)
- Intel CPUs (2)
- Memory (512 GB)
- Local Disk Space SSD (800 GB)
- Cisco 6248UP Fabric Interconnects
- 10 Gb Ports (48 each)
- Expansion Ports (2)
- NetApp EF550 Flash Storage Array
- SSD Drives (120)
- Size of SSD Disk (1.6 TB)
- Raw Capacity (192 TB)
- Cisco 5672UP Switch
- 10 Gb Ports (48 each)
Risks
Due to the physical power, space, and cooling constraints, there is no multi-site, multi-room, multi-rack
redundancy. The solution is highly available, but within a single rack.
If the data centre is physically compromised, the mission critical services provided by the moon base infrastructure
could be at significant risk.
Assumptions
It is assumed that the Cape Canaveral Space Port has been designed and built to Tier 3 or above standards
according to the Uptime Institute. In addition, there is enough physical space, and power and cooling capacity
available to scale the physical equipment, if required.
Figure 3 - High Level Overview of Virtualization Infrastructure
LOGICAL DESIGN
Overview
The premise for this design is based on high availability and orchestration. The purpose of this is to protect the
critical components (i.e., application workloads and data) and to ensure that the infrastructure can scale quickly
and efficiently within the physical power, space, and cooling constraints.
The virtualization and private cloud infrastructure will be based on the Red Hat-supported OpenStack framework, and
will include a highly available management cluster. The virtual servers supporting the operational applications will
reside on a separate cluster of nodes managed by OpenStack.
Network
For the IP network, IPv6 will be utilized. The network addresses will be based on the Unique Local Address (ULA)
range fc00::/7, which is not Internet routable. The moon base infrastructure will utilize the following /64 subnets as
shown in the diagram below. In addition, most addresses (with the exception of core physical equipment) will be
assigned using DHCPv6.
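To make the subnet plan concrete, the sketch below uses Python's standard ipaddress module to carve per-function /64 subnets out of a ULA /48. Both the fd00:6d6f:6f6e::/48 prefix and the role names are hypothetical examples (RFC 4193 recommends a randomly generated 40-bit global ID); the actual assignments are shown in Figure 4.

```python
# Illustrative ULA subnet plan for the moon base. The /48 prefix and the
# subnet roles are hypothetical examples; see Figure 4 for the actual plan.
# RFC 4193 ULAs fall under fc00::/7 and are not Internet routable.
import ipaddress

site = ipaddress.ip_network("fd00:6d6f:6f6e::/48")  # hypothetical ULA site prefix

roles = ["management", "storage-iscsi", "openstack-api", "application", "out-of-band"]
plan = dict(zip(roles, site.subnets(new_prefix=64)))  # one /64 per function

for role, net in plan.items():
    print(f"{role:>14}: {net}")   # e.g. management: fd00:6d6f:6f6e::/64

# Core physical gear gets static addresses; everything else uses DHCPv6.
switch_a = next(plan["management"].hosts())  # first host address in the mgmt /64
```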
Figure 4 - High Level Overview of IPv6 Network Addressing
OpenStack Cluster
The management cluster will reside on two physical nodes (UCS Blades 1 and 2) and is based on a highly available
application topology.
The chosen operating system for the bare-metal install, as well as for the supporting virtual servers, is Red Hat
Enterprise Linux 6.x with OpenStack. The bare-metal operating system will be installed directly onto the two SSD
drives in each of the UCS B-Series blades, with the two SSDs configured as a RAID 1 mirror. By utilizing the drive
capacity in the blades for the bare-metal install (versus a boot-from-SAN configuration), valuable space on the flash
storage array is conserved. The management components within the cluster will reside on virtual machines, running on
top of the KVM hypervisor.
The underlying core services supporting the OpenStack cluster are messaging (RabbitMQ), databases (MySQL with
Galera), and orchestration (Puppet with a multi-master config). HAProxy has been chosen to act as a highly available
load-balancer for the web tier (Horizon), as well as the OpenStack API nodes.
Puppet will be used to orchestrate and maintain the state and consistency of the OpenStack cluster. It can easily
integrate with, and maintain the desired state of, the OpenStack projects, HAProxy, and the supporting core services.
Figure 5 - High Level Overview of OpenStack Cluster
Application Cluster
The chosen operating system for the bare-metal install (Cisco UCS Blades 3-8) is Red Hat Enterprise Linux 6.x
with OpenStack. The KVM hypervisor will be used, managed by OpenStack Nova.
The virtual servers which form the application framework will utilize Red Hat Enterprise Linux 6.x for their guest
operating system.
As part of this design, it was mentioned earlier that the application framework would have the ability to scale
automatically. This requirement will be satisfied by utilizing OpenStack Heat and Ceilometer to spin instances up
or down based on workload, and Puppet to set the desired state of the application configuration.
It was decided that a combined approach to orchestration would be best suited to this design, enabling automatic
scaling while maintaining desired state.
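In practice this pattern is expressed declaratively in Heat (an autoscaling group whose scale-up and scale-down policies are triggered by Ceilometer alarms). The Python sketch below is only a conceptual model of that control loop; the metric and scaling functions are hypothetical stand-ins, not the Heat or Ceilometer APIs.

```python
# Conceptual model of the Heat/Ceilometer autoscaling loop. The callables
# passed in are hypothetical stand-ins; in the real design, Ceilometer
# alarms trigger Heat scaling policies declaratively, and Puppet enforces
# each instance's configuration state after it boots.
import time

SCALE_UP_CPU, SCALE_DOWN_CPU = 0.80, 0.20   # alarm thresholds
MIN_INSTANCES, MAX_INSTANCES = 2, 40        # group size bounds
COOLDOWN_S = 300                            # seconds between scaling actions

def autoscale_loop(get_avg_cpu, scale_to, current=MIN_INSTANCES):
    """Poll the group's average CPU utilization and adjust instance count."""
    last_action = 0.0
    while True:
        cpu = get_avg_cpu()                          # Ceilometer-style metric
        now = time.monotonic()
        if now - last_action >= COOLDOWN_S:
            if cpu > SCALE_UP_CPU and current < MAX_INSTANCES:
                current += 1                         # scale-up policy
                scale_to(current)
                last_action = now
            elif cpu < SCALE_DOWN_CPU and current > MIN_INSTANCES:
                current -= 1                         # scale-down policy
                scale_to(current)
                last_action = now
        time.sleep(60)                               # alarm evaluation period
```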
Storage
The storage utilized by the physical nodes will be presented by the NetApp flash storage array in the form of iSCSI
LUNs. The management and application servers will reside on separate LUNs, as will any additional LUNs required by
the special configuration of the highly available databases.
Logical Configuration Maximums
The following configuration maximums are based on the physical equipment.
- Storage (~188,880 GB usable, based on RAID 6)
- Virtual Machines (based on Table 5)
- Management Cluster
- m1.tiny (256)
- m1.small (448)
- m1.medium (224)
- m1.large (112)
- m1.xlarge (56)
- Application Cluster
- m1.tiny (1280 or 256 per blade)
- m1.small (320 or 64 per blade)
- m1.medium (224 or 32 per blade)
- m1.large (160 or 16 per blade)
- m1.xlarge (40 or 8 per blade)
… or a combination of the totals above, depending on instance size.
Assumptions on VM Maximums
- Please note, the VM quantities noted above are estimates only, and are based on the assumption that the
largest bottleneck is memory.
- It is also assumed that physical CPU cores will be oversubscribed, and that it is not desirable to do likewise with
memory.
- These numbers do not factor in memory compression, swapping, deduplication, or other optimization methods.
- In addition, the calculations were based on the physical blade characteristics described in Table 2, and assume
that resources equivalent to at least one blade per cluster would be reserved for failover in the event of hardware
failure.
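The per-blade figures follow directly from the memory-bound assumption: each B230 M2 blade carries 128 GB of memory (Table 2), which divides by each flavor's memory footprint (Table 5). A minimal sketch of that arithmetic:

```python
# Reproduce the per-blade VM estimates, assuming memory is the sole
# bottleneck (CPU is oversubscribed, memory is not) and ignoring
# hypervisor overhead and memory optimization techniques.
BLADE_MEMORY_MB = 128 * 1024          # per B230 M2 blade, from Table 2

flavors_mb = {"m1.tiny": 512, "m1.small": 2048, "m1.medium": 4096,
              "m1.large": 8192, "m1.xlarge": 16384}

per_blade = {name: BLADE_MEMORY_MB // mb for name, mb in flavors_mb.items()}
print(per_blade)
# {'m1.tiny': 256, 'm1.small': 64, 'm1.medium': 32, 'm1.large': 16, 'm1.xlarge': 8}

# Cluster totals scale with usable blades; for example, the application
# cluster has six blades with one reserved for failover.
usable_blades = 6 - 1
print({name: n * usable_blades for name, n in per_blade.items()})
```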
Source: http://docs.openstack.org/openstack-ops/content/flavors.html
Table 5 - OpenStack Default Flavors

| Flavor | Memory (MB) | Disk (GB) | Ephemeral (GB) | VCPUs |
|---|---|---|---|---|
| m1.tiny | 512 | 1 | 0 | 1 |
| m1.small | 2048 | 10 | 20 | 1 |
| m1.medium | 4096 | 10 | 40 | 2 |
| m1.large | 8192 | 10 | 80 | 4 |
| m1.xlarge | 16384 | 10 | 160 | 8 |
