#VirtualDesignMaster 3 Challenge 2 - Harshvardhan Gupta
Virtual Design Master
Challenge 2
Prepared for: VirtualDesignMaster.com
Prepared by: Harshvardhan Gupta
July 12, 2015
SYNOPSIS
We’ve examined how we can rebuild infrastructure from scratch, but now let’s think outside the
box, and inside the clouds. Before the zombie apocalypse began, many organizations were
beginning to leverage public cloud infrastructures for a number of reasons. Some were using it for
burst capacity, others for development and test. Some start-ups even used public cloud for
everything! Our billionaire philanthropist from Season 1 is a huge fan of public cloud, and was one
of the early adopters. His task for you now is to design an environment to meet our needs using any
existing public cloud infrastructure to run it on top of. Be sure to let him know why you picked this
particular public cloud infrastructure, as if he’s going to re-create it on Mars, he wants to make sure
he’s making the best choice. Now for the fun part. We talked about a number of business critical
applications in the first challenge. You must deploy one web application (think the time tracking
application for the botanists in the greenhouses) and one business critical enterprise application
(think life support systems) inside the public cloud infrastructure. As part of your design, you must
state what the application requirements you are using are. Remember to think about things like
performance, capacity, latency, and high availability. In this case, complexity can be your enemy,
but the details can be your friend. Besides your overall design, you will also be judged on the
application requirements you develop.
Table of Contents
1. Overview
2. Requirements
3. Constraints
4. Risks
5. Assumptions
6. Public Cloud Infrastructure of Choice
7. Hardware Specifications for the Public Cloud Infrastructure
8. Logical Diagram for the Public Cloud
9. Application Architecture
10. References
1. Overview
The billionaire philanthropist sent me back in time to Earth, before the zombie outbreak happened, to evaluate public cloud providers and come back with a plan for replicating the same infrastructure here on Mars, with some tweaks.
2. Requirements
The permanent IT infrastructure strategy for Mars is to move the services and data repositories currently delivered from the temporary datacenter located in the Human Pods to the public cloud.
The IT approach for this public cloud facility is to build a replica of the best available public cloud provider that was operating on Earth before the zombie apocalypse.
1. Create a replica of the chosen public cloud provider here on Mars.
2. Host/deploy one web application, i.e. the time tracking application for the botanists in the greenhouses.
3. Host one business critical enterprise application, i.e. the life support systems, inside the public cloud infrastructure.
4. Applications must be highly available; low latency and performance are of utmost importance.
3. Constraints
The major cloud-related constraints depend upon the choice of public cloud provider to be rebuilt on Mars.
One web application and one business critical enterprise application must be deployable on the chosen public cloud provider.
4. Risks
The major risks are the availability of replacement parts for maintaining a large public cloud infrastructure and of trained staff for day-to-day operations.
Experienced datacenter architects are not available to design a datacenter on Mars, so we leverage the Open Compute Project design specifications.
5. Assumptions
There will be two availability zones on Mars for hosting the public cloud infrastructure, connected by a high-capacity intranet/internet link for inter-datacenter replication.
Sufficient compute, networking and storage hardware is available to build a public cloud infrastructure.
Sufficient cooling and power sources exist to keep the facility up and running 24x7x365 (this may vary due to the measurement of time on Mars).
6. Public Cloud Infrastructure of Choice
We choose VMware vCloud Air as our preferred public cloud platform to build on Mars.
Pros-
1. A young cloud infrastructure built on the latest compute, networking and storage gear.
2. No reliance on secret/proprietary technology, unlike other cloud providers (AWS/Azure).
3. VMware eats its own dog food: the vCloud Air offering runs on the same technology, i.e. the ESXi hypervisor, vCloud Director, etc.
4. Only two service models, i.e. Dedicated Cloud and Virtual Private Cloud.
5. No need to reinvent the wheel; existing VMs can be migrated to the public cloud using the vCloud Connector appliance.
6. No need to re-architect existing applications to make them suitable for the public cloud.
7. Automation/orchestration tools (vCAC/vRealize) are built around it, so we can leverage the automation techniques already used in the temporary datacenters.
8. Seamless migration from the existing infrastructure to the public cloud and vice versa.
9. Workloads can be balanced between on-premises and cloud infrastructure as and when required.
10. No new skills to learn, with an easy-to-use web interface for day-to-day administration.
11. VMs can still be treated as pets rather than cattle (as on AWS/OpenStack).
12. High-performance hardware, sophisticated resource management and careful capacity management are inherent in vCloud Air.
13. vCloud Air is reported to be 35% cheaper than Azure and 83% cheaper than AWS.
14. The vCloud API enables faster integration and development of applications on vCloud Air (see the PowerCLI sketch at the end of this section).
15. vCloud Air Disaster Recovery helps tame rough circumstances on Mars such as solar winds and dust storms.
Cons-
1. One limitation of vCloud Director, and hence of this service, is that you cannot define anti-affinity rules (where a master and a slave server are guaranteed to be on different physical hosts).
2. No other significant drawbacks were identified.
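To illustrate how existing vSphere/PowerCLI skills carry over to vCloud Air, the following is a minimal PowerCLI sketch that connects to the service and takes inventory of migrated vApps. The endpoint and organization names are placeholders, not values from this design.

```powershell
# Minimal PowerCLI sketch: connect to the vCloud Air / vCloud Director endpoint
# and inventory existing vApps. Endpoint and organization names are placeholders.
Import-Module VMware.VimAutomation.Cloud

$ciServer = 'vcloud.mars.example'    # hypothetical vCloud Air endpoint
$org      = 'MarsColony'             # hypothetical organization

Connect-CIServer -Server $ciServer -Org $org -Credential (Get-Credential)

# List vApps and their power state per Org VDC, e.g. to verify that workloads
# migrated with vCloud Connector arrived intact.
Get-OrgVdc | ForEach-Object {
    Get-CIVApp -OrgVdc $_ | Select-Object Name, Status, @{N='OrgVdc'; E={$_.OrgVdc.Name}}
}

Disconnect-CIServer -Server $ciServer -Confirm:$false
```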
7. Hardware Specifications for the Public Cloud Infrastructure
We choose the Open Compute Project (OCP) as our preferred hardware platform (compute/storage/networking) on Mars, to keep a tab on costly resources such as space, cooling and power.
1. Server Technology
a. Server Design- The high availability (HA) server leverages the Intel Motherboard
Hardware Specification v2.0. Instead of accommodating two server motherboard
trays with one shared PSU, it accommodates one server motherboard tray with one
PSU tray holding two PSUs.
b. Open Vault (Storage) - The Open Vault is a simple and cost-effective storage
solution with a modular I/O topology that’s built for the Open Rack. The Open Vault
offers high disk densities, holding 30 drives in a 2U chassis, and can operate with
almost any host server. Its innovative, expandable design puts serviceability first,
with easy drive replacement no matter the mounting height.
c. Hardware Management (out-of-band management)
d. Power Supply- The OCP 700W-SH AC/DC power converter is a single-voltage 12.5 Vdc, closed-frame, self-cooled power supply used in high-efficiency IT applications. The supply is configurable to a 450W-SH power rating (like the Open Compute Project 450W power supply), as both models use the same PCBs with just pin-to-pin component replacements.
e. Chassis- The Open Compute Project chassis is designed to accommodate the other
components in a server, including the custom motherboard and power supply. Overall it
is vanity free, has no sharp corners and is designed for easy servicing. It is completely
screw-less, uses quick release components such that the motherboard snaps into place
with a series of mounting holes, and the hard drives use snap-in rails to slide into the
drive bay.
2. Data Center Technology
a. Networking- Switch Specifications:
48x10G SFP+ and 4x40G QSFP+
1 RJ-45 Out-of-band Management Port (10/100/1000M)
1 Console port
1+1 Hot-Swappable PSU
ONIE Supported Boot Loader
b. Open Rack
c. Battery Cabinet - The battery cabinet is a standalone independent cabinet that
provides backup power at 48 volt DC nominal to a pair of triplet racks in the event of
an AC outage in the data center. The batteries are a sealed 12.5 volt DC nominal,
high-rate discharge type with a 10 year lifespan, commonly used in UPS systems,
connected in a series of four elements for each group (called a string), for a nominal
string voltage of 48VDC. There are five strings in parallel in the cabinet.
d. Data Center Electrical
e. Data Center Mechanical
f. Data Center
8. Logical Diagram for the Public Cloud
The public cloud will be based on vCloud Air, which relies on vCloud Director.
The Management Cluster hosts the core components of VMware vSphere, vCloud Director and vRealize. This cluster will also host a customized portal for easy consumption of cloud resources.
Cloud resource groups provide resources for end-user consumption on Mars.
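As a minimal sketch of how the customized portal (or an administrator) could surface what the cloud resource groups hold, the snippet below counts the vApps in each Org VDC; an existing Connect-CIServer session to the vCloud endpoint is assumed.

```powershell
# Illustrative sketch: show how many vApps each cloud resource group (Org VDC)
# is carrying, e.g. for the customized portal's capacity view.
Get-OrgVdc | ForEach-Object {
    [pscustomobject]@{
        OrgVdc = $_.Name
        VApps  = (Get-CIVApp -OrgVdc $_ | Measure-Object).Count
    }
} | Format-Table -AutoSize
```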
Disaster Recovery consideration-
Multisite consideration-
NOTE:
Each site/datacenter will be set up with the same configuration to provide simplicity through standardization.
For brevity, only one availability zone layout is shown above unless specifically mentioned.
9. Application Architecture
1. Web Application - Time Tracking for the Botanists in the Greenhouses-
Botanists are still studying the environment and its adverse effects on plantations. They require a time tracking application to monitor the growth of plants in the greenhouses, our only source of green vegetables and the vital nutrients required for the steady growth of human beings. Time tracking is a cumbersome job that requires high precision and concentration. The web interface provides a single pane of glass for all aspects of the time tracking application, and the public cloud suits best for crunching the large volumes of data involved. The application is designed with disaster recovery in mind, as this vital information keeps the botanists progressing on their research work without any hassle.
Availability Requirements-
Web and SQL server components must be highly available.
Latency Requirements-
Latency between database servers must be low, i.e. within a permissible limit.
Capacity- Each vApp can support one greenhouse. We also plan to create a centralized command center for monitoring the greenhouses on Mars, similar to the life support system.
Compute Requirements-
Component                    CPU      RAM     Storage
Web Server                   4 vCPU   16 GB   150 GB
SQL Server with Always On    4 vCPU   24 GB   500 GB
Stats Server                 2 vCPU   8 GB    150 GB
Alerting Service Server      2 vCPU   8 GB    150 GB
Log Analyzer                 2 vCPU   8 GB    150 GB
Load Balancer Appliance      4 vCPU   16 GB   72 GB
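Since each vApp serves exactly one greenhouse, the per-vApp footprint above drives capacity planning. A quick plain-PowerShell tally of the table (values copied directly from it, no modules needed):

```powershell
# Plain-PowerShell tally of the per-greenhouse vApp footprint from the table above.
$components = @(
    [pscustomobject]@{ Name = 'Web Server';                vCPU = 4; RAMGB = 16; StorageGB = 150 }
    [pscustomobject]@{ Name = 'SQL Server with Always On'; vCPU = 4; RAMGB = 24; StorageGB = 500 }
    [pscustomobject]@{ Name = 'Stats Server';              vCPU = 2; RAMGB = 8;  StorageGB = 150 }
    [pscustomobject]@{ Name = 'Alerting Service Server';   vCPU = 2; RAMGB = 8;  StorageGB = 150 }
    [pscustomobject]@{ Name = 'Log Analyzer';              vCPU = 2; RAMGB = 8;  StorageGB = 150 }
    [pscustomobject]@{ Name = 'Load Balancer Appliance';   vCPU = 4; RAMGB = 16; StorageGB = 72  }
)

$cpu  = ($components | Measure-Object -Property vCPU      -Sum).Sum   # 18 vCPU
$ram  = ($components | Measure-Object -Property RAMGB     -Sum).Sum   # 80 GB
$disk = ($components | Measure-Object -Property StorageGB -Sum).Sum   # 1172 GB

"Per-greenhouse vApp footprint: $cpu vCPU, $ram GB RAM, $disk GB storage"
```

At roughly 18 vCPU, 80 GB RAM and about 1.2 TB of storage per greenhouse, the number of greenhouses a resource group can host follows directly from its allocation.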
The components that make up Common Services include the following:
• Alerting Service – Software used to integrate our Stats Server and Log Analyzer with the Exchange-based alerting and messaging system.
• Log Analyzer – Software used to aggregate and parse logs collected from the on-premises agents. The Log Analyzer is integrated with our Alerting Service to provide active monitoring.
• Stats Server – Used to monitor the health of the on-premises agents and ensure that all services are up and running. The Stats Server achieves this by communicating with the Stats/Health Agent on each on-premises agent to receive status. The Stats Server also runs some analytics for forecasting data.
• Web Server – An IIS-based web server used to manage services for each greenhouse, including running each botanist's scheduled prescription for the plants. It also uses PowerShell scripts for automation.
• vCloud Catalog – The catalog stores vApps for faster deployment of application components (a deployment sketch follows this list).
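The deployment sketch referenced in the vCloud Catalog item above, using standard PowerCLI Cloud cmdlets; catalog, template, Org VDC and vApp names are hypothetical placeholders.

```powershell
# Instantiate a new greenhouse vApp from a catalog template, then power it on.
# All names below are placeholders, not actual values from this design.
$catalog  = Get-Catalog -Name 'Greenhouse-Catalog'
$template = Get-CIVAppTemplate -Catalog $catalog -Name 'TimeTracking-vApp'
$orgVdc   = Get-OrgVdc -Name 'Greenhouse-OrgVdc'

New-CIVApp -Name 'Greenhouse-07' -VAppTemplate $template -OrgVdc $orgVdc |
    Start-CIVApp
```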
2. Business (Life) Critical Enterprise Application - Life Support Systems-
The life support system is one of the most critical applications on Mars; it takes care of oxygen supply, cooling, energy consumption, safety and security. The application comprises a shared SQL Always On database, a web interface, CCTV footage storage/archival, and a set of clients that gather information from the different sensors spread across the human pods. The life support command center system must be highly available and fault tolerant, as human lives are at stake here.
Availability Requirements-
Web and SQL Server components must be highly available, except the on-premises collectors. The on-premises collectors use a shared database; if any collector fails or any mechanical failure occurs, an email alert is triggered. The CCTV collector synchronizes with a deduplication appliance hosted in the cloud, which compresses the video footage; the footage is later kept on a VTL for archival.
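As a hedged sketch of the collector-failure alerting described above, the following PowerShell checks each collector and raises an email through the Exchange-based path; the host names and SMTP relay are hypothetical, and a real check would query the shared database rather than rely on a simple ping.

```powershell
$collectors = 'oxygen-agent-01', 'energy-agent-01', 'cctv-collector-01'   # hypothetical host names

foreach ($collector in $collectors) {
    # A plain reachability test stands in for the real heartbeat check here.
    if (-not (Test-Connection -ComputerName $collector -Count 2 -Quiet)) {
        $mail = @{
            SmtpServer = 'exchange.mars.example'        # hypothetical Exchange relay
            From       = 'alerts@mars.example'
            To         = 'lifesupport-ops@mars.example'
            Subject    = "Collector $collector unreachable"
            Body       = "Heartbeat lost for $collector; check the on-premises agent and its sensors."
        }
        Send-MailMessage @mail
    }
}
```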
Latency Requirements-
Latency between database servers must be low, i.e. within a permissible limit.
Capacity-
A vApp is used as a centralized command center for monitoring the human pods on Mars.
Each collector/agent can capture data from only three sensors; more sensors require additional agents/collectors to be installed to distribute the load.
Compute Requirements-
Component                    CPU      RAM     Storage
Web Server                   4 vCPU   16 GB   150 GB
SQL Server with Always On    4 vCPU   24 GB   500 GB
Alerting Service Server      2 vCPU   8 GB    150 GB
Log Analyzer                 2 vCPU   8 GB    150 GB
Building Mgmt Agent          4 vCPU   16 GB   72 GB
Energy Mgmt Agent            2 vCPU   4 GB    72 GB
Intrusion Alarm Agent        2 vCPU   4 GB    72 GB
CCTV Video Collector         2 vCPU   4 GB    72 GB
Access Control Agent         2 vCPU   4 GB    72 GB
Life Safety Agent            2 vCPU   4 GB    72 GB
Deduplication Appliance      8 vCPU   32 GB   300 GB
The components that make up Common Services include the following:
• Alerting Service – Software used to integrate our Stats Server and Log Analyzer with the Exchange-based alerting and messaging system.
• Log Analyzer – Software used to aggregate and parse logs collected from the on-premises agents. The Log Analyzer is integrated with our Alerting Service to provide active monitoring.
• Deduplication Appliance – This appliance deduplicates the CCTV footage captured through the collectors before it is stored on the VTL.
• Web Server – An IIS-based web server used to manage the life support services; it also uses PowerShell scripts for automation.
• vCloud Catalog – Stores vApps for faster deployment of application components.
• CCTV Footage Archival – A VTL for archiving footage older than 90 days with optimal deduplication (a minimal sketch of the 90-day rule follows this list).
• Building Mgmt Agent – Interfaces with the lighting control system and smoke management system.
• Energy Mgmt Agent – Interfaces with the oxygen, water, energy, electrical and BTU meters.
• Intrusion Alarm Agent – Interfaces with remote sensors for magnetic trip alarm devices.
• CCTV Video Collector – Captures audio/video from the CCTV cameras.
• Access Control Agent – Keeps track of access within the controlled environment of the human pods and helps avoid chaos.
• Life Safety Agent – Interfaces with heat/smoke detectors and triggers the fire suppression systems.
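The 90-day archival sketch referenced in the CCTV Footage Archival item above; both UNC paths are placeholders, and in practice the move would be coordinated with the deduplication appliance.

```powershell
# Sketch of the 90-day CCTV archival rule: footage older than 90 days in the
# collector share is moved to the VTL path. Both UNC paths are placeholders.
$source  = '\\cctv-collector-01\footage'
$archive = '\\vtl-gateway\cctv-archive'
$cutoff  = (Get-Date).AddDays(-90)

Get-ChildItem -Path $source -Recurse -File |
    Where-Object { $_.LastWriteTime -lt $cutoff } |
    Move-Item -Destination $archive
```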
10. References
1. blogs.vmware.com
2. hypervizor.com
3. Honeywell automation
4. Wikipedia