Cloud Computing Fundamentals
Unit 2: Cloud Technologies and Concepts
Virtualization - Hypervisor, Virtual machine monitor;
Types of virtualization - Hardware, Operating system;
Server virtualization; Software defined networking,
Network function virtualization; Data Center-
Components, types, Characteristics; Service level
agreement; Load balancing; Scalability and elasticity.
Virtualization
Virtualization is a technology that you can use to create
virtual representations of servers, storage, networks, and
other physical machines. Virtual software mimics the
functions of physical hardware to run multiple virtual
machines simultaneously on a single physical machine.
● Flexibility: Virtualization lets you manage hardware resources
like software, allowing easy adjustments and control.
● Resource Efficiency: It reduces the need for physical servers,
saving electricity, space, and maintenance.
● Remote Access: You can access virtual resources from
anywhere, overcoming physical and network limitations.
● Software Control: Hardware is abstracted into software,
making management and use as simple as interacting with a
web application.
Virtualization makes hardware management more flexible, efficient, and
accessible.
Virtualization is a process that allows a computer to share its
hardware resources with multiple digitally separated environments.
Each virtualized environment runs within its allocated resources,
such as memory, processing power, and storage. With virtualization,
organizations can switch between different operating systems on the
same server without rebooting.
Virtual machines and hypervisors are two important concepts in
virtualization.
Virtual machine
A virtual machine is a software-defined computer that runs on a physical
computer with a separate operating system and computing resources. The
physical computer is called the host machine and virtual machines are guest
machines. Multiple virtual machines can run on a single physical machine.
Virtual machines are abstracted from the computer hardware by a
hypervisor.
Hypervisor
The hypervisor is a software component that manages multiple virtual
machines in a computer. It ensures that each virtual machine gets the
allocated resources and does not interfere with the operation of other virtual
machines.
Types of hypervisors
Type 1 hypervisor
A type 1 hypervisor, or bare-metal hypervisor, is a hypervisor
program installed directly on the computer’s hardware rather than on
top of an operating system.
Type 2 hypervisor
Also known as a hosted hypervisor, the type 2 hypervisor is installed
on an operating system. Type 2 hypervisors are suitable for end-user
computing.
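To make the host/guest relationship concrete, here is a minimal sketch that lists the virtual machines managed by a local KVM/QEMU hypervisor. It assumes the libvirt Python bindings (libvirt-python) are installed and a QEMU/KVM hypervisor is running; the connection URI and output format are only illustrative.

```python
# Minimal sketch: ask a local hypervisor which guest VMs it manages.
# Assumes libvirt-python and a running QEMU/KVM hypervisor.
import libvirt

# "qemu:///system" is the usual URI for the local KVM/QEMU hypervisor.
conn = libvirt.open("qemu:///system")

# Each domain is one guest virtual machine known to the hypervisor.
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}, max memory {dom.maxMemory() // 1024} MiB")

conn.close()
```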
Benefits Of Virtualization
Efficient Resource Use:
● Virtual Servers: Instead of having one physical server for each job, you can use one physical server to
create many virtual servers.
● Savings: This means you need less physical space, use less electricity, and save on cooling and power
costs.
Automated IT Management:
● Templates: Administrators set up standard configurations for virtual machines.
● Easy Duplication: You can quickly create and manage many virtual machines using these templates,
making setup faster and reducing errors.
Faster Disaster Recovery:
● Quick Recovery: If something goes wrong (like a natural disaster or cyberattack), you can restore
virtual machines in minutes, not days.
● Business Continuity: This helps keep your operations running smoothly and quickly recovers from
disruptions.
When you install virtualization software on your computer, you can create
virtual machines (VMs). Think of a VM as a computer within your computer.
Your main computer is called the "host," and the VMs are the "guests."
You can run multiple VMs on one host, each with its own operating system,
which can be different from the host's. Each VM works like a normal
computer, with its own settings, programs, and storage. You can manage and
update each VM just like you would with a physical server, without changing
anything on the host computer.
Virtualization allows one physical computer to run multiple virtual
computers.
Hypervisor is the software that makes this possible by managing
how resources are shared between the virtual machines and the
physical hardware.
Virtual Machines act like real computers, with their own operating
systems and applications, but share the physical resources of the
host computer.
Virtualization Reference Model
What are the different types of virtualization?
1. Hardware Virtualization
2. Operating system Virtualization
3. Application Virtualization
4. Network Virtualization
5. Desktop Virtualization
6. Storage Virtualization
7. Server Virtualization
8. Data virtualization
Hardware Virtualization
● Definition: Creating virtual versions of physical hardware
components.
● Example:
○ Scenario: A company has one powerful physical server but
wants to run multiple different environments (like a web server,
database server, and testing environment) on it.
○ Tool: VMware ESXi.
○ Result: The single physical server is divided into multiple virtual
machines (VMs), each running a different operating system and
applications as if they were on separate hardware.
Operating System Virtualization
● Definition: Running multiple operating systems on a single
physical machine.
● Example:
○ Scenario: A developer needs to test software on both
Windows and Linux.
○ Tool: Microsoft Hyper-V.
○ Result: The developer runs both Windows and Linux
operating systems simultaneously on the same physical
machine, using virtual machines.
Application Virtualization
● Definition: Running applications in a virtual environment separate from
the underlying OS.
● Example:
○ Scenario: An organization needs to run a legacy application that
only works on an older version of Windows.
○ Tool: VMware ThinApp.
○ Result: The legacy application is packaged and run in a virtual
environment on modern Windows versions without compatibility
issues.
Network Virtualization
● Definition: Creating virtual networks that operate independently
of the physical network infrastructure.
● Example:
○ Scenario: A data center wants to manage its network traffic
more efficiently.
○ Tool: Cisco’s Software-Defined Networking (SDN).
○ Result: The physical network is abstracted into multiple virtual
networks, allowing for more flexible and efficient management
of network traffic.
Desktop Virtualization
● Definition: Running desktop environments on a remote server
rather than on local devices.
● Example:
○ Scenario: A company wants employees to access their
desktops from anywhere.
○ Tool: VMware Horizon.
○ Result: Employees access their desktop environment
(including applications, files, and settings) from any device
with an internet connection, as it’s hosted on a remote server.
Storage Virtualization
● Definition: Pooling multiple physical storage devices into a single
virtual storage system.
● Example:
○ Scenario: An organization has various storage devices from
different vendors and wants to manage them as one.
○ Tool: IBM’s SAN Volume Controller.
○ Result: All the different storage devices are combined into one
virtual storage pool, simplifying storage management and
improving utilization.
Server Virtualization
● Definition: Partitioning a physical server into multiple smaller
virtual servers.
● Example:
○ Scenario: A business wants to run several different
applications on one physical server but keep them isolated
from each other.
○ Tool: VMware vSphere.
○ Result: The single physical server is divided into several
virtual servers, each running a different application in isolation.
Data Virtualization
● Definition: Providing a single virtual view of data from multiple
sources without moving or replicating it.
● Example:
○ Scenario: A company needs to analyze data from various
databases and cloud services.
○ Tool: Denodo Platform.
○ Result: The company’s analysts can access and query data
from different sources as if it were all in one place, without
worrying about where the data is actually stored.
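As a toy illustration of that single virtual view, the sketch below exposes one query interface over two separate in-memory "sources" without copying the data. The source names, fields, and query function are all made up for illustration.

```python
# Toy data-virtualization sketch: one query interface over two sources,
# iterated on demand rather than copied into a central store.
sales_db  = [{"region": "EU", "amount": 120}, {"region": "US", "amount": 80}]
cloud_api = [{"region": "APAC", "amount": 65}]

def query_all(predicate):
    """Single virtual view: scan every source lazily and yield matches."""
    for source in (sales_db, cloud_api):
        for row in source:
            if predicate(row):
                yield row

# The analyst queries "all data" without knowing where each row lives.
print(list(query_all(lambda r: r["amount"] > 70)))
```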
Network virtualization involves abstracting and combining network
resources to create a more flexible and efficient network environment. It
allows for the creation of multiple virtual networks on top of a physical
network infrastructure. There are several types of network virtualization:
● Network Function Virtualization (NFV)
● Software-Defined Networking (SDN)
Software-Defined Networking (SDN)
● Software-Defined Networking (SDN) is a form of network
virtualization that separates the control plane from the data plane
(the control plane decides where traffic should be sent, and the
data plane forwards it). This separation creates the opportunity
for more centralized and programmable network administration.
● Example: An enterprise uses SDN to manage traffic routing
across multiple data centers from a single software controller.
● Advantage: Enables dynamic, automated network configuration
and centralized management.
Data Plane: All the activities that involve, or result from, data
packets sent by the end user belong to this plane. This includes:
● Forwarding of packets.
● Segmentation and reassembly of data.
● Replication of packets for multicasting.
Control Plane: All activities that are needed to support the data plane but
do not involve end-user data packets belong to this plane. In other words, this
is the brain of the network. The activities of the control plane include:
● Building routing tables.
● Setting packet-handling policies (traffic prioritization, security policies,
etc.).
How Does Software-Defined Networking (SDN) Work?
Separation of Control and Data:
● Control Plane (Software): Centralized software (SDN controller) makes
decisions about routing and network management.
● Data Plane (Hardware): Physical devices like routers and switches forward
data based on instructions from the control plane.
Centralized Control: Network management is done centrally through the SDN
controller, allowing for easier configuration and control of the entire network.
Use of Virtual Switches: Virtual switches may replace or work with physical
switches, performing multiple tasks like packet verification and forwarding
efficiently.
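The control/data-plane split can be illustrated with a toy model: a controller that installs flow rules and a switch that forwards packets only by looking them up. All class and method names below are hypothetical; real deployments use protocols such as OpenFlow between the controller and switches.

```python
# Toy illustration of the SDN control/data-plane split (names are made up).

class Switch:
    """Data plane: forwards packets purely by looking up its flow table."""
    def __init__(self):
        self.flow_table = {}  # destination IP -> output port

    def forward(self, dst_ip):
        port = self.flow_table.get(dst_ip)
        if port is None:
            return f"drop {dst_ip} (no rule; real switches ask the controller)"
        return f"forward {dst_ip} via port {port}"

class Controller:
    """Control plane: makes routing decisions and pushes rules to switches."""
    def install_rule(self, switch, dst_ip, port):
        switch.flow_table[dst_ip] = port

sw = Switch()
ctrl = Controller()
ctrl.install_rule(sw, "10.0.0.5", port=2)   # centralized decision
print(sw.forward("10.0.0.5"))               # data plane just matches the rule
print(sw.forward("10.0.0.9"))               # unknown flow: no local decision made
```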
SDN Architecture
Components of Software-Defined Networking (SDN)
1. SDN Applications:
○ These are software programs that request network services. They
communicate with the SDN controller using APIs to manage and
optimize the network.
2. SDN Controller:
○ The SDN controller is the central unit that collects data from the network
devices and decides how the network should operate. It sends
instructions to the devices based on the needs of the SDN applications.
3. SDN Networking Devices:
○ These are the physical or virtual switches and routers that handle data
forwarding and processing. They follow the instructions provided by the
SDN controller.
In traditional networks, each switch manages both the decision-making (control
plane) and data movement (data plane). In SDN, the control plane is moved to a
centralized SDN controller, making it easier to manage the network.
● Application Layer:
○ This layer includes network applications like firewalls, load balancers,
and intrusion detection systems.
● Control Layer:
○ This layer is where the SDN controller resides, acting as the brain of the
network. It provides a unified view and control over the entire network.
● Infrastructure Layer:
○ This layer consists of the physical or virtual switches and routers that
handle the actual data movement, guided by the flow tables created by the
controller.
Network Function Virtualization (NFV)
Network Functions Virtualization (NFV) represents a shift from
traditional, hardware-based network functions to a more flexible,
software-based approach. It moves functions like load balancing,
firewall protection, and encryption from dedicated physical devices to
virtual machines (VMs) running on standard servers, so that network
services can be deployed, scaled, and updated in software.
Benefits of NFV
Cost Reduction: NFV cuts costs by using general-purpose servers instead of
expensive specialized hardware.
Scalability and Flexibility: Virtual functions can easily scale up or down to meet
demand, avoiding the need for excess capacity.
Faster Deployment: Software-based NFV speeds up the process of adding or
changing network functions compared to physical devices.
Improved Management: NFV works with management tools for easier automation
and monitoring of network functions.
Enhanced Agility: NFV allows organizations to quickly test and deploy new services
without being limited by hardware.
NFV architecture
NFV (Network Functions Virtualization) architecture consists of several key
components that work together to provide a flexible and efficient network
infrastructure.
Virtualized Network Functions (VNFs): These are the software
implementations of network functions like firewalls, load balancers, and routers,
running on standard servers instead of dedicated hardware.
NFV Infrastructure (NFVI): This layer includes the physical and virtual
resources needed to support VNFs. It encompasses:
● Compute: Servers and virtual machines where VNFs run.
● Storage: Systems for storing data and configurations.
● Network: Connectivity and networking components enabling communication between VNFs.
1. NFV Orchestrator (NFVO): This component manages the lifecycle of VNFs and
NFVI. It handles the deployment, scaling, and termination of VNFs, as well as the
allocation of resources.
2. Virtualized Infrastructure Manager (VIM): This manages the NFVI resources. It
controls the compute, storage, and network resources, and ensures they are allocated
efficiently to the VNFs.
3. NFV Management and Orchestration (NFV-MANO): This is the set of
functions, including the NFVO, the VIM, and additional management elements,
that oversees the operation and coordination of VNFs and the NFVI.
NFV architecture separates network functions from hardware and manages them through
virtualization, enabling more flexibility, efficiency, and scalability.
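A rough sketch of that idea is shown below: an orchestrator starts or stops software network functions to match demand, instead of adding hardware appliances. The class and method names are hypothetical and stand in for what an NFV orchestrator does conceptually.

```python
# Toy NFV orchestration sketch: scale virtual network functions (VNFs)
# up or down in software. All names are hypothetical.

class VNF:
    def __init__(self, kind):
        self.kind = kind  # e.g. "firewall", "load-balancer"

class Orchestrator:
    def __init__(self):
        self.instances = []

    def scale(self, kind, target):
        """Start or stop VNF instances until `target` copies of `kind` run."""
        current = [v for v in self.instances if v.kind == kind]
        while len(current) < target:          # scale out in software
            vnf = VNF(kind)
            self.instances.append(vnf)
            current.append(vnf)
        while len(current) > target:          # scale in when demand drops
            self.instances.remove(current.pop())

nfvo = Orchestrator()
nfvo.scale("firewall", 3)   # demand rises: run three virtual firewalls
nfvo.scale("firewall", 1)   # demand falls: shrink back to one
print(len(nfvo.instances))  # -> 1
```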
Software-Defined Networking (SDN):
● Purpose: SDN is about separating the network control layer from the data
forwarding layer. It allows network administrators to manage and
configure network devices centrally through software.
● Components:
○ Control Plane: This is where decisions are made about where traffic
should go. It’s managed by an SDN controller.
○ Data Plane: This is where actual traffic forwarding happens based on
the rules set by the control plane.
● SDN focuses on controlling network behavior and traffic flow through
centralized management.
Virtualized Network Functions (VNF):
● Purpose: VNFs are software-based versions of traditional network
functions (like firewalls, load balancers, and routers) that run on
standard servers instead of specialized hardware.
● Components:
○ VNFs: These are individual network functions that have been
virtualized and run as software applications.
● VNF focuses on replacing traditional network hardware with
software-based solutions for specific functions.
Data center
A data center is a physical facility that houses
computing machines and their related hardware
equipment. It contains the computing infrastructure
that IT systems require, such as servers, data storage
drives, and network equipment, and it is where a
company’s digital data is physically stored.
Data centers bring several benefits, such as:
● Backup power supplies to manage power outages
● Data replication across several machines for disaster recovery
● Temperature-controlled facilities to extend the life of the
equipment
● Easier implementation of security measures for compliance with
data laws
Data Center Components
1. Servers: Machines that store data and run applications.
2. Storage Systems: Devices like hard drives that keep data.
3. Networking Equipment: Routers, switches, and firewalls that manage
data traffic.
4. Power Supply: Uninterruptible power supplies (UPS) and backup
generators to ensure continuous operation.
5. Cooling Systems: Air conditioning and ventilation to maintain optimal
temperatures (e.g., CRAC and CRAH units).
6. Physical Security: Cameras, biometric access controls, and alarms to
protect the facility.
Types of data centers
Enterprise Data Centers:
● Description: Owned and operated by individual organizations.
● Purpose: To manage and store an organization’s data and applications.
● Features: Customized to the specific needs of the organization, with a focus on
security and performance tailored to their requirements.
Colocation Data Centers:
● Description: Facilities where businesses rent space to house their own servers
and other hardware.
● Purpose: To provide physical space, power, cooling, and connectivity while
allowing businesses to maintain control over their equipment.
● Features: Offers shared infrastructure, which can be more cost-effective than
building a private data center.
Cloud Data Centers:
● Description: Operated by cloud service providers (e.g., AWS, Google Cloud, Microsoft
Azure).
● Purpose: To provide scalable and on-demand computing resources and storage.
● Features: Resources are virtualized and can be accessed over the internet, allowing for
flexible and scalable solutions.
Managed Data Centers:
● Description: Facilities where the infrastructure is managed by a third-party provider on
behalf of the client.
● Purpose: To offer data center management services including monitoring, maintenance,
and support.
● Features: Combines aspects of colocation and outsourcing with managed services.
Hyperscale Data Centers:
● Description: Extremely large data centers designed to support the enormous scale of
cloud service providers.
● Purpose: To handle vast amounts of data and provide high levels of scalability and
redundancy.
● Features: High-density and highly automated, designed for efficiency and scalability.
Edge Data Centers:
● Description: Smaller data centers located closer to the end-users or data sources.
● Purpose: To reduce latency and improve performance for applications requiring
real-time processing.
● Features: Often used for applications like IoT, streaming, and content delivery.
Data Center Characteristics
1. Reliability: High uptime and availability, often supported by
redundant systems.
2. Scalability: Ability to expand resources (like adding more servers) as
needed.
3. Security: Physical and digital measures to protect data and
infrastructure.
4. Efficiency: Optimized use of power, cooling, and space to reduce
costs and environmental impact.
Service Level Agreement (SLA)
A Service Level Agreement (SLA) is a contract between a service
provider and a customer that defines the expected level of service.
Key elements include:
● Uptime Guarantee: A commitment to a minimum percentage of
uptime, such as 99.9%.
● Performance Metrics: Specific targets for response times and
system performance.
● Penalties: Consequences if the provider fails to meet the
agreed-upon terms.
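To see what an uptime guarantee means in practice, the small calculation below converts an uptime percentage into the downtime it allows over a 30-day window. It is purely illustrative; real SLAs define their own measurement windows and exclusions.

```python
# Convert an SLA uptime percentage into an allowed-downtime budget.
def allowed_downtime_minutes(uptime_percent, days=30):
    total_minutes = days * 24 * 60
    return total_minutes * (1 - uptime_percent / 100)

for sla in (99.0, 99.9, 99.99):
    print(f"{sla}% uptime -> {allowed_downtime_minutes(sla):.1f} "
          f"minutes of downtime per 30 days")
# 99.9% works out to roughly 43 minutes of downtime per month.
```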
Load Balancing
Load Balancing is the process of distributing network or
application traffic across multiple servers. This ensures no single
server becomes overwhelmed, improving:
● Performance: By spreading the workload, servers respond
faster.
● Reliability: If one server fails, others can take over, preventing
downtime.
● Scalability: Makes it easier to add more servers to handle
increasing demand.
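A simple way to picture load balancing is round-robin distribution: each incoming request goes to the next server in the pool. The sketch below shows this idea; the server names are placeholders, and real load balancers also track health and load.

```python
# Minimal round-robin load balancer sketch (server names are placeholders).
import itertools

servers = ["server-a", "server-b", "server-c"]
rotation = itertools.cycle(servers)   # endless a, b, c, a, b, c, ...

def route(request_id):
    target = next(rotation)
    return f"request {request_id} -> {target}"

for i in range(5):
    print(route(i))   # traffic is spread evenly; no single server takes it all
```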
Scalability and Elasticity
1. Scalability: The ability of a system to handle a growing amount of work
by adding resources (like more servers or storage). There are two types:
○ Vertical Scalability: Adding more power (CPU, RAM) to an
existing server.
○ Horizontal Scalability: Adding more servers to distribute the
workload.
2. Elasticity: The ability to automatically increase or decrease resources
based on current demand. This is especially useful in cloud
environments, where resources can be adjusted in real-time to optimize
cost and performance.
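The difference between scalability and elasticity can be summed up in a simple autoscaling rule: the system adds servers (horizontal scaling) when load is high and removes them when load is low. The thresholds and function below are illustrative assumptions, not a specific cloud provider's policy.

```python
# Illustrative elasticity rule: scale out on high CPU, scale in on low CPU.
def desired_servers(current_servers, avg_cpu_percent,
                    scale_up_at=75, scale_down_at=25,
                    min_servers=1, max_servers=10):
    if avg_cpu_percent > scale_up_at:
        return min(current_servers + 1, max_servers)   # scale out (horizontal)
    if avg_cpu_percent < scale_down_at:
        return max(current_servers - 1, min_servers)   # scale in to save cost
    return current_servers

print(desired_servers(3, 90))   # -> 4  (demand spike, add capacity)
print(desired_servers(3, 10))   # -> 2  (demand drops, release capacity)
```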
