18CS643 Cloud Computing and
its Applications
Module 1 – Chapter 1 & 3
by Dr. B. Loganayagi, Prof., Dept. of CSE, SEACET
Module 1
Chapter 1: Introduction
• Cloud Computing at a Glance, The Vision of Cloud Computing
• Defining a Cloud, A Closer Look
• Cloud Computing Reference Model
• Characteristics and Benefits
• Challenges Ahead
• Historical Developments: Distributed Systems, Virtualization, Web 2.0, Service-Oriented Computing, Utility-Oriented Computing
• Building Cloud Computing Environments: Application Development, Infrastructure and System Development
• Computing Platforms and Technologies: Amazon Web Services (AWS), Google AppEngine, Microsoft Azure, Hadoop, Force.com and Salesforce.com, Manjrasoft Aneka
Chapter 3: Virtualization
• Introduction, Characteristics of Virtualized Environments
• Taxonomy of Virtualization Techniques
• Execution Virtualization
• Other Types of Virtualization
• Virtualization and Cloud Computing
• Pros and Cons of Virtualization
• Technology Examples:
– Xen: Paravirtualization, VMware: Full Virtualization, Microsoft Hyper-V
Introduction
• Computing is being transformed into a model consisting of services
that are commoditized and delivered in a manner similar to utilities
such as water, electricity, gas, and telephony.
• In such a model, users access services based on their requirements,
regardless of where the services are hosted.
• Several computing paradigms, such as grid computing, have
promised to deliver this utility computing vision. Cloud computing is
the most recent emerging paradigm promising to turn the vision of
“computing utilities” into reality.
• Cloud computing is a technological advancement that focuses on
the way we design computing systems, develop applications, and
leverage existing services for building software.
• It is based on the concept of dynamic provisioning, which is applied
not only to services but also to compute capability, storage,
networking, and information technology (IT) infrastructure in
general.
• One of the most diffused views of cloud
computing can be summarized as follows:
“I don’t care where my servers are, who
manages them, where my documents are
stored, or where my applications are hosted. I
just want them always available and accessible
from any device connected to the Internet. And
I am willing to pay for this service for as long
as I need it.”
• In 1969, Leonard Kleinrock, one of the chief scientists of the original
Advanced Research Projects Agency Network (ARPANET), which seeded
the Internet, said:
“As of now, computer networks are still in their infancy, but as they grow
up and become sophisticated, we will probably see the spread of ‘computer
utilities’ which, like present electric and telephone utilities, will service
individual homes and offices across the country.”
• Cloud computing allows renting infrastructure, runtime environments, and
services on a pay-per-use basis. This principle finds several practical
applications and gives a different image of cloud computing to
different people. Chief information and technology officers of large
enterprises see opportunities for scaling their infrastructure on demand
and sizing it according to their business needs. End users leveraging cloud
computing services can access their documents and data anytime,
anywhere, and from any device connected to the Internet. Many other
points of view exist.
The vision of cloud computing
• Cloud computing allows anyone with a credit card to provision
virtual hardware, runtime environments, and services. These
are used for as long as needed, with no up-front
commitments required.
• The entire stack of a computing system is transformed into a
collection of utilities, which can be provisioned and composed
together to deploy systems in hours rather than days and with
virtually no maintenance costs.
• The long-term vision of cloud computing is that IT services are
traded as utilities in an open market, without technological
and legal barriers. In this cloud marketplace, cloud service
providers and consumers, trading cloud services as utilities,
play a central role.
Fig.1.1. Cloud Computing Vision. Different stakeholders look at clouds with different expectations:
• “I need to grow my infrastructure, but I do not know for how long…”
• “I cannot invest in infrastructure, I just started my business…”
• “I want to focus on application logic and not maintenance and scalability issues.”
• “I want to access and edit my documents and photos from everywhere…”
• “I have a surplus of infrastructure that I want to make use of.”
• “I have a lot of infrastructure that I want to rent…”
• “I have infrastructure and middleware and I can host applications.”
• “I have infrastructure and provide application services.”
Vision contd..
• Many of the technological elements contributing to
this vision already exist. Different stakeholders
leverage clouds for a variety of services. The need for
ubiquitous storage and compute power on demand is
the most common reason to consider cloud computing.
A scalable runtime for applications is an attractive
option for application and system developers that do
not have infrastructure or cannot afford any further
expansion of existing infrastructure.
• This approach provides opportunities for optimizing
datacenter facilities and fully utilizing their capabilities
to serve multiple users. This consolidation model will
reduce the waste of energy and carbon emissions, thus
contributing to a greener IT on one end and increasing
revenue on the other end.
Defining a Cloud
Fig.1.2. Cloud computing technologies, concepts, and ideas: IT outsourcing, pay as you go, no capital investments, quality of service, security, billing.
• The term cloud has historically been used in the
telecommunications industry as an abstraction of the network
in system diagrams. It then became the symbol of the most
popular computer network, the Internet. This meaning also
applies to cloud computing, which refers to an Internet-centric
way of computing. The Internet plays a fundamental role in
cloud computing, since it represents either the medium or the
platform through which many cloud computing services are
delivered and made accessible.
• This aspect is also reflected in the definition given by Armbrust
et al.:
“Cloud computing refers to both the applications delivered as
services over the Internet and the hardware and system
software in the datacenters that provide those services.”
NIST Definition of Cloud computing
• The notion of multiple parties using a shared cloud
computing environment is highlighted in a definition
proposed by the U.S. National Institute of Standards
and Technology (NIST):
“Cloud computing is a model for enabling ubiquitous,
convenient, on-demand network access to a shared
pool of configurable computing resources (e.g.,
networks, servers, storage, applications, and services)
that can be rapidly provisioned and released with
minimal management effort or service provider
interaction.”
According to Reese, we can define three
criteria to discriminate whether a service is
delivered in the cloud computing style:
• The service is accessible via a Web browser
(nonproprietary) or a Web services application
programming interface (API).
• Zero capital expenditure is necessary to get
started.
• You pay only for what you use as you use it.
A closer look
• Cloud computing is helping enterprises, governments, public and
private institutions, and research organizations shape more
effective and demand-driven computing systems. Access to, as well
as integration of, cloud computing resources and systems is now as
easy as performing a credit card transaction over the Internet.
Practical examples of such systems exist across all market
segments:
– Large enterprises can offload some of their activities to cloud-based
systems.
– Small enterprises and start-ups can afford to translate their ideas into
business results more quickly, without excessive up-front costs.
– System developers can concentrate on the business logic rather than
dealing with the complexity of infrastructure management and
scalability.
– End users can have their documents accessible from everywhere and
any device.
Cloud computing not only offers the opportunity of easily
accessing IT services on demand; it also introduces a new way
of thinking about IT services and resources: as utilities. A bird’s-eye
view of a cloud computing environment is shown in Figure 1.3.
Fig.1.3. A bird’s-eye view of cloud computing: all users, on any device (organization personnel, government agencies, and other clients) consume subscription-oriented cloud services (X{compute, apps, data, …} as a Service) delivered by public clouds, government cloud services, other cloud services, and private clouds built on private resources, each offering compute, storage, applications, and a development and runtime platform under the control of a cloud manager.
Fig.1.4. Cloud Deployment Models
• Private/Enterprise Clouds: a cloud model, in the style of a public cloud, deployed within a company’s own datacenter/infrastructure for internal and/or partner use.
• Public/Internet Clouds: third-party, multi-tenant cloud infrastructure and services, available on a subscription basis to all.
• Hybrid/Inter Clouds: mixed usage of private and public clouds; public cloud services are leased when private cloud capacity is insufficient.
Cloud Computing Reference Model
Fig.1.5. Cloud Computing Reference Model
• Software as a Service (accessed through Web 2.0 interfaces): end-user applications such as office automation, photo editing, CRM, social networking, and scientific applications. Examples: Google Documents, Facebook, Flickr, Salesforce.
• Platform as a Service: runtime environments for applications, development and data-processing platforms. Examples: Windows Azure, Hadoop, Google AppEngine, Aneka.
• Infrastructure as a Service: virtualized servers, storage, and networking. Examples: Amazon EC2, S3, RightScale, vCloud.
• A fundamental characteristic of cloud computing is the capability to
deliver, on demand, a variety of IT services that are quite diverse from
each other. Cloud computing service offerings can be organized into three major
categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS),
and Software-as-a-Service (SaaS).
• These categories are related to each other as described in Figure 1.5,
which provides an organic view of cloud computing.
• At the base of the stack, Infrastructure-as-a-Service solutions deliver
infrastructure on demand in the form of virtual hardware, storage, and
networking. Virtual hardware is utilized to provide compute on demand in
the form of virtual machine instances.
• Platform-as-a-Service solutions are the next step in the stack. They deliver
scalable and elastic runtime environments on demand and host the
execution of applications. These services are backed by a core middleware
platform that is responsible for creating the abstract environment where
applications are deployed and executed.
• At the top of the stack, Software-as-a-Service solutions provide
applications and services on demand, replicating most of the common
functionalities of desktop applications. Each layer provides a different service
to users. IaaS solutions are sought by users who want to leverage cloud
computing for building dynamically scalable computing systems requiring a
specific software stack. IaaS services are therefore used to develop scalable
Websites or for background processing.
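To make the layering concrete, the short Python sketch below (an illustration added here, not part of the reference model itself) encodes the three service categories and the examples of Figure 1.5 as data, so the stack can be printed from the base to the top:

# reference_model.py - the three service categories of Fig. 1.5 as data.
from dataclasses import dataclass
from typing import List

@dataclass
class ServiceModel:
    name: str
    provides: str
    examples: List[str]

layers = [
    ServiceModel("Infrastructure-as-a-Service (IaaS)",
                 "virtualized servers, storage, and networking",
                 ["Amazon EC2", "S3", "RightScale", "vCloud"]),
    ServiceModel("Platform-as-a-Service (PaaS)",
                 "runtime environments and data-processing platforms",
                 ["Windows Azure", "Hadoop", "Google AppEngine", "Aneka"]),
    ServiceModel("Software-as-a-Service (SaaS)",
                 "end-user applications delivered on demand",
                 ["Google Documents", "Facebook", "Flickr", "Salesforce"]),
]

# Print the stack from the base (IaaS) to the top (SaaS), as in Figure 1.5.
for layer in layers:
    print(f"{layer.name}: {layer.provides} (e.g., {', '.join(layer.examples)})")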
Characteristics and benefits
• Cloud computing has some interesting
characteristics that bring benefits to both cloud
service consumers (CSCs) and cloud service
providers (CSPs).
These characteristics are:
➢No up-front commitments
➢ On-demand access
➢ Nice pricing
➢ Simplified application acceleration and scalability
➢ Efficient resource allocation
➢ Energy efficiency
➢ Seamless creation and use of third-party services
• The most evident benefit from the use of cloud computing
systems and technologies is the increased economic return
due to the reduced maintenance costs and operational costs
related to IT software and infrastructure.
• This is mainly because IT assets, namely software and
infrastructure, are turned into utility costs, which are paid
for as long as they are used, not paid for up front.
• Traditionally, IT infrastructure and software generated capital costs, since
they were paid up front before business start-ups could
afford a computing infrastructure enabling the business
activities of the organization. The revenue of the business was
then utilized to compensate over time for these costs.
• End users can benefit from cloud computing by having their
data and the capability of operating on it always available,
from anywhere, at any time, and through multiple devices.
Information and services stored in the cloud are exposed to
users by Web-based interfaces that make them accessible
from portable devices as well as desktops at home.
Challenges Ahead
• New, interesting problems and challenges are regularly being
posed to the cloud community, including IT practitioners,
managers, governments, and regulators. Technical challenges also
arise for cloud service providers for the management of large
computing infrastructures and the use of virtualization
technologies on top of them.
• Security in terms of confidentiality, secrecy, and protection of data
in a cloud environment is another important challenge.
Organizations do not own the infrastructure they use to process
data and store information. This condition poses challenges for
confidential data, which organizations cannot afford to reveal.
• Legal issues may also arise. These are specifically tied to the
ubiquitous nature of cloud computing, which spreads computing
infrastructure across diverse geographical locations. Different
legislation about privacy in different countries may potentially
create disputes as to the rights that third parties (including
government agencies) have to your data.
Historical Developments
Fig.1.6. The evolution of distributed computing technologies, 1950s–2010s: from mainframes to clusters, grids, and clouds.
• 1951: UNIVAC I, first mainframe
• 1960: Cray’s first supercomputer
• 1966: Flynn’s taxonomy (SISD, SIMD, MISD, MIMD)
• 1969: ARPANET
• 1970: DARPA’s TCP/IP
• 1975: Xerox PARC invented Ethernet
• 1984: DEC’s VMScluster; IEEE 802.3 Ethernet & LAN
• 1989: TCP/IP standardized (IETF RFC 1122)
• 1990: Berners-Lee and Cailliau: WWW, HTTP, HTML
• 1997: IEEE 802.11 (Wi-Fi)
• 1999: grid computing
• 2004: Web 2.0
• 2005: Amazon AWS (EC2, S3)
• 2007: Manjrasoft Aneka
• 2008: Google AppEngine
• 2010: Microsoft Azure
Historical Developments contd..
• The idea of renting computing services by
leveraging large distributed computing facilities has
been around for a long time. It dates back to the days
of the mainframes in the early 1950s.
• Figure 1.6 provides an overview of the evolution of
the distributed computing technologies that have
influenced cloud computing.
• In tracking the historical evolution, we briefly
review five core technologies that played an
important role in the realization of cloud
computing.
• These technologies are distributed systems,
virtualization, Web 2.0, service orientation, and
utility computing.
Distributed Systems
• Clouds are essentially large distributed
computing facilities that make available their
services to third parties on demand. As a
reference, we consider the characterization of
a distributed system proposed by Tanenbaum
et al.:
• “A distributed system is a collection of
independent computers that appears to its
users as a single coherent system.”
Three major milestones have led to
cloud computing evolution
• Mainframes
• Clusters
• Grids
• Mainframes. These were the first examples of large computational
facilities leveraging multiple processing units. Mainframes were
powerful, highly reliable computers specialized for large data
movement and massive input/output (I/O) operations. They were
mostly used by large organizations for bulk data processing tasks such
as online transactions, enterprise resource planning, and other
operations involving the processing of significant amounts of data.
• Clusters. Cluster computing started as a low-cost alternative to the
use of mainframes and supercomputers. The technology advancement
that created faster and more powerful mainframes and
supercomputers eventually generated an increased availability of
cheap commodity machines as a side effect. These machines could
then be connected by a high-bandwidth network and controlled by
specific software tools that manage them as a single system. Starting in
the 1980s, clusters became a standard technology for parallel and
high-performance computing.
Cluster technology contributed considerably to the evolution of tools
and frameworks for distributed computing, including Condor, Parallel
Virtual Machine (PVM), and Message Passing Interface (MPI).
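MPI remains the standard way to program a cluster as a single system. The sketch below is a minimal illustration (not from the text) using the mpi4py binding of MPI; it assumes an MPI runtime is installed and would be launched with, e.g., mpiexec -n 4 python cluster_sum.py:

# cluster_sum.py - a minimal sketch of cluster programming with MPI,
# via the mpi4py binding (assumed installed alongside an MPI runtime).
from mpi4py import MPI

comm = MPI.COMM_WORLD      # communicator spanning every process in the job
rank = comm.Get_rank()     # this process's identity within the job
size = comm.Get_size()     # total number of cooperating processes

# Each process computes a partial sum; rank 0 aggregates the results,
# so the machines cooperate as a single system.
partial = sum(range(rank * 1000, (rank + 1) * 1000))
total = comm.reduce(partial, op=MPI.SUM, root=0)

if rank == 0:
    print(f"{size} processes computed total = {total}")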
• Grid computing appeared in the early 1990s as an
evolution of cluster computing.
In an analogy to the power grid, grid computing proposed a
new approach to access large computational power, huge
storage facilities, and a variety of services.
A computing grid was a dynamic aggregation of
heterogeneous computing nodes, and its scale was
nationwide or even worldwide. Several developments
made possible the diffusion of computing grids:
i. clusters became quite common resources;
ii. they were often underutilized;
iii. new problems were requiring computational power that
went beyond the capability of single clusters; and
iv. the improvements in networking and the diffusion of the
Internet made possible long-distance, high-bandwidth
connectivity.
• Clouds are characterized by the fact of having
virtually infinite capacity, being tolerant to
failures, and being always on, as in the case of
mainframes.
• In many cases, the computing nodes that form
the infrastructure of computing clouds are
commodity machines, as in the case of clusters.
• The services made available by a cloud vendor
are consumed on a pay-per-use basis, and clouds
fully implement the utility vision introduced by
grid computing.
Virtualization
• Virtualization is another core technology for cloud computing. It
encompasses a collection of solutions allowing the abstraction of
some of the fundamental elements for computing, such as
hardware, runtime environments, storage, and networking.
Virtualization has been around for more than 40 years, but its
application has always been limited by technologies that did not
allow an efficient use of virtualization solutions.
• Virtualization is essentially a technology that allows creation of
different computing environments. These environments are called
virtual because they simulate the interface that is expected by a
guest. The most common example of virtualization is hardware
virtualization.
• Virtualization technologies are also used to replicate runtime
environments for programs. Applications in the case of process
virtual machines (which include the foundation of technologies
such as Java or .NET), instead of being executed by the operating
system, are run by a specific program called a virtual machine. This
technique allows isolating the execution of applications and
providing finer control over the resources they access.
Web 2.0
• The Web is the primary interface through which cloud computing
delivers its services. At present, the Web encompasses a set of
technologies and services that facilitate interactive information sharing,
collaboration, user-centered design, and application composition. This
evolution has transformed the Web into a rich platform for application
development and is known as Web 2.0.
• This term captures a new way in which developers architect
applications and deliver services through the Internet and provides new
experience for users of these applications and services. Web 2.0 brings
interactivity and flexibility into Web pages, providing enhanced user
experience by gaining Web-based access to all the functions that are
normally found in desktop applications.
• These capabilities are obtained by integrating a collection of standards
and technologies such as XML, Asynchronous JavaScript and XML
(AJAX), Web Services, and others. These technologies allow us to build
applications leveraging the contribution of users, who now become
providers of content.
Web 2.0 contd..
• Web 2.0 applications are extremely dynamic: they improve continuously,
and new updates and features are integrated at a constant rate by
following the usage trend of the community. There is no need to deploy
new software releases on the installed base at the client side.
• Web 2.0 applications aim to leverage the “long tail” of Internet users by
making themselves available to everyone in terms of either media
accessibility or affordability.
• Examples of Web 2.0 applications are Google Documents, Google Maps,
Flickr, Facebook, Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. In
particular, social networking Websites take the biggest advantage of Web
2.0. The level of interaction in Websites such as Facebook or Flickr would
not have been possible without the support of AJAX, Really Simple
Syndication (RSS), and other tools that make the user experience
incredibly interactive.
• This idea of the Web as a transport that enables and enhances interaction
was introduced in 1999 by Darcy DiNucci and started to become fully
realized in 2004. Today it is a mature platform for supporting the needs of
cloud computing, which strongly leverages Web 2.0. Applications and
frameworks for delivering rich Internet applications (RIAs) are
fundamental for making cloud services accessible to the wider public.
Service Oriented Computing
• Service orientation is the core reference model for cloud computing
systems. This approach adopts the concept of services as the main
building blocks of application and system development. Service-oriented
computing (SOC) supports the development of rapid, low-cost, flexible,
interoperable, and evolvable applications and systems.
• A service is an abstraction representing a self-describing and platform-
agnostic component that can perform any function—anything from a
simple function to a complex business process.
• A service is supposed to be loosely coupled, reusable, programming
language independent, and location transparent. Loose coupling allows
services to serve different scenarios more easily and makes them
reusable. Independence from a specific platform increases services
accessibility. Thus, a wider range of clients, which can look up services in
global registries and consume them in a location-transparent manner, can
be served.
Service Oriented Computing contd..
• Service-oriented computing introduces and diffuses
two important concepts, which are also fundamental
to cloud computing: quality of service (QoS) and
Software-as-a-Service (SaaS).
➢Quality of service (QoS) identifies a set of functional and
nonfunctional attributes that can be used to evaluate the
behavior of a service from different perspectives. These
could be performance metrics such as response time, or
security attributes, transactional integrity, reliability,
scalability, and availability.
➢The concept of Software-as-a-Service introduces a new
delivery model for applications. The term has been
inherited from the world of application service providers
(ASPs), which deliver software services-based solutions
across the wide area network from a central datacenter
and make them available on a subscription or rental basis.
Utility-oriented computing
• Utility computing is a vision of computing that defines a service-provisioning
model for compute services in which resources such as storage, compute
power, applications, and infrastructure are packaged and offered on a pay-
per-use basis. The idea of providing computing as a utility like natural gas,
water, power, and telephone connection has a long history but has become a
reality today with the advent of cloud computing.
• The American scientist John McCarthy, in a speech for the
Massachusetts Institute of Technology (MIT) centennial in 1961, observed:
“If computers of the kind I have advocated become the computers of the future,
then computing may someday be organized as a public utility, just as the
telephone system is a public utility . . . The computer utility could become the
basis of a new and important industry.”
• The first traces of this service-provisioning model can be found in the
mainframe era. IBM and other mainframe providers offered mainframe
power to organizations such as banks and government agencies throughout
their datacenters.
• From an application and system development perspective, service-oriented
computing and service oriented architectures (SOAs) introduced the idea of
leveraging external services for performing a specific task within a software
system.
Building cloud computing environments
• The creation of cloud computing environments
encompasses both the development of
applications and systems that leverage cloud
computing solutions and the creation of
frameworks, platforms, and infrastructures
delivering cloud computing services.
➢Application development
➢Infrastructure and system development
Application development
• Applications that leverage cloud computing benefit from its capability to
dynamically scale on demand. One class of applications that takes the biggest
advantage of this feature is that of Web applications. Their performance is mostly
influenced by the workload generated by varying user demands. With the diffusion
of Web 2.0 technologies, the Web has become a platform for developing rich and
complex applications, including enterprise applications that now leverage the
Internet as the preferred channel for service delivery and user interaction.
• Another class of applications that can potentially gain considerable advantage by
leveraging cloud computing is represented by resource-intensive applications.
These can be either data-intensive or compute-intensive applications. In both
cases, considerable amounts of resources are required to complete execution in a
reasonable timeframe.
• Cloud computing provides a solution for on-demand and dynamic scaling across
the entire stack of computing. This is achieved by
(a) providing methods for renting compute power, storage, and networking;
(b) offering runtime environments designed for scalability and dynamic sizing; and
(c) providing application services that mimic the behavior of desktop applications
but that are completely hosted and managedon the provider side.
Infrastructure and system development
• Distributed computing, virtualization, service orientation, and Web 2.0 form the
core technologies enabling the provisioning of cloud services from anywhere on
the globe. Developing applications and systems that leverage the cloud requires
knowledge across all these technologies.
• Distributed computing is a foundational model for cloud computing because cloud
systems are distributed systems. Besides administrative tasks mostly connected to
the accessibility of resources in the cloud, the extreme dynamism of cloud
systems—where new nodes and services are provisioned on demand—constitutes
the major challenge for engineers and developers.
• Web 2.0 technologies constitute the interface through which cloud computing
services are delivered, managed, and provisioned.
• Cloud computing is often summarized with the acronym XaaS—Everything-as-a-
Service—that clearly underlines the central role of service orientation.
• Virtualization is another element that plays a fundamental role in cloud computing.
This technology is a core feature of the infrastructure used by cloud providers.
• Cloud computing essentially provides mechanisms to address surges in demand by
replicating the required components of computing systems under stress (i.e.,
heavily loaded). Dynamism, scale, and volatility of such components are the main
elements that should guide the design of such systems.
Computing platforms and technologies
Development of a cloud computing
application happens by leveraging platforms
and frameworks that provide different types
of services, from the bare-metal infrastructure
to customizable applications serving specific
purposes.
– Amazon web services (AWS)
– Google AppEngine
– Microsoft Azure
– Hadoop
– Force.com and Salesforce.com
– Manjrasoft Aneka
Amazon web services (AWS)
• AWS offers comprehensive cloud IaaS services ranging from virtual
compute, storage, and networking to complete computing stacks.
AWS is mostly known for its compute and storage-on-demand
services, namely Elastic Compute Cloud (EC2) and Simple Storage
Service (S3).
• EC2 provides users with customizable virtual hardware that can be
used as the base infrastructure for deploying computing systems on
the cloud. It is possible to choose from a large variety of virtual
hardware configurations, including GPU and cluster instances. EC2
also provides the capability to save a specific running instance as an
image, thus allowing users to create their own templates for
deploying systems. These templates are stored in S3, which delivers
persistent storage on demand.
• S3 is organized into buckets; these are containers of objects that are
stored in binary form and can be enriched with attributes. Users can
store objects of any size, from simple files to entire disk images, and
have them accessible from everywhere.
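As a hedged illustration of how EC2 and S3 are consumed programmatically, the sketch below uses boto3, the AWS SDK for Python (not covered in the text); it assumes configured AWS credentials, and the AMI ID and bucket name are hypothetical placeholders:

# aws_sketch.py - minimal EC2 + S3 usage via boto3 (assumes AWS credentials).
import boto3

ec2 = boto3.resource("ec2")
s3 = boto3.client("s3")

# IaaS: provision customizable virtual hardware on demand.
instances = ec2.create_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical template (image)
    InstanceType="t2.micro",          # one of many virtual hardware configurations
    MinCount=1,
    MaxCount=1,
)
print("Launched:", instances[0].id)

# Storage on demand: S3 buckets hold objects stored in binary form.
s3.put_object(
    Bucket="example-bucket",          # hypothetical bucket (container of objects)
    Key="images/disk-image.bin",
    Body=b"...binary content...",
    Metadata={"purpose": "demo"},     # objects can be enriched with attributes
)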
Google AppEngine
• Google AppEngine is a scalable runtime environment
mostly devoted to executing Web applications. These take
advantage of the large computing infrastructure of Google
to dynamically scale as the demand varies over time.
AppEngine provides both a secure execution environment
and a collection of services that simplify the development
of scalable and high-performance Web applications. These
services include in-memory caching, scalable data store,
job queues, messaging, and cron tasks.
• Developers can build and test applications on their own
machines using the AppEngine software development kit
(SDK). Once development is complete, developers can
easily migrate their application to AppEngine, set quotas to
contain the costs generated, and make the application
available to the world. The languages currently supported
are Python, Java, and Go.
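A minimal sketch of such a Web application in Python, one of the supported languages: this assumes the Flask microframework, which recent AppEngine Python runtimes can serve; a real deployment would also include an app.yaml file declaring the runtime.

# main.py - minimal scalable Web application in AppEngine style (sketch only).
# Locally it can be tested with: python main.py
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # AppEngine scales instances of this handler up and down as demand varies.
    return "Hello from a scalable runtime environment!"

if __name__ == "__main__":
    app.run(host="127.0.0.1", port=8080)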
Microsoft Azure
• Microsoft Azure is a cloud operating system and a
platform for developing applications in the cloud.
Applications in Azure are organized around the concept
of roles, which identify a distribution unit for
applications and embody the application’s logic.
• Currently, there are three types of role: Web role,
worker role, and virtual machine role.
– The Web role is designed to host a Web application,
– The worker role is a more generic container of applications
and can be used to perform workload processing, and
– the virtual machine role provides a virtual environment in
which the computing stack can be fully customized,
including the operating systems.
Hadoop
• Apache Hadoop is an open-source framework that is suited for
processing large data sets on commodity hardware. Hadoop is an
implementation of MapReduce, an application programming model
developed by Google, which provides two fundamental operations for
data processing: map and reduce.
• The former transforms and synthesizes the input data provided by the
user; the latter aggregates the output obtained by the map operations.
Hadoop provides the runtime environment, and developers need only
provide the input data and specify the map and reduce functions that
need to be executed.
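The classic word-count example makes the two operations concrete. The self-contained Python sketch below illustrates the MapReduce programming model only; in a real Hadoop job, the map and reduce functions would run as separate tasks distributed across the cluster:

# wordcount_sketch.py - the MapReduce model in miniature (illustration only).
from itertools import groupby

def map_phase(lines):
    # map: transform each input line into intermediate (word, 1) pairs
    for line in lines:
        for word in line.lower().split():
            yield word, 1

def reduce_phase(pairs):
    # reduce: aggregate the intermediate values emitted by the map phase
    for word, group in groupby(sorted(pairs), key=lambda kv: kv[0]):
        yield word, sum(count for _, count in group)

lines = ["the cloud delivers compute", "the cloud delivers storage"]
for word, total in reduce_phase(list(map_phase(lines))):
    print(word, total)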
Force.com and Salesforce.com
• Force.com is a cloud computing platform for developing social
enterprise applications. The platform is the basis for SalesForce.com, a
Software-as-a-Service solution for customer relationship management.
• Force.com allows developers to create applications by composing
ready-to-use blocks; a complete set of components supporting all the
activities of an enterprise are available. The platform provides
complete support for developing applications, from the design of the
data layout to the definition of business rules and workflows and the
definition of the user interface.
Manjrasoft Aneka
• Manjrasoft Aneka is a cloud application platform
for rapid creation of scalable applications and their
deployment on various types of clouds in a
seamless and elastic manner. It supports a
collection of programming abstractions for
developing applications and a distributed runtime
environment that can be deployed on
heterogeneous hardware (clusters, networked
desktop computers, and cloud resources).
• Developers can choose different abstractions to
design their application: tasks, distributed threads,
and map-reduce. These applications are then
executed on the distributed service-oriented
runtime environment, which can dynamically
integrate additional resources on demand.
Chapter 3 – Virtualization
Introduction
• Virtualization is a large umbrella of technologies and concepts that are
meant to provide an abstract environment—whether virtual hardware or
an operating system—to run applications.
• The term virtualization is often synonymous with hardware virtualization,
which plays a fundamental role in efficiently delivering Infrastructure-as-a-
Service (IaaS) solutions for cloud computing.
• Virtualization technologies have gained renewed interest recently due
to the confluence of several phenomena:
➢ Increased performance and computing capacity.
At the high end of the market, supercomputers can provide
immense compute power that can accommodate the execution of
hundreds or thousands of virtual machines.
➢ Underutilized hardware and software resources.
Hardware and software underutilization is occurring due to (1) increased
performance and computing capacity, and (2) the effect of limited or
sporadic use of resources. Computers today are so powerful that in most
cases only a fraction of their capacity is used by an application or the
system. Using these resources for other purposes after hours could
improve the efficiency of the IT infrastructure.
➢ Lack of space.
Companies such as Google and Microsoft expand their infrastructures by building data
centers as large as football fields that are able to host thousands of nodes. Although this is
viable for IT giants, in most cases enterprises cannot afford to build another data center to
accommodate additional resource capacity. This condition, along with hardware
underutilization, has led to the diffusion of a technique called server consolidation.
➢ Greening initiatives.
Maintaining a data center operation not only involves keeping servers on, but a great deal
of energy is also consumed in keeping them cool. Infrastructures for cooling have a
significant impact on the carbon footprint of a data center. Hence, reducing the number of
servers through server consolidation will definitely reduce the impact of cooling and
power consumption of a data center. Virtualization technologies can provide an efficient
way of consolidating servers.
➢ Rise of administrative costs.
The increased demand for additional capacity, which translates into more servers in a data
center, is also responsible for a significant increment in administrative costs. Computers—
in particular, servers—do not operate all on their own, but they require care and feeding
from system administrators. These are labor-intensive operations, and the higher the
number of servers that have to be managed, the higher the administrative costs.
Virtualization can help reduce the number of required servers for a given workload, thus
reducing the cost of the administrative personnel.
Characteristics of Virtualized Environments
• Virtualization is a broad concept that refers to the
creation of a virtual version of something, whether
hardware, a software environment, storage, or a
network. In a virtualized environment there are three major
components: guest, host, and virtualization layer.
• The guest represents the system component that
interacts with the virtualization layer rather than with
the host, as would normally happen.
• The host represents the original environment where
the guest is supposed to be managed.
• The virtualization layer is responsible for recreating the
same or a different environment where the guest will
operate (see Figure 3.1).
Fig.3.1. Virtualization reference model: the guest (applications running against a virtual image) interacts with the virtualization layer, which provides virtual hardware, virtual storage, and virtual networking through software emulation on top of the host (physical hardware, physical storage, and physical networking).
The characteristics of virtualized solutions are:
1. Increased security
2. Managed execution
3. Portability
1. Increased security
• The virtual machine represents an emulated environment in
which the guest is executed. All the operations of the guest are
generally performed against the virtual machine, which then
translates and applies them to the host. This level of indirection
allows the virtual machine manager to control and filter the activity
of the guest, thus preventing some harmful operations from being
performed.
For example, applets downloaded from the Internet run in a
sandboxed version of the Java Virtual Machine (JVM), which
provides them with limited access to the hosting operating system
resources. Both the JVM and the .NET runtime provide extensive
security policies for customizing the execution environment of
applications.
2. Managed execution.
Virtualization of the execution environment
not only allows increased security, but a wider
range of features also can be implemented. In
particular, sharing, aggregation, emulation,
and isolation are the most relevant features
(see Figure 3.2).
Fig.3.2. Functions enabled by managed execution: virtualization maps physical resources onto virtual resources through sharing, aggregation, emulation, and isolation.
• Sharing. Virtualization allows the creation of separate computing
environments within the same host. In this way it is possible to fully exploit the
capabilities of a powerful host, which would otherwise be underutilized.
• Aggregation. Not only is it possible to share physical resources among several
guests, but virtualization also allows aggregation, which is the opposite
process. A group of separate hosts can be tied together and represented to
guests as a single virtual host.
• Emulation. Guest programs are executed within an environment that is
controlled by the virtualization layer, which ultimately is a program. This allows
for controlling and tuning the environment that is exposed to guests. For
instance, a completely different environment with respect to the host can be
emulated, thus allowing the execution of guest programs requiring specific
characteristics that are not present in the physical host.
• Isolation. Virtualization allows providing guests—whether they are operating
systems, applications, or other entities—with a completely separate
environment, in which they are executed. The guest program performs its
activity by interacting with an abstraction layer, which provides access to the
underlying resources.
• Portability
• The concept of portability applies in different ways
according to the specific type of virtualization considered.
In the case of a hardware virtualization solution, the guest
is packaged into a virtual image that, in most cases, can be
safely moved and executed on top of different virtual
machines.
• In the case of programming-level virtualization, as
implemented by the JVM or the .NET runtime, the binary
code representing application components (jars or
assemblies) can be run without any recompilation on any
implementation of the corresponding virtual machine. This
makes the application development cycle more flexible and
application deployment very straightforward: One version
of the application, in most cases, is able to run on different
platforms with no changes.
Taxonomy of virtualization techniques
• Virtualization covers a wide range of emulation techniques
that are applied to different areas of computing. A
classification of these techniques helps us better
understand their characteristics and use (see Figure 3.3).
• The first classification discriminates among the services or
entities that are being emulated.
• Virtualization is mainly used to emulate execution
environments, storage, and networks. Among these
categories, execution virtualization constitutes the oldest,
most popular, and most developed area. Therefore, it
deserves major investigation and a further categorization.
Fig.3.3. Taxonomy of virtualization techniques. Virtualization targets execution environments, storage, networks, and more. Execution virtualization is classified by how it is done (process level or system level), by technique (emulation, high-level VM, multiprogramming, hardware-assisted virtualization, full virtualization, paravirtualization, partial virtualization), and by virtualization model (application, programming language, operating system, hardware).
Execution virtualization
• Execution virtualization includes all techniques that
aim to emulate an execution environment that is
separate from the one hosting the virtualization layer.
All these techniques concentrate their interest on
providing support for the execution of programs,
whether these are the operating system, a binary
specification of a program compiled against an abstract
machine model, or an application.
1. Machine reference model
2. Hardware-level virtualization
a. Hypervisors
b. Hardware virtualization techniques
c. Operating system-level virtualization
3. Programming language-level virtualization
4. Application-level virtualization
Machine reference model
• Modern computing systems can be expressed in terms of the
reference model described in Figure 3.4. At the bottom layer, the
model for the hardware is expressed in terms of the Instruction Set
Architecture (ISA), which defines the instruction set for the
processor, registers, memory, and interrupt management.
• ISA is the interface between hardware and software, and it is
important to the operating system (OS) developer (System ISA) and
developers of applications that directly manage the underlying
hardware (User ISA). The application binary interface (ABI)
separates the operating system layer from the applications and
libraries, which are managed by the OS.
• ABI covers details such as low-level data types, alignment, and call
conventions and defines a format for executable programs.
• The highest level of abstraction is represented by the application
programming interface (API), which interfaces applications to
libraries and/or the underlying operating system.
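A small Python illustration (added here; assumes a POSIX system) of these interface levels: the print library routine is an API call, while os.write descends through the ABI as a thin wrapper over the operating system's write system call.

# api_vs_syscall.py - the same output produced at two levels of the stack (POSIX).
import os

# API level: a library routine; buffering and formatting happen in user space.
print("hello via the API")

# System-call level: os.write is a thin wrapper around the OS 'write' call,
# crossing the ABI into the kernel (file descriptor 1 is standard output).
os.write(1, b"hello via a system call\n")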
Fig.3.4. Machine reference model: applications invoke libraries through the API (API calls); applications and libraries invoke the operating system through the ABI (system calls); the operating system drives the hardware through the ISA, with applications restricted to the User ISA.
Fig.3.5. Security rings and privileged modes: Ring 0 is the most privileged mode (supervisor mode), Rings 1 and 2 are intermediate privileged modes, and Ring 3 is the least privileged mode (user mode).
• For this purpose, the instruction set exposed by the hardware has been divided
into different security classes that define who can operate with them. The first
distinction can be made between privileged and nonprivileged instructions.
• Nonprivileged instructions are those instructions that can be used without
interfering with other tasks because they do not access shared resources. This
category contains, for example, all the floating, fixed-point, and arithmetic
instructions.
• Privileged instructions are those that are executed under specific restrictions and
are mostly used for sensitive operations, which expose (behavior-sensitive) or
modify (control-sensitive) the privileged state.
• For instance, a possible implementation features a hierarchy of privileges (see
Figure 3.5) in the form of ring-based security: Ring 0, Ring 1, Ring 2, and Ring 3;
Ring 0 is the most privileged level and Ring 3 the least privileged. Ring 0
is used by the kernel of the OS, Rings 1 and 2 are used by OS-level services,
and Ring 3 is used by the user. Recent systems support only two levels, with Ring
0 for supervisor mode and Ring 3 for user mode. The distinction between user
and supervisor mode allows us to understand the role of the hypervisor and why
it is called that. Conceptually, the hypervisor runs above the supervisor mode,
and from here the prefix hyper- is used. In reality, hypervisors run in
supervisor mode.
Hardware-level virtualization
• Hardware-level virtualization is a virtualization technique
that provides an abstract execution environment in terms
of computer hardware on top of which a guest operating
system can be run. In this model, the guest is represented
by the operating system, the host by the physical computer
hardware, the virtual machine by its emulation, and the
virtual machine manager by the hypervisor (see Figure 3.6).
• The hypervisor is generally a program or a combination of
software and hardware that allows the abstraction of the
underlying physical hardware.
• Hardware-level virtualization is also called system
virtualization, since it provides ISA to virtual machines,
which is the representation of the hardware interface of a
system. This is to differentiate it from process virtual
machines, which expose ABI to virtual machines.
Fig.3.6. Hardware virtualization reference model: the guest, stored as a virtual image and loaded into an in-memory representation, is executed by the virtual machine, which emulates the host through the VMM using techniques such as binary translation, instruction mapping, and interpretation.
Hypervisors
• A fundamental element of hardware virtualization is the hypervisor,
or virtual machine manager (VMM). It recreates a hardware
environment in which guest operating systems are installed. There
are two major types of hypervisor: Type I and Type II (see Figure 3.7).
• Type I hypervisors run directly on top of the hardware. Therefore,
they take the place of the operating systems and interact directly
with the ISA interface exposed by the underlying hardware, and they
emulate this interface in order to allow the management of guest
operating systems. This type of hypervisor is also called a native
virtual machine since it runs natively on hardware.
• Type II hypervisors require the support of an operating system to
provide virtualization services. This means that they are programs
managed by the operating system, which interact with it through the
ABI and emulate the ISA of virtual hardware for guest operating
systems. This type of hypervisor is also called a hosted virtual
machine since it is hosted within an operating system.
Fig.3.7. Hosted (left) and native (right) virtual machines: a hosted (Type II) virtual machine manager runs on top of an operating system, interacting with it through the ABI and exposing an ISA to its VMs; a native (Type I) virtual machine manager runs directly on the hardware ISA and exposes an ISA to its VMs.
• Conceptually, a virtual machine manager is internally
organized as described in Figure 3.8. Three main modules,
dispatcher, allocator, and interpreter, coordinate their
activity in order to emulate the underlying hardware.
• The dispatcher constitutes the entry point of the monitor
and reroutes the instructions issued by the virtual machine
instance to one of the two other modules.
• The allocator is responsible for deciding the system
resources to be provided to the VM: whenever a virtual
machine tries to execute an instruction that results in
changing the machine resources associated with that VM,
the allocator is invoked by the dispatcher.
• The interpreter module consists of interpreter routines.
These are executed whenever a virtual machine executes a
privileged instruction: a trap is triggered and the
corresponding routine is executed.
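The toy Python sketch below (purely illustrative; the instruction names are invented, not a real ISA) mirrors this organization: the dispatcher reroutes each instruction, calling the allocator when machine resources would change and an interpreter routine when a privileged instruction traps.

# toy_vmm.py - illustrative-only model of the dispatcher/allocator/interpreter split.
PRIVILEGED = {"HLT", "IO", "SET_TIMER"}            # trigger a trap; interpreted
RESOURCE_CHANGING = {"MAP_MEMORY", "ATTACH_DISK"}  # routed to the allocator

def allocator(instr, vm_state):
    # decide which system resources to hand to the VM
    vm_state.setdefault("resources", []).append(instr)
    return f"allocated for {instr}"

def interpreter(instr, vm_state):
    # routine executed when a privileged instruction triggers a trap
    return f"emulated privileged {instr}"

def dispatcher(instr, vm_state):
    # entry point of the monitor: reroute each instruction the VM issues
    if instr in RESOURCE_CHANGING:
        return allocator(instr, vm_state)
    if instr in PRIVILEGED:
        return interpreter(instr, vm_state)
    return f"executed {instr} directly"            # nonprivileged: run unmodified

state = {}
for instr in ["ADD", "MAP_MEMORY", "IO", "SUB"]:
    print(dispatcher(instr, state))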
Fig.3.8. Hypervisor reference architecture: instructions (ISA) issued by a virtual machine instance enter the dispatcher, which routes them either to the allocator or to the interpreter routines.
• The design and architecture of a virtual machine manager, together
with the underlying hardware design of the host machine, determine
the full realization of hardware virtualization, where a guest
operating system can be transparently executed on top of a VMM as
though it were run on the underlying hardware.
• The criteria that need to be met by a virtual machine manager to
efficiently support virtualization were established by Popek and
Goldberg in 1974. Three properties have to be satisfied:
1. Equivalence. A guest running under the control of a virtual machine
manager should exhibit the same behavior as when it is executed
directly on the physical host.
2. Resource control. The virtual machine manager should be in
complete control of virtualized resources.
3. Efficiency. A statistically dominant fraction of the machine
instructions should be executed without intervention from the virtual
machine manager.
• Popek and Goldberg provided a classification of
the instruction set and proposed three theorems
that define the properties that hardware
instructions need to satisfy in order to efficiently
support virtualization.
• THEOREM 3.1. For any conventional third-
generation computer, a VMM may be constructed
if the set of sensitive instructions for that
computer is a subset of the set of privileged
instructions.
• This theorem establishes that all the instructions
that change the configuration of the system
resources should generate a trap in user mode
and be executed under the control of the virtual
machine manager.
Fig.3.9. Virtualizable computer (left) and nonvirtualizable computer (right), characterized by the relationship among the sets of user, sensitive, and privileged instructions.
THEOREM 3.2. A conventional third-generation
computer is recursively virtualizable if:
• It is virtualizable and
• a VMM without any timing dependencies can be
constructed for it.
• Recursive virtualization is the ability to run a
virtual machine manager on top of another
virtual machine manager. This allows nesting
hypervisors as long as the capacity of the
underlying resources can accommodate that.
Virtualizable hardware is a prerequisite to
recursive virtualization.
THEOREM 3.3. A hybrid VMM may be constructed
for any conventional third-generation machine in
which the set of user-sensitive instructions is a
subset of the set of privileged instructions.
• There is another term, hybrid virtual machine
(HVM), which is less efficient than the virtual
machine system. In the case of an HVM, more
instructions are interpreted rather than being
executed directly. All instructions in virtual
supervisor mode are interpreted. Whenever there
is an attempt to execute a behavior-sensitive or
control-sensitive instruction, the HVM controls the
execution directly or gains control via a trap.
Here all sensitive instructions are caught by the
HVM and simulated.
Hardware virtualization techniques
Hardware-assisted virtualization.
• This term refers to a scenario in which the hardware
provides architectural support for building a virtual
machine manager able to run a guest operating system in
complete isolation.
• This technique was originally introduced in the IBM
System/370. At present, examples of hardware-assisted
virtualization are the extensions to the x86-64 bit
architecture introduced with Intel VT (formerly known as
Vanderpool) and AMD-V (formerly known as Pacifica).
• Intel and AMD introduced processor extensions, and a wide
range of virtualization solutions took advantage of them:
Kernel-based Virtual Machine (KVM), VirtualBox, Xen,
VMware, Hyper-V, Sun xVM, Parallels, and others.
• Full virtualization. Full virtualization refers to the ability to run a program, most
likely an operating system, directly on top of a virtual machine and without any
modification, as though it were run on the raw hardware. To make this possible,
virtual machine managers are required to provide a complete emulation of the
entire underlying hardware. The principal advantage of full virtualization is
complete isolation, which leads to enhanced security, ease of emulation of
different architectures, and coexistence of different systems on the same platform.
• Paravirtualization. This is a non-transparent virtualization solution that allows
implementing thin virtual machine managers. Paravirtualization techniques expose
a software interface to the virtual machine that is slightly modified from the host
and, as a consequence, guests need to be modified. The aim of paravirtualization is
to provide the capability to demand the execution of performance critical
operations directly on the host, thus preventing performance losses that would
otherwise be experienced in managed execution.
• Partial virtualization. Partial virtualization provides a partial emulation of the
underlying hardware, thus not allowing the complete execution of the guest
operating system in complete isolation. Partial virtualization allows many
applications to run transparently, but not all the features of the
operating system can be supported.
Operating system-level virtualization
• Operating system-level virtualization offers the opportunity to create
different and separated execution environments for applications that are
managed concurrently.
• Unlike hardware virtualization, there is no virtual machine manager or hypervisor: the virtualization is done within a single operating system, where the OS kernel allows for multiple isolated user-space instances.
• The kernel is also responsible for sharing the system resources among
instances and for limiting the impact of instances on each other.
• This virtualization technique can be considered an evolution of the chroot mechanism in Unix systems (see the sketch after this list). The chroot operation changes the file system root directory for a process and its children to a specific directory. As a result, the process and its children cannot access portions of the file system other than those accessible under the new root directory.
• Examples of operating system-level virtualization are FreeBSD Jails, IBM Logical Partitions (LPAR), Solaris Zones and Containers, Parallels Virtuozzo Containers, OpenVZ, iCore Virtual Accounts, and Free Virtual Private Server (FreeVPS).
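As a minimal sketch of the chroot mechanism just described (Unix-only, must be run as root; the jail path is illustrative):

```python
import os

# Minimal sketch of the chroot mechanism (Unix-only, must run as root).
# After chroot, '/' for this process and its children is the given
# directory; files outside it become unreachable.
jail = "/srv/jail"   # illustrative path; must contain any files the process needs

os.chroot(jail)      # change the file system root for this process
os.chdir("/")        # move inside the new root

# From here on, open("/etc/passwd") resolves to /srv/jail/etc/passwd
# on the real file system.
print(os.listdir("/"))
```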
Programming language-level virtualization
• Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems.
• It consists of a virtual machine executing the byte code of a program, which is the result
of the compilation process. Compilers implemented and used this technology to
produce a binary format representing the machine code for an abstract architecture.
• Programming language-level virtualization has a long trail in computer science history
and originally was used in 1966 for the implementation of Basic Combined
Programming Language (BCPL), a language for writing compilers and one of the
ancestors of the C programming language.
• The ability to support multiple programming languages has been one of the key
elements of the Common Language Infrastructure (CLI), which is the specification
behind .NET Framework.
• Currently, the Java platform and .NET Framework represent the most popular technologies for enterprise application development. Both Java and the CLI are stack-based virtual machines.
• The main advantage of programming-level virtual machines, also called process virtual
machines, is the ability to provide a uniform execution environment across different
platforms.
• Process virtual machines allow for more control over the execution of programs, since they do not provide direct access to memory. Security is another advantage of managed programming languages: by filtering the I/O operations, the process virtual machine can easily support sandboxing of applications.
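The byte-code execution model described above can be illustrated with a toy stack-based process virtual machine (the opcodes are invented for illustration and are not real JVM or CLI byte code):

```python
# Toy stack-based process virtual machine (opcodes are illustrative,
# not real JVM/CLI byte code).
def run(program):
    stack = []
    for op, *args in program:
        if op == "push":          # push a constant onto the operand stack
            stack.append(args[0])
        elif op == "add":         # pop two operands, push their sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "mul":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "print":       # an "I/O" opcode the VM can filter (sandboxing)
            print(stack.pop())

# (2 + 3) * 4 compiled to byte code for the abstract stack machine
run([("push", 2), ("push", 3), ("add",), ("push", 4), ("mul",), ("print",)])
```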
Application-level virtualization
• Application-level virtualization is a technique allowing applications
to be run in runtime environments that do not natively support all
the features required by such applications.
• In this scenario, applications are not installed in the expected
runtime environment but are run as though they were.
• Emulation can also be used to execute program binaries compiled
for different hardware architectures. In this case, one of the
following strategies can be implemented:
a. Interpretation. In this technique every source instruction is interpreted by an emulator that executes native ISA instructions, leading to poor performance (a sketch of this strategy follows this list). Interpretation has a minimal startup cost but a huge overhead, since each instruction is emulated.
b. Binary translation. In this technique every source instruction is
converted to native instructions with equivalent functions. After a
block of instructions is translated, it is cached and reused. Binary
translation has a large initial overhead cost, but over time it is
subject to better performance, since previously translated
instruction blocks are directly executed.
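A minimal sketch of strategy (a), pure interpretation, with an invented guest instruction set; note how every instruction goes through the dispatch loop on every execution, which is the source of the overhead (a cached-translation sketch for strategy (b) appears later, in the VMware discussion):

```python
# Sketch of pure interpretation: every guest instruction is dispatched
# through the loop each time it executes. The guest "ISA" and register
# file are invented for illustration.
def interpret(guest_code, regs):
    pc = 0
    while pc < len(guest_code):
        op, *args = guest_code[pc]
        if op == "mov":          # mov reg, imm
            regs[args[0]] = args[1]
        elif op == "add":        # add dst, src
            regs[args[0]] += regs[args[1]]
        elif op == "jnz":        # jump to target if register is non-zero
            if regs[args[0]] != 0:
                pc = args[1]
                continue
        pc += 1
    return regs

print(interpret([("mov", "r0", 5), ("mov", "r1", 1), ("add", "r0", "r1")], {}))
# {'r0': 6, 'r1': 1}
```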
Other types of virtualization
Besides execution virtualization, other types of virtualization provide an abstract environment to interact with. These mainly cover storage, networking, and client/server interaction.
1 Storage virtualization
Storage virtualization is a system administration practice that allows decoupling the physical organization of the hardware from its logical representation. Using this technique, users do not have to worry about the specific location of their data, which can be identified using a logical path.
Storage virtualization allows us to harness a wide range of storage facilities
and represent them under a single logical file system. There are different
techniques for storage virtualization, one of the most popular being network-
based virtualization by means of storage area networks (SANs).
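The decoupling of logical paths from physical locations can be pictured as a mapping layer. A minimal sketch, with invented volume and device names, resolving a logical path to whichever physical facility currently holds the data:

```python
# Toy storage virtualization layer: users address one logical file
# system; the mapping to physical facilities (names are illustrative)
# can change without affecting the logical path.
mapping = {
    "/vol/projects": ("san-array-1", "lun-07"),
    "/vol/home":     ("nas-server-2", "export-home"),
}

def resolve(logical_path):
    # Longest-prefix match of the logical path against the virtualized volumes.
    for prefix, location in sorted(mapping.items(), key=lambda kv: len(kv[0]), reverse=True):
        if logical_path.startswith(prefix):
            return location, logical_path[len(prefix):]
    raise FileNotFoundError(logical_path)

print(resolve("/vol/projects/report.txt"))
# (('san-array-1', 'lun-07'), '/report.txt')
```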
2 Network virtualization
Network virtualization combines hardware appliances and specific software
for the creation and management of a virtual network. Network virtualization
can aggregate different physical networks into a single logical network
(external network virtualization) or provide network-like functionality to an
operating system partition (internal network virtualization). The result of
external network virtualization is generally a virtual LAN (VLAN).
3 Desktop virtualization
Desktop virtualization abstracts the desktop environment available
on a personal computer in order to provide access to it using a
client/server approach. Desktop virtualization provides the same outcome as hardware virtualization but serves a different purpose.
Similarly to hardware virtualization, desktop virtualization makes
accessible a different system as though it were natively installed on
the host, but this system is remotely stored on a different host and
accessed through a network connection. Moreover, desktop virtualization addresses the problem of making the same desktop environment accessible from everywhere.
4 Application server virtualization
Application server virtualization abstracts a collection of application
servers that provide the same services as a single virtual application
server by using load-balancing strategies and providing a high-
availability infrastructure for the services hosted in the application
server. This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment.
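The load-balancing idea behind application server virtualization can be sketched as a single virtual front end distributing requests over a pool of real servers (server names and the availability check are illustrative):

```python
import itertools

# Toy virtual application server: one logical endpoint, many real servers.
# Round-robin load balancing with a trivial availability check.
class VirtualAppServer:
    def __init__(self, servers):
        self.servers = servers                 # pool of real application servers
        self.rr = itertools.cycle(servers)

    def handle(self, request, is_up):
        # Skip failed servers: this is what provides high availability.
        for _ in range(len(self.servers)):
            server = next(self.rr)
            if is_up(server):
                return f"{server} handled {request}"
        raise RuntimeError("no application server available")

vas = VirtualAppServer(["app-1", "app-2", "app-3"])
print(vas.handle("GET /orders", is_up=lambda s: s != "app-2"))
```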
Virtualization and cloud computing
• Virtualization plays an important role in cloud computing
since it allows for the appropriate degree of customization,
security, isolation, and manageability that are fundamental
for delivering IT services on demand.
• Particularly important is the role of virtual computing
environment and execution virtualization techniques.
Among these, hardware and programming language
virtualization are the techniques adopted in cloud
computing systems.
• Besides being an enabler for computation on demand,
virtualization also gives the opportunity to design more
efficient computing systems by means of consolidation,
which is performed transparently to cloud computing
service users.
Fig.3.10. Live Migration and Server Consolidation (before migration: VMs run on both Server A and Server B, with both servers active and managed by the Virtual Machine Manager; after migration: the VMs are consolidated on Server A and Server B becomes inactive)
• Since virtualization allows us to create isolated and controllable
environments, it is possible to serve these environments with
the same resource without them interfering with each other.
• This opportunity is particularly attractive when resources are underutilized, because it allows reducing the number of active resources by aggregating virtual machines over a smaller number of resources that become fully utilized. This practice is also known as server consolidation (a packing sketch follows this list), while the movement of virtual machine instances is called virtual machine migration (see Figure 3.10).
• Because virtual machine instances are controllable environments, consolidation can be applied with minimal impact, either by temporarily stopping a virtual machine's execution and moving its data to the new resources or by performing finer control and moving the instance while it is running.
• This second technique is known as live migration; it is in general more complex to implement but more efficient, since there is no disruption of the activity of the virtual machine instance.
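A consolidation decision amounts to packing VM loads onto as few hosts as possible. A first-fit-decreasing sketch, with illustrative CPU demands expressed as fractions of one host (real placement would also weigh memory, affinity, and migration cost):

```python
# Toy server consolidation: pack VM CPU demands onto the fewest hosts
# using first-fit decreasing. Numbers are illustrative fractions of a host.
def consolidate(vm_loads, host_capacity=1.0):
    hosts = []                                # each entry is a host's current total load
    placement = {}
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, used in enumerate(hosts):
            if used + load <= host_capacity:  # first host with enough headroom
                hosts[i] += load
                placement[vm] = i
                break
        else:                                 # no host fits: activate a new one
            hosts.append(load)
            placement[vm] = len(hosts) - 1
    return placement, hosts

placement, hosts = consolidate({"vm1": 0.6, "vm2": 0.3, "vm3": 0.5, "vm4": 0.2})
print(placement)  # {'vm1': 0, 'vm3': 1, 'vm2': 0, 'vm4': 1}: two active hosts, not four
print(hosts)      # [0.9, 0.7]
```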
Pros and cons of virtualization
• Virtualization has now become extremely popular and widely used, especially in cloud computing. Today, the widespread diffusion of Internet connectivity and the advancements in computing technology have made virtualization an interesting opportunity to deliver on-demand IT infrastructure and services.
Advantages of virtualization
1. Managed execution and isolation are perhaps the most important
advantages of virtualization. In the case of techniques supporting the creation
of virtualized execution environments, these two characteristics allow building
secure and controllable computing environments.
2. Portability is another advantage of virtualization, especially for execution virtualization techniques. Virtual machine instances are normally represented by one or more files, which can be transported far more easily than physical systems.
3. Portability and self-containment also contribute to reducing the costs of
maintenance, since the number of hosts is expected to be lower than the
number of virtual machine instances. Since the guest program is executed in a
virtual environment, there is very limited opportunity for the guest program to
damage the underlying hardware.
4. Finally, by means of virtualization it is possible to achieve a more efficient
use of resources. Multiple systems can securely coexist and share the
resources of the underlying host, without interfering with each other.
The other side of the coin: disadvantages
1 Performance degradation
Performance is definitely one of the major concerns in using virtualization technology. Since
virtualization interposes an abstraction layer between the guest and the host, the guest can experience
increased latencies (delays). For instance, in the case of hardware virtualization, where the
intermediate emulates a bare machine on top of which an entire system can be installed, the causes of
performance degradation can be traced back to the overhead introduced by the following activities:
• Maintaining the status of virtual processors
• Support of privileged instructions (trap and simulate privileged instructions)
• Support of paging within VM
• Console functions
2 Inefficiency and degraded user experience
Virtualization can sometimes lead to an inefficient use of the host. In particular, some of the specific features of the host cannot be exposed by the abstraction layer and thus become inaccessible. In the case of hardware virtualization, this could happen with device drivers: the virtual machine can sometimes provide only a default graphic card that maps a subset of the features available in the host. In the case of programming-level virtual machines, some of the features of the underlying operating systems may become inaccessible unless specific libraries are used.
3 Security holes and new threats
Virtualization opens the door to a new and unexpected form of phishing. The capability of emulating a
host in a completely transparent manner led the way to malicious programs that are designed to extract
sensitive information from the guest. The same considerations can be made for programming-level virtual machines: modified versions of the runtime environment can access sensitive information or monitor the memory locations utilized by guest applications while these are executed.
Technology Examples: Xen: Paravirtualization
• Xen is an open source initiative implementing a virtualization platform based on
paravirtualization. Initially developed by a group of researchers at the University
of Cambridge in the United Kingdom, Xen now has a large open-source
community backing it. Citrix also offers it as a commercial solution, XenSource.
• Xen-based technology is used for either desktop virtualization or server virtualization, and recently it has also been used to provide cloud computing solutions by means of the Xen Cloud Platform (XCP). At the basis of all these solutions is the Xen Hypervisor, which constitutes the core technology of Xen. Recently Xen has been extended to support full virtualization using hardware-assisted virtualization.
• Xen is the most popular implementation of paravirtualization, which, in contrast
with full virtualization, allows high performance execution of guest operating
systems. This is made possible by eliminating the performance loss while
executing instructions that require special management. This is done by
modifying portions of the guest operating systems run by Xen with reference to
the execution of such instructions. Therefore it is not a transparent solution for
implementing virtualization. This is particularly true for x86, which is the most
popular architecture on commodity machines and servers.
Fig 3.11 Xen Architecture and Guest OS Management (the Xen Hypervisor (VMM) runs in Ring 0 on x86 hardware and handles memory management, CPU state registers, and device I/O; the Management Domain (Domain 0) provides VM management, an HTTP interface, and access to the Xen Hypervisor; User Domains (Domain U) run guest OSes with modified codebases that issue hypercalls into the Xen VMM; user applications run in Ring 3 with an unmodified ABI; privileged instructions cause hardware traps)
• Figure 3.11 describes the architecture of Xen and its mapping onto a classic x86 privilege model. A Xen-based system is managed by the Xen hypervisor, which runs in the highest privileged mode and controls the access of guest operating systems to the underlying hardware.
• Guest operating systems are executed within domains, which represent virtual machine instances. Moreover, specific control software, which has privileged access to the host and controls all the other guest operating systems, is executed in a special domain called Domain 0.
• This is the first domain that is loaded once the virtual machine manager has completely booted, and it hosts a Hypertext Transfer Protocol (HTTP) server that serves requests for virtual machine creation, configuration, and termination. This component constitutes the embryonic version of a distributed virtual machine manager, which is an essential component of cloud computing systems providing Infrastructure-as-a-Service (IaaS) solutions.
• Many of the x86 implementations support four different security levels, called rings, where Ring 0 represents the level with the highest privileges and Ring 3 the level with the lowest ones.
• Almost all the most popular operating systems, except OS/2, utilize only two levels: Ring 0 for the kernel code and Ring 3 for user applications and nonprivileged OS code. This provides the opportunity for Xen to implement virtualization by executing the hypervisor in Ring 0; Domain 0 and all the other domains running guest operating systems, generally referred to as Domain U, in Ring 1; and the user applications in Ring 3. This allows Xen to maintain the ABI unchanged, thus allowing an easy switch to Xen-virtualized solutions from an application point of view.
• Because of the structure of the x86 instruction set, some instructions allow code executing in Ring 3 to jump into Ring 0 (kernel mode). Such an operation is performed at the hardware level and therefore, within a virtualized environment, will result in a trap or silent fault, thus preventing the normal operation of the guest operating system, since this is now running in Ring 1. This condition is generally triggered by a subset of the system calls. To avoid this situation, operating systems need to be changed in their implementation, and the sensitive system calls need to be reimplemented with hypercalls (a toy model of this substitution follows).
• Paravirtualization needs the operating system codebase to be modified, and hence not all operating systems can be used as guests in a Xen-based environment.
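The substitution of sensitive operations with hypercalls can be modeled abstractly. In the toy sketch below, all names are invented, and a Python method call stands in for what is really a trap across a hardware privilege boundary:

```python
# Toy model of paravirtualization: the guest kernel's sensitive operations
# are rewritten as explicit hypercalls. All names are illustrative; a real
# hypercall is a trap into Ring 0, not a function call.
class Hypervisor:
    def hypercall(self, op, *args):
        # The hypervisor validates and performs the operation on behalf
        # of the guest, which runs without Ring 0 privileges.
        if op == "update_page_table":
            return f"hypervisor applied page-table update {args}"
        raise ValueError(f"unknown hypercall {op}")

class ParavirtualizedKernel:
    def __init__(self, hv):
        self.hv = hv

    def set_page_table_entry(self, vaddr, frame):
        # Modified codebase: instead of writing the page table directly
        # (a sensitive operation that would fault in Ring 1), ask the hypervisor.
        return self.hv.hypercall("update_page_table", vaddr, frame)

kernel = ParavirtualizedKernel(Hypervisor())
print(kernel.set_page_table_entry(0x7F0000, 42))
```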
VMware: Full Virtualization
• VMware's technology is based on the concept of full virtualization, where the underlying hardware is replicated and made available to the guest operating system, which runs unaware of such abstraction layers and does not need to be modified.
• VMware implements full virtualization either in the desktop environment, by means of Type II hypervisors, or in the server environment, by means of Type I hypervisors.
• In both cases, full virtualization is made possible by means of direct execution (for nonsensitive instructions) and binary translation (for sensitive instructions), thus allowing the virtualization of architectures such as x86.
• Besides these two core solutions, VMware provides additional tools and software that simplify the use of virtualization technology either in a desktop environment, with tools enhancing the integration of virtual guests with the host, or in a server environment, with solutions for building and managing virtual computing infrastructures.
Fig.3.12. Full Virtualization Reference Model (an unmodified, VMM-unaware guest operating system runs in Ring 1 over the hypervisor; sensitive instructions cause hardware traps and are handled by the hypervisor through dynamic/cached binary translation and instruction caching; user applications run in Ring 3 with an unmodified ABI on x86 hardware)
• VMware is well known for its capability to virtualize x86 architectures, on which guest operating systems run unmodified on top of its hypervisors. With the new generation of hardware architectures and the introduction of hardware-assisted virtualization (Intel VT-x and AMD-V) in 2006, full virtualization is made possible with hardware support, but before that date, the use of dynamic binary translation was the only solution that allowed running x86 guest operating systems unmodified in a virtualized environment.
• As discussed before, x86 architecture design does not satisfy the first
theorem of virtualization, since the set of sensitive instructions is not a
subset of the privileged instructions.
• This causes a different behavior when such instructions are not executed in Ring 0, which is the normal case in a virtualization scenario where the guest OS is run in Ring 1. Generally, a trap is generated, and the way it is managed differentiates the solutions in which virtualization is implemented for x86 hardware.
• In the case of dynamic binary translation, the trap triggers the translation of the offending instructions into an equivalent set of instructions that achieves the same goal without generating exceptions. Moreover, to improve performance, the equivalent set of instructions is cached so that translation is no longer necessary for further occurrences of the same instructions. Figure 3.12 gives an idea of the process.
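The trap-translate-cache cycle can be sketched as a memo table keyed by the address of the offending guest code block; the translate step here is a stub, whereas a real translator rewrites machine code:

```python
# Sketch of dynamic binary translation with caching. The "translate"
# step is a stub; a real translator rewrites sensitive machine code
# into an equivalent, safe instruction sequence.
translation_cache = {}

def translate(block_addr, guest_block):
    # Expensive step, performed once per block of guest code.
    return [f"safe({insn})" for insn in guest_block]

def execute_block(block_addr, guest_block):
    if block_addr not in translation_cache:      # first trap on this block
        translation_cache[block_addr] = translate(block_addr, guest_block)
    for insn in translation_cache[block_addr]:   # later runs: direct reuse
        pass  # dispatch the already-translated instruction
    return translation_cache[block_addr]

print(execute_block(0x1000, ["cli", "mov cr3, eax"]))
print(execute_block(0x1000, ["cli", "mov cr3, eax"]))  # served from the cache
```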
• This approach has both advantages and disadvantages. The
major advantage is that guests can run unmodified in a
virtualized environment, which is a crucial feature for
operating systems for which source code is not available.
• This is the case, for example, of operating systems in the
Windows family. Binary translation is a more portable
solution for full virtualization.
• On the other hand, translating instructions at runtime
introduces an additional overhead that is not present in
other approaches (paravirtualization or hardware-assisted
virtualization).
• Even though this disadvantage exists, binary translation is applied to only a subset of the instruction set, whereas the others are managed through direct execution on the underlying hardware. This somewhat reduces the performance impact of binary translation.
Virtualization Solutions: End-User (Desktop) Virtualization
Fig. 3.13 VMware Workstation Architecture (the host operating system and the VMware hypervisor (VMM) run on x86 hardware, with the VMM having direct access to the hardware, handling I/O, memory, and networking for guests, and saving/restoring CPU state for the host OS; a VMware driver in the host mediates I/O; virtual machine instances run a guest operating system and user applications alongside VMware Workstation)
• Vmware is a pioneer in virtualization technology and offers a collection of
virtualization solutions covering the entire range of the market, from desktop
computing to enterprise computing and infrastructure virtualization.
• End-user (desktop) virtualization. VMware supports virtualization of operating system environments and single applications on end-user computers. The first option is the most popular and allows installing a different operating system and applications in an environment completely isolated from the hosting operating system.
• Specific VMware software (VMware Workstation for Windows operating systems and VMware Fusion for Mac OS X environments) is installed in the host operating system to create virtual machines and manage their execution. Besides the creation of an isolated computing environment, the two products allow a guest operating system to leverage the resources of the host machine (USB devices, folder sharing, and integration with the graphical user interface (GUI) of the host operating system). Figure 3.13 provides an overview of the architecture of these systems. The virtualization environment is created by an application installed in the host operating system, which provides the guest operating systems with full virtualization of the underlying hardware.
• This is done by installing a specific driver in the host operating system that provides two main services: it deploys a virtual machine manager that can run in privileged mode, and it provides hooks for the VMware application to process specific I/O requests, eventually relaying such requests to the host operating system via system calls.
• Using this architecture, also called the Hosted Virtual Machine Architecture, it is possible both to isolate virtual machine instances within the memory space of a single application and to provide reasonable performance, since the intervention of the VMware application is required only for instructions, such as device I/O, that require binary translation. Instructions that can be directly executed are managed by the virtual machine manager, which takes control of the CPU and the MMU and alternates its activity with the host OS.
• Virtual machine images are saved in a collection of files on the host file system, and both VMware Workstation and VMware Fusion allow users to create new images, pause their execution, create snapshots, and undo operations by rolling back to a previous state of the virtual machine. Other solutions related to the virtualization of end-user computing environments include VMware Player, VMware ACE, and VMware ThinApp. VMware Player is a reduced version of VMware Workstation that allows creating and playing virtual machines in a Windows or Linux operating environment.
• VMware ACE, a product similar to VMware Workstation, creates policy-wrapped virtual machines for deploying secure corporate virtual environments on end-user computers. VMware ThinApp is a solution for application virtualization. It provides an isolated environment for applications in order to avoid conflicts due to versioning and incompatible applications. It detects all the changes made to the operating environment by the installation of a specific application and stores them together with the application binary in a package that can be run with VMware ThinApp.
Virtualization Solutions: Server Virtualization
Fig.3.14. VMware GSX Server Architecture (the host operating system and the VMware hypervisor (VMM) run on x86 hardware; a serverd daemon and a Web server front end manage the VMware application processes, which connect to the VM instances through the VMware driver installed in the host operating system)
Server virtualization
• VMware has provided solutions for server virtualization with different approaches over time. Initial support for server virtualization was provided by VMware GSX Server, which replicates the approach used for end-user computers and introduces remote management and scripting capabilities. The architecture of VMware GSX Server is depicted in Figure 3.14.
• The architecture is mostly designed to serve the virtualization of Web servers. A daemon process, called serverd, controls and manages VMware application processes. These applications are then connected to the virtual machine instances by means of the VMware driver installed on the host operating system.
• Virtual machine instances are managed by the VMM as described previously. User requests for virtual machine management and provisioning are routed from the Web server through the VMM by means of serverd. VMware ESX Server and its enhanced version, VMware ESXi Server, are examples of the hypervisor-based approach. Both can be installed on bare-metal servers and provide services for virtual machine management.
• The two solutions provide the same services but differ in their internal architecture, more specifically in the organization of the hypervisor kernel. VMware ESX embeds a modified version of a Linux operating system, which provides access to the hypervisor through a service console. VMware ESXi implements a very thin OS layer and replaces the service console with interfaces and services for remote management, thus considerably reducing the hypervisor code size and memory footprint.
Fig.3.15. VMware ESXi Server Architecture (the VMkernel sits on the hardware and provides resource scheduling, device drivers, storage and network stacks, a distributed VM file system, and a virtual Ethernet adapter and switch; management agents such as hostd, vpxa, syslog, SNMP, the DCUI, and the CIM broker with third-party CIM plug-ins run on the User world API; each VM is served by a VMM and a VMX process)
• The architecture of VMware ESXi is displayed in Figure 3.15. The base of the infrastructure is the VMkernel, a thin, Portable Operating System Interface (POSIX)-compliant operating system that provides the minimal functionality for process and thread management, file system, I/O stacks, and resource scheduling.
• The kernel is accessible through specific APIs called User
world API. These APIs are utilized by all the agents that
provide supporting activities for the management of virtual
machines.
• Remote management of an ESXi server is provided by the CIM Broker, a system agent that acts as a gateway to the VMkernel for clients using the Common Information Model (CIM) protocol. The ESXi installation can also be managed locally through the Direct Console User Interface (DCUI), which provides a BIOS-like interface for the management of local users.
Infrastructure virtualization and cloud computing solutions
Fig.3.16. VMware Cloud Solution Stack (servers running ESX/ESXi with vSphere form the virtualized data centers, each managed by vCenter; vCloud turns the data centers into cloud infrastructure virtualization; vFabric provides platform virtualization; Zimbra provides application virtualization at the top of the stack)
• VMware provides a set of products covering the entire
stack of cloud computing, from infrastructure management
to Software-as-a-Service solutions hosted in the cloud.
• Figure 3.16 gives an overview of the different solutions
offered and how they relate to each other. ESX and ESXi
constitute the building blocks of the solution for virtual
infrastructure management:
• A pool of virtualized servers is tied together and remotely
managed as a whole by VMware vSphere. As a
virtualization platform it provides a set of basic services
besides virtual compute services:
• Virtual file system, virtual storage, and virtual network
constitute the core of the infrastructure; application
services, such as virtual machine migration, storage
migration, data recovery, and security zones, complete the
services offered by vSphere.
• The management of the infrastructure is operated by VMware vCenter,
which provides centralized administration and management of vSphere
installations in a data center environment.
• A collection of virtualized data centers is turned into an Infrastructure-as-a-Service cloud by VMware vCloud, which allows service providers to make virtual computing environments available to end users on demand on a pay-per-use basis.
• A Web portal provides access to the provisioning services of vCloud, and end users can self-provision virtual machines by choosing from available templates and setting up virtual networks among virtual instances.
• VMware also provides a solution for application development in the cloud
with VMware vFabric, which is a set of components that facilitate the
development of scalable Web applications on top of a virtualized
infrastructure.
• vFabric is a collection of components for application monitoring, scalable
data management, and scalable execution and provisioning of Java Web
applications.
• Finally, at the top of the cloud computing stack, VMware provides Zimbra, a solution for office automation, messaging, and collaboration that is completely hosted in the cloud and accessible from anywhere.
• This is a SaaS solution that integrates various features into a single software platform providing email and collaboration management.
VMware: observations
• Initially starting with a solution for fully virtualized x86 hardware, VMware has grown over time and now provides a complete offering for virtualizing hardware, infrastructure, applications, and services, thus covering every segment of the cloud computing market.
• Even though full x86 virtualization is the core technology of VMware, over time paravirtualization features have been integrated into some of the solutions offered by the vendor, especially after the introduction of hardware-assisted virtualization. Examples include the implementation of some device emulations and the VMware Tools suite, which allows enhanced integration with the guest and the host operating environments.
• Also, VMware has strongly contributed to the development and standardization of a vendor-independent Virtual Machine Interface (VMI), which allows for a general and host-agnostic approach to paravirtualization.
Microsoft Hyper-V
• Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As the name suggests, it uses a hypervisor-based approach to hardware virtualization, which leverages several techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component of Windows Server 2008 R2, which installs the hypervisor as a role within the server.
Architecture
• Hyper-V supports multiple and concurrent execution of guest operating systems by means of partitions. A partition is a completely isolated environment in which an operating system is installed and run. Figure 3.17 provides an overview of the architecture of Hyper-V. Despite its straightforward installation as a component of the host operating system, Hyper-V takes control of the hardware, and the host operating system becomes a virtual machine instance with special privileges, called the parent partition. The parent partition (also called the root partition) is the only one that has direct access to the hardware. It runs the virtualization stack, hosts all the drivers required to configure guest operating systems, and creates child partitions through the hypervisor. Child partitions are used to host guest operating systems and do not have access to the underlying hardware; their interaction with it is controlled by either the parent partition or the hypervisor itself.
Fig 3.17. Microsoft Hyper-V Architecture (the hypervisor runs in Ring -1 on x86 hardware and provides hypercalls, MSRs, the APIC, the scheduler, and address and partition management; the root/parent partition runs a hypervisor-aware kernel in Ring 0 hosting the VMMS, WMI, VMWPs, VSPs, the VID, WinHv, drivers, an I/O stack, and the VMBus; enlightened child partitions run hypervisor-aware Windows or Linux kernels with VSCs/ICs, WinHv or LinuxHv, and the VMBus; an unenlightened child partition runs a hypervisor-unaware kernel; user applications run in Ring 3)
Hypervisor
The hypervisor is the component that directly manages the underlying hardware (processors and memory). It is logically defined by the following components:
• Hypercalls interface. This is the entry point for all the partitions for the execution of sensitive instructions. It is an implementation of the paravirtualization approach already discussed with Xen. The interface is used by drivers in the partitioned operating system to contact the hypervisor using the standard Windows calling convention. The parent partition also uses this interface to create child partitions.
• Memory service routines (MSRs). These are the set of functionalities that control the memory and its access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses the Input/Output Memory Management Unit (I/O MMU or IOMMU) to fast-track access to devices from partitions by translating virtual memory addresses.
• Advanced programmable interrupt controller (APIC). This component represents the interrupt controller, which manages the signals coming from the underlying hardware when some event occurs (timer expired, I/O ready, exceptions and traps). Each virtual processor is equipped with a synthetic interrupt controller (SynIC), which constitutes an extension of the local APIC. The hypervisor is responsible for dispatching, when appropriate, the physical interrupts to the synthetic interrupt controllers.
• Scheduler. This component schedules the virtual processors to run
on available physical processors. The scheduling is controlled by
policies that are set by the parent partition.
• Address manager. This component is used to manage the virtual
network addresses that are allocated to each guest operating
system.
• Partition manager. This component is in charge of performing
partition creation, finalization, destruction, enumeration, and
configurations. Its services are available through the hypercalls
interface API previously discussed.
• The hypervisor runs in Ring -1 and therefore requires corresponding hardware technology that enables such a condition. By executing in this highly privileged mode, the hypervisor can support legacy operating systems that have been designed for x86 hardware.
• Operating systems of newer generations can take advantage of the
new specific architecture of Hyper-V especially for the I/O
operations performed by child partitions.
Enlightened I/O and synthetic devices
• Enlightened I/O provides an optimized way to perform I/O operations, allowing guest operating systems to leverage an interpartition communication channel rather than traversing the hardware emulation stack provided by the hypervisor.
• This option is only available to guest operating systems that are hypervisor aware. Enlightened I/O leverages VMBus, an interpartition communication channel that is used to exchange data between partitions (child and parent) and is utilized mostly for the implementation of virtual device drivers for guest operating systems.
• The architecture of Enlightened I/O is described in Figure 3.17. There are three fundamental components: VMBus, Virtual Service Providers (VSPs), and Virtual Service Clients (VSCs); a toy model of their interaction follows this list. VMBus implements the channel and defines the protocol for communication between partitions. VSPs are kernel-level drivers that are deployed in the parent partition and provide access to the corresponding hardware devices. These interact with VSCs, which represent the virtual device drivers (also called synthetic drivers) seen by the guest operating systems in the child partitions.
• Operating systems supported by Hyper-V utilize this preferred communication channel to perform I/O for storage, networking, graphics, and input subsystems. This also results in enhanced performance in child-to-child I/O as a result of virtual networks between guest operating systems. Legacy operating systems, which are not hypervisor aware, can still be run by Hyper-V but rely on device driver emulation, which is managed by the hypervisor and is less efficient.
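The toy model below pictures the VSP/VSC pairing over a VMBus-like channel (class and message names are invented): a synthetic driver in the child partition forwards an I/O request over the channel, and the provider in the parent partition performs it on the real device:

```python
from collections import deque

# Toy model of Enlightened I/O: a VSC (synthetic driver in the child
# partition) sends requests over a VMBus-like channel to the VSP
# (real driver in the parent partition). Names are illustrative.
class VMBusChannel:
    def __init__(self):
        self.ring = deque()        # stands in for the shared ring buffer

    def send(self, msg):
        self.ring.append(msg)

    def receive(self):
        return self.ring.popleft()

class StorageVSP:                  # parent partition, owns the device
    def handle(self, msg):
        return f"parent partition performed {msg['op']} on block {msg['block']}"

channel = VMBusChannel()
vsp = StorageVSP()

# Child-partition VSC issues a read without touching hardware emulation.
channel.send({"op": "read", "block": 1024})
print(vsp.handle(channel.receive()))
```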
Parent partition
• The parent partition executes the host operating system and implements the virtualization stack that complements the activity of the hypervisor in running guest operating systems. This partition always hosts an instance of Windows Server 2008 R2, which manages the virtualization stack made available to the child partitions.
• This partition is the only one that directly accesses device drivers, and it mediates access to them by child partitions by hosting the VSPs. The parent partition is also the one that manages the creation, execution, and destruction of child partitions. It does so by means of the Virtualization Infrastructure Driver (VID), which controls access to the hypervisor and allows the management of virtual processors and memory.
• For each child partition created, a Virtual Machine Worker Process (VMWP) is
instantiated in the parent partition, which manages the child partitions by interacting
with the hypervisor through the VID.
• Virtual Machine Management services are also accessible remotely through a WMI provider that allows remote hosts to access the VID.
Child partitions
• Child partitions are used to execute guest operating systems. These are isolated environments that allow secure and controlled execution of guests. Two types of child partition exist; they differ in whether the guest operating system is supported by Hyper-V or not. These are called Enlightened and Unenlightened partitions, respectively. The first ones can benefit from Enlightened I/O; the others are executed by leveraging hardware emulation from the hypervisor.
Hyper-V: Cloud computing and infrastructure management
• Hyper-V constitutes the basic building block of Microsoft virtualization
infrastructure. Other components contribute to creating a fully featured
platform for server virtualization. To increase the performance of virtualized
environments, a new version of Windows Server 2008, called Windows Server
Core, has been released.
• This is a specific version of the operating system with a reduced set of features and a smaller footprint. In particular, Windows Server Core has been designed by removing those features that are not required in a server environment, such as the GUI component and other bulky components such as the .NET Framework and all the applications developed on top of it (for example, PowerShell).
• This design decision has both advantages and disadvantages. On the plus side, it allows for reduced maintenance (i.e., fewer software patches), a reduced attack surface, reduced management, and less disk space. On the negative side, the embedded features are reduced. Still, there is the opportunity to leverage all the "removed features" by means of remote management from a fully featured Windows installation.
• For instance, administrators can use PowerShell to remotely manage the Windows Server Core installation through WMI. Another component that provides advanced management of virtual machines is System Center Virtual Machine Manager (SCVMM) 2008. This is a component of the Microsoft System Center suite, which brings into the suite the virtual infrastructure management capabilities from an IT life-cycle point of view. Essentially, SCVMM complements the basic features offered by Hyper-V with management capabilities, including:
• Management portal for the creation and management of virtual
instances
• Virtual to Virtual (V2V) and Physical to Virtual (P2V) conversions
• Delegated administration
• Library functionality and deep PowerShell integration
• Intelligent placement of virtual machines in the managed
environment
• Host capacity management
SCVMM has also been designed to work with other virtualization platforms, such as VMware vSphere (ESX servers), but benefits most from the virtual infrastructure management implemented with Hyper-V.
Hyper-V: Observations
• Compared with Xen and VMware, Hyper-V is a hybrid solution because it leverages both paravirtualization techniques and full hardware virtualization. The basic architecture of the hypervisor is based on a paravirtualized architecture.
• The hypervisor exposes its services to the guest operating systems by means of hypercalls. Also, paravirtualized kernels can leverage VMBus for fast I/O operations. Moreover, partitions are conceptually similar to domains in Xen: the parent partition maps to Domain 0, while child partitions map to Domain U.
• The only difference is that the Xen hypervisor is installed on bare hardware and filters all access to the underlying hardware, whereas Hyper-V is installed as a role in the existing operating system, and the way it interacts with partitions is quite similar to the strategy implemented by VMware, as discussed. The approach adopted by Hyper-V has both advantages and disadvantages.
• The advantages reside in a flexible virtualization platform supporting a wide range of guest operating systems. The disadvantages are represented by both hardware and software requirements. Hyper-V is compatible only with Windows Server 2008 and newer Windows Server platforms running on an x64 architecture.
• Moreover, it requires a 64-bit processor supporting hardware-assisted virtualization and data execution prevention. Finally, as noted above, Hyper-V is a role that can be installed on an existing operating system, while vSphere and Xen can be installed on the bare hardware.
More Related Content

PPTX
Distributed Transactions(flat and nested) and Atomic Commit Protocols
PPTX
5. IO virtualization
PPTX
Replication in Distributed Systems
PDF
Resource management
PPTX
Data link layer
PPTX
Underlying principles of parallel and distributed computing
PPTX
Presentation - Programming a Heterogeneous Computing Cluster
PPT
Distributed Operating System
Distributed Transactions(flat and nested) and Atomic Commit Protocols
5. IO virtualization
Replication in Distributed Systems
Resource management
Data link layer
Underlying principles of parallel and distributed computing
Presentation - Programming a Heterogeneous Computing Cluster
Distributed Operating System

What's hot (20)

PPTX
Eucalyptus, Nimbus & OpenNebula
PPSX
Foult Tolerence In Distributed System
PPTX
Pram model
PPTX
System dependability
PPTX
Fault tolerance in distributed systems
PPTX
Concurrency
PPTX
Chapter 5.pptx
PPT
Communications is distributed systems
PPTX
Load balancing in cloud computing.pptx
PPT
Parallel computing
PPTX
Optimal load balancing in cloud computing
PPTX
Message queues
PPTX
CREDIT CARD FRAUD DETECTION
PPTX
Load Balancing In Distributed Computing
PPT
System models in distributed system
PPT
Synchronization in distributed systems
PPTX
Distributed Operating System
PPT
Clock synchronization in distributed system
PPTX
QUALITY OF SERVICE(QoS) OF CLOUD
DOC
Synopsis on billing system
Eucalyptus, Nimbus & OpenNebula
Foult Tolerence In Distributed System
Pram model
System dependability
Fault tolerance in distributed systems
Concurrency
Chapter 5.pptx
Communications is distributed systems
Load balancing in cloud computing.pptx
Parallel computing
Optimal load balancing in cloud computing
Message queues
CREDIT CARD FRAUD DETECTION
Load Balancing In Distributed Computing
System models in distributed system
Synchronization in distributed systems
Distributed Operating System
Clock synchronization in distributed system
QUALITY OF SERVICE(QoS) OF CLOUD
Synopsis on billing system
Ad

Similar to cloud computing cc module 1 notes for BE (20)

PDF
Ch-1-INTRODUCTION (1).pdf
PDF
Introduction Of Cloud Computing
PDF
Cloud Computing_2015_03_05
PDF
Understanding the Cloud Computing: A Review
PDF
D045031724
PDF
CLOUD COMPUTING BY SIVASANKARI
PPTX
Introduction to Cloud Computing and Cloud Infrastructure
PPTX
Cloud Computing_Unit 1- Part 1.pptx
DOCX
Cloud computing
PPTX
Unit 1 (1).pptx
ODT
Untitled 1
PDF
A REVIEW ON RESOURCE ALLOCATION MECHANISM IN CLOUD ENVIORNMENT
PDF
Cloud Computing in Resource Management
PDF
An Efficient MDC based Set Partitioned Embedded Block Image Coding
PPTX
UNIT-I-Finallllllllllllllllllllllll.pptx
PDF
A Comprehensive Study On Cloud Computing
DOC
PDF
Introduction to CLoud Computing Technologies
DOCX
Service oriented cloud computing
PPTX
Ch-1-INTRODUCTION (1).pdf
Introduction Of Cloud Computing
Cloud Computing_2015_03_05
Understanding the Cloud Computing: A Review
D045031724
CLOUD COMPUTING BY SIVASANKARI
Introduction to Cloud Computing and Cloud Infrastructure
Cloud Computing_Unit 1- Part 1.pptx
Cloud computing
Unit 1 (1).pptx
Untitled 1
A REVIEW ON RESOURCE ALLOCATION MECHANISM IN CLOUD ENVIORNMENT
Cloud Computing in Resource Management
An Efficient MDC based Set Partitioned Embedded Block Image Coding
UNIT-I-Finallllllllllllllllllllllll.pptx
A Comprehensive Study On Cloud Computing
Introduction to CLoud Computing Technologies
Service oriented cloud computing
Ad

Recently uploaded (20)

PPTX
Leprosy and NLEP programme community medicine
PDF
Introduction to Data Science and Data Analysis
PDF
Capcut Pro Crack For PC Latest Version {Fully Unlocked 2025}
PPT
statistic analysis for study - data collection
PPTX
CYBER SECURITY the Next Warefare Tactics
PPTX
SAP 2 completion done . PRESENTATION.pptx
PPT
Image processing and pattern recognition 2.ppt
PPT
Predictive modeling basics in data cleaning process
PDF
Microsoft Core Cloud Services powerpoint
PPTX
IMPACT OF LANDSLIDE.....................
PPTX
Copy of 16 Timeline & Flowchart Templates – HubSpot.pptx
PDF
Global Data and Analytics Market Outlook Report
PPTX
Topic 5 Presentation 5 Lesson 5 Corporate Fin
PPTX
STERILIZATION AND DISINFECTION-1.ppthhhbx
PPTX
Steganography Project Steganography Project .pptx
PPTX
DS-40-Pre-Engagement and Kickoff deck - v8.0.pptx
PDF
Tetra Pak Index 2023 - The future of health and nutrition - Full report.pdf
PDF
[EN] Industrial Machine Downtime Prediction
PDF
REAL ILLUMINATI AGENT IN KAMPALA UGANDA CALL ON+256765750853/0705037305
PPTX
A Complete Guide to Streamlining Business Processes
Leprosy and NLEP programme community medicine
Introduction to Data Science and Data Analysis
Capcut Pro Crack For PC Latest Version {Fully Unlocked 2025}
statistic analysis for study - data collection
CYBER SECURITY the Next Warefare Tactics
SAP 2 completion done . PRESENTATION.pptx
Image processing and pattern recognition 2.ppt
Predictive modeling basics in data cleaning process
Microsoft Core Cloud Services powerpoint
IMPACT OF LANDSLIDE.....................
Copy of 16 Timeline & Flowchart Templates – HubSpot.pptx
Global Data and Analytics Market Outlook Report
Topic 5 Presentation 5 Lesson 5 Corporate Fin
STERILIZATION AND DISINFECTION-1.ppthhhbx
Steganography Project Steganography Project .pptx
DS-40-Pre-Engagement and Kickoff deck - v8.0.pptx
Tetra Pak Index 2023 - The future of health and nutrition - Full report.pdf
[EN] Industrial Machine Downtime Prediction
REAL ILLUMINATI AGENT IN KAMPALA UGANDA CALL ON+256765750853/0705037305
A Complete Guide to Streamlining Business Processes

cloud computing cc module 1 notes for BE

  • 1. 18CS643 Cloud Computing and its Applications Module 1 –Chapter 1 &3 by, Dr. B.Loganayagi, Prof. , Dept. of CSE, SEACET
  • 2. Module 1 Chapter 1: Introduction • Cloud Computing at a Glance, The Vision of Cloud Computing • Defining a Cloud, A Closer Look • Cloud Computing Reference Model • Characteristicsand Benefits • Challenges Ahead • HistoricalDevelopments:Distributed Systems, Virtualization,Web 2.0, Service- Oriented Computing, Utility-OrientedComputing • Building Cloud Computing Environments:Application Development,Infrastructure and System Development, • Computing Platforms and Technologies:Amazon Web Services (AWS), Google AppEngine,Microsoft Azure, Hadoop,Force.com and Salesforce.com, Manjrasoft Aneka Chapter 3 :Virtualization • Introduction,Characteristicsof VirtualizedEnvironments • Taxonomy of Virtualization Techniques • Execution Virtualization • Other Types of Virtualization • Virtualization andCloudComputing • Pros and Cons of Virtualization • Technology Examples : – Xen: Paravirtualization , VMware: Full Virtualization, Microsoft Hyper-V 2 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 3. Introduction • Computing is being transformed into a model consisting of services that are commoditized and delivered in a manner similar to utilities such as water, electricity, gas, and telephony. • In such a model, users access services based on their requirements, regardless of where the services are hosted. • Several computing paradigms, such as grid computing, have promised to deliver this utility computing vision. Cloud computing is the most recent emerging paradigm promising to turn the vision of “computing utilities” in to a reality. • Cloud computing is a technological advancement that focuses on the way we design computing systems, develop applications, and leverage existing services for building software. • It is based on the concept of dynamic provisioning, which is applied not only to services but also to compute capability, storage, networking, and information technology (IT) infrastructure in general. 3 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 4. • One of the most diffused views of cloud computing can be summarized as follows: “I don’t care where my servers are, who manages them, where my documents are stored, or where my applications are hosted. I just want them always available and access them from any device connected through Internet. And I am willing to pay for this service for as a long as I need it.” 4 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 5. • In 1969, Leonard Kleinrock, one of the chief scientists of the original Advanced Research Projects Agency Network (ARPANET), which seeded the Internet, said: “As of now, computer networks are still in their infancy, but as they grow up and become sophisticated, we will probably see the spread of ‘computer utilities’ which, like present electric and telephone utilities, will service individual homes and offices across the country.” • Cloud computing allows renting infrastructure, runtime environments, and services on a pay-peruse basis. This principle finds several practical applications and then gives different images of cloud computing to different people. Chief information and technology officers of large enterprises see opportunities for scaling their infrastructure on demand and sizing it according to their business needs. End users leveraging cloud computing services can access their documents and data anytime, anywhere, and from any device connected to the Internet. Many other points of view exist. 5 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 6. The vision of cloud computing • Cloud computing allows anyone with a credit card to provision virtual hardware, runtime environments, and services. These are used for as long as needed, with no up-front commitmentsRequired. • The entire stack of a computing system is transformed into a collection of utilities, which can be provisioned and composed together to deploy systems in hours rather than days and with virtually no maintenance costs. • The long-term vision of cloud computing is that IT services are traded as utilities in an open market, without technological and legal barriers. In this cloud marketplace, cloud service providers and consumers, trading cloud services as utilities, play a central role. 6 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 7. I need to grow my infrastructure, but I do not know for how long… I cannot invest in infrastructure, I just started my business…. I want to focus on application logic and not maintenance and scalability issues I want to access and edit my documents and photos from everywhere.. I have a surplus of infrastructure that I want to make use of I have a lot of infrastructure that I want to rent … I have infrastructure and middleware and I can host applications I have infrastructure and provide application services Fig.1.1. Cloud Computing Vision Cloud Computing Vision 7 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 8. Vision contd.. • Many of the technological elements contributing to this vision already exist. Different stake-holders leverage clouds for a variety of services. The need for ubiquitous storage and compute power on demand is the most common reason to consider cloud computing. A scalable runtime for applications is an attractive option for application and system developers that do not have infrastructure or cannot afford any further expansion of existing infrastructure. • This approach provides opportunities for optimizing datacenter facilities and fully utilizing theircapabilities to serve multiple users. This consolidation model will reduce the waste of energy and carbon emissions, thus contributing to a greener IT on one end and increasing revenue on the other end. 8 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 9. IToutsourcing Pay as you go No capital investments Quality of Service Security Billing Cloud Computing? Defining a Cloud Fig.1.2. Cloud Computing Technologies, Concepts and Ideas9 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 10. • The term cloud has historically been used in the telecommunications industry as an abstraction of the network in system diagrams. It then became the symbol of the most popular computer network, the Internet. This meaning also applies to cloud computing, which refers to an Internet-centric way of computing. The Internet plays a fundamental role in cloud computing, since it represents either the medium or the platform through which many cloud computing services are delivered and made accessible. • This aspect is also reflected in the definition given by Armbrust et al.: “Cloud computing refers to both the applications delivered as services over the Internet and the hardware and system software in the datacenters that provide those services.” 10 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 11. NIST Definition of Cloud computing • The notion of multiple parties using a shared cloud computing environment is highlighted in a definition proposed by the U.S. National Institute of Standards and Technology (NIST): “Cloud computing is a model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction.” 11 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 12. According to Reese, we can define three criteria to discriminate whether a service is delivered in the cloud computing style: • The service is accessible via a Web browser (nonproprietary) or a Web services application programming interface (API). • Zero capital expenditure is necessary to get started. • You pay only for what you use as you use it. 12 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 13. A closer look • Cloud computing is helping enterprises, governments, public and private institutions, and research organizations shape more effective and demand-driven computing systems. Access to, as well as integration of, cloud computing resources and systems is now as easy as performing a credit card transaction over the Internet. Practical examples of such systems exist across all market segments: – Large enterprises can offload some of their activities to cloud-based systems. – Small enterprises and start-ups can afford to translate their ideas into business results more quickly, without excessive up-front costs. – System developers can concentrate on the business logic rather than dealing with the complexity of infrastructure management and scalability. – End users can have their documents accessible from everywhere and any device. Cloud computing not only provides the opportunity of easily accessing IT services on demand; it also introduces a new way of thinking about IT services and resources: as utilities. A bird’s-eye view of a cloud computing environment is shown in Figure 1.3. 13 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 14. (Figure 1.3, part 1): clients on any device access subscription-oriented cloud services, X{compute, apps, data, ..} as a Service (..aaS), delivered through public clouds, government cloud services, private clouds, and other cloud services; each cloud offers compute, storage, applications, and a development and runtime platform under the control of a cloud manager. 14 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 15. Fig.1.3. Bird’s Eye View of Cloud Computing (part 2): private resources and a private government cloud serve organization personnel and government agencies, while public clouds serve all users, on any device. 15 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 16. Fig.1.4. Cloud Deployment Models Private/Enterprise Clouds * A public Cloud model within a company’s own data center / infrastructure for internal and/or partner use. Public/Internet Clouds * 3rd-party, multi-tenant Cloud infrastructure & services: * available on a subscription basis to all. Hybrid/Inter Clouds * Mixed usage of private and public Clouds: leasing public cloud services when private cloud capacity is insufficient. 16 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 17. Cloud Computing Reference Model Fig. 1.5. Cloud Computing Reference Model: Software as a Service (top): end-user applications such as scientific applications, office automation, photo editing, CRM, and social networking, exposed through Web 2.0 interfaces; examples: Google Documents, Facebook, Flickr, Salesforce. Platform as a Service (middle): runtime environment for applications, development and data processing platforms; examples: Windows Azure, Hadoop, Google AppEngine, Aneka. Infrastructure as a Service (bottom): virtualized servers, storage, and networking; examples: Amazon EC2, S3, Rightscale, vCloud. 17 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 18. • A fundamental characteristic of cloud computing is the capability to deliver, on demand, a variety of IT services that are quite diverse from each other. Cloud computing service offerings fall into three major categories: Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS), and Software-as-a-Service (SaaS). • These categories are related to each other as described in Figure 1.5, which provides an organic view of cloud computing. • At the base of the stack, Infrastructure-as-a-Service solutions deliver infrastructure on demand in the form of virtual hardware, storage, and networking. Virtual hardware is utilized to provide compute on demand in the form of virtual machine instances. • Platform-as-a-Service solutions are the next step in the stack. They deliver scalable and elastic runtime environments on demand and host the execution of applications. These services are backed by a core middleware platform that is responsible for creating the abstract environment where applications are deployed and executed. • At the top of the stack, Software-as-a-Service solutions provide applications and services on demand, replicating most of the common functionalities of desktop applications. Each layer provides a different service to users. IaaS solutions are sought by users who want to leverage cloud computing for building dynamically scalable computing systems requiring a specific software stack. IaaS services are therefore used to develop scalable Websites or for background processing. 18 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 19. Characteristics and benefits • Cloud computing has some interesting characteristics that bring benefits to both cloud service consumers (CSCs) and cloud service providers (CSPs). These characteristics are: ➢No up-front commitments ➢ On-demand access ➢ Nice pricing ➢ Simplified application acceleration and scalability ➢ Efficient resource allocation ➢ Energy efficiency ➢ Seamless creation and use of third-party services 19 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 20. • The most evident benefit from the use of cloud computing systems and technologies is the increased economic return due to reduced maintenance and operational costs related to IT software and infrastructure. • This is mainly because IT assets, namely software and infrastructure, are turned into utility costs, which are paid for as long as they are used, not paid for up front. • Traditionally, IT infrastructure and software generated capital costs: they were paid up front so that business start-ups could afford a computing infrastructure enabling the business activities of the organization, and the revenue of the business was then utilized to compensate for these costs over time. • End users can benefit from cloud computing by having their data and the capability of operating on it always available, from anywhere, at any time, and through multiple devices. Information and services stored in the cloud are exposed to users by Web-based interfaces that make them accessible from portable devices as well as desktops at home. 20 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 21. Challenges Ahead • New, interesting problems and challenges are regularly being posed to the cloud community, including IT practitioners, managers, governments, and regulators. Technical challenges also arise for cloud service providers for the management of large computing infrastructures and the use of virtualization technologies on top of them. • Security in terms of confidentiality, secrecy, and protection of data in a cloud environment is another important challenge. Organizations do not own the infrastructure they use to process data and store information. This condition poses challenges for confidential data, which organizations cannot afford to reveal. • Legal issues may also arise. These are specifically tied to the ubiquitous nature of cloud computing, which spreads computing infrastructure across diverse geographical locations. Different legislation about privacy in different countries may potentially create disputes as to the rights that third parties (including government agencies) have to your data. 21 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 22. Historical Developments Fig.1.6. The evolution of distributed computing technologies, 1950s–2010s: mainframes, then clusters, grids, and clouds. Milestones shown in the figure: 1951: UNIVAC I, first mainframe; 1960: Cray’s first supercomputer; 1966: Flynn’s taxonomy (SISD, SIMD, MISD, MIMD); 1969: ARPANET; 1970: DARPA’s TCP/IP; 1975: Xerox PARC invented Ethernet; 1984: DEC’s VMScluster; 1984: IEEE 802.3 Ethernet & LAN; 1989: TCP/IP IETF RFC 1122; 1990: Berners-Lee and Cailliau’s WWW, HTTP, HTML; 1997: IEEE 802.11 (Wi-Fi); 1999: grid computing; 2004: Web 2.0; 2005: Amazon AWS (EC2, S3); 2007: Manjrasoft Aneka; 2008: Google AppEngine; 2010: Microsoft Azure. 22 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 23. Historical Developments contd.. • The idea of renting computing services by leveraging large distributed computing facilities has been around for a long time. It dates back to the days of the mainframes in the early 1950s. • Figure 1.6 provides an overview of the evolution of the distributed computing technologies that have influenced cloud computing. • In tracking the historical evolution, we briefly review five core technologies that played an important role in the realization of cloud computing. • These technologies are distributed systems, virtualization, Web 2.0, service orientation, and utility computing. 23 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 24. Distributed Systems • Clouds are essentially large distributed computing facilities that make available their services to third parties on demand. As a reference, we consider the characterization of a distributed system proposed by Tanenbaum et al.: • “A distributed system is a collection of independent computers that appears to its users as a single coherent system.” 24 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 25. Three major milestones have led to cloud computing evolution • Mainframes • Clusters • Grids 25 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 26. • Mainframes. These were the first examples of large computational facilities leveraging multiple processing units. Mainframes were powerful, highly reliable computers specialized for large data movement and massive input/output (I/O) operations. They were mostly used by large organizations for bulk data processing tasks such as online transactions, enterprise resource planning, and other operations involving the processing of significant amounts of data. • Clusters. Cluster computing started as a low-cost alternative to the use of mainframes and supercomputers. The technology advancement that created faster and more powerful mainframes and supercomputers eventually generated an increased availability of cheap commodity machines as a side effect. These machines could then be connected by a high-bandwidth network and controlled by specific software tools that manage them as a single system. Starting in the 1980s, clusters became a standard technology for parallel and high-performance computing. Cluster technology contributed considerably to the evolution of tools and frameworks for distributed computing, including Condor, Parallel Virtual Machine (PVM), and Message Passing Interface (MPI). 26 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 27. • Grid computing appeared in the early 1990s as an evolution of cluster computing. In an analogy to the power grid, grid computing proposed a new approach to access large computational power, huge storage facilities, and a variety of services. A computing grid was a dynamic aggregation of heterogeneous computing nodes, and its scale was nationwide or even worldwide. Several developments made possible the diffusion of computing grids: i. clusters became quite common resources; ii. they were often underutilized; iii. new problems were requiring computational power that went beyond the capability of single clusters; and iv. the improvements in networking and the diffusion of the Internet made possible long-distance, high-bandwidth connectivity. 27 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 28. • Clouds are characterized by the fact of having virtually infinite capacity, being tolerant to failures, and being always on, as in the case of mainframes. • In many cases, the computing nodes that form the infrastructure of computing clouds are commodity machines, as in the case of clusters. • The services made available by a cloud vendor are consumed on a pay-per-use basis, and clouds fully implement the utility vision introduced by grid computing. 28 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 29. Virtualization • Virtualization is another core technology for cloud computing. It encompasses a collection of solutions allowing the abstraction of some of the fundamental elements for computing, such as hardware, runtime environments, storage, and networking. Virtualization has been around for more than 40 years, but its application has always been limited by technologies that did not allow an efficient use of virtualization solutions. • Virtualization is essentially a technology that allows creation of different computing environments. These environments are called virtual because they simulate the interface that is expected by a guest. The most common example of virtualization is hardware virtualization. • Virtualization technologies are also used to replicate runtime environments for programs. In the case of process virtual machines (the foundation of technologies such as Java or .NET), applications are run by a specific program called a virtual machine instead of being executed directly by the operating system. This technique allows isolating the execution of applications and providing finer control over the resources they access. 29 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 30. Web 2.0 • The Web is the primary interface through which cloud computing delivers its services. At present, the Web encompasses a set of technologies and services that facilitate interactive information sharing, collaboration, user-centered design, and application composition. This evolution has transformed the Web into a rich platform for application development and is known as Web 2.0. • This term captures a new way in which developers architect applications and deliver services through the Internet and provides a new experience for users of these applications and services. Web 2.0 brings interactivity and flexibility into Web pages, providing enhanced user experience by gaining Web-based access to all the functions that are normally found in desktop applications. • These capabilities are obtained by integrating a collection of standards and technologies such as XML, Asynchronous JavaScript and XML (AJAX), Web Services, and others. These technologies allow us to build applications leveraging the contribution of users, who now become providers of content. 30 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 31. Web 2.0 contd.. • Web 2.0 applications are extremely dynamic: they improve continuously, and new updates and features are integrated at a constant rate by following the usage trend of the community. There is no need to deploy new software releases on the installed base at the client side. • Web 2.0 applications aim to leverage the “long tail” of Internet users by making themselves available to everyone in terms of either media accessibility or affordability. • Examples of Web 2.0 applications are Google Documents, Google Maps, Flickr, Facebook, Twitter, YouTube, del.icio.us, Blogger, and Wikipedia. In particular, social networking Websites take the biggest advantage of Web 2.0. The level of interaction in Websites such as Facebook or Flickr would not have been possible without the support of AJAX, Really Simple Syndication (RSS), and other tools that make the user experience incredibly interactive. • This idea of the Web as a transport that enables and enhances interaction was introduced in 1999 by Darcy DiNucci and started to become fully realized in 2004. Today it is a mature platform for supporting the needs of cloud computing, which strongly leverages Web 2.0. Applications and frameworks for delivering rich Internet applications (RIAs) are fundamental for making cloud services accessible to the wider public. 31 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 32. Service Oriented Computing • Service orientation is the core reference model for cloud computing systems. This approach adopts the concept of services as the main building blocks of application and system development. Service-oriented computing (SOC) supports the development of rapid, low-cost, flexible, interoperable, and evolvable applications and systems. • A service is an abstraction representing a self-describing and platform- agnostic component that can perform any function—anything from a simple function to a complex business process. • A service is supposed to be loosely coupled, reusable, programming language independent, and location transparent. Loose coupling allows services to serve different scenarios more easily and makes them reusable. Independence from a specific platform increases services accessibility. Thus, a wider range of clients, which can look up services in global registries and consume them in a location-transparent manner, can be served. 32 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 33. Service Oriented Computing contd.. • Service-oriented computing introduces and diffuses two important concepts, which are also fundamental to cloud computing: quality of service (QoS) and Software-as-a-Service (SaaS). ➢Quality of service (QoS) identifies a set of functional and nonfunctional attributes that can be used to evaluate the behavior of a service from different perspectives. These could be performance metrics such as response time, or security attributes, transactional integrity, reliability, scalability, and availability. ➢The concept of Software-as-a-Service introduces a new delivery model for applications. The term has been inherited from the world of application service providers (ASPs), which deliver software services-based solutions across the wide area network from a central datacenter and make them available on a subscription or rental basis. 33 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 34. Utility-oriented computing • Utility computing is a vision of computing that defines a service-provisioning model for compute services in which resources such as storage, compute power, applications, and infrastructure are packaged and offered on a pay-per-use basis. The idea of providing computing as a utility like natural gas, water, power, and telephone connection has a long history but has become a reality today with the advent of cloud computing. • The American scientist John McCarthy, in a speech for the Massachusetts Institute of Technology (MIT) centennial in 1961, observed: “If computers of the kind I have advocated become the computers of the future, then computing may someday be organized as a public utility, just as the telephone system is a public utility . . . The computer utility could become the basis of a new and important industry.” • The first traces of this service-provisioning model can be found in the mainframe era. IBM and other mainframe providers offered mainframe power to organizations such as banks and government agencies through their datacenters. • From an application and system development perspective, service-oriented computing and service-oriented architectures (SOAs) introduced the idea of leveraging external services for performing a specific task within a software system. 34 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 35. Building cloud computing environments • The creation of cloud computing environments encompasses both the development of applications and systems that leverage cloud computing solutions and the creation of frameworks, platforms, and infrastructures delivering cloud computing services. ➢Application development ➢Infrastructure and system development 35 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 36. Application development • Applications that leverage cloud computing benefit from its capability to dynamically scale on demand. One class of applications that takes the biggest advantage of this feature is that of Web applications. Their performance is mostly influenced by the workload generated by varying user demands. With the diffusion of Web 2.0 technologies, the Web has become a platform for developing rich and complex applications, including enterprise applications that now leverage the Internet as the preferred channel for service delivery and user interaction. • Another class of applications that can potentially gain considerable advantage by leveraging cloud computing is represented by resource-intensive applications. These can be either data-intensive or compute-intensive applications. In both cases, considerable amounts of resources are required to complete execution in a reasonable timeframe. • Cloud computing provides a solution for on-demand and dynamic scaling across the entire stack of computing. This is achieved by (a) providing methods for renting compute power, storage, and networking; (b) offering runtime environments designed for scalability and dynamic sizing; and (c) providing application services that mimic the behavior of desktop applications but that are completely hosted and managed on the provider side. 36 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 37. Infrastructure and system development • Distributed computing, virtualization, service orientation, and Web 2.0 form the core technologies enabling the provisioning of cloud services from anywhere on the globe. Developing applications and systems that leverage the cloud requires knowledge across all these technologies. • Distributed computing is a foundational model for cloud computing because cloud systems are distributed systems. Besides administrative tasks mostly connected to the accessibility of resources in the cloud, the extreme dynamism of cloud systems—where new nodes and services are provisioned on demand—constitutes the major challenge for engineers and developers. • Web 2.0 technologies constitute the interface through which cloud computing services are delivered, managed, and provisioned. • Cloud computing is often summarized with the acronym XaaS—Everything-as-a- Service—that clearly underlines the central role of service orientation. • Virtualization is another element that plays a fundamental role in cloud computing. This technology is a core feature of the infrastructure used by cloud providers. • Cloud computing essentially provides mechanisms to address surges in demand by replicating the required components of computing systems under stress (i.e., heavily loaded). Dynamism, scale, and volatility of such components are the main elements that should guide the design of such systems. 37 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 38. Computing platforms and technologies Development of a cloud computing application happens by leveraging platforms and frameworks that provide different types of services, from bare-metal infrastructure to customizable applications serving specific purposes. – Amazon web services (AWS) – Google AppEngine – Microsoft Azure – Hadoop – Force.com and Salesforce.com – Manjrasoft Aneka 38 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 39. Amazon web services (AWS) • AWS offers comprehensive cloud IaaS services ranging from virtual compute, storage, and networking to complete computing stacks. AWS is mostly known for its compute and storage on-demand services, namely Elastic Compute Cloud (EC2) and Simple Storage Service (S3). • EC2 provides users with customizable virtual hardware that can be used as the base infrastructure for deploying computing systems on the cloud. It is possible to choose from a large variety of virtual hardware configurations, including GPU and cluster instances. EC2 also provides the capability to save a specific running instance as an image, thus allowing users to create their own templates for deploying systems. These templates are stored in S3, which delivers persistent storage on demand. • S3 is organized into buckets; these are containers of objects that are stored in binary form and can be enriched with attributes. Users can store objects of any size, from simple files to entire disk images, and have them accessible from everywhere. 39 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
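To make the EC2 and S3 workflow concrete, the following is a minimal sketch using boto3, the official Python SDK for AWS. It assumes AWS credentials are already configured on the machine; the AMI ID and bucket name are hypothetical placeholders, not real resources.

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
s3 = boto3.client("s3", region_name="us-east-1")

# Provision one virtual machine instance from a machine image (template).
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI ID
    InstanceType="t2.micro",
    MinCount=1,
    MaxCount=1,
)
print("Launched instance:", response["Instances"][0]["InstanceId"])

# Create a bucket and store an object in binary form, as described above.
s3.create_bucket(Bucket="my-example-bucket")  # hypothetical bucket name
s3.put_object(Bucket="my-example-bucket", Key="notes/hello.txt",
              Body=b"objects of any size, enriched with attributes")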
  • 40. Google AppEngine • Google AppEngine is a scalable runtime environment mostly devoted to executing Web applications. These take advantage of the large computing infrastructure of Google to dynamically scale as the demand varies over time. AppEngine provides both a secure execution environment and a collection of services that simplify the development of scalable and high-performance Web applications. These services include in-memory caching, scalable data store, job queues, messaging, and cron tasks. • Developers can build and test applications on their own machines using the AppEngine software development kit (SDK). Once development is complete, developers can easily migrate their application to AppEngine, set quotas to contain the costs generated, and make the application available to the world. The languages currently supported are Python, Java, and Go. 40 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
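As an illustration of the kind of application AppEngine hosts, here is a minimal Python Web application sketch. It assumes the Flask microframework is declared as a dependency and that an app.yaml file selecting a Python runtime accompanies it; the handler below is illustrative only, not AppEngine-specific API.

# main.py: a minimal Web application of the kind AppEngine's Python
# runtime can host.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    # AppEngine routes incoming HTTP requests to this handler and scales
    # the number of serving instances up and down as demand varies.
    return "Hello from a scalable runtime!"

if __name__ == "__main__":
    # Local testing only; in production AppEngine runs the app itself.
    app.run(host="127.0.0.1", port=8080)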
  • 41. Microsoft Azure • Microsoft Azure is a cloud operating system and a platform for developing applications in the cloud. Applications in Azure are organized around the concept of roles, which identify a distribution unit for applications and embody the application’s logic. • Currently, there are three types of role: Web role, worker role, and virtual machine role. – The Web role is designed to host a Web application, – The worker role is a more generic container of applications and can be used to perform workload processing, and – the virtual machine role provides a virtual environment in which the computing stack can be fully customized, including the operating systems. 41 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 42. Hadoop • Apache Hadoop is an open-source framework that is suited for processing large data sets on commodity hardware. Hadoop is an implementation of MapReduce, an application programming model developed by Google, which provides two fundamental operations for data processing: map and reduce. • The former transforms and synthesizes the input data provided by the user; the latter aggregates the output obtained by the map operations. Hadoop provides the runtime environment, and developers need only provide the input data and specify the map and reduce functions that need to be executed. Force.com and Salesforce.com • Force.com is a cloud computing platform for developing social enterprise applications. The platform is the basis for Salesforce.com, a Software-as-a-Service solution for customer relationship management. • Force.com allows developers to create applications by composing ready-to-use blocks; a complete set of components supporting all the activities of an enterprise is available. The platform provides complete support for developing applications, from the design of the data layout to the definition of business rules and workflows and the definition of the user interface. 42 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
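The division of labor between map and reduce can be illustrated with the classic word-count example. The sketch below expresses the programming model in plain Python; it is not Hadoop's actual Java API, and Hadoop itself would handle input splitting, shuffling, and distribution across the cluster.

# Word count expressed in the MapReduce style (a pure-Python sketch of
# the programming model).
from collections import defaultdict

def map_fn(line):
    # map: transform the input into intermediate (key, value) pairs
    return [(word, 1) for word in line.split()]

def reduce_fn(key, values):
    # reduce: aggregate all values emitted for the same key
    return (key, sum(values))

lines = ["the cloud delivers compute", "the cloud delivers storage"]

# Shuffle phase: group intermediate pairs by key (done by the runtime).
groups = defaultdict(list)
for line in lines:
    for key, value in map_fn(line):
        groups[key].append(value)

result = [reduce_fn(key, values) for key, values in groups.items()]
print(result)  # e.g. [('the', 2), ('cloud', 2), ('delivers', 2), ...]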
• 43. Manjrasoft Aneka • Manjrasoft Aneka is a cloud application platform for rapid creation of scalable applications and their deployment on various types of clouds in a seamless and elastic manner. It supports a collection of programming abstractions for developing applications and a distributed runtime environment that can be deployed on heterogeneous hardware (clusters, networked desktop computers, and cloud resources). • Developers can choose different abstractions to design their application: tasks, distributed threads, and map-reduce. These applications are then executed on the distributed service-oriented runtime environment, which can dynamically integrate additional resources on demand. 43 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 44. Chapter 3 – Virtualization
• 45. Introduction • Virtualization is a large umbrella of technologies and concepts that are meant to provide an abstract environment, whether virtual hardware or an operating system, to run applications. • The term virtualization is often synonymous with hardware virtualization, which plays a fundamental role in efficiently delivering Infrastructure-as-a-Service (IaaS) solutions for cloud computing. • Virtualization technologies have gained renewed interest recently due to the confluence of several phenomena: ➢ Increased performance and computing capacity. Even the high-end side of the PC market now provides immense compute power; supercomputers can accommodate the execution of hundreds or thousands of virtual machines. ➢ Underutilized hardware and software resources. Hardware and software underutilization is occurring due to (1) increased performance and computing capacity, and (2) the effect of limited or sporadic use of resources. Computers today are so powerful that in most cases only a fraction of their capacity is used by an application or the system. Using these resources for other purposes after hours could improve the efficiency of the IT infrastructure. 45 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 46. ➢ Lack of space. Companies such as Google and Microsoft expand their infrastructures by building data centers as large as football fields that are able to host thousands of nodes. Although this is viable for IT giants, in most cases enterprises cannot afford to build another data center to accommodate additional resource capacity. This condition, along with hardware underutilization, has led to the diffusion of a technique called server consolidation. ➢ Greening initiatives. Maintaining a data center operation not only involves keeping servers on, but a great deal of energy is also consumed in keeping them cool. Infrastructures for cooling have a significant impact on the carbon footprint of a data center. Hence, reducing the number of servers through server consolidation will definitely reduce the impact of cooling and power consumption of a data center. Virtualization technologies can provide an efficient way of consolidating servers. ➢ Rise of administrative costs. The increased demand for additional capacity, which translates into more servers in a data center, is also responsible for a significant increment in administrative costs. Computers, in particular servers, do not operate all on their own, but require care and feeding from system administrators. These are labor-intensive operations, and the higher the number of servers that have to be managed, the higher the administrative costs. Virtualization can help reduce the number of required servers for a given workload, thus reducing the cost of the administrative personnel. 46 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 47. Characteristics of Virtualized Environments • Virtualization is a broad concept that refers to the creation of a virtual version of something, whether hardware, a software environment, storage, or a network. In a virtualized environment there are three major components: guest, host, and virtualization layer. • The guest represents the system component that interacts with the virtualization layer rather than with the host, as would normally happen. • The host represents the original environment where the guest is supposed to be managed. • The virtualization layer is responsible for recreating the same or a different environment where the guest will operate (see Figure 3.1). 47 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 48. Fig.3.1. Virtualization reference model: guest applications and a virtual image run on top of the virtualization layer, which uses software emulation to expose virtual hardware, virtual storage, and virtual networking; the virtualization layer in turn runs on the host’s physical hardware, physical storage, and physical networking. 48 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 49. The characteristics of virtualized solutions are: 1 Increased security 2 Managed execution 3 Portability 1. Increased security • The virtual machine represents an emulated environment in which the guest is executed. All the operations of the guest are generally performed against the virtual machine, which then translates and applies them to the host. This level of indirection allows the virtual machine manager to control and filter the activity of the guest, thus preventing some harmful operations from being performed. For example, applets downloaded from the Internet run in a sandboxed version of the Java Virtual Machine (JVM), which provides them with limited access to the hosting operating system resources. Both the JVM and the .NET runtime provide extensive security policies for customizing the execution environment of applications. 49 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 50. 2 Managed execution. Virtualization of the execution environment not only allows increased security, but a wider range of features also can be implemented. In particular, sharing, aggregation, emulation, and isolation are the most relevant features (see Figure 3.2). 50 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 51. Fig.3.2 Functions enabled by Managed Execution: virtualization maps physical resources onto virtual resources through sharing, aggregation, emulation, and isolation. 51 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 52. • Sharing. Virtualization allows the creation of separate computing environments within the same host. In this way it is possible to fully exploit the capabilities of a powerful host, which would otherwise be underutilized. • Aggregation. Not only is it possible to share physical resources among several guests, but virtualization also allows aggregation, which is the opposite process. A group of separate hosts can be tied together and represented to guests as a single virtual host. • Emulation. Guest programs are executed within an environment that is controlled by the virtualization layer, which ultimately is a program. This allows for controlling and tuning the environment that is exposed to guests. For instance, a completely different environment with respect to the host can be emulated, thus allowing the execution of guest programs requiring specific characteristics that are not present in the physical host. • Isolation. Virtualization allows providing guests, whether they are operating systems, applications, or other entities, with a completely separate environment in which they are executed. The guest program performs its activity by interacting with an abstraction layer, which provides access to the underlying resources. 52 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 53. 3. Portability • The concept of portability applies in different ways according to the specific type of virtualization considered. In the case of a hardware virtualization solution, the guest is packaged into a virtual image that, in most cases, can be safely moved and executed on top of different virtual machines. • In the case of programming-level virtualization, as implemented by the JVM or the .NET runtime, the binary code representing application components (jars or assemblies) can be run without any recompilation on any implementation of the corresponding virtual machine. This makes the application development cycle more flexible and application deployment very straightforward: One version of the application, in most cases, is able to run on different platforms with no changes. 53 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 54. Taxonomy of virtualization techniques • Virtualization covers a wide range of emulation techniques that are applied to different areas of computing. A classification of these techniques helps us better understand their characteristics and use (see Figure 3.3). • The first classification is based on the service or entity that is being emulated. • Virtualization is mainly used to emulate execution environments, storage, and networks. Among these categories, execution virtualization constitutes the oldest, most popular, and most developed area. Therefore, it deserves major investigation and a further categorization. 54 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 55. Fig.3.3 Taxonomy of Virtualization Techniques: virtualization is first classified by the entity being emulated (execution environment, storage, network, …). Execution virtualization is further divided by how it is done: at the process level (techniques: multiprogramming, emulation, high-level VM) and at the system level (techniques: hardware-assisted virtualization, full virtualization, paravirtualization, partial virtualization), with the corresponding virtualization models being the application, the programming language, the operating system, and the hardware. 55 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 56. Execution virtualization • Execution virtualization includes all techniques that aim to emulate an execution environment that is separate from the one hosting the virtualization layer. All these techniques concentrate their interest on providing support for the execution of programs, whether these are the operating system, a binary specification of a program compiled against an abstract machine model, or an application. 1 Machine reference model 2 Hardware-level virtualization a. Hypervisors b. Hardware virtualization techniques c. Operating system-level virtualization 3. Programming language-level virtualization 4. Application-level virtualization 56 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 57. Machine reference model • Modern computing systems can be expressed in terms of the reference model described in Figure 3.4. At the bottom layer, the model for the hardware is expressed in terms of the Instruction Set Architecture (ISA), which defines the instruction set for the processor, registers, memory, and interrupt management. • ISA is the interface between hardware and software, and it is important to the operating system (OS) developer (System ISA) and developers of applications that directly manage the underlying hardware (User ISA). The application binary interface (ABI) separates the operating system layer from the applications and libraries, which are managed by the OS. • ABI covers details such as low-level data types, alignment, and call conventions and defines a format for executable programs. • The highest level of abstraction is represented by the application programming interface (API), which interfaces applications to libraries and/or the underlying operating system. 57 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 58. Fig.3.4 Machine Reference Model: applications access libraries through API calls; applications and libraries invoke the operating system through the ABI (system calls); and the operating system drives the hardware through the ISA, with the User ISA portion directly available to applications. 58 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
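The layering in Figure 3.4 can be glimpsed from a running program. The sketch below, assuming a Unix-like system where the C standard library can be located by name, obtains the same value first through a high-level API call and then by invoking the C library directly, one step closer to the ABI.

import ctypes
import ctypes.util
import os

# API level: the os module exposes a high-level call...
print("pid via API:", os.getpid())

# ...which ultimately reaches the OS through the library/ABI level: here
# we call the C library's getpid() directly, which issues the system call.
libc = ctypes.CDLL(ctypes.util.find_library("c"))
print("pid via libc:", libc.getpid())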
• 59. Fig.3.5. Security Rings and Privileged Modes: concentric rings from Ring 0 (the most privileged mode, supervisor mode) through Rings 1 and 2 (privileged modes) out to Ring 3 (the least privileged mode, user mode). 59 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 60. • For this purpose, the instruction set exposed by the hardware has been divided into different security classes that define who can operate with them. The first distinction can be made between privileged and nonprivileged instructions. • Nonprivileged instructions are those instructions that can be used without interfering with other tasks because they do not access shared resources. This category contains, for example, all the floating-point, fixed-point, and arithmetic instructions. • Privileged instructions are those that are executed under specific restrictions and are mostly used for sensitive operations, which expose (behavior-sensitive) or modify (control-sensitive) the privileged state. • For instance, a possible implementation features a hierarchy of privileges (see Figure 3.5) in the form of ring-based security: Ring 0, Ring 1, Ring 2, and Ring 3; Ring 0 is the most privileged level and Ring 3 the least privileged. Ring 0 is used by the kernel of the OS, rings 1 and 2 are used by the OS-level services, and Ring 3 is used by the user. Recent systems support only two levels, with Ring 0 for supervisor mode and Ring 3 for user mode. The distinction between user and supervisor mode allows us to understand the role of the hypervisor and why it is called that. Conceptually, the hypervisor runs above the supervisor mode, and from here the prefix hyper- is used. In reality, hypervisors are run in supervisor mode. 60 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 61. Hardware-level virtualization • Hardware-level virtualization is a virtualization technique that provides an abstract execution environment in terms of computer hardware on top of which a guest operating system can be run. In this model, the guest is represented by the operating system, the host by the physical computer hardware, the virtual machine by its emulation, and the virtual machine manager by the hypervisor (see Figure 3.6). • The hypervisor is generally a program or a combination of software and hardware that allows the abstraction of the underlying physical hardware. • Hardware-level virtualization is also called system virtualization, since it provides ISA to virtual machines, which is the representation of the hardware interface of a system. This is to differentiate it from process virtual machines, which expose ABI to virtual machines. 61 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 62. Fig.3.6. Hardware Virtualization Reference Model: the guest, packaged as a virtual image in storage with an in-memory representation, runs inside a virtual machine managed by the VMM, which emulates the host through techniques such as binary translation, instruction mapping, and interpretation. 62 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 63. Hypervisors • A fundamental element of hardware virtualization is the hypervisor, or virtual machine manager (VMM). It recreates a hardware environment in which guest operating systems are installed. There are two major types of hypervisor: Type I and Type II (see Figure 3.7). • Type I hypervisors run directly on top of the hardware. Therefore, they take the place of the operating systems and interact directly with the ISA interface exposed by the underlying hardware, and they emulate this interface in order to allow the management of guest operating systems. This type of hypervisor is also called a native virtual machine since it runs natively on hardware. • Type II hypervisors require the support of an operating system to provide virtualization services. This means that they are programs managed by the operating system, which interact with it through the ABI and emulate the ISA of virtual hardware for guest operating systems. This type of hypervisor is also called a hosted virtual machine since it is hosted within an operating system. 63 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 64. Fig.3.7. Hosted (left) and Native (right) Virtual Machine: in the hosted model, the virtual machine manager runs on top of an operating system (interacting through the ABI) and exposes an ISA to the virtual machines; in the native model, the virtual machine manager runs directly on the hardware’s ISA and exposes an ISA to the virtual machines. 64 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 65. • Conceptually, a virtual machine manager is internally organized as described in Figure 3.8. Three main modules, dispatcher, allocator, and interpreter, coordinate their activity in order to emulate the underlying hardware. • The dispatcher constitutes the entry point of the monitor and reroutes the instructions issued by the virtual machine instance to one of the two other modules. • The allocator is responsible for deciding the system resources to be provided to the VM: whenever a virtual machine tries to execute an instruction that results in changing the machine resources associated with that VM, the allocator is invoked by the dispatcher. • The interpreter module consists of interpreter routines. These are executed whenever a virtual machine executes a privileged instruction: a trap is triggered and the corresponding routine is executed. 65 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 66. Fig.3.8 Hypervisor Reference Architecture: instructions (ISA) issued by the virtual machine instance enter the dispatcher of the virtual machine manager, which routes them either to the allocator or to the appropriate interpreter routines. 66 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
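A toy sketch of how the three modules might cooperate is shown below; the instruction names and routing rules are hypothetical, purely to illustrate the structure of Figure 3.8.

# A toy sketch of the dispatcher/allocator/interpreter structure of a VMM.
PRIVILEGED = {"HALT", "IO_WRITE"}        # would trap into the interpreter
RESOURCE_CHANGING = {"SET_MEM_LIMIT"}    # routed to the allocator

def allocator(instr):
    return f"allocator: adjusting resources for {instr}"

def interpreter(instr):
    # one interpreter routine per privileged instruction (trap handler)
    return f"interpreter: simulating privileged {instr}"

def dispatcher(instr):
    # entry point of the monitor: reroutes each instruction issued
    # by the virtual machine instance to the proper module
    if instr in RESOURCE_CHANGING:
        return allocator(instr)
    if instr in PRIVILEGED:
        return interpreter(instr)
    return f"executed directly on hardware: {instr}"

for instr in ["ADD", "SET_MEM_LIMIT", "IO_WRITE"]:
    print(dispatcher(instr))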
• 67. • The design and architecture of a virtual machine manager, together with the underlying hardware design of the host machine, determine the full realization of hardware virtualization, where a guest operating system can be transparently executed on top of a VMM as though it were run on the underlying hardware. • The criteria that need to be met by a virtual machine manager to efficiently support virtualization were established by Popek and Goldberg in 1974. Three properties have to be satisfied: 1. Equivalence. A guest running under the control of a virtual machine manager should exhibit the same behavior as when it is executed directly on the physical host. 2. Resource control. The virtual machine manager should be in complete control of virtualized resources. 3. Efficiency. A statistically dominant fraction of the machine instructions should be executed without intervention from the virtual machine manager. 67 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 68. • Popek and Goldberg provided a classification of the instruction set and proposed three theorems that define the properties that hardware instructions need to satisfy in order to efficiently support virtualization. • THEOREM 3.1 For any conventional third-generation computer, a VMM may be constructed if the set of sensitive instructions for that computer is a subset of the set of privileged instructions. • This theorem establishes that all the instructions that change the configuration of the system resources should generate a trap in user mode and be executed under the control of the virtual machine manager. 68 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
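The condition of Theorem 3.1 is simply a set-containment test, as the following sketch shows. The sample instruction names are illustrative: LPSW and SSM are privileged instructions of IBM's System/370 family, while POPF is the classic x86 example of a sensitive but unprivileged instruction that violates the condition.

# Theorem 3.1 as a set-containment check (illustrative instruction sets).
privileged = {"LPSW", "SSM", "LCTL"}   # these trap in user mode
sensitive = {"LPSW", "SSM"}            # these expose/modify privileged state

def vmm_constructible(sensitive, privileged):
    # A VMM may be constructed if every sensitive instruction is privileged,
    # i.e., it traps and can be simulated under VMM control.
    return sensitive <= privileged

print(vmm_constructible(sensitive, privileged))             # True
print(vmm_constructible(sensitive | {"POPF"}, privileged))  # False: a
# sensitive but unprivileged instruction breaks virtualizability.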
• 69. Fig.3.9. Virtualizable Computer (left) and Nonvirtualizable Computer (right): on a virtualizable computer, the sensitive instructions form a subset of the privileged instructions within the overall set of user instructions; on a nonvirtualizable computer, some sensitive instructions lie outside the privileged set. 69 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 70. THEOREM 3.2 A conventional third-generation computer is recursively virtualizable if: • It is virtualizable and • a VMM without any timing dependencies can be constructed for it. • Recursive virtualization is the ability to run a virtual machine manager on top of another virtual machine manager. This allows nesting hypervisors as long as the capacity of the underlying resources can accommodate that. Virtualizable hardware is a prerequisite to recursive virtualization. 70 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 71. THEOREM 3.3 A hybrid VMM may be constructed for any conventional third-generation machine in which the set of user-sensitive instructions is a subset of the set of privileged instructions. • There is another term, hybrid virtual machine (HVM), which is less efficient than the virtual machine system. In the case of an HVM, more instructions are interpreted rather than being executed directly. All instructions in virtual supervisor mode are interpreted. Whenever there is an attempt to execute a behavior-sensitive or control-sensitive instruction, the HVM controls the execution directly or gains control via a trap; all sensitive instructions caught by the HVM are simulated. 71 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 72. Hardware virtualization techniques Hardware-assisted virtualization. • This term refers to a scenario in which the hardware provides architectural support for building a virtual machine manager able to run a guest operating system in complete isolation. • This technique was originally introduced in the IBM System/370. At present, examples of hardware-assisted virtualization are the extensions to the x86-64 bit architecture introduced with Intel VT (formerly known as Vanderpool) and AMD-V (formerly known as Pacifica). • Intel and AMD introduced processor extensions, and a wide range of virtualization solutions took advantage of them: Kernel-based Virtual Machine (KVM), VirtualBox, Xen, VMware, Hyper-V, Sun xVM, Parallels, and others. 72 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 73. • Full virtualization. Full virtualization refers to the ability to run a program, most likely an operating system, directly on top of a virtual machine and without any modification, as though it were run on the raw hardware. To make this possible, virtual machine managers are required to provide a complete emulation of the entire underlying hardware. The principal advantage of full virtualization is complete isolation, which leads to enhanced security, ease of emulation of different architectures, and coexistence of different systems on the same platform. • Paravirtualization. This is a non-transparent virtualization solution that allows implementing thin virtual machine managers. Paravirtualization techniques expose a software interface to the virtual machine that is slightly modified from the host and, as a consequence, guests need to be modified. The aim of paravirtualization is to provide the capability to demand the execution of performance-critical operations directly on the host, thus preventing performance losses that would otherwise be experienced in managed execution. • Partial virtualization. Partial virtualization provides a partial emulation of the underlying hardware, thus not allowing the complete execution of the guest operating system in complete isolation. Partial virtualization allows many applications to run transparently, but not all the features of the operating system can be supported. 73 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 74. Operating system-level virtualization • Operating system-level virtualization offers the opportunity to create different and separated execution environments for applications that are managed concurrently. • Differently from hardware virtualization, there is no virtual machine manager or hypervisor, and the virtualization is done within a single operating system, where the OS kernel allows for multiple isolated user space instances. • The kernel is also responsible for sharing the system resources among instances and for limiting the impact of instances on each other. • This virtualization technique can be considered an evolution of the chroot mechanism in Unix systems. The chroot operation changes the file system root directory for a process and its children to a specific directory. • As a result, the process and its children cannot have access to other portions of the file system than those accessible under the new root directory. • Examples of operating system-level virtualization are FreeBSD Jails, IBM Logical Partition (LPAR), Solaris Zones and Containers, Parallels Virtuozzo Containers, OpenVZ, iCore Virtual Accounts, and Free Virtual Private Server (FreeVPS). 74 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
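The chroot mechanism mentioned above can be sketched in a few lines of Python using the standard os module; the jail directory is a hypothetical path prepared in advance, and the call requires root privileges.

import os

new_root = "/srv/jail"   # hypothetical directory prepared in advance
os.chroot(new_root)      # the file system root becomes /srv/jail
os.chdir("/")            # from now on, "/" refers to the new root

# The process (and its children) can no longer reach files outside
# /srv/jail; OS-level virtualization builds on this kind of isolation,
# extending it to process trees, users, and resource limits.
print(os.listdir("/"))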
• 75. Programming language-level virtualization • Programming language-level virtualization is mostly used to achieve ease of deployment of applications, managed execution, and portability across different platforms and operating systems. • It consists of a virtual machine executing the byte code of a program, which is the result of the compilation process. Compilers implemented and used this technology to produce a binary format representing the machine code for an abstract architecture. • Programming language-level virtualization has a long trail in computer science history and originally was used in 1966 for the implementation of Basic Combined Programming Language (BCPL), a language for writing compilers and one of the ancestors of the C programming language. • The ability to support multiple programming languages has been one of the key elements of the Common Language Infrastructure (CLI), which is the specification behind the .NET Framework. • Currently, the Java platform and the .NET Framework represent the most popular technologies for enterprise application development. Both Java and the CLI are stack-based virtual machines. • The main advantage of programming-level virtual machines, also called process virtual machines, is the ability to provide a uniform execution environment across different platforms. • Process virtual machines allow for more control over the execution of programs, since they do not provide direct access to the memory. Security is another advantage of managed programming languages; by filtering the I/O operations, the process virtual machine can easily support sandboxing of applications. 75 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
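The byte code executed by a process virtual machine is easy to inspect in Python, whose CPython implementation ships the standard dis module (the exact opcodes vary across interpreter versions).

import dis

def add(a, b):
    return a + b

dis.dis(add)
# Typical output: LOAD_FAST a, LOAD_FAST b, BINARY_ADD (or BINARY_OP on
# newer interpreters), RETURN_VALUE: instructions for an abstract
# stack-based machine, not for the physical processor.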
  • 76. Application-level virtualization • Application-level virtualization is a technique allowing applications to be run in runtime environments that do not natively support all the features required by such applications. • In this scenario, applications are not installed in the expected runtime environment but are run as though they were. • Emulation can also be used to execute program binaries compiled for different hardware architectures. In this case, one of the following strategies can be implemented: a. Interpretation. In this technique every source instruction is interpreted by an emulator for executing native ISA instructions, leading to poor performance. Interpretation has a minimal startup cost but a huge overhead, since each instruction is emulated. b. Binary translation. In this technique every source instruction is converted to native instructions with equivalent functions. After a block of instructions is translated, it is cached and reused. Binary translation has a large initial overhead cost, but over time it is subject to better performance, since previously translated instruction blocks are directly executed. 76 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
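The trade-off between the two strategies can be sketched with a toy, entirely hypothetical source instruction set: interpretation decodes every instruction on every run, while binary translation pays a one-time cost to translate a block (here, into a generated Python function) and then reuses the cached result.

source_block = [("ADD", 5), ("ADD", 3), ("MUL", 2)]

def interpret(block, acc=0):
    # Interpretation: decode and emulate every instruction on each run.
    for op, arg in block:
        acc = acc + arg if op == "ADD" else acc * arg
    return acc

translation_cache = {}

def translate_and_run(block, acc=0):
    # Binary translation: translate the block once, cache it, reuse it.
    key = tuple(block)
    if key not in translation_cache:
        lines = [f"    acc = acc {'+' if op == 'ADD' else '*'} {arg}"
                 for op, arg in block]
        code = "def native(acc):\n" + "\n".join(lines) + "\n    return acc"
        namespace = {}
        exec(code, namespace)               # one-time translation cost
        translation_cache[key] = namespace["native"]
    return translation_cache[key](acc)      # cached, direct execution

print(interpret(source_block))          # 16
print(translate_and_run(source_block))  # 16, translated only on first call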
• 77. Other types of virtualization Other than execution virtualization, other types of virtualization provide an abstract environment to interact with. These mainly cover storage, networking, and client/server interaction. 1 Storage virtualization Storage virtualization is a system administration practice that allows decoupling the physical organization of the hardware from its logical representation. Using this technique, users do not have to worry about the specific location of their data, which can be identified using a logical path. Storage virtualization allows us to harness a wide range of storage facilities and represent them under a single logical file system. There are different techniques for storage virtualization, one of the most popular being network-based virtualization by means of storage area networks (SANs). 2 Network virtualization Network virtualization combines hardware appliances and specific software for the creation and management of a virtual network. Network virtualization can aggregate different physical networks into a single logical network (external network virtualization) or provide network-like functionality to an operating system partition (internal network virtualization). The result of external network virtualization is generally a virtual LAN (VLAN). 77 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 78. 3 Desktop virtualization Desktop virtualization abstracts the desktop environment available on a personal computer in order to provide access to it using a client/server approach. Desktop virtualization provides the same outcome as hardware virtualization but serves a different purpose. Similarly to hardware virtualization, desktop virtualization makes accessible a different system as though it were natively installed on the host, but this system is remotely stored on a different host and accessed through a network connection. Moreover, desktop virtualization addresses the problem of making the same desktop environment accessible from everywhere. 4 Application server virtualization Application server virtualization abstracts a collection of application servers that provide the same services as a single virtual application server by using load-balancing strategies and providing a high-availability infrastructure for the services hosted in the application server. This is a particular form of virtualization and serves the same purpose as storage virtualization: providing a better quality of service rather than emulating a different environment. 78 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
  • 79. Virtualization and cloud computing • Virtualization plays an important role in cloud computing since it allows for the appropriate degree of customization, security, isolation, and manageability that are fundamental for delivering IT services on demand. • Particularly important is the role of virtual computing environment and execution virtualization techniques. Among these, hardware and programming language virtualization are the techniques adopted in cloud computing systems. • Besides being an enabler for computation on demand, virtualization also gives the opportunity to design more efficient computing systems by means of consolidation, which is performed transparently to cloud computing service users. 79 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 80. Fig.3.10. Live Migration and Server Consolidation: before migration, virtual machines run on top of the virtual machine manager on both Server A (running) and Server B (running); after migration, Server B’s virtual machines have been moved onto Server A (running) and Server B becomes inactive. 80 Dr B Loganayagi, Professor, Dept. of CSE, SEACET,Blr.
• 81.
• Since virtualization allows us to create isolated and controllable environments, it is possible to serve these environments with the same resource without their interfering with each other.
• This opportunity is particularly attractive when resources are underutilized, because it allows reducing the number of active resources by aggregating virtual machines onto a smaller number of resources that become fully utilized. This practice is known as server consolidation, while the movement of virtual machine instances is called virtual machine migration (see Figure 3.10). A consolidation sketch follows this slide.
• Because virtual machine instances are controllable environments, consolidation can be applied with minimal impact, either by temporarily stopping a virtual machine's execution and moving its data to the new resource or by exercising finer control and moving the instance while it is running.
• The second technique is known as live migration; it is in general more complex to implement but more efficient, since there is no disruption of the activity of the virtual machine instance.
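The sketch below illustrates consolidation with a naive first-fit decreasing bin-packing heuristic in Python: VMs with known loads are packed onto as few hosts as possible, and any host left without VMs can be powered down. This is only a conceptual sketch; real consolidation managers also weigh memory, migration cost, and SLAs.

```python
def consolidate(vm_loads, host_capacity):
    """First-fit decreasing packing of VM loads onto hosts.

    vm_loads: dict of VM name -> CPU load (fraction of one host).
    Returns a list of hosts, each a list of (vm, load) assignments.
    """
    hosts = []  # each host tracked as [remaining_capacity, assignments]
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for host in hosts:
            if host[0] >= load:            # fits on an existing host
                host[0] -= load
                host[1].append((vm, load))
                break
        else:                              # no fit: power on a new host
            hosts.append([host_capacity - load, [(vm, load)]])
    return [h[1] for h in hosts]

vms = {"vm1": 0.30, "vm2": 0.25, "vm3": 0.40, "vm4": 0.20, "vm5": 0.15}
placement = consolidate(vms, host_capacity=1.0)
for i, assignments in enumerate(placement, start=1):
    print(f"host-{i}: {assignments}")
# Five VMs fit on two hosts; the remaining hosts can be switched off.
```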
• 82. Pros and cons of virtualization
• Virtualization has become extremely popular and widely used, especially in cloud computing. Today, the widespread diffusion of Internet connectivity and the advancements in computing technology have made virtualization an attractive opportunity for delivering on-demand IT infrastructure and services.
Advantages of virtualization
1. Managed execution and isolation are perhaps the most important advantages of virtualization. In the case of techniques supporting the creation of virtualized execution environments, these two characteristics allow building secure and controllable computing environments. Since the guest program is executed in a virtual environment, there is very limited opportunity for it to damage the underlying hardware.
2. Portability is another advantage of virtualization, especially for execution virtualization techniques. Virtual machine instances are normally represented by one or more files, which can be transported far more easily than physical systems.
3. Portability and self-containment also contribute to reducing the costs of maintenance, since the number of hosts is expected to be lower than the number of virtual machine instances.
4. Finally, by means of virtualization it is possible to achieve a more efficient use of resources. Multiple systems can securely coexist and share the resources of the underlying host without interfering with each other.
• 83. The other side of the coin: disadvantages
1. Performance degradation
Performance is definitely one of the major concerns in using virtualization technology. Since virtualization interposes an abstraction layer between the guest and the host, the guest can experience increased latencies. For instance, in the case of hardware virtualization, where the intermediary emulates a bare machine on top of which an entire system can be installed, the causes of performance degradation can be traced back to the overhead introduced by the following activities:
• Maintaining the status of virtual processors
• Support of privileged instructions (trap and simulate privileged instructions)
• Support of paging within the VM
• Console functions
2. Inefficiency and degraded user experience
Virtualization can sometimes lead to an inefficient use of the host. In particular, some specific features of the host cannot be exposed by the abstraction layer and thus become inaccessible. In the case of hardware virtualization, this can happen with device drivers: the virtual machine may simply provide a default graphics card that maps only a subset of the features available in the host. In the case of programming-level virtual machines, some features of the underlying operating system may become inaccessible unless specific libraries are used.
3. Security holes and new threats
Virtualization opens the door to a new and unexpected form of phishing. The capability of emulating a host in a completely transparent manner has led the way to malicious programs designed to extract sensitive information from the guest. The same considerations apply to programming-level virtual machines: modified versions of the runtime environment can access sensitive information or monitor the memory locations used by guest applications while they execute.
• 84. Technology Examples: Xen (Paravirtualization)
• Xen is an open-source initiative implementing a virtualization platform based on paravirtualization. Initially developed by a group of researchers at the University of Cambridge in the United Kingdom, Xen now has a large open-source community backing it. Citrix also offers it as a commercial solution, XenSource.
• Xen-based technology is used for either desktop virtualization or server virtualization, and recently it has also been used to provide cloud computing solutions by means of the Xen Cloud Platform (XCP). At the basis of all these solutions is the Xen Hypervisor, which constitutes the core technology of Xen. Recently Xen has been extended to support full virtualization using hardware-assisted virtualization.
• Xen is the most popular implementation of paravirtualization, which, in contrast with full virtualization, allows high-performance execution of guest operating systems. This is made possible by eliminating the performance loss incurred while executing instructions that require special management. Xen achieves this by modifying the portions of the guest operating system concerned with the execution of such instructions. Therefore it is not a transparent solution for implementing virtualization. This is particularly relevant for x86, which is the most popular architecture on commodity machines and servers.
• 85. Fig. 3.11. Xen Architecture and Guest OS Management (the Xen hypervisor (VMM) runs in Ring 0 over x86 hardware, handling memory management, CPU state registers, and device I/O; the management domain, Domain 0, with VM management, an HTTP interface, and access to the hypervisor, and the user domains, Domain U, with modified guest OS kernels issuing hypercalls, run in Ring 1; user applications with an unmodified ABI run in Ring 3; privileged instructions cause hardware traps into the hypervisor).
• 86.
• Figure 3.11 describes the architecture of Xen and its mapping onto the classic x86 privilege model. A Xen-based system is managed by the Xen hypervisor, which runs in the highest privileged mode and controls the access of guest operating systems to the underlying hardware.
• Guest operating systems are executed within domains, which represent virtual machine instances. Moreover, specific control software, which has privileged access to the host and controls all the other guest operating systems, is executed in a special domain called Domain 0.
• Domain 0 is the first domain loaded once the virtual machine manager has completely booted, and it hosts a HyperText Transfer Protocol (HTTP) server that serves requests for virtual machine creation, configuration, and termination. This component constitutes the embryonic version of a distributed virtual machine manager, which is an essential component of cloud computing systems providing Infrastructure-as-a-Service (IaaS) solutions.
• 87.
• Many x86 implementations support four different security levels, called rings, where Ring 0 represents the level with the highest privileges and Ring 3 the level with the lowest.
• Almost all the most popular operating systems, except OS/2, use only two levels: Ring 0 for kernel code and Ring 3 for user applications and nonprivileged OS code. This gives Xen the opportunity to implement virtualization by executing the hypervisor in Ring 0, Domain 0 and all the other domains running guest operating systems (generally referred to as Domain U) in Ring 1, and user applications in Ring 3. This allows Xen to keep the ABI unchanged, making the switch to Xen-virtualized solutions easy from an application point of view.
• Because of the structure of the x86 instruction set, some instructions allow code executing in Ring 3 to jump into Ring 0 (kernel mode). Such an operation is performed at the hardware level and therefore, within a virtualized environment, results in a trap or silent fault, preventing the normal operation of the guest operating system, since it is now running in Ring 1. This condition is generally triggered by a subset of the system calls. To avoid this situation, the operating system's implementation needs to be changed, and the sensitive system calls need to be reimplemented with hypercalls; a conceptual sketch follows this slide.
• Paravirtualization requires the operating system codebase to be modified, and hence not all operating systems can be used as guests in a Xen-based environment.
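To make the hypercall idea concrete, here is a purely conceptual Python sketch, with no relation to Xen's real ABI (all names are invented). It contrasts a sensitive operation that faults when attempted from a deprivileged guest with the paravirtualized path, where the modified guest kernel explicitly calls into the hypervisor.

```python
class Hypervisor:
    """Toy model of a paravirtualizing hypervisor (conceptual only)."""

    def __init__(self):
        self.page_tables = {}

    def hypercall(self, guest, op, *args):
        # Validated entry point: the hypervisor performs the privileged
        # work on behalf of the guest, as Xen does for modified kernels.
        if op == "update_page_table":
            vaddr, frame = args
            self.page_tables[(guest, vaddr)] = frame
            return "ok"
        raise ValueError(f"unknown hypercall {op!r}")

class UnmodifiedGuest:
    def set_page_table_entry(self, vaddr, frame):
        # A sensitive instruction issued from Ring 1: on real hardware
        # this traps or fails silently instead of taking effect.
        raise PermissionError("sensitive instruction outside Ring 0")

class ParavirtualizedGuest:
    """Guest kernel modified to route sensitive operations via hypercalls."""

    def __init__(self, name, hypervisor):
        self.name, self.hv = name, hypervisor

    def set_page_table_entry(self, vaddr, frame):
        return self.hv.hypercall(self.name, "update_page_table", vaddr, frame)

hv = Hypervisor()
guest = ParavirtualizedGuest("domU-1", hv)
print(guest.set_page_table_entry(0x1000, 42))  # ok
```

The modification of `set_page_table_entry` is exactly why paravirtualization is not transparent: the guest kernel source must be changed to use the hypercall path.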
• 88. VMware: Full Virtualization
• VMware's technology is based on the concept of full virtualization, where the underlying hardware is replicated and made available to the guest operating system, which runs unaware of such abstraction layers and does not need to be modified.
• VMware implements full virtualization either in the desktop environment, by means of Type II hypervisors, or in the server environment, by means of Type I hypervisors.
• In both cases, full virtualization is made possible by means of direct execution (for nonsensitive instructions) and binary translation (for sensitive instructions), thus allowing the virtualization of architectures such as x86.
• Besides these two core solutions, VMware provides additional tools and software that simplify the use of virtualization technology, either in a desktop environment, with tools enhancing the integration of virtual guests with the host, or in a server environment, with solutions for building and managing virtual computing infrastructures.
• 89. Fig. 3.12. Full Virtualization Reference Model (a guest operating system with an unmodified codebase, unaware of the VMM, runs in Ring 1 over x86 hardware, with unmodified-ABI user applications in Ring 3; sensitive instructions cause hardware traps to the hypervisor, which applies binary translation and serves cached translations of sensitive instructions).
• 90.
• VMware is well known for its capability to virtualize x86 architectures, whose guests run unmodified on top of its hypervisors. With the new generation of hardware architectures and the introduction of hardware-assisted virtualization (Intel VT-x and AMD-V) in 2006, full virtualization became possible with hardware support, but before that date the use of dynamic binary translation was the only solution that allowed running x86 guest operating systems unmodified in a virtualized environment.
• As discussed before, the x86 architecture design does not satisfy the first theorem of virtualization, since the set of sensitive instructions is not a subset of the privileged instructions.
• This causes different behavior when such instructions are not executed in Ring 0, which is the normal case in a virtualization scenario, where the guest OS runs in Ring 1. Generally a trap is generated, and the way it is managed differentiates the solutions with which virtualization is implemented for x86 hardware.
• In the case of dynamic binary translation, the trap triggers the translation of the offending instructions into an equivalent set of instructions that achieves the same goal without generating exceptions. Moreover, to improve performance, the equivalent set of instructions is cached so that translation is no longer necessary for further occurrences of the same instructions. Figure 3.12 gives an idea of the process, and a sketch of the caching scheme follows this slide.
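The following hypothetical Python sketch mimics the translate-and-cache loop of dynamic binary translation: blocks containing "sensitive" instructions are rewritten into safe equivalents once, then served from a cache on later executions. The instruction names and the translation rule are invented for illustration; a real translator works on machine code, not strings.

```python
translation_cache = {}  # block id -> translated instruction sequence

def translate(block_id, instructions):
    """Rewrite sensitive instructions into safe equivalents, then cache."""
    if block_id in translation_cache:
        return translation_cache[block_id]          # cache hit: no retranslation
    translated = []
    for instr in instructions:
        if instr.startswith("SENSITIVE_"):
            # Replace with an emulation routine that achieves the same
            # effect without raising a hardware exception.
            translated.append(f"CALL emulate_{instr[10:].lower()}")
        else:
            translated.append(instr)                # direct execution path
    translation_cache[block_id] = translated
    return translated

block = ["MOV r1, r2", "SENSITIVE_POPF", "ADD r1, 4"]
print(translate("blk-7", block))   # translated and stored in the cache
print(translate("blk-7", block))   # second call served from the cache
```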
• 91.
• This approach has both advantages and disadvantages. The major advantage is that guests can run unmodified in a virtualized environment, which is a crucial feature for operating systems whose source code is not available.
• This is the case, for example, for operating systems in the Windows family. Binary translation is thus a more portable solution for full virtualization.
• On the other hand, translating instructions at runtime introduces additional overhead that is not present in other approaches (paravirtualization or hardware-assisted virtualization).
• Even though this disadvantage exists, binary translation is applied to only a subset of the instruction set, whereas the other instructions are managed through direct execution on the underlying hardware. This somewhat reduces the performance impact of binary translation.
• 92. Virtualization Solutions: End-User (Desktop) Virtualization. Fig. 3.13. VMware Workstation Architecture (the VMware Workstation application and the virtual machine instance, with its guest operating system and user applications, run on the host operating system over x86 hardware; a VMware driver in the host gives the VMware hypervisor (VMM) direct access to the hardware, handles I/O, memory, and networking for guests, and saves/restores CPU state for the host OS).
• 93.
• VMware is a pioneer in virtualization technology and offers a collection of virtualization solutions covering the entire range of the market, from desktop computing to enterprise computing and infrastructure virtualization.
End-user (desktop) virtualization
• VMware supports the virtualization of operating system environments and single applications on end-user computers. The first option is the most popular and allows installing different operating systems and applications in an environment completely isolated from the hosting operating system.
• Specific VMware software (VMware Workstation for Windows operating systems and VMware Fusion for Mac OS X environments) is installed in the host operating system to create virtual machines and manage their execution. Besides the creation of an isolated computing environment, the two products allow a guest operating system to leverage the resources of the host machine (USB devices, folder sharing, and integration with the graphical user interface (GUI) of the host operating system). Figure 3.13 provides an overview of the architecture of these systems. The virtualization environment is created by an application installed in the host operating system, which provides its guest operating systems with full virtualization of the underlying hardware.
• 94.
• This is done by installing a specific driver in the host operating system that provides two main services:
• It deploys a virtual machine manager that can run in privileged mode.
• It provides hooks for the VMware application to process specific I/O requests, eventually relaying such requests to the host operating system via system calls.
• Using this architecture, also called Hosted Virtual Machine Architecture, it is possible to both isolate virtual machine instances within the memory space of a single application and provide reasonable performance, since the intervention of the VMware application is required only for instructions, such as device I/O, that require binary translation. Instructions that can be directly executed are managed by the virtual machine manager, which takes control of the CPU and the MMU and alternates its activity with the host OS.
• Virtual machine images are saved in a collection of files on the host file system, and both VMware Workstation and VMware Fusion allow creating new images, pausing their execution, taking snapshots, and undoing operations by rolling back to a previous state of the virtual machine. Other solutions related to the virtualization of end-user computing environments include VMware Player, VMware ACE, and VMware ThinApp. VMware Player is a reduced version of VMware Workstation that allows creating and playing virtual machines in a Windows or Linux operating environment.
• VMware ACE, a product similar to VMware Workstation, creates policy-wrapped virtual machines for deploying secure corporate virtual environments on end-user computers. VMware ThinApp is a solution for application virtualization. It provides an isolated environment for applications in order to avoid conflicts due to versioning and incompatible applications. It detects all the changes made to the operating environment by the installation of a specific application and stores them, together with the application binary, in a package that can be run with VMware ThinApp.
• 95. Virtualization Solutions: Server Virtualization. Fig. 3.14. VMware GSX Server Architecture (a serverd daemon and a Web server run on the host operating system over x86 hardware and manage several VMware application processes, each attached to a virtual machine instance; the VMware driver and the VMware hypervisor (VMM) provide direct access to the hardware, I/O, memory, and networking for guests, and save/restore CPU state for the host OS).
• 96. Server virtualization
• VMware has provided solutions for server virtualization with different approaches over time. Initial support for server virtualization was provided by VMware GSX Server, which replicates the approach used for end-user computers and introduces remote management and scripting capabilities. The architecture of VMware GSX Server is depicted in Figure 3.14.
• The architecture is mostly designed to serve the virtualization of Web servers. A daemon process, called serverd, controls and manages VMware application processes. These applications are then connected to the virtual machine instances by means of the VMware driver installed on the host operating system.
• Virtual machine instances are managed by the VMM as described previously. User requests for virtual machine management and provisioning are routed from the Web server through the VMM by means of serverd. VMware ESX Server and its enhanced version, VMware ESXi Server, are examples of the hypervisor-based approach. Both can be installed on bare-metal servers and provide services for virtual machine management.
• The two solutions provide the same services but differ in their internal architecture, more specifically in the organization of the hypervisor kernel. VMware ESX embeds a modified version of a Linux operating system, which provides access to the hypervisor through a service console. VMware ESXi implements a very thin OS layer and replaces the service console with interfaces and services for remote management, thus considerably reducing the hypervisor code size and memory footprint.
• 97. Fig. 3.15. VMware ESXi Server Architecture (the VMkernel sits directly on the hardware and provides resource scheduling, device drivers, the storage and network stacks, the distributed VM file system, and the virtual Ethernet adapter and switch; agents such as hostd, the VMX/VMM pairs of each VM, the CIM broker with third-party CIM plug-ins, the DCUI, syslog, vpxa, and SNMP run on top through the User world API).
• 98.
• The architecture of VMware ESXi is displayed in Figure 3.15. The base of the infrastructure is the VMkernel, a thin, Portable Operating System Interface (POSIX)-compliant operating system that provides the minimal functionality for process and thread management, the file system, I/O stacks, and resource scheduling.
• The kernel is accessible through specific APIs called the User world API. These APIs are utilized by all the agents that provide supporting activities for the management of virtual machines.
• Remote management of an ESXi server is provided by the CIM Broker, a system agent that acts as a gateway to the VMkernel for clients using the Common Information Model (CIM) protocol. An ESXi installation can also be managed locally through the Direct Console User Interface (DCUI), which provides a BIOS-like interface for the management of local users.
• 100.
• VMware provides a set of products covering the entire stack of cloud computing, from infrastructure management to Software-as-a-Service solutions hosted in the cloud.
• Figure 3.16 gives an overview of the different solutions offered and how they relate to each other. ESX and ESXi constitute the building blocks of the solution for virtual infrastructure management:
• a pool of virtualized servers is tied together and remotely managed as a whole by VMware vSphere. As a virtualization platform, vSphere provides a set of basic services besides virtual compute services:
• the virtual file system, virtual storage, and virtual network constitute the core of the infrastructure; application services, such as virtual machine migration, storage migration, data recovery, and security zones, complete the services offered by vSphere.
• 101.
• The management of the infrastructure is operated by VMware vCenter, which provides centralized administration and management of vSphere installations in a data center environment.
• A collection of virtualized data centers is turned into an Infrastructure-as-a-Service cloud by VMware vCloud, which allows service providers to make virtual computing environments available to end users on demand on a pay-per-use basis.
• A Web portal provides access to the provisioning services of vCloud, and end users can self-provision virtual machines by choosing from available templates and setting up virtual networks among virtual instances.
• VMware also provides a solution for application development in the cloud with VMware vFabric, a set of components that facilitate the development of scalable Web applications on top of a virtualized infrastructure.
• vFabric is a collection of components for application monitoring, scalable data management, and scalable execution and provisioning of Java Web applications.
• Finally, at the top of the cloud computing stack, VMware provides Zimbra, a solution for office automation, messaging, and collaboration that is completely hosted in the cloud and accessible from anywhere.
• This is a SaaS solution that integrates various features into a single software platform providing email and collaboration management.
• 102. VMware: observations
• Initially starting with a solution for fully virtualized x86 hardware, VMware has grown over time and now provides a complete offering for virtualizing hardware, infrastructure, applications, and services, thus covering every segment of the cloud computing market.
• Even though full x86 virtualization is the core technology of VMware, over time paravirtualization features have been integrated into some of the solutions offered by the vendor, especially after the introduction of hardware-assisted virtualization.
• Examples include the implementation of some device emulations and the VMware Tools suite, which allows enhanced integration between the guest and the host operating environment.
• Also, VMware has strongly contributed to the development and standardization of a vendor-independent Virtual Machine Interface (VMI), which allows for a general and host-agnostic approach to paravirtualization.
• 103. Microsoft Hyper-V
• Hyper-V is an infrastructure virtualization solution developed by Microsoft for server virtualization. As the name suggests, it uses a hypervisor-based approach to hardware virtualization, which leverages several techniques to support a variety of guest operating systems. Hyper-V is currently shipped as a component of Windows Server 2008 R2 that installs the hypervisor as a role within the server.
Architecture
• Hyper-V supports the multiple, concurrent execution of guest operating systems by means of partitions. A partition is a completely isolated environment in which an operating system is installed and run. Figure 3.17 provides an overview of the architecture of Hyper-V. Despite its straightforward installation as a component of the host operating system, Hyper-V takes control of the hardware, and the host operating system becomes a virtual machine instance with special privileges, called the parent partition. The parent partition (also called the root partition) is the only one that has direct access to the hardware. It runs the virtualization stack, hosts all the drivers required to configure guest operating systems, and creates child partitions through the hypervisor. Child partitions are used to host guest operating systems and do not have access to the underlying hardware; their interaction with it is controlled by either the parent partition or the hypervisor itself.
• 104. Fig. 3.17. Microsoft Hyper-V Architecture (the hypervisor runs in Ring -1 over the x86 hardware, processors, and memory, exposing hypercalls, MSRs, the APIC, the scheduler, address management, and partition management; the root/parent partition, with a hypervisor-aware kernel in Ring 0, hosts the VMMS, VMWPs, the WMI provider, VSPs, the VID, WinHv, the I/O stack, drivers, and the VMBus; enlightened Windows and Linux child partitions use VSCs/ICs with WinHv or LinuxHv over the VMBus, while unenlightened child partitions run hypervisor-unaware kernels; user applications run in Ring 3).
• 105. Hypervisor
The hypervisor is the component that directly manages the underlying hardware (processors and memory). It is logically defined by the following components:
• Hypercalls interface. This is the entry point for all partitions for the execution of sensitive instructions. It is an implementation of the paravirtualization approach already discussed with Xen. This interface is used by drivers in the partitioned operating system to contact the hypervisor using the standard Windows calling convention. The parent partition also uses this interface to create child partitions.
• Memory service routines (MSRs). These are the set of functionalities that control the memory and its access from partitions. By leveraging hardware-assisted virtualization, the hypervisor uses the Input/Output Memory Management Unit (I/O MMU or IOMMU) to speed up access to devices from partitions by translating virtual memory addresses.
• Advanced programmable interrupt controller (APIC). This component represents the interrupt controller, which manages the signals coming from the underlying hardware when some event occurs (timer expiry, I/O readiness, exceptions, and traps). Each virtual processor is equipped with a synthetic interrupt controller (SynIC), which constitutes an extension of the local APIC. The hypervisor is responsible for dispatching, when appropriate, the physical interrupts to the synthetic interrupt controllers.
• 106.
• Scheduler. This component schedules the virtual processors to run on available physical processors. The scheduling is controlled by policies that are set by the parent partition (a toy sketch follows this slide).
• Address manager. This component is used to manage the virtual network addresses that are allocated to each guest operating system.
• Partition manager. This component is in charge of performing partition creation, finalization, destruction, enumeration, and configuration. Its services are available through the hypercalls interface API previously discussed.
• The hypervisor runs in Ring -1 and therefore requires corresponding hardware technology that enables such a condition. By executing in this highly privileged mode, the hypervisor can support legacy operating systems that have been designed for x86 hardware.
• Operating systems of newer generations can take advantage of the new specific architecture of Hyper-V, especially for the I/O operations performed by child partitions.
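As a toy illustration of the scheduler's job, the Python sketch below time-slices a set of virtual processors over fewer physical processors in round-robin order. Real hypervisor schedulers use far richer, policy-driven algorithms (weights, caps, affinity), and all names here are invented.

```python
from collections import deque

def schedule(vcpus, num_pcpus, timeslices):
    """Round-robin time-slicing of virtual CPUs onto physical CPUs."""
    runqueue = deque(vcpus)            # vCPUs waiting to run
    for t in range(timeslices):
        running = []
        for pcpu in range(min(num_pcpus, len(runqueue))):
            vcpu = runqueue.popleft()  # dispatch onto a free physical CPU
            running.append((pcpu, vcpu))
        print(f"slice {t}: " + ", ".join(f"pCPU{p}<-{v}" for p, v in running))
        for _, vcpu in running:
            runqueue.append(vcpu)      # preempt and requeue at the tail

# Six virtual processors share two physical processors.
schedule([f"vCPU-{i}" for i in range(6)], num_pcpus=2, timeslices=4)
```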
• 107. Enlightened I/O and synthetic devices
• Enlightened I/O provides an optimized way to perform I/O operations, allowing guest operating systems to leverage an inter-partition communication channel rather than traversing the hardware emulation stack provided by the hypervisor.
• This option is only available to guest operating systems that are hypervisor aware. Enlightened I/O leverages VMBus, an inter-partition communication channel used to exchange data between partitions (child and parent) and utilized mostly for the implementation of virtual device drivers for guest operating systems.
• The architecture of Enlightened I/O is described in Figure 3.17. There are three fundamental components: VMBus, Virtual Service Providers (VSPs), and Virtual Service Clients (VSCs). VMBus implements the channel and defines the protocol for communication between partitions. VSPs are kernel-level drivers deployed in the parent partition that provide access to the corresponding hardware devices. These interact with VSCs, which represent the virtual device drivers (also called synthetic drivers) seen by the guest operating systems in the child partitions. A sketch of this interaction follows this slide.
• Operating systems supported by Hyper-V use this preferred communication channel to perform I/O for the storage, networking, graphics, and input subsystems. This also results in enhanced performance for child-to-child I/O, as a result of virtual networks between guest operating systems. Legacy operating systems, which are not hypervisor aware, can still be run by Hyper-V but must rely on device-driver emulation, which is managed by the hypervisor and is less efficient.
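The following hedged Python sketch models the VSC-VMBus-VSP path conceptually: a synthetic driver in a child partition posts an I/O request on a shared channel, and a provider in the parent partition services it against the "real" device. The queue-based channel and every class name are illustrative only, not Hyper-V's actual interfaces.

```python
import queue

class VMBus:
    """Toy inter-partition channel: a pair of message queues."""
    def __init__(self):
        self.requests = queue.Queue()
        self.responses = queue.Queue()

class VSP:
    """Virtual Service Provider in the parent partition: owns the device."""
    def __init__(self, bus, disk):
        self.bus, self.disk = bus, disk

    def service_one(self):
        op, block, data = self.bus.requests.get()
        if op == "write":
            self.disk[block] = data
            self.bus.responses.put(("ok", None))
        elif op == "read":
            self.bus.responses.put(("ok", self.disk.get(block)))

class VSC:
    """Virtual Service Client: the synthetic driver seen by the guest."""
    def __init__(self, bus):
        self.bus = bus

    def write(self, block, data):
        self.bus.requests.put(("write", block, data))

    def read(self, block):
        self.bus.requests.put(("read", block, None))

bus, disk = VMBus(), {}
vsp, vsc = VSP(bus, disk), VSC(bus)
vsc.write(7, b"guest data")      # child partition posts a request
vsp.service_one()                # parent partition services it
print(bus.responses.get())       # ('ok', None)
vsc.read(7); vsp.service_one()
print(bus.responses.get())       # ('ok', b'guest data')
```

The key point the sketch captures is that the guest never touches the device: it only exchanges messages with the parent partition over the channel.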
• 108. Parent partition
• The parent partition executes the host operating system and implements the virtualization stack that complements the activity of the hypervisor in running guest operating systems. This partition always hosts an instance of Windows Server 2008 R2, which manages the virtualization stack made available to the child partitions.
• This partition is the only one that directly accesses device drivers; it mediates child partitions' access to them by hosting the VSPs. The parent partition also manages the creation, execution, and destruction of child partitions. It does so by means of the Virtualization Infrastructure Driver (VID), which controls access to the hypervisor and allows the management of virtual processors and memory.
• For each child partition created, a Virtual Machine Worker Process (VMWP) is instantiated in the parent partition; it manages the child partition by interacting with the hypervisor through the VID (a sketch of this pattern follows this slide).
• Virtual machine management services are also accessible remotely through a WMI provider that allows remote hosts to access the VID.
Child partitions
• Child partitions are used to execute guest operating systems. These are isolated environments that allow secure and controlled execution of guests. Two types of child partition exist; they differ in whether the guest operating system is supported by Hyper-V or not. They are called enlightened and unenlightened partitions, respectively. The first can benefit from Enlightened I/O; the others are executed by leveraging hardware emulation from the hypervisor.
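Here is a minimal, hypothetical Python sketch of the one-worker-per-partition pattern: a parent component instantiates a worker object for each child partition it creates and routes lifecycle requests through it. The `VID` stand-in and all method names are invented for illustration.

```python
class VID:
    """Stand-in for the Virtualization Infrastructure Driver (illustrative)."""
    def create_partition(self, name):
        print(f"VID: hypervisor creates partition {name!r}")
    def destroy_partition(self, name):
        print(f"VID: hypervisor destroys partition {name!r}")

class VMWorkerProcess:
    """One worker per child partition, mirroring one VMWP per VM."""
    def __init__(self, name, vid):
        self.name, self.vid = name, vid
        self.vid.create_partition(name)
    def stop(self):
        self.vid.destroy_partition(self.name)

class ParentPartition:
    def __init__(self):
        self.vid = VID()
        self.workers = {}
    def start_vm(self, name):
        self.workers[name] = VMWorkerProcess(name, self.vid)
    def stop_vm(self, name):
        self.workers.pop(name).stop()

parent = ParentPartition()
parent.start_vm("child-1")
parent.start_vm("child-2")
parent.stop_vm("child-1")
```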
• 109. Hyper-V: cloud computing and infrastructure management
• Hyper-V constitutes the basic building block of Microsoft's virtualization infrastructure. Other components contribute to creating a fully featured platform for server virtualization. To increase the performance of virtualized environments, a new version of Windows Server 2008, called Windows Server Core, has been released.
• This is a specific version of the operating system with a reduced set of features and a smaller footprint. In particular, Windows Server Core has been designed by removing features that are not required in a server environment, such as the GUI component and other bulky components such as the .NET Framework and all the applications developed on top of it (for example, PowerShell).
• This design decision has both advantages and disadvantages. On the plus side, it allows for reduced maintenance (i.e., fewer software patches), a reduced attack surface, reduced management, and less disk space. On the negative side, the embedded features are reduced. Still, there is the opportunity to leverage all the "removed" features by means of remote management from a fully featured Windows installation.
• 110.
• For instance, administrators can use PowerShell to remotely manage the Windows Server Core installation through WMI. Another component that provides advanced management of virtual machines is System Center Virtual Machine Manager (SCVMM) 2008. This is a component of the Microsoft System Center suite, which brings virtual infrastructure management capabilities into the suite from an IT life-cycle point of view. Essentially, SCVMM complements the basic features offered by Hyper-V with management capabilities, including:
• A management portal for the creation and management of virtual instances
• Virtual-to-Virtual (V2V) and Physical-to-Virtual (P2V) conversions
• Delegated administration
• Library functionality and deep PowerShell integration
• Intelligent placement of virtual machines in the managed environment
• Host capacity management
SCVMM has also been designed to work with other virtualization platforms, such as VMware vSphere (ESX servers), but it benefits most from the virtual infrastructure management implemented with Hyper-V.
• 111. Hyper-V: observations
• Compared with Xen and VMware, Hyper-V is a hybrid solution, because it leverages both paravirtualization techniques and full hardware virtualization. The basic architecture of the hypervisor is paravirtualized.
• The hypervisor exposes its services to the guest operating systems by means of hypercalls, and paravirtualized kernels can leverage VMBus for fast I/O operations. Moreover, partitions are conceptually similar to domains in Xen: the parent partition maps to Domain 0, while child partitions map to Domain U.
• A difference is that the Xen hypervisor is installed on bare hardware and filters all access to the underlying hardware, whereas Hyper-V is installed as a role in the existing operating system, and the way it interacts with partitions is quite similar to the strategy implemented by VMware, as discussed. The approach adopted by Hyper-V has both advantages and disadvantages.
• The advantages reside in a flexible virtualization platform supporting a wide range of guest operating systems. The disadvantages are represented by both hardware and software requirements: Hyper-V is compatible only with Windows Server 2008 and newer Windows Server platforms running on an x64 architecture.
• Moreover, it requires a 64-bit processor supporting hardware-assisted virtualization and data execution prevention. Finally, as noted above, Hyper-V is a role that can be installed on an existing operating system, while vSphere and Xen can be installed on bare hardware.
• 112. References
• Rajkumar Buyya, Christian Vecchiola, and Thamarai Selvi, Mastering Cloud Computing, McGraw Hill, ISBN-13: 978-1-259-02995-0, New Delhi, India, 2013.
• Rajkumar Buyya, Christian Vecchiola, and Thamarai Selvi, Mastering Cloud Computing, Morgan Kaufmann, ISBN: 978-0-12-411454-8, Burlington, Massachusetts, USA, May 2013.