Cloud Computing BCS601 Notes of Visvesvaraya University
BCS601 Cloud Computing Module-1
Dr. Sampada K S, Associate professor CSE RNSIT pg. 1
Module-1
Distributed System Models and Enabling Technologies:
Scalable Computing Over the Internet, Technologies for Network Based Systems, System Models for
Distributed and Cloud Computing, Software Environments for Distributed Systems and Clouds,
Performance, Security and Energy Efficiency.
Textbook 1: Chapter 1: 1.1 to 1.5
1. Explain the evolution of computing technology and its shift from centralized computing
to parallel and distributed systems.
Computing technology has progressed through five generations, each lasting 10-20 years, with
overlapping transitions. The evolution reflects a shift from centralized computing to parallel,
distributed, and cloud-based models that support modern computing needs.
Evolution of Computing Technology
• 1950-1970: Mainframe computers (e.g., IBM 360, CDC 6400) were designed for large-scale
enterprises and government institutions.
• 1960-1980: Minicomputers (e.g., DEC PDP 11, VAX Series) emerged, providing cost-effective
solutions for small businesses and academic institutions.
• 1970-1990: The advent of Personal Computers (PCs) powered by Very Large-Scale Integration
(VLSI) microprocessors enabled individual computing.
• 1980-2000: The proliferation of portable devices (e.g., laptops, mobile phones) introduced
computing in wired and wireless environments.
• 1990-Present: High-Performance Computing (HPC) and High-Throughput Computing (HTC)
evolved, leading to the rise of clusters, grids, and cloud computing for scalable and efficient
resource utilization.
Transition from Centralized to Distributed Computing
• The shift to distributed computing introduced parallel processing and networked architectures
for better performance.
• On the High Performance Computing (HPC) side, Cluster Computing replaced Massively
Parallel Processors (MPPs) by enabling cooperation among multiple homogeneous computing
nodes.
• High-Throughput Computing (HTC) was developed for data-intensive applications, such as
web searches and large-scale data analytics. Peer-to-Peer (P2P) Networks are formed for
distributed file sharing and content delivery applications.
• Peer-to-Peer (P2P) Networks, Grid Computing, and Virtualization paved the way for modern
cloud computing, which offers on-demand, scalable, and service-oriented architectures (SOA).
Emerging Computing Paradigms
• Service-Oriented Architecture (SOA): Enables the development of modular, web-based
services and distributed applications.
• Cloud Computing: Provides virtualized, scalable, and on-demand computing resources over
the Internet.
• Internet of Things (IoT): Connects physical devices (e.g., sensors, smart appliances) to cloud
platforms for real-time data processing and automation.
2. Differentiate between High-Performance Computing (HPC) and High-Throughput
Computing (HTC) with suitable examples.
HPC is best suited for problems requiring intensive computation and real-time processing, such
as climate modeling and physics simulations.
HTC is optimized for processing many independent tasks over time, such as web indexing and
data analytics.
| Aspect | High-Performance Computing (HPC) | High-Throughput Computing (HTC) |
|---|---|---|
| Definition | Solving complex computational problems that require high processing power in a short period. | Processing a large number of independent tasks over a long period. |
| Objective | Maximize computational speed and efficiency. | Maximize task throughput (number of tasks completed per unit time). |
| Typical workload | Tightly coupled parallel tasks that require communication between processors. | Loosely coupled tasks that run independently with minimal inter-process communication. |
| Processing model | Parallel computing on supercomputers for large-scale simulations. | Distributed computing with clusters and grid systems for numerous independent tasks. |
| Computational model | Tasks execute in parallel, with multiple processors working on the same problem. | Many small tasks execute in a queue-like manner across distributed resources. |
| Computing resources | Supercomputers, clusters, and GPUs for fast execution. | Grid computing, cloud platforms, and distributed clusters for large-scale data processing. |
| Communication dependency | Requires low-latency, high-bandwidth interconnects between computing nodes. | Independent tasks require little or no communication between nodes. |
| Performance metric | Floating-point operations per second (FLOPS). | Tasks completed per second or per hour. |
| Hardware used | Supercomputers (e.g., IBM Summit, Fugaku), high-speed interconnects (InfiniBand), specialized processors (GPUs, TPUs). | Large-scale distributed clusters (e.g., Google data centers, Amazon AWS), cloud computing infrastructure. |
| Example applications | Weather forecasting (climate modeling), molecular dynamics (protein folding simulations), scientific simulations (e.g., CERN physics experiments). | Web search engines (e.g., indexing search results), Big Data analytics (e.g., social media trends), large-scale genomic sequencing. |
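The HTC side of this comparison can be sketched in a few lines: a pool of workers drains a set of independent tasks, and the performance metric is tasks completed per unit time rather than FLOPS. This is an illustrative Python sketch using only the standard library; the task body is a stand-in for real work.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def independent_task(n):
    """A stand-in for one HTC work unit (no communication with other tasks)."""
    return sum(i * i for i in range(n))

def run_batch(task_sizes, workers=8):
    """Run many independent tasks and report results plus throughput (tasks/sec)."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(independent_task, task_sizes))
    elapsed = time.perf_counter() - start
    return results, len(task_sizes) / elapsed

# 100 independent jobs; throughput, not per-job speed, is what HTC optimizes.
results, throughput = run_batch([1000] * 100)
```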
3. Describe the major computing paradigms: Centralized Computing, Parallel Computing,
Distributed Computing, and Cloud Computing.
Computing Paradigm Distinctions
Centralized computing
A computing paradigm where all computer resources are centralized in a single physical
system. In this setup, processors, memory, and storage are fully shared and tightly integrated
within one operating system. Many data centers and supercomputers operate as centralized
systems, but they are also utilized in parallel, distributed, and cloud computing applications.
Parallel computing
In parallel computing, processors are either tightly coupled with shared memory or loosely
coupled with distributed memory. Communication occurs through shared memory or message
passing. A system that performs parallel computing is a parallel computer, and the programs
running on it are called parallel programs. Writing these programs is referred to as parallel
programming.
Distributed computing studies distributed systems, which consist of multiple autonomous
computers with private memory communicating through a network via message passing.
Programs running in such systems are called distributed programs, and writing them is known
as distributed programming.
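The defining trait above — autonomous nodes with private memory that interact only by message passing — can be sketched with two threads standing in for networked machines. The queues play the role of the network link; a real distributed program would use sockets or MPI instead.

```python
import threading
import queue

def worker_node(inbox, outbox):
    """An autonomous 'node': private state, communicates only via messages."""
    private_total = 0          # private memory, never shared with the other node
    while True:
        msg = inbox.get()
        if msg == "stop":
            outbox.put(("result", private_total))
            break
        private_total += msg   # process the message locally

# Queues stand in for the network between two machines.
to_worker, from_worker = queue.Queue(), queue.Queue()
node = threading.Thread(target=worker_node, args=(to_worker, from_worker))
node.start()

for value in [1, 2, 3, 4]:
    to_worker.put(value)       # send messages to the remote node
to_worker.put("stop")
node.join()
tag, total = from_worker.get() # receive the reply message
```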
Cloud computing refers to a system of Internet-based resources that can be either centralized
or distributed. It uses parallel, distributed computing, or both, and can be established with
physical or virtualized resources over large data centers. Some regard cloud computing as a
form of utility computing or service computing. Alternatively, terms such as concurrent
computing or concurrent programming are used within the high-tech community, typically
referring to the combination of parallel and distributed computing, although interpretations
may vary among practitioners.
4. Explain the key trends in scalable computing and how they influence parallelism in
modern architectures.
Multicore CPUs, Multithreading Technologies, and GPU Computing
Multicore and Manycore Architectures
• Processors now feature multiple cores (multicore) or hundreds of cores (manycore) to
improve parallelism.
o Enables thread-level parallelism (TLP) by executing multiple threads
simultaneously.
o Reduces power consumption compared to single-core processors with high
clock speeds.
o Improves performance for data-intensive workloads.
Simultaneous Multithreading (SMT)
Modern CPUs implement SMT, allowing multiple threads to execute on the same core,
improving resource utilization.
o Increases instruction-level parallelism (ILP) by keeping execution units busy.
o Improves throughput in workloads that have frequent context switches.
o Used in cloud computing, high-performance computing (HPC), and
database processing.
o Multithreading optimizes workload distribution, benefiting applications such as
database systems, web servers, etc.
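SMT itself is a hardware feature, but the throughput effect it targets — overlapping one thread's stalls with another thread's progress — can be mimicked in software. In this illustrative Python sketch, `time.sleep` stands in for a stall; running the waits concurrently finishes sooner than running them back to back.

```python
import threading
import time

def stalled_task():
    time.sleep(0.05)   # stand-in for a memory stall or I/O wait

def run_sequential(n):
    """One 'thread': each stall leaves the execution unit idle."""
    start = time.perf_counter()
    for _ in range(n):
        stalled_task()
    return time.perf_counter() - start

def run_concurrent(n):
    """Multiple threads: while one waits, another makes progress."""
    start = time.perf_counter()
    threads = [threading.Thread(target=stalled_task) for _ in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start

seq = run_sequential(4)   # roughly 4 x 0.05s of idle waiting
conc = run_concurrent(4)  # the four waits overlap
```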
GPU Acceleration and Heterogeneous Computing
General-purpose computing on GPUs (GPGPU) has become standard for accelerating
scientific simulations, and video processing.
o Massive parallelism with thousands of GPU cores executing many small tasks
simultaneously.
o Supports data parallelism, where the same operation is applied to multiple data points
(SIMD model).
o Heterogeneous computing integrates CPUs, GPUs, and FPGAs for optimized
performance.
5. Discuss different degrees of parallelism with examples.
Parallelism in computing has evolved from:
• Bit-Level Parallelism (BLP) – Transition from serial to word-level processing.
• Instruction-Level Parallelism (ILP) – Executing multiple instructions simultaneously
(pipelining, superscalar computing, VLIW architecture & multithreading).
• Data-Level Parallelism (DLP) – was made popular through SIMD (Single Instruction,
Multiple Data) architectures and vector machines using vector or array type of instructions.
• Task-Level Parallelism (TLP) – Parallel execution of independent tasks on multicore
processors.
• Job-Level Parallelism (JLP) – Large-scale distributed job execution in cloud computing.
Coarse-grained parallelism builds on fine-grained parallelism, ensuring scalability in HPC and HTC
systems.
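Two of these degrees can be made concrete in Python. The first pattern below applies one operation uniformly across a data set (the shape SIMD/vector hardware exploits for DLP); the second runs independent tasks concurrently (TLP). This is an illustrative sketch of the patterns, not actual vector hardware.

```python
from concurrent.futures import ThreadPoolExecutor

data = [1, 2, 3, 4, 5, 6, 7, 8]

# Data-level parallelism (DLP): the same operation applied to every
# element -- the pattern SIMD/vector instructions execute in lockstep.
squared = [x * x for x in data]

# Task-level parallelism (TLP): independent tasks run concurrently,
# each on its own chunk of work, as on separate cores.
def chunk_sum(chunk):
    return sum(chunk)

with ThreadPoolExecutor(max_workers=2) as pool:
    partials = list(pool.map(chunk_sum, [data[:4], data[4:]]))
total = sum(partials)
```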
6. Describe various technologies that enable network-based systems in cloud computing.
The principal technologies that enable network-based systems and massive parallelism in
distributed environments are:
a) Advancement in processor and network technologies
b) Network for Storage devices
c) Virtualization for resource utilization
Advancement in processor and network technologies: Multicore CPUs, Multithreading
Technologies, and GPU Computing collectively enhance processing efficiency by leveraging
parallelism. These technologies are crucial for high-performance applications.
1. Multicore CPUs
o Traditional single-core processors faced performance limitations due to power
and heat constraints.
o Multicore architectures improve performance by integrating multiple
processing units on a single chip, enabling parallel execution of tasks.
o These CPUs are widely used in high-performance computing (HPC), cloud
computing, and real-time applications.
2. Multithreading Technologies
o Simultaneous Multithreading (SMT) allows multiple threads to execute on the
same core, increasing CPU utilization.
o Technologies like Hyper-Threading (HT) in Intel processors improve efficiency
by reducing idle CPU cycles.
o Multithreading optimizes workload distribution, benefiting applications such as
database systems, web servers.
3. GPU Computing
o Graphics Processing Units (GPUs) are designed for highly parallel tasks,
making them ideal for scientific simulations, deep learning, and big data
analytics.
o Unlike CPUs, GPUs contain thousands of cores optimized for vectorized
operations and matrix computations.
o CUDA (NVIDIA) and OpenCL enable developers to harness GPU power for
general-purpose computing (GPGPU).
Network for Storage devices: Advancements in memory, storage, and networking are critical
for high-performance computing (HPC) and cloud infrastructure.
Memory, Storage, and Wide-Area Networking
1. Memory Technology
o The gap between processor speed and memory access time continues to widen,
leading to the memory wall problem affecting overall system performance.
2. Disks and Storage Technology
o Hard drive capacity has grown significantly, from 260 MB in 1981 to 3 TB in
2011, with even greater increases expected.
o Solid-State Drives (SSDs) offer faster access speeds and durability, but cost
remains a limiting factor.
o Power consumption, cooling, and packaging constraints influence future storage
system designs.
3. System-Area Interconnects
o Clusters use Ethernet, Storage Area Networks (SAN), and Network Attached
Storage (NAS) to link servers and clients.
o High-speed Gigabit Ethernet and InfiniBand dominate interconnects in HPC
environments.
4. Wide-Area Networking
o Ethernet bandwidth has grown from 10 Mbps in 1979 to 100 Gbps in 2011, with
projections reaching 1 Tbps in the near future.
o Network speeds improve faster than Moore’s Law, enabling large-scale
distributed and cloud computing.
o High-bandwidth networks enhance the scalability and efficiency of massively
distributed systems.
Virtualization for resource utilization : Virtualization is a key enabler of cloud computing and
modern distributed systems. It enhances resource efficiency, scalability, and flexibility,
allowing better management of large-scale infrastructures while reducing costs and improving
system reliability.
1. Virtual Machines (VMs) and Virtualization
o Traditional computing tightly couples applications with specific hardware,
limiting flexibility.
o Virtual machines (VMs) allow applications to run on different hardware
platforms by abstracting resources like processors, memory, and storage.
o A Virtual Machine Monitor (VMM), also called a hypervisor, manages VM
operations and resource allocation.
2. Types of Virtualization Architectures
o Native/Bare-Metal Virtualization: The hypervisor runs directly on hardware
(e.g., Xen).
o Hosted Virtualization: The hypervisor runs on top of a host OS (e.g., VMware
Workstation).
o Hybrid Virtualization: A combination of both approaches, requiring some OS
modifications.
3. VM Operations and Benefits
o VMs can be migrated, suspended, resumed, or multiplexed across different
physical machines.
o Improves resource utilization, application portability, and server efficiency.
o Reduces hardware dependency and enables server consolidation, increasing
utilization from 5-15% to 60-80%.
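The consolidation gain above translates into simple capacity arithmetic. The sketch below uses illustrative numbers consistent with the 5-15% and 60-80% utilization ranges in the text:

```python
import math

def servers_after_consolidation(workloads, avg_util, target_util):
    """Physical hosts needed when workloads averaging avg_util utilization
    are packed as VMs onto hosts driven to target_util utilization."""
    return math.ceil(workloads * avg_util / target_util)

before = 100  # 100 dedicated servers, each only ~10% busy
after = servers_after_consolidation(100, 0.10, 0.70)  # hosts run at ~70%
saved = before - after
```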
4. Virtual Infrastructure
o Separates physical resources (compute, storage, networking) from applications,
improving scalability, cost efficiency, and manageability.
o Supports cloud computing, clusters, and grid computing, enabling dynamic
resource allocation.
5. Data Center Virtualization for Cloud Computing
o Cloud data centers use commodity hardware, low-cost storage, and energy-
efficient networking.
o Virtualization optimizes server utilization, reduces operational costs, and
enhances flexibility.
7. Discuss the major challenges in network-based systems and their impact on distributed
computing.
Network-based systems underpin distributed computing but face several key challenges:
1. Memory and Storage Bottlenecks – The gap between processor speed and memory
performance (memory wall) slows data access, necessitating caching and efficient
memory management.
2. Network Latency and Bandwidth Constraints – High-speed networks improve data
transfer, but latency issues still impact real-time applications and scalability.
3. Interconnect Efficiency – Inefficient system-area interconnects cause bottlenecks in
data movement, requiring optimized networking solutions like InfiniBand.
4. Security and Virtualization Risks – Virtual machines introduce vulnerabilities that can
compromise distributed environments, demanding advanced security measures.
5. Scalability and Resource Management – Efficient allocation of compute, storage, and
networking resources is crucial for balancing workloads and preventing inefficiencies.
6. Energy Consumption – Rising power costs in data centers drive the need for energy-
efficient architectures and sustainability strategies.
8. Explain the concept of virtual machines (VMs) and describe different VM architectures
such as Native VM, Host VM, and Hybrid VM.
Virtual Machine Architectures
1. Native VM (Hypervisor-based) – Direct hardware access via bare-metal hypervisors (e.g.,
VMware ESXi, Xen).
Native VMs, also known as bare-metal virtualization, directly run on physical hardware
without requiring a host operating system. These VMs rely on a hypervisor (or Virtual
Machine Monitor, VMM) to manage multiple virtual instances running on a single hardware
platform.
• Runs directly on the physical machine (bare-metal).
• The hypervisor is responsible for allocating resources (CPU, memory, I/O) to virtual
machines.
• Provides high performance and low overhead since it bypasses the host OS.
• Ensures strong isolation between VMs.
2. Host VM (Software-based) – Runs as an application on a host OS (e.g., VirtualBox, VMware
Workstation).
A hosted virtual machine runs as an application within an existing operating system, relying
on a host OS to provide access to hardware resources. These VMs are managed using
software-based virtualization platforms.
• Runs on top of a host operating system.
• Uses software-based virtualization techniques (binary translation, dynamic
recompilation).
• Has higher overhead compared to native VMs.
• Provides greater flexibility since it can run on general-purpose systems.
3. Hybrid VM – Uses a combination of user-mode and privileged-mode virtualization.
Hybrid VMs combine features of both native and hosted virtualization. They partially
virtualize hardware by running some components in user mode and others in privileged
mode. This architecture optimizes performance by reducing overhead while maintaining
flexibility and ease of management.
• Uses both hardware-assisted and software virtualization techniques.
• The hypervisor runs at the kernel level, but some functions rely on the host OS.
• Balances performance and flexibility for different workloads.
9. Explain the different system models used in distributed and cloud computing.
OR
Compare and contrast Clusters, Grids, Peer-to-Peer (P2P) Networks, and Cloud
Computing.
Distributed and cloud computing systems are built using large-scale, interconnected autonomous
computer nodes. These nodes are linked through Storage Area Networks (SANs), Local Area
Networks (LANs), or Wide Area Networks (WANs) in a hierarchical manner. These massive
systems are classified into four groups: Clusters, P2P networks, Computing grids, and Internet clouds.
• Clusters: Connected by LAN switches, forming tightly coupled systems with hundreds of
machines.
• P2P Networks: Form decentralized, cooperative networks with millions of nodes, used in
file sharing and content distribution.
• Grids: Interconnect multiple clusters via WANs, allowing resource sharing across
thousands of computers.
• Internet Clouds: Operate over massive data centers, delivering on-demand computing
resources at a global scale.
These systems exhibit high scalability, enabling web-scale computing with millions of
interconnected nodes. Their technical and application characteristics vary based on factors such as
resource sharing, control mechanisms, and workload distribution.
| Functionality, Applications | Computer Clusters [10,28,38] | Peer-to-Peer Networks [34,46] | Data/Computational Grids [6,18,51] | Cloud Platforms [1,9,11,12,30] |
|---|---|---|---|---|
| Architecture, network connectivity, and size | Network of compute nodes interconnected by SAN, LAN, or WAN hierarchically | Flexible network of client machines logically connected by an overlay network | Heterogeneous clusters interconnected by high-speed network links over selected resource sites | Virtualized cluster of servers over data centers via SLA |
| Control and resources management | Homogeneous nodes with distributed control, running UNIX or Linux | Autonomous client nodes, free in and out, with self-organization | Centralized control, server-oriented with authenticated security | Dynamic resource provisioning of servers, storage, and networks |
| Applications and network-centric services | High-performance computing, search engines, and web services, etc. | Most appealing to business file sharing, content delivery, and social networking | Distributed supercomputing, global problem solving, and data center services | Upgraded web search, utility computing, and outsourced computing services |
| Representative operational systems | Google search engine, SunBlade, IBM Road Runner, Cray XT4, etc. | Gnutella, eMule, BitTorrent, Napster, KaZaA, Skype, JXTA | TeraGrid, GriPhyN, UK EGEE, D-Grid, ChinaGrid, etc. | Google App Engine, IBM BlueCloud, AWS, and Microsoft Azure |
10. Explain layered architecture for web services and grids
The layered architecture for web services and grids integrates various technologies to enable
efficient communication, fault tolerance, security, and management in distributed systems.
1. Entity Interfaces: Defined using WSDL, Java methods, and CORBA IDL, enabling
interoperability across different distributed systems.
2. Communication Systems: Includes SOAP, RMI, and IIOP, supporting message patterns
(RPC), fault recovery, and routing via middleware like WebSphere MQ and JMS.
3. Fault Tolerance & Security: Implements WSRM for message reliability, similar to TCP,
and security frameworks such as IPsec and SSL.
4. Service Discovery & Management: Uses UDDI, LDAP, ebXML, and CORBA Trading
Service for entity discovery, while management services handle service lifecycle and
persistence.
5. Performance & Distributed Model: The shared memory model offers better information
exchange, while the distributed model provides higher performance, modularity, and
software reuse.
Modern systems favor SOAP, XML, and REST-based architectures over older CORBA
and Java models, making distributed computing more scalable and efficient.
11. How does a software environment impact distributed cloud computing system?
Software environments define how applications, services, and data interact within grids, clouds,
and P2P networks. The various environments impacting distributed and cloud computing systems
are Service-Oriented Architecture (SOA), Distributed Operating Systems, Programming models.
Service-Oriented Architecture (SOA)
SOA is a fundamental architectural approach that enables modular, reusable, and networked software
components to communicate seamlessly. It is widely used in web services, grid computing, and cloud
computing for efficient service integration.
Layered Architecture for Web Services and Grids: SOA extends the OSI model by adding additional
layers for service interfaces, workflows, and management, ensuring smooth service interactions.
Communication Standards: SOA leverages multiple communication protocols, including:
• SOAP (Simple Object Access Protocol) – Used in web services to exchange structured data.
• RMI (Remote Method Invocation) – Java-based method for remote procedure calls.
• IIOP (Internet Inter-ORB Protocol) – Used in CORBA-based distributed systems.
Middleware tools like WebSphere MQ and Java Message Service (JMS) manage messaging, security,
and fault tolerance.
Web Services and Tools: SOA implementation follows two primary approaches:
1. SOAP-based Web Services – Fully specified service definitions supporting standardized,
structured communication in enterprise applications.
2. REST (Representational State Transfer) – A lightweight alternative that provides scalable,
flexible web-based APIs for cloud applications.
RESTful services are preferred for fast-evolving environments, while SOAP is better for structured,
formal communication.
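The REST style shows up in how a request is formed: an HTTP verb applied to a resource URL, with the representation carried in the body. In the Python sketch below the endpoint is hypothetical, and the requests are only constructed, not sent.

```python
import json
import urllib.request

# Hypothetical cloud API base URL, for illustration only.
base = "https://api.example.com/v1"

# GET reads a resource identified by its URL.
get_req = urllib.request.Request(f"{base}/instances/42")

# POST creates a resource; the JSON body is its representation.
create_req = urllib.request.Request(
    f"{base}/instances",
    data=json.dumps({"image": "ubuntu", "cores": 2}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)
```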
Evolution of SOA: SOA has progressed from basic service integration to multi-layered computing
ecosystems, incorporating:
• Sensor Services (SS) – Collect raw data from ZigBee, Bluetooth, GPS, and WiFi.
• Filter Services (FS) – Process and refine data before storage or computation.
• Cloud Ecosystems – Compute, storage, and discovery clouds help manage large-scale
applications.
SOA enables data transformation from raw data → useful information → knowledge → intelligent
decisions.
Grids vs. Clouds: Grids use static resources, while clouds provide elastic, on-demand computing via
virtualization.
• Clouds focus on automation and scalability, whereas grids offer resource negotiation and
allocation.
• Hybrid models exist, including Clouds of Grids, Grids of Clouds, and Inter-Cloud
architectures.
Distributed Operating Systems
Traditional distributed systems operate independent OS instances on each node, while a distributed
OS manages all resources as a single system.
Distributed OS Approaches (Tanenbaum's Models)
1. Network OS – Basic resource sharing with low transparency.
2. Middleware-based OS – Supports limited resource sharing using middleware tools like
MOSIX for Linux clusters.
3. Truly Distributed OS – Provides a single-system image (SSI) for full resource transparency.
Transparency in programming environments
• Cloud computing separates user data, applications, OS, and hardware, allowing flexibility in
service deployment.
• Users can switch between OS platforms and cloud services without being locked into a
specific provider.
Programming Models: Parallel execution models help process large-scale workloads efficiently in
distributed systems.
| Model | Description | Key Features |
|---|---|---|
| MPI (Message-Passing Interface) | Standard for writing parallel applications on distributed systems | Explicit message passing between processes |
| MapReduce | Scalable data processing model for large clusters | Map generates key-value pairs; Reduce aggregates results |
| Hadoop | Open-source framework for big data processing | HDFS (distributed storage) + MapReduce computing |
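The MapReduce model above can be sketched in pure Python: the map phase emits key-value pairs, a shuffle groups them by key, and the reduce phase aggregates each group. This is a single-process sketch of the programming model, not Hadoop itself.

```python
from collections import defaultdict

def map_phase(doc):
    """Map: emit a (word, 1) pair for every word in one input split."""
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    """Shuffle: group all emitted values by key across map outputs."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the values for each key."""
    return {key: sum(values) for key, values in groups.items()}

docs = ["the cloud", "the grid and the cloud"]
pairs = [pair for doc in docs for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
```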
Grid Standards and Toolkits: Grid computing relies on standardized middleware to manage resource
sharing, security, and automation.
| Standard | Function | Key Features |
|---|---|---|
| OGSA (Open Grid Services Architecture) | Defines common grid services | Supports heterogeneous computing and security policies |
| Globus Toolkit (GT4) | Middleware for resource discovery and security | Uses PKI authentication, Kerberos, SSL, and delegation policies |
| IBM Grid Toolbox | Grid computing framework for AIX/Linux clusters | Supports autonomic computing and security management |
***************************************************************************
12. Discuss the role of Service-Oriented Architecture (SOA) in modern cloud-based systems.
13. Explain the advantages and challenges of utility computing and its impact on cloud
adoption.
Virtualization is a key enabler of cloud computing and modern distributed systems. It
enhances resource efficiency, scalability, and flexibility, allowing better management of
large-scale infrastructures while reducing costs and improving system reliability.
iol

More Related Content

PPTX
CC & Security for learners_Module 1.pptx
PPT
Presentation-1.ppt
PPTX
CLOUD COMPUTING UNIT-1
PPTX
unit 1.pptx
PDF
R15A0529_CloudComputing_Notes-converted.pdf
PDF
R21 Sasi Engineering College cloud-computing-notes.pdf
PPTX
cc_mod1.ppt useful for engineering students
PPTX
UNIT-1-PARADIGMS.pptx cloud computing cc
CC & Security for learners_Module 1.pptx
Presentation-1.ppt
CLOUD COMPUTING UNIT-1
unit 1.pptx
R15A0529_CloudComputing_Notes-converted.pdf
R21 Sasi Engineering College cloud-computing-notes.pdf
cc_mod1.ppt useful for engineering students
UNIT-1-PARADIGMS.pptx cloud computing cc

Similar to Cloud Computing BCS601 Notef of Viswesvaraya University (20)

PDF
UNIT I -Cloud Computing (1).pdf
PPT
Distributed_and_cloud_computing-unit-1.ppt
PPTX
Cloud Computing-UNIT 1 claud computing basics
PDF
CC LECTURE NOTES (1).pdf
PPTX
DistributedSystemModels - cloud computing and distributed system models
PDF
Unit i cloud computing
PPTX
CLOUD ENABLING TECHNOLOGIES.pptx
PPTX
(19-23)CC Unit-1 ppt.pptx
PPTX
CS8791 CLOUD COMPUTING_UNIT-I_FINAL_ppt (1).pptx
DOC
Gcc notes unit 1
PDF
Cloud Computing notes ccomputing paradigms UNIT 1.pdf
PPTX
Lecture 1 - Computing Paradigms and.pptx
PDF
IEEE Paper - A Study Of Cloud Computing Environments For High Performance App...
PDF
CloudComputing_UNIT1.pdf
PDF
CloudComputing_UNIT1.pdf
PPTX
Cloud computing is a paradigm for enabling network access to a scalable and e...
PDF
CLOUD COMPUTING Unit-I.pdf
PDF
Cs6703 grid and cloud computing book
PPTX
PDF
00 - BigData-Chapter_01-PDC.pdf
UNIT I -Cloud Computing (1).pdf
Distributed_and_cloud_computing-unit-1.ppt
Cloud Computing-UNIT 1 claud computing basics
CC LECTURE NOTES (1).pdf
DistributedSystemModels - cloud computing and distributed system models
Unit i cloud computing
CLOUD ENABLING TECHNOLOGIES.pptx
(19-23)CC Unit-1 ppt.pptx
CS8791 CLOUD COMPUTING_UNIT-I_FINAL_ppt (1).pptx
Gcc notes unit 1
Cloud Computing notes ccomputing paradigms UNIT 1.pdf
Lecture 1 - Computing Paradigms and.pptx
IEEE Paper - A Study Of Cloud Computing Environments For High Performance App...
CloudComputing_UNIT1.pdf
CloudComputing_UNIT1.pdf
Cloud computing is a paradigm for enabling network access to a scalable and e...
CLOUD COMPUTING Unit-I.pdf
Parallel Processors (MPPs) by enabling cooperation among multiple homogeneous computing nodes.
• High-Throughput Computing (HTC) was developed for data-intensive applications, such as web searches and large-scale data analytics. Peer-to-Peer (P2P) networks emerged for distributed file sharing and content-delivery applications.
• P2P networks, grid computing, and virtualization paved the way for modern cloud computing, which offers on-demand, scalable, service-oriented architectures (SOA).

Emerging Computing Paradigms
• Service-Oriented Architecture (SOA): Enables the development of modular, web-based services and distributed applications.
• Cloud Computing: Provides virtualized, scalable, and on-demand computing resources over the Internet.
• Internet of Things (IoT): Connects physical devices (e.g., sensors, smart appliances) to cloud platforms for real-time data processing and automation.
2. Differentiate between High-Performance Computing (HPC) and High-Throughput Computing (HTC) with suitable examples.

HPC is best suited for problems requiring intensive computation and real-time processing, such as climate modeling and physics simulations. HTC is optimized for processing many independent tasks over time, such as web indexing and data analytics.

| | High-Performance Computing (HPC) | High-Throughput Computing (HTC) |
|---|---|---|
| Definition | Focuses on solving complex computational problems that require high processing power in a short period. | Focuses on processing a large number of independent tasks over a long period. |
| Objective | Maximizing computational speed and efficiency. | Maximizing task throughput (number of tasks completed per unit time). |
| Typical Workload | Tightly coupled parallel tasks that require communication between processors. | Loosely coupled tasks that can run independently with minimal inter-process communication. |
| Processing Model | Uses parallel computing and supercomputers to process large-scale simulations. | Uses distributed computing with clusters and grid systems to process numerous independent tasks. |
| Computational Model | Executes tasks in parallel, with multiple processors working on the same problem. | Executes many small tasks in a queue-like manner with distributed resources. |
| Computing Resources | Requires supercomputers, clusters, and GPUs for fast execution. | Uses grid computing, cloud platforms, and distributed clusters for large-scale data processing. |
| Communication Dependency | Requires low-latency, high-bandwidth interconnects between computing nodes. | Independent tasks do not require communication between nodes. |
| Performance Metric | Measured in Floating Point Operations Per Second (FLOPS). | Measured in tasks completed per second or hour. |
| Hardware Used | Supercomputers (e.g., IBM Summit, Fugaku), high-speed interconnects (InfiniBand), specialized processors (GPUs, TPUs). | Large-scale distributed clusters (e.g., Google data centers, Amazon AWS), cloud computing infrastructure. |
| Example Applications | Weather forecasting (climate modeling); molecular dynamics (protein folding simulations); scientific simulations (e.g., physics experiments at CERN). | Web search engines (indexing search results); Big Data analytics (social media trends); genomic sequencing at large scale. |
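As an illustration (not from the textbook), the HTC column above describes many independent jobs with no inter-task communication. A minimal sketch of that workload shape, using a thread pool as a stand-in for a distributed cluster:

```python
# Illustrative HTC-style batch: loosely coupled tasks that never
# communicate with each other, so any free worker can run any task.
from concurrent.futures import ThreadPoolExecutor

def independent_task(n):
    # Stand-in for one HTC job (e.g., indexing one document).
    return n * n

def run_htc_batch(jobs):
    # Throughput-oriented scheduling: no ordering or messaging
    # constraints between tasks, only total tasks completed matters.
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(independent_task, jobs))

print(run_htc_batch(range(5)))  # [0, 1, 4, 9, 16]
```

An HPC workload, by contrast, would require the workers to exchange intermediate results, which is why it needs low-latency interconnects.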
3. Describe the major computing paradigms: Centralized Computing, Parallel Computing, Distributed Computing, and Cloud Computing.

Computing Paradigm Distinctions

Centralized computing: a paradigm in which all computer resources are centralized in a single physical system. Processors, memory, and storage are fully shared and tightly integrated within one operating system. Many data centers and supercomputers operate as centralized systems, but they are also used in parallel, distributed, and cloud computing applications.

Parallel computing: processors are either tightly coupled with shared memory or loosely coupled with distributed memory, and communicate through shared memory or message passing. A system that performs parallel computing is a parallel computer, the programs running on it are parallel programs, and writing them is parallel programming.

Distributed computing: studies distributed systems, which consist of multiple autonomous computers, each with its own private memory, communicating through a network via message passing. Programs running in such systems are distributed programs, and writing them is distributed programming.

Cloud computing: a system of Internet-based resources that can be either centralized or distributed. It uses parallel computing, distributed computing, or both, and can be built with physical or virtualized resources over large data centers. Some regard cloud computing as a form of utility computing or service computing.

Alternatively, the terms concurrent computing or concurrent programming are used within the high-tech community, typically referring to the combination of parallel and distributed computing, although interpretations vary among practitioners.
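A hedged sketch of the distributed-computing definition above (names are illustrative): autonomous nodes with private memory that cooperate only by message passing. Two OS processes joined by a pipe model this on a single machine:

```python
# Message passing between processes with *private* memory: the worker
# cannot read the parent's variables; it only receives and sends messages.
from multiprocessing import Process, Pipe

def worker(conn):
    # Receive a message, do local work, reply with a message.
    value = conn.recv()
    conn.send(value + 1)
    conn.close()

def remote_increment(value):
    parent_end, child_end = Pipe()
    p = Process(target=worker, args=(child_end,))
    p.start()
    parent_end.send(value)      # message out
    result = parent_end.recv()  # message back
    p.join()
    return result

if __name__ == "__main__":
    print(remote_increment(41))  # 42
```

In a real distributed system the pipe would be a network connection (e.g., sockets or MPI), but the programming model — explicit send and receive, no shared memory — is the same.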
4. Explain the key trends in scalable computing and how they influence parallelism in modern architectures.

Multicore CPUs, Multithreading Technologies, and GPU Computing

Multicore and Manycore Architectures
Processors now feature multiple cores (multicore) or hundreds of cores (manycore) to improve parallelism:
o Enable thread-level parallelism (TLP) by executing multiple threads simultaneously.
o Reduce power consumption compared to single-core processors running at high clock speeds.
o Improve performance for data-intensive workloads.

Simultaneous Multithreading (SMT)
Modern CPUs implement SMT, allowing multiple threads to execute on the same core and improving resource utilization:
o Increases instruction-level parallelism (ILP) by keeping execution units busy.
o Improves throughput in workloads with frequent context switches.
o Used in cloud computing, high-performance computing (HPC), and database processing.
o Optimizes workload distribution, benefiting applications such as database systems and web servers.

GPU Acceleration and Heterogeneous Computing
General-purpose computing on GPUs (GPGPU) has become standard for accelerating scientific simulations and video processing:
o Massive parallelism, with thousands of GPU cores executing many small tasks simultaneously.
o Supports data parallelism, where the same operation is applied to multiple data points (the SIMD model).
o Heterogeneous computing integrates CPUs, GPUs, and FPGAs for optimized performance.
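A rough analogy for the SMT point above (an assumption, not hardware-accurate): SMT keeps a core's execution units busy while one thread stalls. Python threads similarly overlap waiting time (a sleep stands in for a stall), improving throughput for stall-heavy workloads:

```python
# Thread-level parallelism sketch: each thread mostly "stalls" (sleeps),
# so running them concurrently overlaps the waits instead of serializing them.
import threading
import time

def stall_heavy_task(results, i):
    time.sleep(0.05)    # stand-in for a memory/I-O stall
    results[i] = i * 2  # the useful work

def run_threads(n):
    results = [None] * n
    threads = [threading.Thread(target=stall_heavy_task, args=(results, i))
               for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results

print(run_threads(4))  # [0, 2, 4, 6]
```

All four tasks finish in roughly one stall period instead of four, which is the throughput argument the notes make for SMT.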
5. Discuss different degrees of parallelism with examples.

Parallelism in computing has evolved through increasing granularity:
• Bit-Level Parallelism (BLP) – the transition from serial (bit-by-bit) to word-level processing.
• Instruction-Level Parallelism (ILP) – executing multiple instructions simultaneously (pipelining, superscalar execution, VLIW architectures, and multithreading).
• Data-Level Parallelism (DLP) – made popular by SIMD (Single Instruction, Multiple Data) architectures and vector machines using vector or array instructions.
• Task-Level Parallelism (TLP) – parallel execution of independent tasks on multicore processors.
• Job-Level Parallelism (JLP) – large-scale distributed job execution in cloud computing.

Coarse-grained parallelism builds on fine-grained parallelism, ensuring scalability in HPC and HTC systems.
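A minimal illustration of the DLP level above: one operation applied uniformly across whole vectors, which is what SIMD and vector machines do in hardware. This sequential sketch only models the idea (real vector units process all lanes in parallel):

```python
# One "vector instruction": the same operation (addition) applied
# elementwise across every lane of the input vectors.
def vector_add(a, b):
    return [x + y for x, y in zip(a, b)]

print(vector_add([1, 2, 3], [10, 20, 30]))  # [11, 22, 33]
```

BLP, ILP, TLP, and JLP differ only in the unit being parallelized — bits, instructions, tasks, or whole jobs — while the principle of doing many units at once is the same.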
6. Describe various technologies that enable network-based systems in cloud computing.

The main approaches for building network-based systems that handle massive parallelism in distributed environments are:
a) Advances in processor and network technologies
b) Networks for storage devices
c) Virtualization for resource utilization

Advances in processor and network technologies
Multicore CPUs, multithreading technologies, and GPU computing collectively enhance processing efficiency by exploiting parallelism. These technologies are crucial for high-performance applications.
1. Multicore CPUs
   o Traditional single-core processors faced performance limits due to power and heat constraints.
   o Multicore architectures improve performance by integrating multiple processing units on a single chip, enabling parallel execution of tasks.
   o These CPUs are widely used in high-performance computing (HPC), cloud computing, and real-time applications.
2. Multithreading Technologies
   o Simultaneous Multithreading (SMT) allows multiple threads to execute on the same core, increasing CPU utilization.
   o Technologies such as Hyper-Threading (HT) in Intel processors improve efficiency by reducing idle CPU cycles.
   o Multithreading optimizes workload distribution, benefiting applications such as database systems and web servers.
3. GPU Computing
   o Graphics Processing Units (GPUs) are designed for highly parallel tasks, making them ideal for scientific simulations, deep learning, and big data analytics.
   o Unlike CPUs, GPUs contain thousands of cores optimized for vectorized operations and matrix computations.
   o CUDA (NVIDIA) and OpenCL enable developers to harness GPU power for general-purpose computing (GPGPU).

Networks for storage devices
Advances in memory, storage, and networking are critical for high-performance computing (HPC) and cloud infrastructure.
1. Memory Technology
   o The gap between processor speed and memory access time continues to widen, producing the "memory wall" problem that limits overall system performance.
2. Disks and Storage Technology
   o Hard drive capacity has grown dramatically, from 260 MB in 1981 to 3 TB in 2011, with even greater increases expected.
   o Solid-State Drives (SSDs) offer faster access and better durability, but cost remains a limiting factor.
   o Power consumption, cooling, and packaging constraints shape future storage system designs.
3. System-Area Interconnects
   o Clusters use Ethernet, Storage Area Networks (SAN), and Network-Attached Storage (NAS) to link servers and clients.
   o High-speed Gigabit Ethernet and InfiniBand dominate interconnects in HPC environments.
4. Wide-Area Networking
   o Ethernet bandwidth has grown from 10 Mbps in 1979 to 100 Gbps in 2011, with projections reaching 1 Tbps.
   o Network speeds have improved faster than Moore's law, enabling large-scale distributed and cloud computing.
   o High-bandwidth networks enhance the scalability and efficiency of massively distributed systems.

Virtualization for resource utilization
Virtualization is a key enabler of cloud computing and modern distributed systems. It enhances resource efficiency, scalability, and flexibility, allowing better management of large-scale infrastructures while reducing costs and improving system reliability.
1. Virtual Machines (VMs) and Virtualization
   o Traditional computing tightly couples applications with specific hardware, limiting flexibility.
   o Virtual machines (VMs) allow applications to run on different hardware platforms by abstracting resources such as processors, memory, and storage.
   o A Virtual Machine Monitor (VMM), also called a hypervisor, manages VM operation and resource allocation.
2. Types of Virtualization Architectures
   o Native/bare-metal virtualization: the hypervisor runs directly on hardware (e.g., Xen).
   o Hosted virtualization: the hypervisor runs on top of a host OS (e.g., VMware Workstation).
   o Hybrid virtualization: a combination of both approaches, requiring some OS modifications.
3. VM Operations and Benefits
   o VMs can be migrated, suspended, resumed, or multiplexed across different physical machines.
   o Improve resource utilization, application portability, and server efficiency.
   o Reduce hardware dependency and enable server consolidation, raising utilization from 5-15% to 60-80%.
4. Virtual Infrastructure
   o Separates physical resources (compute, storage, networking) from applications, improving scalability, cost efficiency, and manageability.
   o Supports cloud computing, clusters, and grid computing, enabling dynamic resource allocation.
5. Data Center Virtualization for Cloud Computing
   o Cloud data centers use commodity hardware, low-cost storage, and energy-efficient networking.
   o Virtualization optimizes server utilization, reduces operational costs, and enhances flexibility.
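A small worked example of the consolidation figures above (the 5-15% and 60-80% utilization ranges are from the notes; the specific server counts are hypothetical): packing the same total load onto fewer, better-utilized virtualized hosts.

```python
# Server consolidation arithmetic: total delivered work is
# (number of servers x average utilization); pack that work into
# whole hosts each driven to a target utilization.
import math

def hosts_after_consolidation(n_servers, util_before, util_target):
    total_work = n_servers * util_before
    return math.ceil(total_work / util_target)

# 100 lightly loaded servers at 10% average utilization, consolidated
# onto virtualized hosts run at 70% utilization:
print(hosts_after_consolidation(100, 0.10, 0.70))  # 15
```

This back-of-the-envelope calculation is why virtualization cuts hardware, power, and cooling costs in cloud data centers.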
7. Discuss the major challenges in network-based systems and their impact on distributed computing.

Network-based systems underpin distributed computing but face several key challenges:
1. Memory and Storage Bottlenecks – the gap between processor speed and memory performance (the memory wall) slows data access, necessitating caching and efficient memory management.
2. Network Latency and Bandwidth Constraints – high-speed networks improve data transfer, but latency still limits real-time applications and scalability.
3. Interconnect Efficiency – inefficient system-area interconnects cause bottlenecks in data movement, requiring optimized networking solutions such as InfiniBand.
4. Security and Virtualization Risks – virtual machines introduce vulnerabilities that can compromise distributed environments, demanding advanced security measures.
5. Scalability and Resource Management – efficient allocation of compute, storage, and networking resources is crucial for balancing workloads and preventing inefficiencies.
6. Energy Consumption – rising power costs in data centers drive the need for energy-efficient architectures and sustainability strategies.
8. Explain the concept of virtual machines (VMs) and describe different VM architectures such as Native VM, Host VM, and Hybrid VM.

Virtual Machine Architectures

1. Native VM (hypervisor-based) – direct hardware access via a bare-metal hypervisor (e.g., VMware ESXi, Xen).
Native VMs, also known as bare-metal virtualization, run directly on physical hardware without requiring a host operating system. They rely on a hypervisor (Virtual Machine Monitor, VMM) to manage multiple virtual instances on a single hardware platform.
• Runs directly on the physical machine (bare metal).
• The hypervisor allocates resources (CPU, memory, I/O) to the virtual machines.
• Provides high performance and low overhead, since it bypasses a host OS.
• Ensures strong isolation between VMs.

2. Host VM (software-based) – runs as an application on a host OS (e.g., VirtualBox, VMware Workstation).
A hosted virtual machine runs as an application within an existing operating system, relying on the host OS for access to hardware resources. These VMs are managed by software-based virtualization platforms.
• Runs on top of a host operating system.
• Uses software-based virtualization techniques (binary translation, dynamic recompilation).
• Has higher overhead than native VMs.
• Provides greater flexibility, since it can run on general-purpose systems.

3. Hybrid VM – uses a combination of user-mode and privileged-mode virtualization.
Hybrid VMs combine features of native and hosted virtualization. They partially virtualize hardware by running some components in user mode and others in privileged mode, reducing overhead while maintaining flexibility and ease of management.
• Uses both hardware-assisted and software virtualization techniques.
• The hypervisor runs at the kernel level, but some functions rely on the host OS.
• Balances performance and flexibility for different workloads.
9. Explain the different system models used in distributed and cloud computing. OR Compare and contrast Clusters, Grids, Peer-to-Peer (P2P) Networks, and Cloud Computing.

Distributed and cloud computing systems are built from large numbers of interconnected, autonomous computer nodes, linked by Storage Area Networks (SANs), Local Area Networks (LANs), or Wide Area Networks (WANs) in a hierarchical manner. These massive systems are classified into four groups: clusters, P2P networks, computing grids, and Internet clouds.
• Clusters: connected by LAN switches, forming tightly coupled systems with hundreds of machines.
• P2P Networks: decentralized, cooperative networks with millions of nodes, used for file sharing and content distribution.
• Grids: interconnect multiple clusters via WANs, allowing resource sharing across thousands of computers.
• Internet Clouds: operate over massive data centers, delivering on-demand computing resources at global scale.

These systems exhibit high scalability, enabling web-scale computing with millions of interconnected nodes. Their technical and application characteristics vary with resource sharing, control mechanisms, and workload distribution.

| Functionality, Applications | Computer Clusters [10,28,38] | Peer-to-Peer Networks [34,46] | Data/Computational Grids [6,18,51] | Cloud Platforms [1,9,11,12,30] |
|---|---|---|---|---|
| Architecture, network connectivity, and size | Network of compute nodes interconnected by SAN, LAN, or WAN hierarchically | Flexible network of client machines logically connected by an overlay network | Heterogeneous clusters interconnected by high-speed network links over selected resource sites | Virtualized cluster of servers over data centers via SLA |
| Control and resource management | Homogeneous nodes with distributed control, running UNIX or Linux | Autonomous client nodes, free to join and leave, with self-organization | Centralized control, server-oriented with authenticated security | Dynamic resource provisioning of servers, storage, and networks |
| Applications and network-centric services | High-performance computing, search engines, web services, etc. | Business file sharing, content delivery, and social networking | Distributed supercomputing, global problem solving, and data center services | Upgraded web search, utility computing, and outsourced computing services |
| Representative operational systems | Google search engine, SunBlade, IBM Road Runner, Cray XT4, etc. | Gnutella, eMule, BitTorrent, Napster, KaZaA, Skype, JXTA | TeraGrid, GriPhyN, UK EGEE, D-Grid, ChinaGrid, etc. | Google App Engine, IBM BlueCloud, AWS, and Microsoft Azure |
10. Explain the layered architecture for web services and grids.

The layered architecture for web services and grids integrates various technologies to enable efficient communication, fault tolerance, security, and management in distributed systems.
1. Entity Interfaces: defined using WSDL, Java methods, and CORBA IDL, enabling interoperability across different distributed systems.
2. Communication Systems: include SOAP, RMI, and IIOP, supporting message patterns (RPC), fault recovery, and routing via middleware such as WebSphere MQ and JMS.
3. Fault Tolerance and Security: WSRM provides message reliability (similar to TCP), alongside security frameworks such as IPsec and SSL.
4. Service Discovery and Management: UDDI, LDAP, ebXML, and the CORBA Trading Service support entity discovery, while management services handle service lifecycle and persistence.
5. Performance and Distribution Model: the shared-memory model offers easier information exchange, while the distributed model provides higher performance, modularity, and software reuse.

Modern systems favor SOAP, XML, and REST-based architectures over the older CORBA and Java models, making distributed computing more scalable and efficient.
11. How does the software environment impact distributed and cloud computing systems?

Software environments define how applications, services, and data interact within grids, clouds, and P2P networks. The main environments affecting distributed and cloud computing systems are Service-Oriented Architecture (SOA), distributed operating systems, and programming models.

Service-Oriented Architecture (SOA)
SOA is a fundamental architectural approach that enables modular, reusable, networked software components to communicate seamlessly. It is widely used in web services, grid computing, and cloud computing for efficient service integration.

Layered architecture for web services and grids: SOA extends the OSI model with additional layers for service interfaces, workflows, and management, ensuring smooth service interactions.

Communication standards: SOA leverages multiple communication protocols, including:
• SOAP (Simple Object Access Protocol) – used in web services to exchange structured data.
• RMI (Remote Method Invocation) – Java-based remote procedure calls.
• IIOP (Internet Inter-ORB Protocol) – used in CORBA-based distributed systems.
Middleware tools such as WebSphere MQ and Java Message Service (JMS) manage messaging, security, and fault tolerance.

Web services and tools: SOA implementation follows two primary approaches:
1. SOAP-based web services – fully specified service definitions supporting standardized, structured communication in enterprise applications.
2. REST (Representational State Transfer) – a lightweight alternative providing scalable, flexible web-based APIs for cloud applications.
RESTful services are preferred for fast-evolving environments, while SOAP is better suited to structured, formal communication.
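To make the SOAP/REST contrast concrete, a hedged sketch (the "getPrice" service and its fields are hypothetical, not from the notes): the same request expressed as a SOAP XML envelope versus a REST-style URL.

```python
# SOAP wraps a call in a structured XML envelope; REST encodes the
# same call in the resource path and query string.
from urllib.parse import urlencode
from xml.sax.saxutils import escape

def soap_get_price(item):
    # Schema-friendly XML body, as SOAP-based enterprise services expect.
    return (
        '<soap:Envelope xmlns:soap="http://www.w3.org/2003/05/soap-envelope">'
        f'<soap:Body><getPrice><item>{escape(item)}</item></getPrice></soap:Body>'
        '</soap:Envelope>'
    )

def rest_get_price(item):
    # Lightweight: the resource and its parameters live in the URL itself.
    return "/prices?" + urlencode({"item": item})

print(rest_get_price("disk"))  # /prices?item=disk
```

The size and rigidity difference between the two message forms is the practical reason REST dominates fast-evolving cloud APIs while SOAP persists in formal enterprise integration.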
Evolution of SOA: SOA has progressed from basic service integration to multi-layered computing ecosystems, incorporating:
• Sensor Services (SS) – collect raw data from ZigBee, Bluetooth, GPS, and WiFi devices.
• Filter Services (FS) – process and refine data before storage or computation.
• Cloud ecosystems – compute, storage, and discovery clouds manage large-scale applications.
SOA enables data transformation from raw data to useful information, then to knowledge, and finally to intelligent decisions.

Grids vs. Clouds: grids use static resources, while clouds provide elastic, on-demand computing via virtualization.
• Clouds focus on automation and scalability, whereas grids offer resource negotiation and allocation.
• Hybrid models exist, including clouds of grids, grids of clouds, and inter-cloud architectures.

Distributed Operating Systems
Traditional distributed systems run independent OS instances on each node, while a distributed OS manages all resources as a single system.

Distributed OS approaches (Tanenbaum's models):
1. Network OS – basic resource sharing with low transparency.
2. Middleware-based OS – limited resource sharing using middleware tools such as MOSIX for Linux clusters.
3. Truly distributed OS – provides a single-system image (SSI) with full resource transparency.

Transparency in programming environments
• Cloud computing separates user data, applications, OS, and hardware, allowing flexibility in service deployment.
• Users can switch between OS platforms and cloud services without being locked into a specific provider.

Programming models: parallel execution models help process large-scale workloads efficiently in distributed systems.

| Model | Description | Key Features |
|---|---|---|
| MPI (Message-Passing Interface) | Standard for writing parallel applications on distributed systems | Explicit message passing between processes |
| MapReduce | Scalable data-processing model for large clusters | Map generates key-value pairs; Reduce aggregates results |
| Hadoop | Open-source framework for big data processing | HDFS (distributed storage) plus MapReduce computing |

Grid standards and toolkits: grid computing relies on standardized middleware to manage resource sharing, security, and automation.

| Standard | Function | Key Features |
|---|---|---|
| OGSA (Open Grid Services Architecture) | Defines common grid services | Supports heterogeneous computing and security policies |
| Globus Toolkit (GT4) | Middleware for resource discovery and security | Uses PKI authentication, Kerberos, SSL, and delegation policies |
| IBM Grid Toolbox | Grid computing framework for AIX/Linux clusters | Supports autonomic computing and security management |
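The MapReduce row in the programming-models table above can be sketched as follows. This is a sequential stand-in for the model's two phases; real frameworks such as Hadoop distribute both phases across a cluster:

```python
# MapReduce word count: Map emits (key, value) pairs, Reduce aggregates
# all values that share a key.
from collections import defaultdict

def map_phase(document):
    # Emit ("word", 1) for every word in the document.
    return [(word, 1) for word in document.split()]

def reduce_phase(pairs):
    # Group pairs by key and sum the values for each key.
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

pairs = []
for doc in ["cloud grid cloud", "grid cloud"]:
    pairs.extend(map_phase(doc))
print(reduce_phase(pairs))  # {'cloud': 3, 'grid': 2}
```

Because every Map call is independent and Reduce only needs pairs grouped by key, both phases parallelize naturally — which is why the model scales to large clusters.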
12. Discuss the role of Service-Oriented Architecture (SOA) in modern cloud-based systems.

13. Explain the advantages and challenges of utility computing and its impact on cloud adoption.