Parallel and Distributed
Computing
Dr. Naween Kumar
Learning Outcomes
At the end of the course, the students will be able to
• Define parallel algorithms
• Recognize parallel speedup and carry out performance analysis
• Identify task decomposition techniques
• Perform parallel programming
• Apply acceleration strategies for algorithms
Four decades of computing
• Batch Era
• Time sharing Era
• Desktop Era
• Network Era
Batch era
• Batch processing is the execution of a series of programs on a computer without manual intervention
• The term originated in the days when users entered programs on punched cards
Time-sharing Era
• Time-sharing is the sharing of a computing resource among many users by means of multiprogramming and multitasking
• The goal was to develop systems that supported multiple users at the same time
Desktop Era
• Personal Computers (PCs)
• Connected through wide area networks (WANs)
Network Era
• Systems with:
• Shared memory
• Distributed memory
• Examples of parallel computers: Intel iPSC, nCUBE
Parallel Computing
Parallel computing is a form of computation in which many instructions are carried out simultaneously, operating on the principle that large problems can often be divided into smaller ones, which are then solved concurrently (in parallel).
With the increased use of computers in every sphere of human activity, computer scientists face two crucial issues today:
• Processing has to be done faster than ever before
• Larger and more complex computation problems need to be solved
Simply increasing the number of transistors as per Moore's Law is not a solution, because the accompanying frequency scaling drives up power consumption.
Power consumption has become a major issue, as it leads to processor heating.
The practical solution is PARALLELISM, in hardware as well as in software.
Parallel Computing
Difference between Parallel Computing & Distributed Computing
When different processors/computers work on a single common goal, it is parallel computing. E.g., ten men pulling one rope to lift a single rock; supercomputers implement parallel computing. Distributed computing is where several different computers work separately on a multi-faceted computing workload. E.g., ten men pulling ten ropes to lift ten different rocks; employees in an office each doing their own work.
Difference between Parallel Computing & Cluster Computing
A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer. E.g., in an office of 50 employees, a group of 15 doing one job, 25 another, and the remaining 10 something else. Similarly, in a network of 20 computers, 16 work on a common goal while 4 work on some other common goal. Cluster computing is a specific case of parallel computing.
Difference between Parallel Computing & Grid Computing
Grid computing makes use of computers communicating over the Internet to work on a given problem. E.g., three people, one from the USA, one from Japan, and one from Norway, working together online on a common project. Websites like Wikipedia, Yahoo! Answers, YouTube, and Flickr, or an open-source OS like Linux, are examples of grid computing. Again, it serves as an example of parallel computing.
Flynn's Taxonomy of Computer Architecture
Two types of information flow into the processor:
• Instructions
• Data
What are instructions and data?
Flynn's Taxonomy of Computer Architecture
1. Single-instruction, single-data streams (SISD)
2. Single-instruction, multiple-data streams (SIMD)
3. Multiple-instruction, single-data streams (MISD)
4. Multiple-instruction, multiple-data streams (MIMD)
Parallel computing?
Serial computing
Parallel computing?
Parallel Computers
• Virtually all stand-alone computers today are parallel from a hardware perspective
Parallel Computers
• Networks connect multiple stand-alone computers (nodes) to make
larger parallel computer clusters.
Why Use Parallel Computing?
• SAVE TIME AND/OR MONEY:
Why Use Parallel Computing?
• SOLVE LARGER / MORE COMPLEX PROBLEMS
Grand Challenge Problems?
Why Use Parallel Computing?
• PROVIDE CONCURRENCY
Why Use Parallel Computing?
• TAKE ADVANTAGE OF NON-LOCAL RESOURCES:
Why Use Parallel Computing?
• MAKE BETTER USE OF UNDERLYING PARALLEL HARDWARE
• Modern computers, even laptops, are parallel in architecture with multiple
processors/cores
BACK to Flynn's Classical Taxonomy
Single Instruction Single Data
(SISD)
• A serial (non-parallel) computer
• This is the oldest type of computer
Examples: UNIVAC 1, IBM 360, CRAY-1, CDC 7600, PDP-1
Single Instruction Multiple Data
(SIMD)
Examples: ILLIAC IV, MasPar, Cray X-MP, Cray Y-MP, Cell Processor (GPU)
Multiple Instruction Single Data
(MISD)
Example: the Space Shuttle flight control computers
Multiple Instruction Multiple Data
(MIMD)
Examples: IBM POWER5, HP/Compaq AlphaServer, Intel IA-32, AMD Opteron
What are we going to learn?
Factors influencing Parallel Computing
•Increased scientific and business computing demand
•Sequential architectures are constrained by the speed of light and the laws of thermodynamics
•Hardware improvements such as pipelining and superscalar execution require sophisticated compiler support
•Vector processing works well for matrix/graphics processing
•Parallel processing has matured and can be exploited commercially
Shared Memory System
• A shared memory system typically accomplishes interprocessor coordination through a global memory shared by all processors.
• Easier to program, less fault tolerant, limited scalability
• A failure affects the entire system
• Ex: server systems, GPGPU
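As an illustrative sketch (not taken from the slides), the short Python snippet below shows the shared-memory style: several threads coordinate through one global counter protected by a lock; all names are hypothetical.

import threading

counter = 0                       # global memory visible to every thread
lock = threading.Lock()           # coordination primitive shared by all threads

def worker(n):
    global counter
    for _ in range(n):
        with lock:                # every thread synchronises on the same shared state
            counter += 1

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                    # 40000: each thread read and updated the same memory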
Message Passing System
(Distributed Memory)
• These systems combine a local memory and a processor at each node of the interconnection network
• There is no global memory; a message-passing technique is used to move data from one local memory to another
• More difficult to program, more fault tolerant, higher scalability
• A failure affects the system only partially
• Superior price/performance ratio
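A minimal message-passing sketch using mpi4py (assuming mpi4py and an MPI runtime are installed; the file name and data are illustrative): each rank owns only its local data and moves it with explicit send/receive calls.

# run with: mpiexec -n 2 python message_demo.py   (hypothetical file name)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    local_data = [1, 2, 3]                  # exists only in rank 0's local memory
    comm.send(local_data, dest=1, tag=0)    # explicit message moves a copy to rank 1
elif rank == 1:
    received = comm.recv(source=0, tag=0)   # the copy now lives in rank 1's local memory
    print("rank 1 received", received)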
Limits and Costs of Parallel Programming
• Amdahl's Law:
Amdahl's Law states that potential program speedup is defined by the
fraction of code (P) that can be parallelized:
Speedup = 1 / (1 − P)
• If none of the code can be parallelized, P = 0 and the speedup = 1 (no
speedup).
• If all of the code is parallelized, P = 1 and the speedup is infinite (in
theory).
Limits and Costs of Parallel Programming
• If 50% of the code can be parallelized, the maximum speedup = 2, meaning the code can run at most twice as fast no matter how many processors are used.
Limits and Costs of Parallel Programming
• Introducing the number of processors performing the parallel fraction
of work, the relationship can be modeled by:
Speedup = 1 / (P/N + S)
• where P = parallel fraction, N = number of processors, and S = serial fraction (S = 1 − P)
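As a quick sanity check (a sketch, not part of the original slides), the formula can be evaluated in a few lines of Python:

def amdahl_speedup(p, n):
    """Speedup = 1 / (P/N + S), where S = 1 - P is the serial fraction."""
    s = 1.0 - p
    return 1.0 / (p / n + s)

print(amdahl_speedup(0.5, 2))      # ~1.33 with 2 processors
print(amdahl_speedup(0.5, 10**6))  # approaches 2 as N grows (the 50% limit above)
print(amdahl_speedup(0.95, 8))     # ~5.9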
Limits and Costs of Parallel Programming
Types Of Parallelism
• Bit-level
• Instruction-level
• Data
• Task
Bit-Level Parallelism
When an 8-bit processor needs to add two 16-bit integers, the addition has to be done in two steps:
• The processor must first add the 8 lower-order bits from each integer using the standard addition instruction,
• then add the 8 higher-order bits using an add-with-carry instruction and the carry bit from the lower-order addition.
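The same two-step procedure can be mimicked in software; the sketch below (illustrative only, names hypothetical) adds two 16-bit values eight bits at a time, propagating the carry exactly as an 8-bit processor would.

def add16_on_8bit(a, b):
    """Add two 16-bit integers using only 8-bit additions and a carry bit."""
    a_lo, a_hi = a & 0xFF, (a >> 8) & 0xFF
    b_lo, b_hi = b & 0xFF, (b >> 8) & 0xFF

    lo_sum = a_lo + b_lo                  # step 1: standard 8-bit addition
    carry = lo_sum >> 8                   # carry out of the low byte
    hi = (a_hi + b_hi + carry) & 0xFF     # step 2: add-with-carry on the high bytes

    return (hi << 8) | (lo_sum & 0xFF)

print(hex(add16_on_8bit(0x12F0, 0x0345)))   # 0x1635, same as (0x12F0 + 0x0345) & 0xFFFF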
Instruction Level Parallelism
The instructions given to a computer for processing can be divided into groups, or reordered, and then processed without changing the final result.
This is known as instruction-level parallelism, i.e., ILP.
An Example
1. e = a + b
2. f = c + d
3. g = e * f
Here, instruction 3 depends on instructions 1 and 2.
However, instructions 1 and 2 can be processed independently.
Data Parallelism
Data parallelism focuses on distributing the data across
different parallel computing nodes.
It is also called loop-level parallelism.
An Illustration
In a data parallel implementation, CPU A could add all elements from the top half of the matrices, while CPU B could add all elements from the bottom half.
Since the two processors work in parallel, the matrix addition would take roughly half the time of performing the same operation serially on one CPU alone.
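A minimal sketch of that split in Python (the matrix sizes, names, and the use of multiprocessing are illustrative assumptions, not from the slides):

from multiprocessing import Pool

def add_chunk(chunk):
    """Add the corresponding rows of two matrix halves (one chunk of the data)."""
    a_rows, b_rows = chunk
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a_rows, b_rows)]

if __name__ == "__main__":
    A = [[i + j for j in range(4)] for i in range(8)]
    B = [[i * j for j in range(4)] for i in range(8)]

    top = (A[:4], B[:4])        # "CPU A" gets the top half of both matrices
    bottom = (A[4:], B[4:])     # "CPU B" gets the bottom half

    with Pool(2) as pool:       # same operation applied to different pieces of data
        top_sum, bottom_sum = pool.map(add_chunk, [top, bottom])

    C = top_sum + bottom_sum    # recombine the halves into the full result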
Task Parallelism
Task Parallelism focuses on distribution of tasks across
different processors.
It is also known as functional parallelism or control parallelism.
An Example
As a simple example, if we are running code on a 2-processor system (CPUs "a" and "b") in a parallel environment and we wish to do tasks "A" and "B", it is possible to tell CPU "a" to do task "A" and CPU "b" to do task "B" simultaneously, thereby reducing the runtime of the execution.
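The same idea expressed as a sketch in Python (the two task functions are hypothetical stand-ins for tasks "A" and "B"):

from concurrent.futures import ThreadPoolExecutor

def task_a():
    return sum(range(1_000_000))                            # one kind of work

def task_b():
    return "".join(chr(65 + i % 26) for i in range(100))    # a different kind of work

with ThreadPoolExecutor(max_workers=2) as pool:
    future_a = pool.submit(task_a)   # worker "a" runs task "A"
    future_b = pool.submit(task_b)   # worker "b" runs task "B" at the same time
    print(future_a.result(), future_b.result()[:10])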
Key Difference Between Data And Task
Parallelism
Data Parallelism
• It is the division of a thread (process), instruction, or task internally into sub-parts for execution.
• A task ‘A’ is divided into sub-parts and then processed.
Task Parallelism
• It is the division among threads (processes), instructions, or tasks themselves for execution.
• Task ‘A’ and task ‘B’ are processed separately by different processors.
Implementation Of Parallel Computing
In Software
When implemented in software (or rather, in algorithms), it is called ‘parallel programming’.
An algorithm is split into pieces that are then executed, as seen earlier.
Important Points In Parallel
Programming
Dependencies - a typical scenario is when line 6 of an algorithm depends on lines 2, 3, 4, and 5.
Application checkpoints - like saving the state of the algorithm, or creating a backup point.
Automatic parallelisation - identifying dependencies and parallelising algorithms automatically; this has achieved only limited success.
Implementation Of Parallel Computing
In Hardware
When implemented in hardware, it is called ‘parallel processing’.
Typically, a chunk of the workload is divided for processing among units such as cores, processors, CPUs, etc.
Next
• Parallel Computer Memory Architectures