Chapter 8
Asynchronous Parallelism
• 8.1 MIMD Systems
• 8.2 Asynchronous parallelism
• 8.3 Process States
Introduction
• A distinction is made between two large classes of parallel
processing: asynchronous and synchronous parallelism.
• In classical asynchronous parallelism, the problem to
be solved is partitioned into sub-problems in the form of
processes that can be distributed among a group of
autonomous processors.
• The sub-problems may not be totally independent, so the
processes must exchange information among themselves
• and must therefore be mutually synchronized (which can
lead to a large synchronization cost).
• Because of this cost, asynchronous parallelism is often
characterized as coarse-grain parallelism.
8.1 MIMD Systems
• Two interesting classes are SIMD (synchronous) and MIMD
(asynchronous).
8.2 Asynchronous parallelism
• Asynchronous parallelism means that there are multiple threads of
control (each processor executes its own individual program, and data
is exchanged explicitly).
• MIMD and SIMD systems are further classified according to their
interconnection topology.
• (From page 8, figs. 2.4, 2.5 and 2.2, synchronous)
• (Alan, page 9, fig. 5)
• This class (MIMD) has the more general structure and always
works asynchronously; a minimal sketch of multiple threads of control
follows below.
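As a small illustration of "multiple threads of control", here is a minimal sketch in Python (the language is my choice; the slides contain no code). Two threads each execute a different program asynchronously; on a real MIMD machine these would be separate processors rather than threads.

```python
# Minimal sketch (illustrative, not from the slides): two independent
# threads of control, each running a different "program" asynchronously.
import threading

def program_a():
    # First instruction stream: sums integers.
    total = sum(range(1_000))
    print("program A finished, total =", total)

def program_b():
    # Second instruction stream: builds a string, independent of A.
    text = "-".join(str(i) for i in range(5))
    print("program B finished, text =", text)

if __name__ == "__main__":
    threads = [threading.Thread(target=program_a),
               threading.Thread(target=program_b)]
    for t in threads:
        t.start()          # both control flows now run asynchronously
    for t in threads:
        t.join()           # wait for both to terminate
```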
• MIMD computers with shared memory are known as
tightly coupled.
• Synchronization and information exchange occur via
memory areas which can be addressed by different
processors in a coordinated manner.
• Simultaneous accesses to the same portion of shared memory
require an arbitration mechanism that ensures only one processor
accesses that memory portion at a time (see the sketch below).
• This problem of memory contention may restrict the
number of processors that can be interconnected using the
shared-memory model.
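A hedged sketch of such arbitration, assuming nothing beyond Python's standard multiprocessing module: several processes update one shared memory location, and a lock plays the role of the arbitration mechanism so that only one worker accesses it at a time.

```python
# Sketch of coordinated access to shared memory (illustrative only):
# several worker processes increment one shared counter; the lock acts
# as the arbitration mechanism so only one worker touches it at a time.
from multiprocessing import Process, Value, Lock

def worker(counter, lock, iterations):
    for _ in range(iterations):
        with lock:                 # arbitration: exclusive access
            counter.value += 1     # critical section on shared memory

if __name__ == "__main__":
    counter = Value("i", 0)        # the shared memory area
    lock = Lock()
    procs = [Process(target=worker, args=(counter, lock, 10_000))
             for _ in range(4)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
    print(counter.value)           # 40000 with the lock; less without it
```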
• MIMD computers without shared memory are known as
loosely coupled.
• Each PE has its own local memory.
• Synchronization and communication are much more
costly without shared memory, because messages must be
exchanged over the network.
• If a PE wishes to access another PE’s private memory, it can
only do so by sending a message to the appropriate PE
over the interconnection network (see the sketch below).
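The sketch below illustrates this remote access pattern. It is purely illustrative (Python processes connected by a pipe stand in for PEs and the interconnection network; it is not the API of any real machine): PE 1 obtains a value from PE 0's private memory only by sending a request message and waiting for the reply.

```python
# Sketch (illustrative): PE 1 reads a value held in PE 0's private
# memory by exchanging messages over a "network" (here, a pipe).
from multiprocessing import Process, Pipe

def pe0(conn):
    private_memory = {"x": 42}            # local memory of PE 0
    request = conn.recv()                 # wait for a read request
    conn.send(private_memory[request])    # reply with the value

def pe1(conn):
    conn.send("x")                        # ask PE 0 for variable "x"
    value = conn.recv()                   # receive the reply
    print("PE 1 received x =", value)

if __name__ == "__main__":
    end0, end1 = Pipe()                   # the interconnection "network"
    p0 = Process(target=pe0, args=(end0,))
    p1 = Process(target=pe1, args=(end1,))
    p0.start(); p1.start()
    p0.join(); p1.join()
```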
• (Continued from the book (Brunnel))
Structure of a MIMD system
• The most general model of a MIMD computer is shown in
figure 8.x (6.1).
• The processors (PEs) are autonomous computer systems that
can carry out different programs independently of one
another.
• Depending on the configuration, the processors may have their
own local memory (loosely coupled) or may use a shared
memory (tightly coupled; the shared memory is generally accessed
via a bus system).
• The concept of virtual shared memory can be applied to
simulate a shared memory area by means of data-exchange protocols
between the PEs (a rough sketch follows below).
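A rough sketch of the virtual-shared-memory idea, under the assumption of a hypothetical message protocol (not any real DSM implementation): one process acts as the home of the "shared" variables, and the other PEs read and write them only through messages, so the memory merely appears to be shared.

```python
# Rough sketch of virtually shared memory (hypothetical protocol):
# a "memory server" process owns the variables; other PEs access them
# via read/write messages, which simulates a shared memory area.
from multiprocessing import Process, Queue

def memory_server(requests, replies):
    memory = {}                                  # the simulated shared area
    while True:
        op, key, value = requests.get()
        if op == "write":
            memory[key] = value
        elif op == "read":
            replies.put(memory.get(key))
        else:                                    # "stop"
            break

def pe(requests, replies):
    requests.put(("write", "x", 7))              # behaves like memory[x] = 7
    requests.put(("read", "x", None))            # behaves like reading memory[x]
    print("PE read x =", replies.get())
    requests.put(("stop", None, None))

if __name__ == "__main__":
    requests, replies = Queue(), Queue()
    server = Process(target=memory_server, args=(requests, replies))
    client = Process(target=pe, args=(requests, replies))
    server.start(); client.start()
    client.join(); server.join()
```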
• The processors function asynchronously and independently of one
another.
• In order to exchange information, they have to be
synchronized.
• The program is divided into individual processes (which run
independently and are synchronized for data exchange).
• The ideal mapping would be 1 process : 1 processor, which
is normally not possible in practice due to the limited
number of available processors.
• The general mapping is n processes : 1 processor (time
sharing: multiple processes run on a PE, requiring a scheduler and
extra control cost); see the sketch below.
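A minimal sketch of the general n : 1 mapping (illustrative Python; the round-robin assignment is an assumed policy, not prescribed by the slides): n processes are distributed over p processors, and each processor must then time-share the processes assigned to it.

```python
# Illustrative sketch: mapping n processes onto p processors (n > p).
# Each processor then time-shares the processes assigned to it.
def map_processes(n_processes, n_processors):
    assignment = {pe: [] for pe in range(n_processors)}
    for proc in range(n_processes):
        assignment[proc % n_processors].append(proc)   # round-robin mapping
    return assignment

if __name__ == "__main__":
    # 10 processes on 3 PEs: each PE must multiplex 3-4 processes.
    for pe, procs in map_processes(10, 3).items():
        print(f"PE {pe} runs processes {procs}")
```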
MIMD computer systems
• In this section, the MIMD computer systems Sequent
Symmetry, Intel iPSC Hypercube, and Intel Paragon are
briefly discussed.
Sequent Symmetry
• The block diagram of the Sequent Symmetry is illustrated in
figure 8.x (6.2).
• The Sequent Symmetry MIMD computer is a good example of tight
coupling of processors via a shared global memory accessed
over a central bus.
• Up to 300 PEs of type 80486 with high clock rates may be
connected.
• The bus allows only two devices to communicate with each other at
any one time (so the bus restricts scalability).
Intel iPSC Hypercube
• These systems went through several generations, based on the
80286, 80386, and i860 processors.
• Up to 128 powerful processors can be combined in an
iPSC/860 Hypercube (no shared memory, loosely coupled;
message-passing procedures exchange data between the
PEs).
• Semaphores and monitors can only be implemented locally,
not between PEs (a local sketch follows below).
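A short sketch of the "local only" case, using Python's threading module as a stand-in for activities on a single node: the semaphore coordinates threads within one PE; the same coordination between PEs of the hypercube would instead have to be expressed as message passing.

```python
# Sketch: a semaphore used *locally*, between threads on one node.
# Between nodes of the hypercube, equivalent coordination would have
# to be done with messages instead.
import threading

slots = threading.Semaphore(2)      # at most 2 threads in the region at once

def worker(name):
    with slots:                     # P (wait) on entry, V (signal) on exit
        print(f"{name} entered the critical region")

if __name__ == "__main__":
    threads = [threading.Thread(target=worker, args=(f"thread-{i}",))
               for i in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
```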
Intel Paragon
• The system contains i860 XP PEs.
• Up to 512 nodes (each node containing two i860 XP PEs).
• One processor on each node is assigned to
arithmetic/logic tasks, while the other PE is dedicated to
handling data exchange.
• The nodes are connected by a two-dimensional grid.
8.3 Process states
• A process is an individual program segment that executes
asynchronously, in parallel with other processes.
• Processes run concurrently and in parallel on several PEs.
• Many processes on a single PE run concurrently (in a time-sliced
fashion).
• At the end of its time slice, the running process changes its state
from running to ready.
• All process control data (program counter, registers,
data addresses, etc.) of a process must then be stored in its
process control block (PCB), so that it can be retrieved later.
• Newly arriving processes are placed immediately in the ready
queue, while terminating processes are removed from the
running state.
• The process state model is illustrated in figure 8.x (6.3).
• The part of the OS responsible for these tasks is the scheduler.
• More complex schedulers for multiprocessor systems will
be discussed later.
• The scheduler is capable of moving processes from a heavily
loaded processor to a more lightly loaded one (a compact sketch of
the state model follows below).
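To make the state model concrete, here is a compact sketch with hypothetical data structures (they only mirror the text, not any particular OS): each process has a PCB that stores its control data, and a simple round-robin scheduler moves processes between the ready queue and the running state.

```python
# Sketch of the process state model described above (illustrative only):
# PCBs hold the saved control data; a simple scheduler moves processes
# between the ready queue and the running state.
from collections import deque
from dataclasses import dataclass, field

@dataclass
class PCB:
    pid: int
    state: str = "ready"                 # ready | running | terminated
    program_counter: int = 0
    registers: dict = field(default_factory=dict)

ready_queue = deque(PCB(pid=i) for i in range(3))   # newly arrived processes

def schedule_one_slice():
    pcb = ready_queue.popleft()
    pcb.state = "running"
    pcb.program_counter += 10            # pretend it executed for one slice
    pcb.state = "ready"                  # end of time slice: running -> ready
    ready_queue.append(pcb)              # control data stays saved in the PCB

if __name__ == "__main__":
    for _ in range(6):
        schedule_one_slice()
    for pcb in ready_queue:
        print(pcb)
```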