1. ▣ A programming model is a collection of program abstractions that provides the programmer a simplified and transparent view of the computer hardware and software.
▣ Parallel programming models are designed for parallel computers such as multiprocessors, multicomputers, and vector/SIMD machines.
▣ Fundamental issues in parallel programming include the creation, suspension, reactivation, and termination of parallel processes.
2. Five models are designed to exploit parallelism:
▣ Shared-variable model.
▣ Message-passing model.
▣ Data-parallel model.
▣ Object-oriented model.
▣ Functional and logic models.
3. ▣ In the shared-variable model, parallelism depends on how IPC is implemented.
▣ IPC is implemented in parallel programming in two ways:
▣ IPC using shared variables.
▣ IPC using message passing.
4. ▣ IPC with shared variables
▣ IPC with message passing
5. ▣ Critical sections.
▣ Memory consistency.
▣ Atomicity of memory operations.
▣ Fast synchronization.
▣ Shared data structures.
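To make the critical-section issue concrete, here is a minimal sketch in Go (the slides name no language; the counter and goroutine count are illustrative assumptions): a mutex serializes the critical section so updates to the shared variable stay atomic.

    // Illustrative sketch only: Go, the counter, and the goroutine
    // count are assumptions, not from the slides.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var (
            counter int        // shared variable
            mu      sync.Mutex // guards the critical section
            wg      sync.WaitGroup
        )
        for i := 0; i < 2; i++ {
            wg.Add(1)
            go func() {
                defer wg.Done()
                for j := 0; j < 1000; j++ {
                    mu.Lock()   // enter critical section
                    counter++   // atomic update of the shared data
                    mu.Unlock() // leave critical section
                }
            }()
        }
        wg.Wait()
        fmt.Println(counter) // always 2000 with the mutex in place
    }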
6. ▣ Two processes communicate with each other by passing messages through a network.
▣ The delay caused by message passing is much longer than in the shared-variable model within the same memory.
▣ Two message-passing approaches are introduced here.
7. 1. Synchronous message passing:
▣ It synchronizes the sender and receiver processes in time and space, just like a telephone call.
▣ No shared memory.
▣ No need for mutual exclusion.
▣ No buffers are used in the communication channel.
▣ It can be blocked by the channel being busy.
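A minimal Go sketch of this rendezvous behavior, assuming an unbuffered channel as the communication channel (the message text is illustrative): the send blocks until the receiver is ready, so the two processes synchronize in time.

    // Sketch under the stated assumptions: an unbuffered Go channel
    // models a bufferless synchronous channel.
    package main

    import "fmt"

    func main() {
        ch := make(chan string) // unbuffered: no buffer in the channel
        go func() {
            ch <- "hello" // blocks until main is ready to receive
        }()
        msg := <-ch // the rendezvous happens here
        fmt.Println(msg)
    }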
8. 2. Asynchronous message passing:
▣ Does not need to synchronize the sender and receiver in time or space.
▣ Non-blocking communication can be achieved.
▣ Buffers are used to hold messages along the path of the connecting channel.
▣ Message-passing programming is gradually changing as the virtual memory of all nodes is combined into a single shared address space.
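A matching Go sketch of the asynchronous case, assuming a buffered channel as the message buffer (the capacity of 3 is arbitrary): sends return immediately while buffer space remains, and the receiver drains the messages later.

    // Sketch under the stated assumptions: a buffered Go channel
    // models the buffers along the connecting channel.
    package main

    import "fmt"

    func main() {
        ch := make(chan int, 3) // buffer holds messages in transit
        for i := 1; i <= 3; i++ {
            ch <- i // returns immediately; message waits in the buffer
        }
        close(ch)
        for v := range ch { // receiver drains the buffer later
            fmt.Println(v)
        }
    }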
9. ▣ The data-parallel model requires the use of pre-distributed data sets.
▣ Interconnected data structures are also needed to facilitate data exchange operations.
▣ It emphasizes local computation and data routing operations such as permutation, replication, reduction, and parallel prefix.
▣ It can be implemented on either SIMD computers or SPMD multicomputers, depending on the grain size of the program.
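A rough SPMD-flavored sketch in Go (the data set, block size, and processor count are illustrative): the data are pre-distributed into blocks, each worker runs the same local computation on its block, and a reduction combines the partial sums.

    // Sketch only: goroutines stand in for SPMD processors.
    package main

    import "fmt"

    func main() {
        data := []int{1, 2, 3, 4, 5, 6, 7, 8}
        const p = 4 // number of "processors" (assumed)
        partial := make(chan int, p)
        block := len(data) / p
        for i := 0; i < p; i++ {
            go func(chunk []int) {
                sum := 0
                for _, v := range chunk { // local computation
                    sum += v
                }
                partial <- sum
            }(data[i*block : (i+1)*block])
        }
        total := 0
        for i := 0; i < p; i++ { // reduction over partial results
            total += <-partial
        }
        fmt.Println(total) // 36
    }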
11. ▣ Objects are created and manipulated dynamically.
▣ Processing is performed using objects.
▣ Concurrent programming models are built up from low-level objects such as processes, queues, and semaphores.
▣ The object-oriented model achieves parallelism using three methods.
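One hypothetical way to picture object-based parallelism, sketched in Go (the account object is invented for illustration): the object's state is owned by a single process, and method invocations arrive as messages on a queue, so no semaphore is needed for this object.

    // Illustrative sketch: not one of the three methods the slides
    // refer to, just a concurrent object built from a process and queues.
    package main

    import "fmt"

    type account struct {
        deposits chan int // queue of "deposit" method invocations
        balance  chan int // query channel for the current balance
    }

    func newAccount() *account {
        a := &account{deposits: make(chan int), balance: make(chan int)}
        go func() { // the object's private process owns the state
            total := 0
            for {
                select {
                case d := <-a.deposits:
                    total += d
                case a.balance <- total:
                }
            }
        }()
        return a
    }

    func main() {
        a := newAccount()
        a.deposits <- 50
        a.deposits <- 25
        fmt.Println(<-a.balance) // 75
    }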
13. ▣ Two language-oriented programming models for parallel processing are proposed.
▣ Functional programming models such as LISP, SISAL, and Strand 88.
▣ Logic programming models such as Prolog.
▣ Based on predicate logic, logic programming is suitable for solving large database queries.
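A small Go sketch of why the functional model exposes parallelism (f and g are hypothetical pure functions, not from the slides): because they have no side effects, independent calls can be evaluated concurrently in any order.

    // Sketch only: pure functions make evaluation order irrelevant,
    // so both calls below may safely run in parallel.
    package main

    import "fmt"

    func f(x int) int { return x * x }   // pure: depends only on x
    func g(x int) int { return x + 100 } // pure: no shared state touched

    func main() {
        fr, gr := make(chan int, 1), make(chan int, 1)
        go func() { fr <- f(3) }() // both evaluated concurrently
        go func() { gr <- g(4) }()
        fmt.Println(<-fr + <-gr) // 9 + 104 = 113
    }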
14. ▣ Language features for parallel programming fall into six categories according to functionality.
Optimization features
▣ Used for program restructuring and compilation directives.
▣ They convert sequentially coded programs into parallel code.
▣ Automated parallelization.
▣ Semi-automated parallelization.
15. Availability features
▣ Used to enhance user-friendliness.
▣ Make the language portable to a large class of parallel computers.
▣ Scalability.
▣ Compatibility.
▣ Portability.
16. Synchronization/communication features
▣ Shared variables for IPC.
▣ Single-assignment languages.
▣ Send/receive for message passing.
▣ Logically shared memory, such as the tuple space in Linda.
▣ Remote procedure call.
▣ Dataflow languages such as Id.
17. Control of parallelism
▣ Coarse, medium, or fine grain.
▣ Explicit versus implicit parallelism.
▣ Loop parallelism in iterations.
▣ Shared task queue (see the sketch after this list).
▣ Divide-and-conquer paradigm.
▣ Shared abstract data types.
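A minimal Go sketch of the shared-task-queue item above (the worker and task counts are arbitrary assumptions): a channel serves as the queue, and a fixed pool of workers pulls from it, which also spreads the load among them.

    // Sketch only: a channel models the shared task queue.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        tasks := make(chan int, 8) // the shared task queue
        results := make(chan int, 8)
        var wg sync.WaitGroup
        for w := 0; w < 3; w++ { // pool of workers
            wg.Add(1)
            go func() {
                defer wg.Done()
                for t := range tasks { // each worker grabs the next task
                    results <- t * t
                }
            }()
        }
        for i := 1; i <= 8; i++ {
            tasks <- i
        }
        close(tasks)
        wg.Wait()
        close(results)
        for r := range results {
            fmt.Println(r)
        }
    }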
18. Data parallelism features
▣ They specify how data are accessed and distributed.
▣ Runtime automatic decomposition.
▣ Mapping specification (sketched below).
▣ Virtual processor support.
▣ Direct access to shared data.
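As a rough illustration of a mapping specification (blockMap is a hypothetical helper, not from any slide): a block decomposition that assigns contiguous array sections to virtual processors, which a data-parallel language would let the programmer declare directly.

    // Sketch only: a hand-written block distribution.
    package main

    import "fmt"

    // blockMap returns the section of data owned by virtual processor
    // id when n elements are block-distributed over p processors.
    func blockMap(data []int, id, p int) []int {
        n := len(data)
        lo := id * n / p
        hi := (id + 1) * n / p
        return data[lo:hi]
    }

    func main() {
        data := []int{0, 1, 2, 3, 4, 5, 6, 7, 8, 9}
        for id := 0; id < 4; id++ {
            fmt.Println("processor", id, "owns", blockMap(data, id, 4))
        }
    }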
19. Process management features
▣ These features are needed to support the efficient creation of parallel processes.
▣ Implementation of multithreading or multitasking.
▣ Dynamic process creation at runtime.
▣ Automatic load balancing.
▣ Lightweight processes.
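A short Go sketch of these features (goroutines stand in for lightweight processes; the count of 5 is arbitrary): processes are created dynamically at runtime, and their termination is awaited.

    // Sketch only: dynamic creation and termination of lightweight
    // processes, per the process-management features above.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        var wg sync.WaitGroup
        for i := 0; i < 5; i++ { // processes created dynamically at runtime
            wg.Add(1)
            go func(id int) { // each goroutine is a lightweight process
                defer wg.Done()
                fmt.Println("process", id, "running")
            }(i)
        }
        wg.Wait() // wait for all children to terminate
    }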
20. ▣ Special language constructs and data array expressions are used for exploiting parallelism in programs.
▣ The first is Fortran 90 array notation.
▣ Parallel flow control is achieved using Doall and Doacross loop constructs from parallel Fortran dialects.
▣ Similarly, the FORK and JOIN method is also used.
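A Go sketch of the Doall idea via fork and join (Go stands in for the parallel Fortran constructs named above; the array size is illustrative): each independent iteration is forked as a goroutine, and the wait acts as the join.

    // Sketch only: Doall-style loop parallelism expressed with
    // goroutines in place of Fortran constructs.
    package main

    import (
        "fmt"
        "sync"
    )

    func main() {
        a := make([]int, 8)
        var wg sync.WaitGroup
        for i := range a { // Doall: iterations carry no dependence
            wg.Add(1)
            go func(i int) { // FORK one task per iteration
                defer wg.Done()
                a[i] = i * i
            }(i)
        }
        wg.Wait() // JOIN: all iterations complete here
        fmt.Println(a)
    }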
21. ▣ The role of the compiler is to remove the burden of program optimization and code generation from the programmer.
▣ A parallelizing compiler consists of three major phases:
▣ Flow analysis.
▣ Optimization.
▣ Code generation.
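A small illustration of what flow analysis must decide (both loops are contrived examples, written here in Go): the first loop has no cross-iteration dependence and is a parallelization candidate; the second carries a dependence and must stay sequential.

    // Sketch only: the dependence structure, not a compiler.
    package main

    import "fmt"

    func main() {
        a := make([]int, 6)
        b := make([]int, 6)
        for i := 0; i < 6; i++ {
            a[i] = 2 * i // independent iterations: a Doall candidate
        }
        b[0] = 1
        for i := 1; i < 6; i++ {
            b[i] = b[i-1] + a[i] // loop-carried dependence: sequential
        }
        fmt.Println(a, b)
    }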