Lecture 4
Collective Communications
Dr. Muhammad Hanif Durad
Department of Computer and Information Sciences
Pakistan Institute of Engineering and Applied Sciences
hanif@pieas.edu.pk
Some slides have been adapted, with thanks, from other lectures available on the Internet.
Lecture Outline
 Collective Communication
 First Program using Collective Communication
 The Master-Slave Paradigm
 Multiplying a matrix with a vector
Another Approach to Parallelism
 Collective routines provide a higher-level way
to organize a parallel program
 Each process executes the same communication
operations
 MPI provides a rich set of collective
operations…
Collective Communication
 Involves all processes in the scope of a communicator
 Three categories
 synchronization (barrier())
 data movement (broadcast, scatter, gather, alltoall)
 collective computation (reduce(), scan())
 Limitations/differences from point-to-point
 blocking (no longer strictly true: MPI-3 added non-blocking collectives such as MPI_Ibcast)
 do not take tag arguments
 predefined reduction operations (e.g. MPI_SUM) apply only to MPI predefined datatypes, not to derived types
Collective Communication
 Involves a set of processes, defined by an intra-communicator. Message
tags are not present. Principal collective operations:
 MPI_Bcast() - Broadcast from root to all other processes
 MPI_Gather() - Gather values for group of processes
 MPI_Scatter() - Scatters buffer in parts to group of processes
 MPI_Alltoall() - Sends data from all processes to all processes
 MPI_Reduce() - Combine values on all processes to single value
 MPI_Reduce_scatter() - Combine values and scatter results
 MPI_Scan() - Compute prefix reductions of data on processes
One to Many
Basic primitives
• broadcast(data, source, group_id, …)
[Diagram: the source process calls broadcast(data, …) and every group member receives a copy of data.]
One to Many
Basic primitives
• scatter(data[], recvBuf, source, group_id, …)
• gather(sendBuf, recvBuf[], dest, group_id, …)
Scatter is also called one-to-all personalized communication: the source splits data[] and delivers a different part to each group member.
Gather is the dual of scatter: each group member sends its sendBuf to dest, where the parts are concatenated into recvBuf[].
A minimal MPI sketch of both primitives follows.
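As a concrete illustration (not from the original slides), here is a minimal C sketch using the corresponding MPI calls, MPI_Scatter and MPI_Gather. The squared-rank data values and the use of MPI_COMM_WORLD as the group are assumptions made for the example.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int rank, size;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    /* Root prepares one integer per process. */
    int *data = NULL;
    if (rank == 0) {
        data = malloc(size * sizeof(int));
        for (int i = 0; i < size; i++) data[i] = i * i;
    }

    /* Scatter: each process receives its own element. */
    int mine;
    MPI_Scatter(data, 1, MPI_INT, &mine, 1, MPI_INT, 0, MPI_COMM_WORLD);
    mine += rank;  /* each process works on its piece */

    /* Gather: root concatenates the results, in rank order. */
    int *result = NULL;
    if (rank == 0) result = malloc(size * sizeof(int));
    MPI_Gather(&mine, 1, MPI_INT, result, 1, MPI_INT, 0, MPI_COMM_WORLD);

    if (rank == 0) {
        for (int i = 0; i < size; i++) printf("%d ", result[i]);
        printf("\n");
        free(data); free(result);
    }
    MPI_Finalize();
    return 0;
}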
Many to One
Basic primitives
• reduce(sendBuf, recvBuf, dest, operation, group_id, …)
[Diagram: group members hold the values 3, 1, 0, 4, -1, 2, 1; reduce(… ‘+’ …) combines them and leaves the sum, 10, at dest.]
Many to One – scan()
Also called parallel prefix:
• scan(sendBuf, recvBuf, operation, group_id, …)
• performs reduce() over each process’s own value and those of all its predecessors
scan(sendBuf, recvBuf, ‘*’, group_id, …);
[Diagram: with sendBuf values 4, 2, -1, 4, 1 across the group (as implied by the results), the ‘*’ scan yields recvBuf values 4, 8, -8, -32, -32, i.e. the running (prefix) products; with ‘+’ this is the familiar prefix sum.]
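As a hedged illustration (not part of the original slides), the following C program reproduces the diagram's prefix products with MPI_Scan and the predefined MPI_PROD operation; the value table is taken from the diagram, and running with exactly 5 processes reproduces it.

#include <mpi.h>
#include <stdio.h>

int main(int argc, char *argv[])
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Example inputs from the diagram; run with 5 processes
       (one value per rank) to reproduce it exactly. */
    int vals[5] = {4, 2, -1, 4, 1};
    int mine = vals[rank % 5];

    /* MPI_Scan is inclusive: each rank receives the product of its
       own value and the values of all lower-ranked processes. */
    int prefix;
    MPI_Scan(&mine, &prefix, 1, MPI_INT, MPI_PROD, MPI_COMM_WORLD);
    printf("rank %d: value %d, prefix product %d\n", rank, mine, prefix);

    MPI_Finalize();
    return 0;
}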
MPI Names of Various Operations
(Slide from PC course, Chapter 4.)
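The table itself is an image and is not reproduced in this transcript; for reference, the standard MPI names for the generic primitives used in the preceding slides are:
broadcast → MPI_Bcast
scatter (one-to-all personalized) → MPI_Scatter
gather → MPI_Gather
all-to-all personalized → MPI_Alltoall
reduce → MPI_Reduce
all-reduce → MPI_Allreduce
scan / parallel prefix → MPI_Scan
barrier → MPI_Barrier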
Calculating the value of π using the midpoint integration formula
First Program using Collective Communication
Modeling the problem?
$\int_0^1 \frac{4}{1+x^2}\,dx = 4\arctan x \,\Big|_0^1 = \pi$

[Plot: f(x) = 4/(1+x^2) on 0 ≤ x ≤ 1 (y-axis from 2 to 4), produced in MATLAB by:]
>> x = 0:.1:1;
>> y = 1 + x.^2;
>> plot(x, 4./y)
Decomposition of a Problem
Different Ways to Partition Data
Example: PI in C (1/2)
#include "mpi.h"
#include <math.h>
#include <stdio.h>
int main(int argc, char *argv[])
{
int done = 0, n, myid, numprocs, i, rc;
double PI25DT = 3.141592653589793238462643;
double mypi, pi, h, sum, x, a;
MPI_Init(&argc,&argv);
MPI_Comm_size(MPI_COMM_WORLD,&numprocs);
MPI_Comm_rank(MPI_COMM_WORLD,&myid);
while (!done) {
if (myid == 0) {
printf("Enter the number of intervals: (0 quits) ");
scanf("%d",&n);
}
pi2.c
Example: PI in C (2/2)
MPI_Bcast(&n, 1, MPI_INT, 0, MPI_COMM_WORLD);
if (n == 0) break;
h = 1.0 / (double) n;
sum = 0.0;
for (i = myid + 1; i <= n; i += numprocs) {
x = h * ((double)i - 0.5);
sum += 4.0 / (1.0 + x*x);
}
mypi = h * sum;
MPI_Reduce(&mypi, &pi, 1, MPI_DOUBLE, MPI_SUM, 0,
MPI_COMM_WORLD);
if (myid == 0)
printf("pi is approximately %.16f, Error is %.16f\n",
pi, fabs(pi - PI25DT));
}
MPI_Finalize();
return 0;
}
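For reference (not shown on the original slide), with a typical MPI installation this example can be built and launched along the lines of mpicc pi2.c -o pi followed by mpirun -np 4 ./pi; the exact compiler wrapper and launcher names vary between MPI implementations.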
Example: PI in Fortran (1/2)
program main
include "mpif.h"
integer n, myid, numprocs, i, rc, ierr
double precision PI25DT, mypi, pi, h, sum, x, z
logical done
data done /.false./
data PI25DT/3.141592653589793238462643/
call MPI_Init(ierr)
call MPI_Comm_size(MPI_COMM_WORLD,numprocs, ierr )
call MPI_Comm_rank(MPI_COMM_WORLD,myid, ierr)
do while (.not. done)
if (myid .eq. 0) then
print *,"Enter the number of intervals: (0 quits)"
read *, n
endif
call MPI_Bcast(n, 1, MPI_INTEGER, 0, MPI_COMM_WORLD, ierr )
if (n .eq. 0) goto 10
pi.f90
Example: PI in Fortran (2/2)
h = 1.0 / n
sum = 0.0
do i = myid+1, n, numprocs
x = h * (i - 0.5)
sum = sum + 4.0 / (1.0 + x*x)
enddo
mypi = h * sum
call MPI_Reduce(mypi, pi, 1, MPI_DOUBLE_PRECISION, MPI_SUM, 0,
MPI_COMM_WORLD, ierr )
if (myid .eq. 0) then
print *, "pi is approximately ", pi, ", Error is ", abs(pi - PI25DT)
endif
enddo
10 continue
call MPI_Finalize( ierr )
end
Example: PI in C++ (1/2)
#include "mpi.h"
#include <math.h>
#include <iostream>
int main(int argc, char *argv[])
{
int done = 0, n, myid, numprocs, i, rc;
double PI25DT = 3.141592653589793238462643;
double mypi, pi, h, sum, x, a;
MPI::Init(argc, argv);
numprocs = MPI::COMM_WORLD.Get_size();
myid = MPI::COMM_WORLD.Get_rank();
while (!done) {
if (myid == 0) {
std::cout << "Enter the number of intervals: (0 quits) ";
std::cin >> n;
}
pi.cpp
Example: PI in C++ (2/2)
MPI::COMM_WORLD.Bcast(&n, 1, MPI::INT, 0 );
if (n == 0) break;
h = 1.0 / (double) n;
sum = 0.0;
for (i = myid + 1; i <= n; i += numprocs) {
x = h * ((double)i - 0.5);
sum += 4.0 / (1.0 + x*x);
}
mypi = h * sum;
MPI::COMM_WORLD.Reduce(&mypi, &pi, 1, MPI::DOUBLE,
MPI::SUM, 0);
if (myid == 0)
std::cout << "pi is approximately " << pi <<
", Error is " << fabs(pi - PI25DT) << "\n";
}
MPI::Finalize();
return 0;
}
Notes on C and Fortran
 C and Fortran bindings correspond closely
 In C:
 mpi.h must be #included
 MPI functions return error codes or MPI_SUCCESS
 In Fortran:
 mpif.h must be included, or use MPI module
 All MPI calls are to subroutines, with a place for the return code in the last
argument.
 C++ bindings, and Fortran-90 issues, are part of MPI-2 (the C++ bindings were later deprecated and removed in MPI-3).
Multiplying a Matrix with a Vector
⇒ matvec.cpp & matvec1.cpp on virtue
2nd Program using Collective Communication
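matvec.cpp and matvec1.cpp themselves are not reproduced in this transcript. As a hedged sketch only (an illustration of the collective approach, not the course's actual code), one common formulation broadcasts the vector, scatters the matrix rows, computes local dot products, and gathers the result; the dimension N = 8, the initialization of A and x, and the assumption that N is divisible by the number of processes are all invented for the example.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 8   /* matrix dimension; assumed divisible by numprocs */

int main(int argc, char *argv[])
{
    int myid, numprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);

    int rows = N / numprocs;           /* rows per process */
    double x[N], A[N * N], y[N];

    if (myid == 0) {                   /* root initializes A and x */
        for (int i = 0; i < N; i++) {
            x[i] = 1.0;
            for (int j = 0; j < N; j++) A[i * N + j] = i + j;
        }
    }

    /* Everyone needs the whole vector; each process needs only its rows. */
    MPI_Bcast(x, N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    double *myrows = malloc(rows * N * sizeof(double));
    MPI_Scatter(A, rows * N, MPI_DOUBLE, myrows, rows * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Local dot products for this process's block of rows. */
    double *myy = malloc(rows * sizeof(double));
    for (int i = 0; i < rows; i++) {
        myy[i] = 0.0;
        for (int j = 0; j < N; j++) myy[i] += myrows[i * N + j] * x[j];
    }

    /* Root collects the pieces of y in rank order. */
    MPI_Gather(myy, rows, MPI_DOUBLE, y, rows, MPI_DOUBLE, 0, MPI_COMM_WORLD);

    if (myid == 0)
        for (int i = 0; i < N; i++) printf("y[%d] = %g\n", i, y[i]);

    free(myrows); free(myy);
    MPI_Finalize();
    return 0;
}

Scattering rows rather than broadcasting the whole matrix keeps each process's share of A at rows * N elements, which is the main point of this decomposition.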
The Master-Slave Paradigm
(The figures for this section are from Morgan Kaufmann Publishers, Interconnection Networks: An Engineering Approach; the figure-only slides are not reproduced in this transcript.)
Multiplying a Matrix with a Matrix
3rd Program using Collective Communication (a brief sketch follows)
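The course program for this slide is not shown; as a hypothetical sketch only, the same row-wise decomposition extends to C = A * B by broadcasting all of B, scattering the rows of A, and gathering the rows of C. N = 4 and the constant initialization are invented for the example, and N is assumed divisible by the number of processes.

#include <mpi.h>
#include <stdio.h>
#include <stdlib.h>

#define N 4   /* matrix dimension; assumed divisible by numprocs */

int main(int argc, char *argv[])
{
    int myid, numprocs;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &myid);
    MPI_Comm_size(MPI_COMM_WORLD, &numprocs);
    int rows = N / numprocs;

    static double A[N * N], B[N * N], C[N * N];
    if (myid == 0)
        for (int i = 0; i < N * N; i++) { A[i] = 1.0; B[i] = 2.0; }

    /* Every process needs all of B, but only its own block of rows of A. */
    MPI_Bcast(B, N * N, MPI_DOUBLE, 0, MPI_COMM_WORLD);
    double *myA = malloc(rows * N * sizeof(double));
    double *myC = malloc(rows * N * sizeof(double));
    MPI_Scatter(A, rows * N, MPI_DOUBLE, myA, rows * N, MPI_DOUBLE,
                0, MPI_COMM_WORLD);

    /* Each process computes its block of rows of C. */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < N; j++) {
            double s = 0.0;
            for (int k = 0; k < N; k++) s += myA[i * N + k] * B[k * N + j];
            myC[i * N + j] = s;
        }

    MPI_Gather(myC, rows * N, MPI_DOUBLE, C, rows * N, MPI_DOUBLE,
               0, MPI_COMM_WORLD);
    if (myid == 0) printf("C[0][0] = %g\n", C[0]);

    free(myA); free(myC);
    MPI_Finalize();
    return 0;
}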
The Collective Programming
Model
 One style of higher-level programming is to
use only collective routines
 Provides a “data parallel” style of
programming
 Easy to follow program flow
Not Covered
 Topologies: map a communicator onto, say, a 3D Cartesian
processor grid
 Implementation can provide ideal logical to physical mapping
 Rich set of I/O functions: individual, collective, blocking
and non-blocking
 Collective I/O can lead to many small requests being merged for
more efficient I/O
 One-sided communication: puts and gets with various
synchronization schemes
 Task creation and destruction: change number of tasks
during a run
 Few implementations available