MPI is a directory of FORTRAN90 programs which illustrate the use of the MPI Message Passing Interface.
MPI allows a user to write a program in a familiar language, such as C, C++, or FORTRAN (or, through third-party bindings, Python), and carry out a computation in parallel on an arbitrary number of cooperating computers.
A remarkable feature of MPI is that the user writes a single program which runs on all the computers. However, because each computer is assigned a unique identifying number, it is possible for different actions to occur on different machines, even though they run the same program:
if ( I am processor A ) then
  add a bunch of numbers
else if ( I am processor B ) then
  multiply a matrix times a vector
end if
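This rank-based branching can be sketched as an actual FORTRAN90 program. This is a minimal illustration, not one of the example codes listed on this page; the messages printed are placeholders for real work.

```fortran
program rank_branch

!  Each process runs this same program, but MPI_Comm_rank tells
!  each one its own unique identifying number, so different
!  processes can take different branches.

  use mpi

  implicit none

  integer :: ierr
  integer :: rank

  call MPI_Init ( ierr )
  call MPI_Comm_rank ( MPI_COMM_WORLD, rank, ierr )

  if ( rank == 0 ) then
    write ( *, * ) 'Process 0: add a bunch of numbers here.'
  else if ( rank == 1 ) then
    write ( *, * ) 'Process 1: multiply a matrix times a vector here.'
  end if

  call MPI_Finalize ( ierr )

end program rank_branch
```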
Another feature of MPI is that the data stored on each computer is entirely separate from that stored on other computers. If one computer needs data from another, or wants to send a particular value to all the other computers, it must explicitly call the appropriate library routine requesting a data transfer. Depending on the library routine called, it may be necessary for both sender and receiver to be "on the line" at the same time (which means that one will probably have to wait for the other to show up), or it is possible for the sender to send the message to a buffer, for later delivery, allowing the sender to proceed immediately to further computation.
Here is a simple example of what a piece of the program would look like, in which the number X is presumed to have been computed by processor A and needed by processor B:
if ( I am processor A ) then
  call MPI_Send ( X )
else if ( I am processor B ) then
  call MPI_Recv ( X )
end if
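The real MPI_Send and MPI_Recv calls carry more arguments than the sketch above suggests: a count, a datatype, the partner's rank, a message tag, and a communicator. Here is one way the fragment might look in full FORTRAN90, assuming process 0 plays the role of A, process 1 plays the role of B, and the tag value 99 is chosen arbitrarily:

```fortran
program send_recv

  use mpi

  implicit none

  integer :: ierr
  integer :: rank
  integer :: status(MPI_STATUS_SIZE)
  real    :: x

  call MPI_Init ( ierr )
  call MPI_Comm_rank ( MPI_COMM_WORLD, rank, ierr )

  if ( rank == 0 ) then
!  Process 0 computes X and sends it to process 1, tagging the message "99".
    x = 3.14159
    call MPI_Send ( x, 1, MPI_REAL, 1, 99, MPI_COMM_WORLD, ierr )
  else if ( rank == 1 ) then
!  Process 1 receives X from process 0, matching the same tag.
    call MPI_Recv ( x, 1, MPI_REAL, 0, 99, MPI_COMM_WORLD, status, ierr )
    write ( *, * ) 'Process 1 received x = ', x
  end if

  call MPI_Finalize ( ierr )

end program send_recv
```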
Often, an MPI program is written so that one computer supervises the work, creating data, issuing it to the worker computers, and gathering and printing the results at the end. Other models are also possible.
It should be clear that a program using MPI to execute in parallel will look much different from a corresponding sequential version. The user must divide the problem data among the different processes, rewrite the algorithm to divide up work among the processes, and add explicit calls to transfer values as needed from the process where a data item "lives" to a process that needs that value.
A FORTRAN90 program, subroutine or function that uses MPI must include the line

  use mpi

so that the various MPI functions and constants are properly defined. If this use statement doesn't work, you may have to fall back on the FORTRAN77 include file instead:

  include 'mpif.h'
You probably compile and link a serial program with a single command, as in

  gfortran myprog.f90

Depending on the computer that you are using, you may be able to compile an MPI program with a similar command, which automatically locates the MPI include files and the compiled libraries that you will need. This command is likely to be:

  mpif90 myprog.f90
Some systems allow users to run an MPI program interactively. You do this with the mpirun command:

  mpirun -np 4 a.out

This command requests that the executable program a.out be run, right now, using 4 processes.

The mpirun command may be a convenience for beginners, with very small jobs, but this is not the way to go once you have a large lengthy program to run! Also, what actually happens can vary from machine to machine. When you ask for 4 processes, for instance, you may get 4 separate processors, 4 cores on a single processor, or even 4 processes sharing a smaller number of cores.
The computer code and data files described and made available on this web page are distributed under the GNU LGPL license.
MPI examples are available in a C version and a C++ version and a FORTRAN90 version.
COMMUNICATOR_MPI, a FORTRAN90 program which creates new communicators involving a subset of the initial set of MPI processes in the default communicator MPI_COMM_WORLD.
F90_CALLS_C_AND_MPI, FORTRAN90 programs which illustrate how a FORTRAN90 program can call a C function while executing under the MPI parallel programming environment.
F90_CALLS_C++_AND_MPI, FORTRAN90 programs which illustrate how a FORTRAN90 program can call a C++ function while executing under the MPI parallel programming environment.
HEAT_MPI, a FORTRAN90 program which solves the 1D Time Dependent Heat Equation using MPI.
HELLO_MPI, a FORTRAN90 program which prints out "Hello, world!" using the MPI parallel programming environment.
MOAB, examples which illustrate the use of the MOAB job scheduler for a computer cluster.
MPI_STUBS, a FORTRAN90 library which allows a user to compile, load, and possibly run an MPI program on a serial machine.
MULTITASK_MPI, a FORTRAN90 program which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using MPI for parallel execution.
POISSON_SERIAL, a FORTRAN90 program which computes an approximate solution to the Poisson equation in a rectangle, and is intended as the starting point for the creation of a parallel version.
PRIME_MPI, a FORTRAN90 program which counts the number of primes between 1 and N, using MPI for parallel execution.
PTHREADS, C programs which illustrate the use of the POSIX thread library to carry out parallel program execution.
QUAD_MPI, a FORTRAN90 program which approximates an integral using a quadrature rule, and carries out the computation in parallel using MPI.
RANDOM_MPI, a FORTRAN90 program which demonstrates one way to generate the same sequence of random numbers for both sequential execution and parallel execution under MPI.
RING_MPI, a FORTRAN90 program which uses the MPI parallel programming environment, and measures the time necessary to copy a set of data around a ring of processes.
SATISFY_MPI, a FORTRAN90 program which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using MPI to carry out the calculation in parallel.
SEARCH_MPI, a FORTRAN90 program which searches integers between A and B for a value J such that F(J) = C, using MPI for parallel execution.
TASK_DIVISION, a FORTRAN90 library which implements a simple procedure for smoothly dividing T tasks among P processors; such a method can be useful in MPI and other parallel environments, particularly when T is not an exact multiple of P, and when the processors can be indexed starting from 0 or from 1.
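One common scheme for such a smooth division (not necessarily the exact formulas used by TASK_DIVISION) gives each processor either floor(T/P) or ceil(T/P) consecutive tasks, so that the counts differ by at most one:

```fortran
program divide_tasks

!  A sketch of one smooth division scheme: processor PROC, indexed
!  from 0, handles tasks FIRST through LAST.  With T = 10 and P = 4,
!  the processors get 2, 3, 2 and 3 tasks respectively.

  implicit none

  integer, parameter :: t = 10
  integer, parameter :: p = 4
  integer :: proc
  integer :: first
  integer :: last

  do proc = 0, p - 1
    first = ( proc * t ) / p + 1
    last  = ( ( proc + 1 ) * t ) / p
    write ( *, '(a,i2,a,i3,a,i3)' ) &
      'Processor ', proc, ' handles tasks ', first, ' through ', last
  end do

end program divide_tasks
```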
WAVE_MPI, a FORTRAN90 program which uses finite differences and MPI to estimate a solution to the wave equation.
BONES_MPI passes a vector of real data from one process to another. It was used as an introductory example in an MPI workshop.
BUFFON_MPI demonstrates how parallel Monte Carlo processes can set up distinct random number streams.
DAY1_MPI works out exercise #3 assigned after day 1 of a workshop on MPI. The instructions were to have process 1 generate some integers, send them to process 3 which used some of those values to generate some real numbers which were then sent back to process 1.
INTERVALS_MPI estimates an integral by dividing an interval into subintervals, and having the worker processes estimate the integral over each subinterval.
MATMAT_MPI multiplies two matrices.
MATVEC_MPI multiplies a matrix times a vector.
MONTE_CARLO_MPI computes PI by the Monte Carlo method, testing whether random points in the unit square are in the unit circle.
POISSON_MPI solves Poisson's equation on a 2D grid, dividing the physical region into horizontal strips, assigning a process to each strip, and using MPI_SEND and MPI_RECV to pass interface data between processes.
POISSON_NONBLOCK_MPI is a revision of POISSON_MPI which uses the nonblocking communication routines MPI_ISEND and MPI_IRECV to pass interface data between processes.
QUADRATURE_MPI estimates an integral.
SEARCH_MPI searches a vector for occurrences of a particular value.
TYPE_MPI demonstrates the use of a user-defined datatype.
You can go up one level to the FORTRAN90 source codes.