MPI_STUBS
Dummy MPI Library

MPI_STUBS is a C library which implements "stub" versions of the MPI routines.
MPI_STUBS is intended to include stubs for the most commonly
called MPI routines. Most of the stub routines don't do anything;
in a few cases, where it makes sense, they do some simple action
or return a value that is appropriate for serial processing.
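To make the idea concrete, the following is a minimal sketch of how two such
stubs can behave; it is illustrative only, and the distributed source may
differ in its internal types and error checking. With a single process, the
rank is always 0 and every communicator has size 1.

    # include "mpi.h"   /* the stub library's own header, assumed to define
                            MPI_Comm and MPI_SUCCESS */

    int MPI_Comm_rank ( MPI_Comm comm, int *rank )
    {
      *rank = 0;         /* the only process is always process 0 */
      return MPI_SUCCESS;
    }

    int MPI_Comm_size ( MPI_Comm comm, int *size )
    {
      *size = 1;         /* every "communicator" contains exactly one process */
      return MPI_SUCCESS;
    }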
MPI_STUBS can be used as a convenience, when a real MPI
implementation is not available and the user simply wants to
test-compile a code. It may also be useful on those occasions
when a code has been so carefully written that it will still
execute correctly on a single processor.
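For instance, a small program such as the following (an illustrative example,
not part of the library) can be linked against a real MPI library for parallel
execution, or against MPI_STUBS for a serial run in which the single process
reports itself as rank 0 of 1.

    # include <stdio.h>
    # include "mpi.h"

    int main ( int argc, char *argv[] )
    {
      int id;
      int p;

      MPI_Init ( &argc, &argv );
      MPI_Comm_rank ( MPI_COMM_WORLD, &id );
      MPI_Comm_size ( MPI_COMM_WORLD, &p );

      printf ( "Process %d of %d says hello.\n", id, p );

      MPI_Finalize ( );
      return 0;
    }

Assuming the stub routines live in a single source file, say mpi_stubs.c
(the actual file name may differ), the serial build needs no MPI installation
at all, for example: cc hello.c mpi_stubs.c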
MPI_STUBS is based on a similar package supplied as
part of the LAMMPS program, which allows that program to
be compiled, linked and run on a single processor machine,
although it is normally intended for parallel execution.
The computer code and data files described and made available on this web page
are distributed under
the GNU LGPL license.
MPI_STUBS is available in
a C version and
a C++ version and
a FORTRAN77 version and
a FORTRAN90 version.
Related Data and Programs:
a C program which
prints out "Hello, world!" using the MPI parallel programming environment.
examples which illustrate the use of the MOAB job scheduler for a computer cluster.
a library of message passing routines which supports parallel
processing on a variety of machine architectures, and with
a varying number of processors.
a C program which
demonstrates how to "multitask", that is, to execute several unrelated
and distinct tasks simultaneously, using MPI for parallel execution.
a C program which
demonstrates one way to generate the same sequence of random numbers
for both sequential execution and parallel execution under MPI.
Reference:
William Gropp, Ewing Lusk, Anthony Skjellum,
Using MPI: Portable Parallel Programming with the Message-Passing Interface,
MIT Press, 1999.
Examples and Tests:
BUFFON_LAPLACE demonstrates how parallel Monte Carlo
processes can set up distinct random number streams.
QUADRATURE is a program that estimates an integral
using random sampling.
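As a sketch of the pattern such examples follow (this is illustrative code,
not the distributed BUFFON_LAPLACE or QUADRATURE source), each process draws
its own random samples, seeded by its rank, and the partial results are
combined with MPI_Reduce; under MPI_STUBS the "reduction" simply copies the
single process's partial sum, so the serial answer is unchanged.

    # include <stdio.h>
    # include <stdlib.h>
    # include "mpi.h"

    int main ( int argc, char *argv[] )
    {
      int i, id, p;
      int n = 100000;
      double f_part = 0.0;
      double f_total = 0.0;
      double x;

      MPI_Init ( &argc, &argv );
      MPI_Comm_rank ( MPI_COMM_WORLD, &id );
      MPI_Comm_size ( MPI_COMM_WORLD, &p );

      srand ( 123456789 + id );        /* give each process its own stream */

      for ( i = 0; i < n / p; i++ )    /* each process takes a share of the samples */
      {
        x = ( double ) rand ( ) / ( double ) RAND_MAX;
        f_part = f_part + x * x;       /* Monte Carlo estimate of the integral of x^2 on [0,1] */
      }

      MPI_Reduce ( &f_part, &f_total, 1, MPI_DOUBLE, MPI_SUM, 0, MPI_COMM_WORLD );

      if ( id == 0 )
      {
        printf ( "Integral estimate: %f\n", f_total / ( double ) ( p * ( n / p ) ) );
      }

      MPI_Finalize ( );
      return 0;
    }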
List of Routines:
MPI_ALLGATHER gathers data from all the processes in a communicator.
MPI_ALLGATHERV gathers data of possibly varying counts from all the processes in a communicator.
MPI_ALLREDUCE carries out a reduction operation.
MPI_BARRIER forces processes within a communicator to wait together.
MPI_BCAST broadcasts data from one process to all others.
MPI_CART_CREATE creates a communicator for a Cartesian topology.
MPI_CART_GET returns the "Cartesian coordinates" of the calling process.
MPI_CART_SHIFT finds the destination and source for Cartesian shifts.
MPI_COMM_DUP duplicates a communicator.
MPI_COMM_FREE frees a communicator.
MPI_COMM_RANK reports the rank of the calling process.
MPI_COMM_SIZE reports the number of processes in a communicator.
MPI_COMM_SPLIT splits up a communicator based on a key.
MPI_COPY_BYTE copies a byte vector.
MPI_COPY_DOUBLE copies a double vector.
MPI_COPY_FLOAT copies a float vector.
MPI_COPY_INT copies an int vector.
MPI_FINALIZE shuts down the MPI library.
MPI_GET_COUNT reports the actual number of items transmitted.
MPI_INIT initializes the MPI library.
MPI_IRECV receives data from another process using nonblocking reception.
MPI_ISEND sends data from one process to another using nonblocking transmission.
MPI_RECV receives data from another process within a communicator.
MPI_REDUCE carries out a reduction operation.
MPI_REDUCE_DOUBLE carries out a reduction operation on doubles.
MPI_REDUCE_FLOAT carries out a reduction operation on floats.
MPI_REDUCE_INT carries out a reduction operation on ints.
MPI_REDUCE_SCATTER collects a message of the same length from each process.
MPI_RSEND "ready sends" data from one process to another.
MPI_SEND sends data from one process to another.
MPI_WAIT waits for an I/O request to complete.
MPI_WAITALL waits until all I/O requests have completed.
MPI_WAITANY waits until one I/O request has completed.
MPI_WTICK returns the time between ticks of the timer.
MPI_WTIME returns the elapsed wall clock time.
TIMESTAMP prints the current YMDHMS date as a time stamp.
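To see what "some simple action" means in practice, here is a hypothetical
sketch of a reduction stub; the actual MPI_STUBS routine may be organized
differently, for example by calling the MPI_REDUCE_DOUBLE, MPI_REDUCE_FLOAT
and MPI_REDUCE_INT helpers listed above. With only one process, every
reduction is just a copy of the sender's buffer.

    # include "mpi.h"   /* assumed to define MPI_Datatype, MPI_Op, MPI_Comm,
                            MPI_DOUBLE and MPI_SUCCESS */

    int MPI_Reduce ( void *sendbuf, void *recvbuf, int count,
      MPI_Datatype datatype, MPI_Op op, int root, MPI_Comm comm )
    {
      int i;

      if ( datatype == MPI_DOUBLE )    /* dispatch on the data type... */
      {
        double *in  = ( double * ) sendbuf;
        double *out = ( double * ) recvbuf;
        for ( i = 0; i < count; i++ )
        {
          out[i] = in[i];              /* ...and copy: a reduction over one
                                          process is just its own input */
        }
      }
      /* branches for MPI_FLOAT, MPI_INT, ... would follow the same pattern */
      return MPI_SUCCESS;
    }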
Last revised on 22 March 2011.