random_mpi, a C code which generates the same sequence of random numbers for both sequential execution and parallel execution under MPI.
A simpler approach would let each processor choose its own seed, or have the master processor choose distinct seeds for the others. However, this is not ideal: the results will not match those of the sequential code, and it does not avoid the possibility that two of the random sequences will quickly overlap because of a bad choice of seeds.
Notice that if we have 10 processors available under MPI, we do not want each processor to generate the same random number sequence. Instead, we want each of the processors to generate a part of the sequence, so that all the parts together make up the same set of values that a sequential code would have computed.
We assume we are using a linear congruential random number generator or "LCRG", which takes an integer input and returns a new integer output:
U = ( A * V + B ) mod C

We assume that we want the MPI code to produce the same sequence of random values as a sequential code would, but we want each processor to compute one part of that sequence.
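For concreteness, here is a minimal sketch of such a generator in C. The parameter values A = 16807, B = 0, and C = 2^31 - 1 are common illustrative choices assumed here, not values prescribed by this page.

    #include <stdio.h>

    #define A 16807ULL
    #define B 0ULL
    #define C 2147483647ULL   /* 2^31 - 1 */

    /* One step of the LCRG: returns ( A * V + B ) mod C.        */
    /* 64-bit arithmetic is used so that A * V cannot overflow.  */
    static unsigned long long lcrg_step ( unsigned long long v )
    {
      return ( A * v + B ) % C;
    }

    int main ( void )
    {
      unsigned long long u = 123456789ULL;   /* an arbitrary nonzero seed */
      int i;

      for ( i = 0; i < 5; i++ )
      {
        u = lcrg_step ( u );
        printf ( "%llu\n", u );
      }
      return 0;
    }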
We do this by computing a new LCRG which generates every P-th entry of the original sequence.
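One way to derive the multiplier and increment of such a skip-ahead generator is sketched below; this illustrates the idea only and is not necessarily the routine used by the UNIFORM library. Applying U = ( A * V + B ) mod C a total of P times is itself an LCRG step whose multiplier is A^P mod C and whose increment is B * ( A^(P-1) + ... + A + 1 ) mod C.

    /* Sketch: compute AN and BN so that applying              */
    /*   u -> ( a * u + b ) mod c                               */
    /* p times in a row is the same as applying                 */
    /*   u -> ( an * u + bn ) mod c                             */
    /* once.  The helper name and interface are hypothetical.   */
    void lcrg_skip_params ( unsigned long long a, unsigned long long b,
      unsigned long long c, int p, unsigned long long *an,
      unsigned long long *bn )
    {
      int i;

      *an = 1;
      *bn = 0;
      for ( i = 0; i < p; i++ )
      {
        *bn = ( a * (*bn) + b ) % c;  /* after pass i+1: b*(a^i+...+a+1) mod c */
        *an = ( a * (*an) )     % c;  /* after pass i+1: a^(i+1) mod c         */
      }
    }

A process of rank R could then advance the original generator R times from the shared seed to reach its first entry, and afterwards apply ( AN, BN ) repeatedly to produce entries R, R+P, R+2P, ... of the global sequence.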
Our LCRG works with integers, but it is easy to turn each integer into a real number in [0,1].
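For example, one simple mapping divides by the modulus (again assuming the illustrative C = 2^31 - 1 used above):

    /* Sketch: map an LCRG output u, with 0 <= u < c, to a real value */
    /* in [0,1] by dividing by the modulus c.                         */
    double lcrg_to_double ( unsigned long long u, unsigned long long c )
    {
      return ( double ) u / ( double ) c;
    }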
The particular scheme for computing the parameters of the new LCRG is implemented in the UNIFORM library.
The computer code and data files made available on this web page are distributed under the MIT license.
random_mpi is available in a C version and a C++ version and a FORTRAN90 version.
COMMUNICATOR_MPI, a C code which creates new communicators involving a subset of the initial set of MPI processes in the default communicator MPI_COMM_WORLD.
HEAT_MPI, a C code which solves the 1D Time Dependent Heat Equation using MPI.
HELLO_MPI, a C code which prints out "Hello, world!" using the MPI parallel programming environment.
LAPLACE_MPI, a C code which solves Laplace's equation on a rectangle, using MPI for parallel execution.
mpi_test, C codes which illustrate the use of the MPI application program interface for carrying out parallel computations in a distributed memory environment.
MULTITASK_MPI, a C code which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using MPI for parallel execution.
POISSON_MPI, a C code which computes a solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and MPI to carry out the Jacobi iteration in parallel.
PRIME_MPI, a C code which counts the number of primes between 1 and N, using MPI for parallel execution.
QUAD_MPI, a C code which approximates an integral using a quadrature rule, and carries out the computation in parallel using MPI.
RING_MPI, a C code which uses the MPI parallel programming environment, and measures the time necessary to copy a set of data around a ring of processes.
SATISFY_MPI, a C code which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using MPI to carry out the calculation in parallel.
SEARCH_MPI, a C code which searches integers between A and B for a value J such that F(J) = C, using MPI.
WAVE_MPI, a C code which uses finite differences and MPI to estimate a solution to the wave equation.