laplace_mpi, a C code which solves Laplace's equation in a rectangle, using MPI for parallel execution.
Laplace's equation can be thought of as a heat equation. For a scalar variable u(x,y), it has the form:
- d2u/dx2 - d2u/dy2 = 0

with the value of u(x,y) specified along the boundary of the rectangle.
This code uses a finite difference scheme to solve Laplace's equation for a square matrix distributed over a square (logical) processor topology. A complete description of the algorithm is found in the reference by Fox.
This code works on the SPMD (single program, multiple data) paradigm. It illustrates 2D block decomposition, nodes exchanging edge values, and convergence checking.
Each matrix element is updated based on the values of the four neighboring matrix elements. This process is repeated until the data converges, that is, until the average change in any matrix element (compared to the value 20 iterations previous) is smaller than a specified value.
To ensure reproducible results between runs, a red/black checkerboard algorithm is used. Each process exchanges edge values with its four neighbors. Then new values are calculated for the upper left and lower right corners (the "red" corners) of each node's matrix. The processes exchange edge values again. The upper right and lower left corners (the "black" corners) are then calculated.
The code is currently configured for a 48x48 matrix distributed over four processors. It can be edited to handle different matrix sizes or numbers of processors, as long as the matrix can be divided evenly among the processors.
The computer code and data files made available on this web page are distributed under the MIT license.
laplace_mpi is available in a C version.
COMMUNICATOR_MPI, a C code which creates new communicators involving a subset of the initial set of MPI processes in the default communicator MPI_COMM_WORLD.
HEAT_MPI, a C code which solves the 1D Time Dependent Heat Equation using MPI.
HELLO_MPI, a C code which prints out "Hello, world!" using the MPI parallel programming environment.
mpi_test, C codes which illustrate the use of the MPI application program interface for carrying out parallel computations in a distributed memory environment.
MULTITASK_MPI, a C code which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using MPI for parallel execution.
POISSON_MPI, a C code which computes a solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and MPI to carry out the Jacobi iteration in parallel.
PRIME_MPI, a C code which counts the number of primes between 1 and N, using MPI for parallel execution.
QUAD_MPI, a C code which approximates an integral using a quadrature rule, and carries out the computation in parallel using MPI.
RANDOM_MPI, a C code which demonstrates one way to generate the same sequence of random numbers for both sequential execution and parallel execution under MPI.
RING_MPI, a C code which uses the MPI parallel programming environment, and measures the time necessary to copy a set of data around a ring of processes.
SATISFY_MPI, a C code which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using MPI to carry out the calculation in parallel.
SEARCH_MPI, a C code which searches integers between A and B for a value J such that F(J) = C, using MPI.
WAVE_MPI, a C code which uses finite differences and MPI to estimate a solution to the wave equation.
Sequential C version by Robb Newman; MPI C version by Xianneng Shen.