COMMUNICATOR_MPI
Creating New Communicators in MPI


COMMUNICATOR_MPI is a C program which creates new communicators involving a subset of the initial set of MPI processes in the default communicator MPI_COMM_WORLD.

To understand this program, let's assume we run it under MPI with 4 processes. Within the default communicator, the processes will have IDs of 0, 1, 2 and 3.

We call MPI_Comm_group() to extract the group associated with MPI_COMM_WORLD. Then we call MPI_Group_incl(), passing a list containing a subset of the legal process IDs in MPI_COMM_WORLD, to define a new group. In particular, we pass the even IDs, creating an even group, and later create an odd group in the same way.

A group can be used to create a new communicator by calling MPI_Comm_create(). Once we have this new communicator, we can use functions like MPI_Comm_rank() and MPI_Comm_size(), specifying the name of the new communicator. We can then use a function like MPI_Reduce() to sum up data associated exclusively with the processes in that communicator.

One complicating factor is that a process that is not part of the new communicator cannot make an MPI call that involves that communicator. For instance, an odd process could not call MPI_Comm_rank() asking for its rank in the even communicator. Conveniently, MPI_Comm_create() returns MPI_COMM_NULL to any process outside the group, so each process can test its handle before making such calls. If you look at the program, you will see that we have to be careful to determine what group we are in before we make calls to the MPI routines.

Thus, in the example, we begin with 4 processes, whose global IDs are 0, 1, 2 and 3. We create an even communicator containing processes 0 and 2, and an odd communicator containing processes 1 and 3. Notice that, within the even communicator, the processes with global IDs 0 and 2 have even communicator IDs of 0 and 1.

We can call MPI_Reduce() to sum the global IDs of the processes in the even communicator, getting a result of 0 + 2 = 2; the same sum, over the odd communicator, results in 1 + 3 = 4.

Licensing:

The computer code and data files made available on this web page are distributed under the GNU LGPL license.

Languages:

COMMUNICATOR_MPI is available in a C version and a C++ version and a FORTRAN77 version and a FORTRAN90 version.

Related Data and Programs:

HEAT_MPI, a C program which solves the 1D Time Dependent Heat Equation using MPI.

HELLO_MPI, a C program which prints out "Hello, world!", using MPI for parallel execution.

LAPLACE_MPI, a C program which solves Laplace's equation on a rectangle, using MPI for parallel execution.

MOAB, examples which illustrate the use of the MOAB job scheduler for a computer cluster.

MPI, C examples which illustrate the use of the MPI application program interface for carrying out parallel computations in a distributed memory environment.

MULTITASK_MPI, a C program which demonstrates how to "multitask", that is, to execute several unrelated and distinct tasks simultaneously, using MPI for parallel execution.

POISSON_MPI, a C program which computes a solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and MPI to carry out the Jacobi iteration in parallel.

PRIME_MPI, a C program which counts the number of primes between 1 and N, using MPI for parallel execution.

QUAD_MPI, a C program which approximates an integral using a quadrature rule, and carries out the computation in parallel using MPI.

RANDOM_MPI, a C program which demonstrates one way to generate the same sequence of random numbers for both sequential execution and parallel execution under MPI.

RING_MPI, a C program which uses the MPI parallel programming environment, and measures the time necessary to copy a set of data around a ring of processes.

SATISFY_MPI, a C program which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using MPI to carry out the calculation in parallel.

SEARCH_MPI, a C program which searches integers between A and B for a value J such that F(J) = C, using MPI.

WAVE_MPI, a C program which uses finite differences and MPI to estimate a solution to the wave equation.

Reference:

  1. Michael Quinn,
    Parallel Programming in C with MPI and OpenMP,
    McGraw-Hill, 2004,
    ISBN13: 978-0071232654,
    LC: QA76.73.C15.Q55.

Source Code:

Examples and Tests:

COMMUNICATOR_FSU compiles and runs the program on the FSU HPC cluster.

COMMUNICATOR_LOCAL compiles and runs the program on a local machine.



Last revised on 09 January 2012.