openmp_test, a Fortran77 code which uses the OpenMP application program interface for carrying out parallel computations in a shared memory environment.
The directives allow the user to mark areas of the code, such as do, while, or for loops, which are suitable for parallel processing. The directives appear as a special kind of comment, so the code can be compiled and run in serial mode. However, the user can tell the compiler to "notice" the special directives, in which case a version of the code will be created that runs in parallel.
Thus the same code can easily be run in serial or parallel mode on a given computer, or run on a computer that does not have OpenMP at all.
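A minimal sketch of this property: the c$omp lines below are ordinary comments to a serial Fortran77 compiler, so this program prints its message once; when compiled with OpenMP enabled, the same source prints the message once per thread.

```fortran
c  HELLO demonstrates that OpenMP directives are comments in serial mode.
c  Compiled serially, the c$omp lines are ignored and one line is printed.
c  Compiled with OpenMP, each thread executes the parallel region.
      program hello

      implicit none

c$omp parallel
      write ( *, '(a)' ) 'Hello from a thread!'
c$omp end parallel

      stop
      end
```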
OpenMP is suitable for a shared memory parallel system, that is, a situation in which there is a single memory space, and multiple processors. If memory is shared, then typically the number of processors will be small, and they will all be on the same physical machine.
By contrast, in a distributed memory system, items of data are closely associated with a particular processor. There may be a very large number of processors, and they may be more loosely coupled and even on different machines. Such a system will need to be handled with MPI or some other message passing interface.
OpenMP descended in part from the old Cray microtasking directives, so if you've lived long enough to remember those, you will recognize some features.
OpenMP includes a number of functions whose type must be declared in any code that uses them. To avoid having to declare these functions, you can use the command
include 'omp_lib.h'
in any routine that invokes OpenMP functions.
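For instance, a sketch of a routine that calls an OpenMP function without declaring it, relying on the include file for the declaration:

```fortran
c  THREADS relies on the declarations in omp_lib.h, so the OpenMP
c  function OMP_GET_MAX_THREADS does not need an explicit type here.
      program threads

      implicit none

      include 'omp_lib.h'

      write ( *, '(a,i4)' )
     &  'Maximum threads available: ', omp_get_max_threads ( )

      stop
      end
```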
Note that, for the Fortran77 compiler, the OpenMP directives are required to follow the unfriendly and intolerant rules for line length and continuation that apply to the text of Fortran77 code, namely:
a directive begins with the sentinel c$omp, *$omp, or !$omp, starting in column 1;
the text of the directive must not extend beyond column 72;
a directive that is continued onto another line repeats the sentinel, with a nonblank character other than zero in column 6, as in c$omp&.
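The continuation rule can be sketched as follows: each continued directive line repeats the sentinel and places an ampersand in column 6.

```fortran
c  SUM2 shows a fixed-form OpenMP directive continued over several
c  lines: the sentinel c$omp& has a nonblank character in column 6.
      program sum2

      implicit none

      integer i, n
      parameter ( n = 1000 )
      double precision total, x(n)

      do i = 1, n
        x(i) = dble ( i )
      end do

      total = 0.0D+00
c$omp parallel do
c$omp& shared ( x )
c$omp& private ( i )
c$omp& reduction ( + : total )
      do i = 1, n
        total = total + x(i)
      end do
c$omp end parallel do

      write ( *, * ) 'Sum = ', total

      stop
      end
```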
OpenMP allows you to "request" any number of threads of execution. This is a request, and it's not always a wise request. If your system has four processors available, and they're not busy doing other things, or serving other users, maybe 4 threads is what you want. But you can't guarantee you'll get the undivided use of those processors. Moreover, if you run the same code using 1 thread and 4 threads, you may find that using 4 threads slows you down, either because you don't actually have 4 processors, (so the system has the overhead of pretending to run in parallel), or because the processors you have are also busy doing other things.
For this reason, it's wise to run the code at least once in single thread mode, so you have a benchmark against which to measure the speedup you got (or didn't get!) versus the speedup you hoped for.
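The advice above can be sketched as follows: request one thread, time a loop, then request several threads and time the same loop, so the serial run serves as the benchmark. The thread count of 4 is an arbitrary choice for illustration.

```fortran
c  BENCH times the same reduction loop with 1 thread and with 4
c  threads, so the serial timing serves as a benchmark for speedup.
      program bench

      implicit none

      include 'omp_lib.h'

      integer i, n
      parameter ( n = 10000000 )
      double precision t, total

c  First run: a single thread, for the benchmark timing.
      call omp_set_num_threads ( 1 )
      t = omp_get_wtime ( )
      total = 0.0D+00
c$omp parallel do reduction ( + : total )
      do i = 1, n
        total = total + sin ( dble ( i ) )
      end do
c$omp end parallel do
      write ( *, * ) '1 thread:  ', omp_get_wtime ( ) - t, ' seconds'

c  Second run: request 4 threads and compare.
      call omp_set_num_threads ( 4 )
      t = omp_get_wtime ( )
      total = 0.0D+00
c$omp parallel do reduction ( + : total )
      do i = 1, n
        total = total + sin ( dble ( i ) )
      end do
c$omp end parallel do
      write ( *, * ) '4 threads: ', omp_get_wtime ( ) - t, ' seconds'

      stop
      end
```

If the 4-thread run is not faster, the likely causes are exactly those described above: fewer than 4 physical processors, or processors busy with other work.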
The compiler you use must recognize the OpenMP directives in order to produce code that will run in parallel. The GNU gfortran compiler, for example, supports OpenMP when the -fopenmp switch is supplied.
The information on this web page is distributed under the MIT license.
openmp_test is available in a C version and a C++ version and a Fortran77 version and a Fortran90 version.
dijkstra_openmp, a Fortran77 code which uses OpenMP to parallelize a simple example of Dijkstra's minimum distance algorithm for graphs.
fft_openmp, a Fortran77 code which demonstrates the computation of a Fast Fourier Transform in parallel, using OpenMP.
heated_plate_openmp, a Fortran77 code which solves the steady (time independent) heat equation in a 2D rectangular region, using OpenMP to run in parallel.
hello_openmp, a Fortran77 code which prints out "Hello, world!" using the OpenMP parallel programming environment.
helmholtz_openmp, a Fortran77 code which solves the discretized Helmholtz equation in 2D using the OpenMP application program interface for carrying out parallel computations in a shared memory environment.
mandelbrot_openmp, a Fortran77 code which generates an ASCII Portable Pixel Map (PPM) image of the Mandelbrot fractal set, using OpenMP for parallel execution.
md_openmp, a Fortran77 code which carries out a molecular dynamics simulation using OpenMP.
multitask_openmp, a Fortran77 code which demonstrates how to multitask, that is, to execute several unrelated and distinct tasks simultaneously, using OpenMP for parallel execution.
mxm_openmp, a Fortran77 code which computes a dense matrix product C=A*B, using OpenMP for parallel execution.
mxv_openmp, a Fortran77 code which compares the performance of plain vanilla Fortran and the Fortran90 intrinsic routine MATMUL, for the matrix multiplication problem y=A*x, with and without parallelization by OpenMP.
openmp_stubs, a Fortran77 code which implements a stub version of OpenMP, so that an OpenMP program can be compiled, linked and executed on a system that does not have OpenMP installed.
poisson_openmp, a Fortran77 code which computes an approximate solution to the Poisson equation in a rectangle, using the Jacobi iteration to solve the linear system, and OpenMP to carry out the Jacobi iteration in parallel.
prime_openmp, a Fortran77 code which counts the number of primes between 1 and N, using OpenMP for parallel execution.
quad_openmp, a Fortran77 code which approximates an integral using a quadrature rule, and carries out the computation in parallel using OpenMP.
random_openmp, a Fortran77 code which illustrates how a parallel code using OpenMP can generate multiple distinct streams of random numbers.
satisfy_openmp, a Fortran77 code which demonstrates, for a particular circuit, an exhaustive search for solutions of the circuit satisfiability problem, using OpenMP for parallel execution.
schedule_openmp, a Fortran77 code which demonstrates the default, static, and dynamic methods of "scheduling" loop iterations in OpenMP to avoid work imbalance.
sgefa_openmp, a Fortran77 code which reimplements the SGEFA/SGESL linear algebra routines from LINPACK for use with OpenMP.
ziggurat_openmp, a Fortran77 code which demonstrates how the ZIGGURAT library can be used to generate random numbers in an OpenMP parallel program.
DOT_PRODUCT compares the computation of a vector dot product in sequential mode, and using OpenMP. Typically, the overhead of using parallel processing outweighs the advantage for small vector sizes N. The code demonstrates this fact by using a number of values of N, and by running both sequential and OpenMP versions of the calculation.
MXM is a simple exercise in timing the computation of a matrix-matrix product.