This chapter describes how to build MPI applications, gives examples of using the mpirun(1) command to launch MPI jobs, and explains how to build and run SHMEM applications. It covers the following topics:
The default locations for the include files, the .so files, the .a files, and the mpirun command are pulled in automatically.
To ensure that the mpt software module is loaded, enter the following command:
% module load mpt
Once the MPT RPM is installed as default, the commands to build an MPI-based application using the .so files are as follows:
To compile using GNU compilers, choose one of the following commands:
% g++ -o myprog myprog.C -lmpi++ -lmpi
% gcc -o myprog myprog.c -lmpi
To compile programs with the Intel compiler, choose one of the following commands:
% ifort -o myprog myprog.f -lmpi     (Fortran - version 8)
% icc -o myprog myprog.c -lmpi       (C - version 8)
% mpif90 simple1_mpi.f               (Fortran 90)
% mpicc -o myprog myprog.c           (Open MPI C wrapper compiler)
% mpicxx -o myprog myprog.C          (Open MPI C++ wrapper compiler)
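The commands above assume an MPI source file such as myprog.c. The following minimal C program is a hypothetical example of such a file (it is not part of the MPT distribution); any of the C compile commands shown above should build it:

/* myprog.c - a minimal MPI program, shown only for illustration */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    printf("Hello from rank %d of %d\n", rank, size);

    MPI_Finalize();
    return 0;
}

Built with, for example, gcc -o myprog myprog.c -lmpi and launched with mpirun -np 3 ./myprog, it should print one line per MPI process.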
The libmpi++.so library is compatible with code generated by g++ 3.0 or later compilers, as well as Intel C++ 8.0 or later compilers. If compatibility with previous g++ or C++ compilers is required, the libmpi++.so released with MPT 1.9 (or earlier) must be used.
Note: You must use the Intel compiler to compile Fortran 90 programs.
To compile Fortran programs with the Intel compiler, enabling compile-time checking of MPI subroutine calls, insert a USE MPI statement near the beginning of each subprogram to be checked and use one of the following commands:
% ifort -I/usr/include -o myprog myprog.f -lmpi     (version 8)
Note: The above command line assumes a default installation; if you have installed MPT into a non-default location, replace /usr/include with the name of the relocated directory.
The special case of using the Open64 compiler with hybrid MPI/OpenMP applications requires separate compile and link command lines. The Open64 version of the OpenMP library requires the -openmp option on the compile command line, but this option interferes with proper linking of the MPI libraries. Use the following sequence:
% opencc -o myprog.o -openmp -c myprog.c
% opencc -o myprog myprog.o -lopenmp -lmpi
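For illustration, the following is a small, hypothetical hybrid MPI/OpenMP program of the kind the two-step Open64 sequence above is meant to build; the file name myprog.c and the printed output are assumptions, not part of MPT:

/* myprog.c - hypothetical hybrid MPI/OpenMP example */
#include <mpi.h>
#include <omp.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int rank;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process runs its own OpenMP parallel region;
       MPI calls are made only outside the region. */
    #pragma omp parallel
    {
        printf("MPI rank %d, OpenMP thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}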
You must use the mpirun(1) command to start MPI applications. For complete specification of the command line syntax, see the mpirun(1) man page. This section summarizes the procedures for launching an MPI application.
To run an application on the local host, enter the mpirun command with the -np argument. Your entry must include the number of processes to run and the name of the MPI executable file.
The following example starts three instances of the mtest application, which is passed an argument list (arguments are optional):
% mpirun -np 3 mtest 1000 "arg2"
You are not required to use a different host in each entry that you specify on the mpirun command. You can launch a job that has multiple executable files on the same host. In the following example, one copy of prog1 and five copies of prog2 are run on the local host. Both executable files use shared memory.
% mpirun -np 1 prog1 : 5 prog2
You can use the mpirun command to launch a program that consists of any number of executable files and processes and you can distribute the program to any number of hosts. A host is usually a single machine, or it can be any accessible computer running Array Services software. For available nodes on systems running Array Services software, see the /usr/lib/array/arrayd.conf file.
You can list multiple entries on the mpirun command line. Each entry contains an MPI executable file and a combination of hosts and process counts for running it. This gives you the ability to start different executable files on the same or different hosts as part of the same MPI application.
The examples in this section show various ways to launch an application that consists of multiple MPI executable files on multiple hosts.
The following example runs ten instances of the a.out file on host_a:
% mpirun host_a -np 10 a.out
When specifying multiple hosts, you can omit the -np option and list the number of processes directly. The following example launches ten instances of fred on three hosts. fred has two input arguments.
% mpirun host_a, host_b, host_c 10 fred arg1 arg2
The following example launches an MPI application on different hosts with different numbers of processes and executable files:
% mpirun host_a 6 a.out : host_b 26 b.out
To use the MPI-2 process creation functions MPI_Comm_spawn or MPI_Comm_spawn_multiple, you must specify the universe size by specifying the -up option on the mpirun command line. For example, the following command starts three instances of the mtest MPI application in a universe of size 10:
% mpirun -up 10 -np 3 mtest
By using one of the above MPI spawn functions, mtest can start up to seven more MPI processes.
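The following C sketch illustrates how a program such as mtest might use MPI_Comm_spawn together with the MPI_UNIVERSE_SIZE attribute, which reflects the value given with -up. The worker executable name and the overall structure are hypothetical and are not taken from the actual mtest program:

/* Hypothetical parent program: queries the universe size and
 * spawns additional worker processes up to that limit. */
#include <mpi.h>
#include <stdio.h>

int main(int argc, char **argv)
{
    int world_size, *universe_size, flag;
    MPI_Comm children;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &world_size);

    /* MPI_UNIVERSE_SIZE reflects the value given with mpirun -up */
    MPI_Comm_get_attr(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE,
                      &universe_size, &flag);

    if (flag && *universe_size > world_size) {
        /* Spawn as many extra processes as the universe allows;
         * with -up 10 and -np 3 this is at most 7. The name
         * "./worker" is illustrative only. */
        MPI_Comm_spawn("./worker", MPI_ARGV_NULL,
                       *universe_size - world_size, MPI_INFO_NULL,
                       0, MPI_COMM_WORLD, &children,
                       MPI_ERRCODES_IGNORE);
    }

    MPI_Finalize();
    return 0;
}

Launched as shown above with mpirun -up 10 -np 3, the three initial processes could collectively spawn up to seven additional workers.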
When running MPI applications on partitioned Altix systems which use the MPI-2 MPI_Comm_spawn or MPI_Comm_spawn_multiple functions, it may be necessary to explicitly specify the partitions on which additional MPI processes may be launched. See the section "Launching Spawn Capable Jobs on Altix Partitioned Systems" in the mpirun(1) man page.
When an MPI job is run from a workload manager such as PBS Professional, Torque, or Load Sharing Facility (LSF), it needs to launch on the cluster nodes and CPUs that have been allocated to the job. For multi-node MPI jobs, this type of launch requires an MPI launch command that interprets the node and CPU selection information for the workload manager in use. One such command, mpiexec_mpt(1), is provided with MPT, and another, mpiexec(1), ships with the PBS Professional workload manager software. The following sections describe how to launch MPI jobs with specific workload managers and cover these topics:
Often MPI applications are run from job scripts submitted through batch schedulers like PBS Professional. This section provides some details about how to properly set up PBS job scripts to run MPI applications.
PBS job scripts can specify needed resource allocations using the -l option on a "#PBS" directive line. These lines will have the following form:
#PBS -l select=P:ncpus=T[:other options]
The value P should be set to the total number of MPI processes in the job, and the value T should be set to the number of OpenMP threads per process. For purely MPI jobs, T is 1. For more information on resource allocation options, see the pbs_resources(7) man page from the PBS Professional software distribution.
Each MPI application is executed with the mpiexec command that is delivered with the PBS Professional software packages. This is a wrapper script that assembles the correct host list and corresponding mpirun command and then executes that mpirun command. The basic syntax is as follows:
mpiexec -n P ./a.out
where P is the total number of MPI processes in the application. This syntax applies whether running on a single host or a clustered system. See the mpiexec(1) man page for more details.
Process and thread pinning onto CPUs is especially important on cache coherent non-uniform memory access (ccNUMA) systems like the SGI Altix 4700 or the SGI Altix UV 1000. Process pinning is performed automatically if PBS Professional is set up to run each application in a set of dedicated cpusets. In these cases, PBS Professional will set the PBS_CPUSET_DEDICATED environment variable to the value "YES". This has the same effect as setting MPI_DSM_DISTRIBUTE=ON. Process and thread pinning are also performed in all cases where omplace(1) is used.
Example 3-1. Run an MPI application with 512 Processes
To run an application with 512 processes, perform the following:
#PBS -l select=512:ncpus=1
mpiexec -n 512 ./a.out
Example 3-2. Run an MPI application with 512 Processes and Four OpenMP Threads per Process
To run an MPI application with 512 processes and four OpenMP threads per process, perform the following:
#PBS -l select=512:ncpus=4
mpiexec -n 512 omplace -nt 4 ./a.out
The mpiexec_mpt(1) command is provided by the SGI Message Passing Toolkit (MPT). The mpiexec_mpt command launches an MPT MPI program in a batch scheduler-managed cluster environment. When running PBS Professional, mpiexec_mpt is an alternative to the mpiexec(1) command. Unlike the PBS Professional mpiexec command, mpiexec_mpt supports all MPT mpirun global options. The mpiexec_mpt command has a -tv option for use by MPT with the TotalView Debugger. For more information on using the mpiexec_mpt command -tv option, see “Using the TotalView Debugger with MPI programs” in Chapter 5.
When running Torque, SGI recommends using the MPT mpiexec_mpt(1) command to launch MPT MPI jobs.
The basic syntax is as follows:
mpiexec_mpt -n P ./a.out
where P is the total number of MPI processes in the application. This syntax applies whether running on a single host or a clustered system. See the mpiexec_mpt(1) man page for more details.
The mpiexec_mpt command has a -tv option for use by MPT when running the TotalView Debugger with a batch scheduler like Torque. For more information on using the mpiexec_mpt command -tv option, see “Using the TotalView Debugger with MPI programs” in Chapter 5.
To compile SHMEM programs with a GNU compiler, choose one of the following commands:
% g++ compute.C -lsma
% gcc compute.c -lsma
To compile SHMEM programs with the Intel compiler, use the following commands:
% icc compute.C -lsma      (version 8)
% icc compute.c -lsma      (version 8)
% ifort compute.f -lsma    (version 8)
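The commands above assume a SHMEM source file such as compute.c. The following hypothetical C sketch shows what such a program might look like, using the traditional SGI SHMEM interface (start_pes, shmem_long_put); details can vary between MPT releases, so consult the intro_shmem(3) man page for the authoritative interface:

/* compute.c - hypothetical SHMEM example: each PE puts its PE
 * number into a symmetric variable on its right-hand neighbor. */
#include <mpp/shmem.h>
#include <stdio.h>

long target;   /* symmetric data object, remotely accessible */
long source;

int main(void)
{
    start_pes(0);                      /* initialize SHMEM */
    int me   = shmem_my_pe();
    int npes = shmem_n_pes();

    source = (long)me;

    /* Write this PE's number into target on the next PE */
    shmem_long_put(&target, &source, 1, (me + 1) % npes);
    shmem_barrier_all();

    printf("PE %d received %ld\n", me, target);
    return 0;
}

Built with one of the commands above and launched with mpirun as described below, each PE should print the number of its left-hand neighbor.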
You must use mpirun to launch SHMEM applications. The NPES variable has no effect on SHMEM programs. To request the desired number of processes to launch, you must specify the -np option on the mpirun command line.
The SHMEM programming model supports single host SHMEM applications, as well as SHMEM applications that span multiple partitions. To launch a SHMEM application on more than one partition, use the multiple host mpirun syntax, such as the following:
% mpirun hostA, hostB -np 16 ./shmem_app
For more information, see the intro_shmem(3) man page.