Intel Cluster Studio XE 2012 provides an MPI hybrid development suite that targets developers on high-performance clusters. The suite includes Intel MPI Library version 4.0 Update 3, which provides interoperability with OpenMP. Thus, you can develop and optimize hybrid MPI/OpenMP applications to take full advantage of the capabilities provided by the high-performance clusters that you target.
When you decide to use both MPI and OpenMP with the Intel MPI Library, you must use the -mt_mpi compiler option to link the thread-safe version of the library. This option is applied automatically when you use either the -Qopenmp or the -Qparallel option with the Intel C/C++ compiler, so those options link the thread-safe version of the Intel MPI Library even if you don't add -mt_mpi explicitly.
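As a sketch (assuming the Linux mpiicc compiler driver; the source file and output names are illustrative), either of the following command lines links the thread-safe library:

```shell
# Link the thread-safe Intel MPI Library explicitly:
mpiicc -mt_mpi -o hybrid_app hybrid_app.c

# Or let the OpenMP option imply it:
mpiicc -openmp -o hybrid_app hybrid_app.c
```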
When you work with the thread-safe version of the Intel MPI Library, any of the three threading levels you can request through MPI_Init_thread (MPI_THREAD_FUNNELED, MPI_THREAD_SERIALIZED, or MPI_THREAD_MULTIPLE) will have the thread-safe version linked.
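A minimal hybrid skeleton can make the interplay concrete. This is a sketch, not Intel's sample code: it requests the MPI_THREAD_FUNNELED level (only the main thread of each process calls MPI) and then opens an OpenMP parallel region, so each MPI process spawns its own team of threads.

```c
#include <stdio.h>
#include <mpi.h>
#include <omp.h>

int main(int argc, char *argv[])
{
    int provided, rank;

    /* Request FUNNELED: only the thread that called MPI_Init_thread
       will make MPI calls; OpenMP worker threads do pure computation. */
    MPI_Init_thread(&argc, &argv, MPI_THREAD_FUNNELED, &provided);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each MPI process creates OMP_NUM_THREADS threads, which run on
       the logical processors of the domain assigned to that process. */
    #pragma omp parallel
    {
        printf("Rank %d, thread %d of %d\n",
               rank, omp_get_thread_num(), omp_get_num_threads());
    }

    MPI_Finalize();
    return 0;
}
```

Running this under mpiexec, each rank prints one line per OpenMP thread, which is a quick way to verify that the pinning domains are sized as you expect.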
It is also necessary to set an appropriate value for the I_MPI_PIN_DOMAIN environment variable. This variable controls the process-pinning scheme for hybrid MPI/OpenMP applications. Its possible values let you define a number of non-overlapping domains of logical processors on a node, together with a set of rules for how MPI processes are bound to those domains. There is always one MPI process per domain, and each domain is composed of certain logical processors. Each MPI process can create threads that run on the logical processors within the domain assigned to that process. When you set a value for the I_MPI_PIN_DOMAIN environment variable, any value assigned to the I_MPI_PIN_PROCESSOR_LIST environment variable is ignored.
The I_MPI_PIN_DOMAIN environment variable has the following three syntax forms to define the domain:
I_MPI_PIN_DOMAIN=<mc-shape> — Define the domain in multi-core terms. For example, I_MPI_PIN_DOMAIN=core establishes that each domain consists of the logical processors that share a particular core, so the number of domains on a node equals the number of cores on that node. Other values let you define the domain by socket, by node, or by the different cache levels that the logical processors might share.
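For instance, the multi-core form can be set like this (alternative settings, shown one per line; consult the Intel MPI reference for the full list of accepted values):

```shell
export I_MPI_PIN_DOMAIN=core    # one domain per core
export I_MPI_PIN_DOMAIN=socket  # one domain per socket
export I_MPI_PIN_DOMAIN=node    # one domain per node
export I_MPI_PIN_DOMAIN=cache2  # logical processors sharing an L2 cache
```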
I_MPI_PIN_DOMAIN=<size>[:<layout>] — Define the domain by specifying the domain size and, optionally, the domain member layout. The size value determines the number of logical processors in each domain; you can specify the desired number of logical processors explicitly. However, the most convenient option for hybrid MPI/OpenMP applications is usually I_MPI_PIN_DOMAIN=omp, which makes the domain size equal to the OMP_NUM_THREADS environment variable value. This way, the process-pinning domain size equals OMP_NUM_THREADS, and each MPI process can create OMP_NUM_THREADS threads to run within the corresponding domain. If OMP_NUM_THREADS isn't set, each node is treated as a separate domain, and therefore each MPI process can create as many threads as there are available cores. In addition, you can specify the ordering of the domain members in the optional layout parameter. The default value is compact, so I_MPI_PIN_DOMAIN=omp is equivalent to I_MPI_PIN_DOMAIN=omp:compact. The compact layout places domain members as close to each other as possible in terms of their common resources (cores, caches, sockets); it benefits MPI processes whose threads take advantage of sharing those common resources. The scatter layout, on the other hand, orders domain members so that adjacent domains share common resources as little as possible. The most convenient value depends on the available hardware, the kind of application, and its specific needs.
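The size/layout form can be sketched as follows (alternative settings, one per line; a hypothetical 4-thread configuration):

```shell
export OMP_NUM_THREADS=4

export I_MPI_PIN_DOMAIN=omp          # size = OMP_NUM_THREADS, compact layout
export I_MPI_PIN_DOMAIN=omp:scatter  # same size, minimal resource sharing
export I_MPI_PIN_DOMAIN=4:compact    # explicit size of 4 logical processors
```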
I_MPI_PIN_DOMAIN=<masklist> — Define the domain by using domain masks: a comma-separated list of hexadecimal masks, each of which establishes which logical processors are included in the corresponding domain, based on the BIOS numbering.
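As an illustrative sketch of the mask-list form (the mask values here are hypothetical, and the exact literal syntax should be checked against the Intel MPI reference for your version), two four-processor domains on an eight-processor node could look like:

```shell
# Domain 1: logical processors 0-3 (mask f)
# Domain 2: logical processors 4-7 (mask f0)
export I_MPI_PIN_DOMAIN=[f,f0]
```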
So, if you use the -mt_mpi compiler option, set I_MPI_PIN_DOMAIN=omp, and configure OMP_NUM_THREADS to the desired number of OpenMP threads, you will be able to execute hybrid MPI/OpenMP applications that take full advantage of the possibilities and configurations offered by high-performance clusters. You can set different values for both I_MPI_PIN_DOMAIN and OMP_NUM_THREADS with the mpiexec job startup command. For example, you can add -env I_MPI_PIN_DOMAIN omp to the mpiexec options to set I_MPI_PIN_DOMAIN to omp, and -env OMP_NUM_THREADS 8 to set OMP_NUM_THREADS to 8.
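Putting these pieces together (the process count and program name are illustrative), a launch line might look like:

```shell
# 4 MPI processes, each pinned to a domain sized for 8 OpenMP threads
mpiexec -n 4 -env I_MPI_PIN_DOMAIN omp -env OMP_NUM_THREADS 8 ./hybrid_app
```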
You can create threads that execute code in parallel with OpenMP and use MPI to coordinate higher-level communications. You can have different levels of optimizations, and you can tune them with the tools that Intel Cluster Studio XE 2012 provides you, such as Intel Trace Analyzer and Collector, Intel VTune Amplifier XE, Intel Inspector XE, and Intel MPI Benchmarks. You can run the application with different options, analyze, and then tune your code and the configurations.
Intel Cluster Studio XE 2012 is a commercial product, but a free trial version is available for download from Intel.