=  Open MPI =
  
  
The [http://www.open-mpi.org/ OpenMPI] Project is an open source MPI-2 implementation that is developed and maintained by a consortium of academic, research, and industry partners. Open MPI is therefore able to combine the expertise, technologies, and resources from all across the High Performance Computing community in order to build the best MPI library available. Open MPI offers advantages for system and software vendors, application developers and computer science researchers.
  
  
=  Installation & documentation =
  
  
The version installed is <code>1.4.3</code> for 64 bit (in the <code>/opt/openmpi/current/</code> directory). This version is compiled with the PGI compiler.
  
A version compiled with GCC (64 bit only) is also available in <code>/opt/openmpi-gcc/current/</code>.
  
 
Infiniband support is available.
  
Documentation is available online on the [http://www.open-mpi.org/faq/ Open MPI website].
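If you want to check which Open MPI build you are picking up, a quick sanity check from a login shell could look like the following sketch (it only assumes the installation paths listed above; <code>ompi_info</code> is shipped with Open MPI and prints version and build details):

 # List the two Open MPI installations mentioned above
 ls -d /opt/openmpi/current /opt/openmpi-gcc/current
 
 # Print the version and build details of the default (PGI) build
 /opt/openmpi/current/bin/ompi_info | head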
  
  
=  Environment =
  
  
Don't mix commands from OpenMPI, MPICH2 and LAM/MPI, otherwise things will fail.
  
To use Open MPI specifically, you should add <code>/opt/openmpi/current/bin</code> to your PATH:
  
*    shell (bash, zsh): <code>export PATH=/opt/openmpi/current/bin:$PATH</code>
*    cshell: <code>set path=(/opt/openmpi/current/bin $path)</code>
  
Another option is to use the MPI commands suffixed with the MPI implementation name, for example <code>mpicc.openmpi</code>, <code>mpirun.openmpi</code>, etc.
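For example, a minimal bash session to switch to Open MPI and check that the right wrappers are picked up could look like this sketch (<code>--showme</code> is an option of the Open MPI wrapper compilers that prints the underlying compiler command without compiling anything):

 # bash/zsh: put the Open MPI wrappers first in the PATH
 export PATH=/opt/openmpi/current/bin:$PATH
 
 # Check which mpicc and mpirun will actually be used
 which mpicc mpirun
 
 # Show the real compiler command hidden behind the wrapper (PGI by default here)
 mpicc --showme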
  
  
=  Compiling an MPI program =
  
  
 
The best and easiest way to compile an MPI program is to use the wrappers included in the distribution.
  
If you use the Open MPI version compiled with PGI (the default case), you must have the PGI binaries (pgcc, pgf90, etc.) in your PATH (cf. the [[ToolsPGI|local PGI documentation]]).
  
  
==  compiling for 64 bit processors ==
  
  
; Compiling a C MPI code
: <code>mpicc -o prog prog.c</code> or
: <code>mpicc.openmpi -o prog prog.c</code> or <code>/opt/openmpi/current/bin/mpicc -o prog prog.c</code>
; Compiling a C++ MPI code
: <code>mpicxx.openmpi -o prog prog.cpp</code>
; Compiling a Fortran 77 MPI code
: <code>mpif77.openmpi -o prog prog.f</code>
; Compiling a Fortran 90/95 MPI code
: <code>mpif90.openmpi -o prog prog.f</code>
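As a complete example, the following sketch writes a minimal MPI "hello world" program in C, compiles it with the Open MPI wrapper and runs it on a few processes. The file name <code>hello.c</code> and the process count are arbitrary choices for the illustration, and on the cluster the <code>mpirun</code> step normally goes through the batch manager (see the FAQ pointer at the end of this page):

 cat > hello.c <<'EOF'
 #include <mpi.h>
 #include <stdio.h>
 
 int main(int argc, char **argv)
 {
     int rank, size;
     MPI_Init(&argc, &argv);               /* start the MPI runtime */
     MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* my rank */
     MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes */
     printf("Hello from rank %d of %d\n", rank, size);
     MPI_Finalize();
     return 0;
 }
 EOF
 
 # Compile with the Open MPI wrapper (PGI-backed by default on this cluster)
 mpicc.openmpi -o hello hello.c
 
 # Quick test on 4 processes
 mpirun.openmpi -np 4 ./hello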
  
==  compiling for 32 bit processors ==
The 32 bit version is not yet installed.

To compile in 32 bit mode, you must add the <code>xarch=i386</code> compiler option, e.g.:
  
<code>mpif90 xarch=i386 -o prog prog.f</code>
  
At execution time, the 32 bit libraries must be added to the <code>LD_LIBRARY_PATH</code> environment variable:
  
<code>export LD_LIBRARY_PATH=/opt/openmpi/current/lib32</code>
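Putting it together, a hypothetical 32 bit build-and-run sequence (hypothetical because the 32 bit libraries are not installed yet) could look like the sketch below; prepending to <code>LD_LIBRARY_PATH</code> instead of overwriting it keeps any other libraries you rely on reachable:

 # Compile in 32 bit mode, with the flag documented above
 mpif90 xarch=i386 -o prog32 prog.f
 
 # Make the 32 bit Open MPI libraries visible at run time
 # (prepended, so the rest of LD_LIBRARY_PATH is preserved)
 export LD_LIBRARY_PATH=/opt/openmpi/current/lib32:$LD_LIBRARY_PATH
 
 # Run as usual; the process count and program name are placeholders
 mpirun -np 4 ./prog32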
  
=  Network interface selection at runtime =
 
 
 
 
OpenMPI automatically detects at runtime the network equipment available on the compute nodes (including Infiniband network cards, with native verbs library support). However, you can choose to use a specific network interface if you wish:
  
*    the ethernet interface
*    the infiniband interface through IP over IB emulation
*    native Infiniband support (bypasses the OS IP network stack). This is the fastest, and it is used by default by Open MPI.
  
To use a specific ethernet interface (in the given example, <code>eth0</code>), you should add this to your scripts:
  
 export OMPI_MCA_btl_tcp_if_include="eth0"
 export OMPI_MCA_btl="^ib"
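For instance, a sketch of a launch script that forces the MPI traffic onto the <code>eth0</code> TCP interface instead of the default native Infiniband path could look like this (the process count and program name are placeholders, and on the cluster the run would normally be submitted through the batch manager, see the FAQ below):

 #!/bin/bash
 # Use the Open MPI installation described above
 export PATH=/opt/openmpi/current/bin:$PATH
 
 # Restrict the TCP BTL to eth0 and disable the Infiniband BTL,
 # using the same MCA variables as above
 export OMPI_MCA_btl_tcp_if_include="eth0"
 export OMPI_MCA_btl="^ib"
 
 mpirun -np 8 ./prog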
  
  
=  Running an OpenMPI program using the batch manager =

See [[ClusterFaq#openmpi|How to execute an OpenMPI program?]] in the FAQ.
