User Guide new config
Version of 16 December 2015 at 09:19
The nef new configuration is currently a BETA VERSION with no warranty concerning jobs and data: the service may be interrupted, the configuration may be modified, and data spaces may be cleared.
Front-end
The cluster is composed of several front-end servers and many compute nodes.
Two front-end servers are available:
- nef-frontal.inria.fr (alias nef.inria.fr): main front-end, for ssh access and job submission
- nef-devel2.inria.fr: front-end for compilation, ssh access and job submission
Currently, the only way to use the cluster is to connect to one of the front-ends with ssh. You can then access the available computing resources through the OAR job manager.
The nef cluster is not part of the internal (production) network of Inria; therefore Inria (iLDAP) accounts/passwords are not used.
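As a minimal sketch of this workflow (the login name mylogin and the script myjob.sh are placeholders, and the resource request is only an example), connecting and submitting a job looks like:

  # Connect to the main front-end with ssh
  ssh mylogin@nef.inria.fr

  # Submit a batch job with OAR: request 1 node for 1 hour and run your script
  oarsub -l /nodes=1,walltime=01:00:00 ./myjob.sh

  # Or start an interactive job to test commands on a compute node
  oarsub -I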
First steps
Web access
Web access with Kali is the preferred mode for simple usage:
- Connect to the Kali web portal: if you have an Inria account, just Sign in/up with CAS; if you have no Inria account, use Sign in/up
- From Kali, apply for an account on nef through the Clusters > Overview page
- Follow the Kali online help to prepare and launch your jobs; please submit a ticket if you need assistance to get started with Kali
- You can view the currently running jobs using the Monika web interface. You can also see jobs in Gantt format with DrawGantt.
Command line access
Command line access with ssh enables advanced usage:
- First, apply for an account on the nef cluster. You must provide your ssh public key to get an account.
- Then connect to one of the main front-ends, nef.inria.fr or nef-devel2.inria.fr, using ssh
- You can then import your data / source files to the nef front-end using rsync or scp, and compile your programs on nef-devel2.inria.fr (see the sketch after this list)
- Use the job manager to start your tasks
- You can view the currently running jobs using the Monika web interface. You can also see jobs in Gantt format with DrawGantt.
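Here is a minimal sketch of the import-and-compile step; the login name, directory names and the use of make are placeholders to adapt to your own project:

  # Copy a local source tree to your home directory on the front-end
  rsync -avz ./myproject/ mylogin@nef.inria.fr:~/myproject/

  # Connect to the compilation front-end and build
  ssh mylogin@nef-devel2.inria.fr
  cd ~/myproject && make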
Disk space management
Data stored on the cluster is NOT backed up! It is LOST and NOT RECOVERABLE in case of accidental deletion or server hardware failure.
Each user has a dedicated directory in the /home storage.
- A quota system is activated on the shared storage server:
- The soft limit is 150GB
- The hard limit is 600GB
- The grace period is 4 weeks
- You can use 150GB of data without restrictions; as soon as the soft limit is reached, you have 4 weeks in which to delete files and go back under the soft limit. You can never use more than the hard limit.
- A warning message will be sent by mail every Sunday when a limit is reached.
- You can check your current disk usage with the quota -s command
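For example, to see how close you are to the limits and find what to clean up (the paths below are only an illustration, not a cluster convention):

  # Show your current usage and limits in human-readable units
  quota -s

  # List the largest entries in your home directory to decide what to remove
  du -sh ~/* | sort -h | tail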
A distributed scalable file system is available under /data for several usages:
- long term storage: 1TB quota per team, shared among the team members; teams may buy additional quota, please contact the cluster administrators
- disk quota is available for sale! The price (late 2015) is 1kEuro for 5 TiB * 7 years
- scratch storage (for transient / short term storage): variable total size, no quota is currently applied per user or per team, but data may be periodically purged by administrators
- data is tagged as long term storage or scratch storage based on the Unix group of the files (see the combined sketch after this list):
- use chgrp my_team_unix_group my_file to tag my_file as long term storage
- use chgrp scratch my_file to tag my_file as scratch storage
- check your quota with sudo beegfs-ctl --getquota --uid my_logname or your team quota with sudo beegfs-ctl --getquota --gid my_team_unix_group
- or also check your quota with sudo nef-getquota -u <usernames> or your team quota with sudo nef-getquota -g <groupnames>
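Putting these commands together, a combined sketch for tagging results as long term storage and checking the team quota might look like this (the directory path under /data is only an assumption; the group name is the placeholder used above):

  # Tag a results directory (recursively) as long term storage
  chgrp -R my_team_unix_group /data/my_results

  # Check how much of the 1TB team quota remains
  sudo beegfs-ctl --getquota --gid my_team_unix_group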
More (legacy) storage is available for the ABS, ASCLEPIOS, MORPHEME, NEUROMATHCOMP and TROPICS team members in /epi/<teamname>/<username>.
There is also temporary disk space available on each node in the /tmp directory.
Software
All nodes are installed with a Linux CentOS 7 64bit distribution.
Main software available on the cluster:
- 3.10.0 Linux x86_64 kernel
- PGI 14.10 compilers
- GCC (C, C++, Fortran77, Fortran95 compilers) 4.8.3 (/usr/bin/gcc) and 5.1.0 (/misc/opt/gcc-5.1.0/bin)
- OpenMPI 1.10.1 / 1.6.4
- Paraview 4.2.0
- Mvapich2 1.9a2
- Java 1.8 + java3D 1.5.2
- Matlab 2015a (/misc/opt/matlab2015a/bin/matlab)
- Maple (/misc/opt/maple2015/bin)
- Intel Parallel Studio 2015 (/misc/opt/intel2015/)
- Intel Parallel Studio 2016 (/misc/opt/intel2016/)
- CUDA 7.0 (/usr/local/cuda) + CUDA SDK (/usr/local/cuda/samples)
- DDT debugger (/opt/allinea/ddt, see Documentation)
- Environment Modules: type module avail to list all available modules:
  blacs/1.1-ompi-1.10.1
  boost/1.58.0
  caffe/caffe-0.13
  cuda/7.0
  ddt/5.1-43967
  hdf5/1.8.16-ompi-1.10.1
  hypre/2.10.0b
  intel/intel64-2015
  intel/intel64-2016
  likwid/4.0.1
  metis/5.1.0
  mpi/intel64-5.0.3.048
  mpi/openmpi-1.10.1-gcc
  mpi/openmpi-x86_64
  mpi/mvapich2-x86_64
  mpi/openmpi-1.8.8-pgi
  pastix/5.2.2.20-nompi
  petsc/3.4.5
  pgi/pgi-14.10
  qwt/6.1.2-qt5
  scalapack/1.8.0-ompi-1.10.1
  scotch/6.0.4
  trilinos/trilinos-12.2.1
  vtk/6.2.0-qt5
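As an example of using these modules, a typical session to build and run an MPI program might look like the following sketch (hello.c is a placeholder source file; the module name is taken from the list above):

  # Load the OpenMPI 1.10.1 build compiled with GCC
  module load mpi/openmpi-1.10.1-gcc

  # Check which modules are currently loaded
  module list

  # Compile and run a small MPI program
  mpicc hello.c -o hello
  mpirun -np 4 ./hello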
Other tools/libraries available:
- blas, atlas
- openblas
- gmsh 2.8.5
- Python (including scipy, numpy, pycuda, pip)
- R 3.2.1
- Erlang
- GDB & DDD
- Valgrind
- GSL & GLPK
- boost 1.53.0 and 1.58.0