User Guide

Version of 2 February 2015 at 18:35



Front-end

The cluster consists of several front-end servers and many compute nodes.

Three servers are available:

  • nef-frontal.inria.fr: main front-end, for ssh access and job submission
  • nef-devel.inria.fr: compilation front-end, also for ssh access and job submission
  • A storage server (no direct access for users)

Currently, the only way to use the cluster is to connect to one of the front-ends with ssh. You can then access the computing resources through the Torque job manager.

The nef cluster is not part of the internal (production) network of Inria; therefore Inria (iLDAP) accounts/passwords are not used.

First steps

  1. First, apply for an account on the nef cluster. You must provide your ssh public key to get an account.
  2. Connect to one of the main front-ends, nef.inria.fr or nef-devel.inria.fr, using ssh.
  3. Import your data and source files using scp on the nef front-end, and compile your programs on nef-devel.inria.fr.
  4. Use the job manager to start your tasks (see the sketch after this list).
  5. You can view the currently running jobs using the Monika web interface, and the system activity on the nodes using ganglia.
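
A minimal sketch of this workflow (the login, file and script names are hypothetical placeholders, and the resource request should be adapted to your needs):

    # Copy your files to your home directory on the front-end
    scp -r ./my_project <login>@nef.inria.fr:~/

    # Connect to the compilation front-end and build
    ssh <login>@nef-devel.inria.fr
    cd ~/my_project && make

    # Submit a job to Torque: 1 node, 4 cores, 1 hour of walltime
    qsub -l nodes=1:ppn=4,walltime=01:00:00 run.sh

    # List your jobs and their states
    qstat -u <login>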

Disk space management

Each user has a dedicated home directory on the storage server. The total available disk space for users is 7TB. All nodes have access to this storage using NFS.

Data stored on the cluster IS NOT backed up!

A quota system is activated on the shared storage server:

  • The soft limit is 40GB
  • The hard limit is 350GB
  • The delay is 4 weeks

You can store up to 40GB of data without restriction; once the soft limit is exceeded, you have 4 weeks to delete files and get back under it. You can never use more than the hard limit.

A warning message will be sent by mail every Sunday when a limit is reached.

You can check your current disk usage with the quota -s command.
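
For example (a generic sketch; du can take a while on a large home directory):

    # Show your usage and limits in human-readable units
    quota -s

    # If you are over the soft limit, find your largest directories
    du -sh ~/* | sort -h | tail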


More storage is available for the ABS, ASCLEPIOS, MORPHEME, NEUROMATHCOMP and TROPICS team members in /epi/<teamname>/<username>. Teams needing more storage should contact the cluster administrators.


A 19TB experimental distributed scratch space is available under /dfs for all users. Its primary purpose is short-term storage; quotas or time limits may be applied in the future if it becomes permanently saturated. It performs poorly for metadata-intensive activities (e.g. compilation, reading/writing lots of small files).
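
One way to work around the metadata bottleneck is to move data to and from /dfs as a single archive rather than as many small files (a sketch; the directory layout under /dfs is an assumption, adapt the paths):

    # Pack many small files into one archive before writing to /dfs
    tar czf /dfs/$USER/dataset.tar.gz -C ~ dataset

    # Later, unpack onto a node-local disk for metadata-heavy work
    mkdir -p /tmp/$USER
    tar xzf /dfs/$USER/dataset.tar.gz -C /tmp/$USER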


There is also temporary disk space available on each node, in the /tmp directory (a job-script sketch follows the list):

  • 1.1TB on Dell R815 nodes
  • 100GB on Dell PE1950 nodes
  • 420GB on HP nodes
  • 110GB on Carri nodes
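
A typical pattern is to stage data on the node-local /tmp during a job and copy the results back to your NFS home directory at the end. A minimal sketch, assuming a Torque batch script (the program and file names are hypothetical; $PBS_JOBID is set by Torque):

    #!/bin/bash
    #PBS -l nodes=1:ppn=1,walltime=02:00:00

    # Work in a per-job directory on the node-local disk
    SCRATCH=/tmp/$USER.$PBS_JOBID
    mkdir -p "$SCRATCH"
    cp ~/input.dat "$SCRATCH"
    cd "$SCRATCH"

    # Run the computation
    ~/bin/my_solver input.dat > output.dat

    # Copy the results back to NFS and clean up the node
    mkdir -p ~/results
    cp output.dat ~/results/
    rm -rf "$SCRATCH"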


Software

All nodes are installed with a 64-bit Linux Fedora 16 distribution.

Main software available on the cluster (an MPI build-and-run sketch follows the list):

  • Linux kernel 3.4.7 (x86_64)
  • PGI 13.5 compilers
  • GCC 4.6.3 (C, C++, Fortran77, Fortran95 compilers)
  • OpenMPI 1.6.3
  • Paraview 3.14.1
  • MVAPICH2 1.9a2
  • Java 1.6.0_24 + Java3D 1.5.2
  • PETSc 3.4.2
  • Matlab 2011b
  • CUDA 5.0 (/usr/local/cuda) + CUDA SDK (/usr/local/cuda/samples)
  • DDT debugger (/opt/allinea/ddt, see Documentation)
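
For example, to build and run an MPI program with the installed GCC and OpenMPI (a minimal sketch; the source file name and core count are hypothetical and should match your job's resource request):

    # On nef-devel.inria.fr: compile with the OpenMPI compiler wrapper
    mpicc -O2 -o hello_mpi hello_mpi.c

    # Inside a Torque job script: run on the allocated cores
    # ($PBS_NODEFILE lists the nodes granted to the job)
    mpirun -np 8 --hostfile $PBS_NODEFILE ./hello_mpi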

Other tools/libraries available (a linking example follows the list):

  • blas, atlas
  • blacs and scalapack (in /usr/local/lib64)
  • openblas 0.2.8 (in /opt/openblas/)
  • gmsh 2.6.1 (in /opt/gmsh)
  • Python (including scipy, numpy, pycuda)
  • Trilinos (/opt/Trilinos)
  • mesa 7.7.1 (/opt/mesa)
  • Erlang
  • GDB & DDD
  • Valgrind
  • CGAL
  • GSL & GLPK
  • boost 1.47
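
For instance, to link a program against the installed openblas (a sketch; the exact include and library layout under /opt/openblas is an assumption, so check the directory first):

    # Compile and link against openblas (paths assumed, verify with: ls /opt/openblas)
    gcc -O2 -o bench bench.c -I/opt/openblas/include -L/opt/openblas/lib -lopenblas

    # Make the shared library visible at run time
    export LD_LIBRARY_PATH=/opt/openblas/lib:$LD_LIBRARY_PATH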


Energy savings

The energy-saving mechanism described below is currently disabled.

Since the cluster uses a significant amount of electricity (for the nodes and for the cooling system), unused compute nodes are automatically shut down during nights and weekends:

  • unused nodes are powered off at 22:00 every night
  • nodes are powered on at 08:00 every morning (except Saturdays and Sundays)

If you really need to start new jobs during nights or weekends, you can manually power on nodes using a dedicated command (to be executed on nef.inria.fr): wakeup-nodes
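
A minimal sketch (only the bare command is documented here, with no options; the login is a hypothetical placeholder):

    # Log in to the main front-end, then wake the powered-off nodes
    ssh <login>@nef.inria.fr
    wakeup-nodes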