FAQ new config

From ClustersSophia
Revision dated 21 August 2015 at 17:12 by Rmichela (discussion | contributions) (in which state is my job?)



General

What is OAR?

OAR is a versatile resource and task manager (also called a batch scheduler) for HPC clusters and other computing infrastructures (like distributed computing experimental testbeds, where versatility is key).

What are the most commonly used commands? (see official docs)

  • oarsub: to submit a job
  • oarstat: to see the state of the queues (Running/Waiting jobs)
  • oardel: to cancel a job
  • oarhold: to hold a job while it is Waiting
  • oarresume: to resume jobs in the Hold or Suspended states
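A typical job lifecycle with these commands looks like this (a sketch: it requires an OAR cluster, and <JOBID> stands for the job id printed at submission):

```shell
oarsub -S ./myscript.sh    # submit; prints OAR_JOB_ID=<JOBID>
oarstat                    # list jobs and their states (W=Waiting, R=Running)
oarstat -j <JOBID>         # show the state of one particular job
oardel <JOBID>             # cancel the job if needed
```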


Job submission

How do I submit a job?

With OAR you can't directly use a binary as a submission argument on the command line. The first thing to do is to create a submission script; the script includes the command line to execute and the resources needed for the job.

Once the script exists, use it as the argument of the oarsub command: oarsub -S ./myscript.sh

How to choose the node type?

The cluster has 6 kinds of nodes, with the following properties:

  • Dell 1950 nodes: nef, xeon, dell, nogpu
  • Dell R815 nodes: nef, opteron, dellr815, nogpu
  • Dell C6220 nodes: nef, xeon, dellc6220, nogpu
  • Dell C6145 nodes: nef, opteron, dellc6145, nogpu
  • HP nodes: nef, xeon, gpu, hpdl160g5, T10
  • Carri nodes: nef, xeon, carri5600XLR8, gpu, C2050

If you want 32 cores from the Dell C6220 cluster:

oarsub -p "cluster='dellc6220'" -l /nodes=4/core=8

If you want 96 cores from the Dell R815 cluster:

oarsub -p "cputype='opteron'" -l /nodes=2/core=48

If you want a node with a GPU and 8 cores:

oarsub -p "gpu='YES'" -l /nodes=1/core=8

What is the format of a submission script?

The script should include not only the path to the program to execute, but also information about the needed resources (you can also specify resources using oarsub options on the command line). Simple example: helloWorld.sh

# Submission script for the helloWorld program
#
# Comments starting with #OAR are used by the 
# resource manager
#
#OAR -l /nodes=8/core=1,walltime=00:10:00
#OAR -p "cputype='opteron'"
# The job uses 8 nodes with one processor (core) per node,
# only on Opteron nodes
# (replace opteron with xeon to start the job
# on Dell/Xeon machines)
# job duration is less than 10 min


# The path to the binary
./helloWorld

Resources can also be specified on the command line (overriding the resources specified in the script), e.g.: oarsub -p "cputype='opteron'" -l /nodes=8/core=1 -S ./helloWorld.sh will run helloWorld on 8 nodes with one core per node, on Opteron nodes only.

Several examples can be downloaded here :

#TODO

All scripts must specify the walltime of the job (maximum expected duration): #OAR -l /nodes=1/core=1,walltime=HH:MM:SS. The scheduler needs this information. If a job uses more time than requested, it will be killed.

You can specify the amount of memory per core needed by the job (-l pmem=XXXmb) or per node (-l mem=XXXXmb). This is useful to prevent the nodes from swapping if your processes use more than the available memory per core (2GB on Dell nodes, 2.5GB on Opteron). If one of the processes uses more memory than requested, the job will be killed.
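For example, a submission combining a walltime and a per-core memory request might look like the line below (a sketch: the pmem option is the PBS-style syntax mentioned above, and its exact form on the cluster should be checked against the local documentation):

```shell
# 1 node, 4 cores, 2 hours max, ~1800MB per core (sketch; requires an OAR cluster)
oarsub -l /nodes=1/core=4,walltime=02:00:00 -l pmem=1800mb -S ./myscript.sh
```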

You can find more information in the pbs_resources manpage

What are the available queues?

Each queue has a maximum number of running jobs per user.

There are two queues for long jobs, long1 and long2 (extra long jobs), each with decreasing resources available: longer jobs get fewer resources but a higher priority.

The default and besteffort queues are for short jobs; best effort jobs may be killed at any time.

| queue name  | max user |  min  | max  |  prio | max resources   |
|             | cores    | dur.  | dur. |       | (time*resources)|
|-------------+----------+-------+------+-------+-----------------|
| long2       |      128 | >48h  | 7d   |    30 |           21504 |
| long1       |      256 | >6h   | 48h  |    20 |           12288 |
| default     |      512 |       | 6h   |    10 |            3072 |
| besteffort  |          |       |      |     0 |                 |

E.g. a user can submit several jobs in the long1 queue: at most 256 of their cores will be running, with each job lasting up to 48 hours. Another user can submit many jobs in the default queue: at most 512 cores (per user) will be running at the same time.


Fair share scheduling (Karma) decreases each user's priority based on their recent resource consumption (current evaluation window: 30 days).

If you don't specify a queue, jobs are automatically routed to one of the queues based on their walltime and resources.
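You can also target a queue explicitly with oarsub's -q option instead of relying on automatic routing (a sketch; queue names as in the table above):

```shell
# Submit straight to the long1 queue: 8 cores for up to 24 hours (requires an OAR cluster)
oarsub -q long1 -l /nodes=1/core=8,walltime=24:00:00 -S ./myscript.sh
```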


Can I submit hundreds/thousands of jobs?

You can easily submit hundreds of jobs, but you should not try to submit more than 500 jobs at a time; please consider using array jobs for this:

Sometimes it is desirable to submit a large number of similar jobs to the queueing system. One obvious, but inefficient, way to do this would be to prepare a prototype job script, and then use shell scripting to call oarsub on this (possibly modified) job script the required number of times.

Alternatively, OAR provides a feature known as job arrays which allows the creation of multiple similar jobs with one oarsub command. This feature introduces a job naming convention that allows users either to reference the entire set of jobs as a unit or to reference one particular job from the set.

To submit a job array use either

oarsub --array <array_number> --array-param-file <param_file>

or equivalently insert a directive

#OAR --array <array_number>
#OAR --array-param-file <param_file>

in your batch script. In each case an array is created consisting of a set of jobs, each of which is assigned a unique index number (available to each job as the value of the environment variable $OAR_ARRAY_INDEX), with each job using the same job script and running in a nearly identical environment. Here array_number is the number of jobs to create in the array; if not specified, it defaults to the number of lines in your parameter file. Example:

oarsub --array 6 --array-param-file ./param_file -S ./jobscript
[ADMISSION RULE] Modify resource description with type constraints
  [ARRAY COUNT] You requested 6 job array
  [CORES COUNT] You requested 3066 cores
  [CPUh] You requested 3066 total cpuh (cores * walltime)
  [JOB QUEUE] Your job is in default queue
Generate a job key...
Generate a job key...
Generate a job key...
Generate a job key...
Generate a job key...
Generate a job key...
OAR_JOB_ID=1839
OAR_JOB_ID=1840
OAR_JOB_ID=1841
OAR_JOB_ID=1842
OAR_JOB_ID=1843
OAR_JOB_ID=1844
OAR_ARRAY_ID=1839


oarstat --array
Job id    A. id     A. index  Name       User     Submission Date     S Queue
--------- --------- --------- ---------- -------- ------------------- - --------
1839      1839      1         TEST_OAR   rmichela 2015-08-21 17:49:08 R default 
1840      1839      2         TEST_OAR   rmichela 2015-08-21 17:49:09 W default 
1841      1839      3         TEST_OAR   rmichela 2015-08-21 17:49:09 W default 
1842      1839      4         TEST_OAR   rmichela 2015-08-21 17:49:09 W default 
1843      1839      5         TEST_OAR   rmichela 2015-08-21 17:49:09 W default 
1844      1839      6         TEST_OAR   rmichela 2015-08-21 17:49:09 W default      

Note that each job is assigned its own job id (stored as usual in the environment variable OAR_JOB_ID), while all the jobs of the array share a common array id (here 1839, stored in the environment variable OAR_ARRAY_ID). Each job is further distinguished by a unique array index (the A. index column above, stored in the environment variable OAR_ARRAY_INDEX). Thus each job can perform slightly different actions based on the value of OAR_ARRAY_INDEX (e.g. using different input or output files, or different options).
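A job script can use OAR_ARRAY_INDEX to select its own line from a parameter file; a minimal sketch (the param_file name and the defaulting of OAR_ARRAY_INDEX are illustrative assumptions, so the sketch also runs outside OAR):

```shell
#!/bin/sh
# Sketch: pick this job's parameter line by array index.
# OAR sets OAR_ARRAY_INDEX (1-based) for each job of the array;
# we default it to 1 here only so the sketch runs outside OAR.
OAR_ARRAY_INDEX=${OAR_ARRAY_INDEX:-1}
PARAMS=$(sed -n "${OAR_ARRAY_INDEX}p" param_file)
echo "job $OAR_ARRAY_INDEX runs with parameters: $PARAMS"
```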

How to run an MPICH2 application?

mpiexec is available to run applications compiled with mvapich (MPICH2 for InfiniBand):

Submission script for MPICH2: monAppliMPICH2.pbs

# File : monAppliMPICH2.pbs
#PBS -l "nodes=3:ppn=1:nef"
cd $PBS_O_WORKDIR
mpiexec  monAppliMPICH2

In this case, mpiexec will launch MPI on 3 nodes with one core per node.


How to run an OpenMPI application?

The mpirun binary included in OpenMPI runs the application using the resources reserved by the job:

Submission script for OpenMPI: monAppliOpenMPI.pbs

# File : monAppliOpenMPI.pbs
#PBS -l "nodes=3:ppn=1:nef"
export LD_LIBRARY_PATH=/opt/openmpi/current/lib64
# bind each MPI process to a cpu; the default Linux scheduler behaves poorly for MPI
export OMPI_MCA_mpi_paffinity_alone=1
cd $PBS_O_WORKDIR
# For PGI openmpi:
/opt/openmpi/current/bin/mpirun monAppliOpenMPI
# For gcc openmpi:
#/opt/openmpi-gcc/current/bin/mpirun monAppliOpenMPI
# or simply:
#mpirun monAppliOpenMPI

In this case, mpirun will start the MPI application on 3 nodes with a single core per node.


How can I run a LAM MPI application?

LAM is no longer installed on the cluster. Use OpenMPI or MPICH2 instead.


How can I run a PVM application?

PVM is no longer installed on the cluster.


How to Interact with a job when it is running

In which state is my job?

The oarstat -j JOBID command shows the state of your job and the queue in which it has been scheduled.

Examples of using oarstat

-bash-4.1$ oarstat -j 1839
Job id     Name           User           Submission Date     S Queue
---------- -------------- -------------- ------------------- - ----------
1839       TEST_OAR       rmichela       2015-08-21 17:49:08 T default   

The S column gives the current state (Waiting, Running, Launching, Terminating).

The Queue column shows the job's queue.

When will my job be executed?

You can use showstart to get an estimate of when your job will start.


How can I get the stderr or stdout of my job during its execution?

You can use the qpeek command: qpeek JOBID shows the stdout of JOBID, and qpeek -e JOBID shows the stderr.


How can I cancel a job?

The oardel <JOBID> command lets you cancel a job.


Troubleshooting

Why is my job rejected at submission?

The job system may refuse a job submission, which usually produces this error message:

qsub: Job rejected by all possible destinations

Most of the time this indicates that the requested resources are not available, which may be caused by a typo (e.g. -l nodes=3:dell6220 instead of -l nodes=3:dellc6220).

Sometimes it may also be caused by some nodes being temporarily out of service. You can check this by typing pbsnodes -a to list all the nodes in service.

Another cause may be that the job requested more resources than the total resources existing on the cluster.


Why is my job blocked in a queue while there are no other jobs currently running?

A node of the cluster probably has a problem: please contact the administrators.


What does this error message mean?

cannot execute binary file
    qsub was launched without a launch script as a parameter. qsub requires a script launching the binary even if you don't need to pass specific parameters.

p0_XXXX p4_error: : 8262 :
    The binary was compiled with MPICH but is being run with LAM MPI.

I received a mail that says "job violates resource utilization policies"; what does it mean?

Most of the time it means that you have used too much memory and Maui had to kill your job. Try adjusting the pmem or mem value of your job.

What are the best practices for Matlab jobs?

If you launch many Matlab jobs at the same time, please launch them on as few nodes as possible: Matlab uses one floating license per {node,user} pair. E.g.:

  • 10 jobs for user foo on 10 different cores of node nef012 use 1 floating license,
  • 1 job for user foo on each of the nef01[0-9] nodes uses 10 floating licenses.
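The rule amounts to counting distinct {user,node} pairs, which you can check with standard tools (the job placements below are hypothetical, for illustration only):

```shell
# Licenses used = number of distinct {user,node} pairs.
# Hypothetical placements: foo twice on nef012, once on nef013, bar on nef012.
printf 'foo nef012\nfoo nef012\nfoo nef013\nbar nef012\n' | sort -u | wc -l
# prints 3 (three distinct pairs, so three floating licenses)
```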