Paraview


== Features ==

[[Image:Paraview.png|x200px|left]]

[http://www.paraview.org/ Paraview] is an open-source, multi-platform data analysis and visualization application. It is developed by [http://www.kitware.com/ Kitware]. The version installed on the cluster is built with MPI support; therefore it can be started on several nodes of the cluster in parallel mode.
<br clear=all>

== Installation & documentation ==

Paraview version ''5.4.1'' is installed on the cluster (compiled with openmpi-gcc).

Documentation: see also [http://daac.hpc.mil/software/ParaView/ this site].


=== Local Installation ===

Fedora users can install Paraview on their workstation with <code>yum install paraview</code>. If the version does not match the version installed on nef, you can download a binary from the [http://paraview.org/ paraview website].
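For example, a minimal sketch on a Fedora workstation (the exact package name and the available version depend on your Fedora release):

 # install the distribution package (as root)
 sudo yum install paraview
 # check which version was installed, to compare with the version on nef
 rpm -q paraview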


=== Documentation ===

Documentation is available [http://www.paraview.org/paraview/help/documentation.html online].

== Usage ==

On the nef cluster, two versions of paraview are installed: one compiled with opengl (which can be used on gpu nodes or with remote display), and one with mesa (for offscreen rendering). Both versions are available through a module (type <code>module avail</code> to see the current versions).
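For example, to use the mesa build referenced later on this page (module names change as versions evolve, so check <code>module avail</code> first):

 # list the paraview modules installed on nef
 module avail paraview
 # load the offscreen (mesa) build and a matching MPI
 module load paraview/5.4.1-mesa
 module load mpi/openmpi-1.10.7-gcc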


The usual way to use paraview on nef is to start the client (paraview binary) on your workstation, start a paraview server (pvserver) on nef and then use the client to connect to the server.
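In outline (each step is detailed in the next section; the node name and port come from the example output shown there):

 # on nef: submit a job that starts pvserver (see "Batch mode usage" below)
 oarsub -l /nodes=4,walltime=2:0:0 ./paraview.sh
 # on your workstation: start the client, then connect to the listening node
 # reported by the job, e.g. cs://nef086:11111 in the example below
 paraview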


== Batch mode usage ==

To start the Paraview server, you have to use a script like this (paraview.sh):

 #!/bin/sh
 # print the name of the node where the script runs: this is the node
 # the paraview client will connect to
 server=`hostname`
 echo "The server is listening on host $server "
 # load the environment modules for the mesa build of paraview and for MPI
 source /etc/profile.d/modules.sh
 module load paraview/5.4.1-mesa
 module load mpi/openmpi-1.10.7-gcc
 # start pvserver on the allocated resources, with offscreen (mesa) rendering
 mpirun --prefix $MPI_HOME $PARAVIEW_DIR/bin/pvserver --use-offscreen-rendering
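The script is executed directly by OAR as <code>./paraview.sh</code>, so make sure it is executable:

 chmod +x paraview.sh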

Then, to start the paraview server on 4 nodes: <code>oarsub -l /nodes=4,walltime=2:0:0 ./paraview.sh</code>
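If you want to check that the job has started before looking at its output, <code>oarstat</code> can be used (assuming the job id returned by oarsub is 407086, as in the example below):

 oarstat -j 407086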

The server will be started on all the cores of the 4 nodes, but a single instance will listen for client connections. To find the name of the listening node, use <code>oarpeek [job id]</code>:

 nef:>oarpeek 407086
 -- Info - Your job is in Running state ! --
 -- Info - Command is : cat /home/nniclaus/paraview/OAR.407086.stdout
 -- Info - Output is :
 The server is listening on host nef086
 Waiting for client...
 Connection URL: cs://nef086:11111
 Accepting connection(s): nef086:11111


In this example, the listening node is '''nef086''', so in the paraview client you have to connect to '''nef086.inria.fr''' using the Connect button (if this is the first time you use this server, use ''Add server'', put nef086 in Host and Name, then choose the '''Manual''' startup type and save).