Hardware

Version of 16 March 2016 at 17:45



new cluster - CPU & GPU

The current platform includes:

  • 8 dual-Xeon nodes with 8-core processors, 16 GB RAM per core
  • 44 dual-Xeon nodes with 6-core processors, 8 GB RAM per core
  • 13 quad-Xeon nodes with 6-core processors, ~2.6 GB RAM per core

The available cores are:

  • 128 Xeon 2.6GHz cores : ~2.4 Tflops (2.7 Tflops peak)
  • 528 Xeon 2.93GHz cores : ~4.8 Tflops (6.2 Tflops peak)
  • 312 Xeon 2.4GHz cores : ~1.8 Tflops (3.0 Tflops peak)

The total cumulative computing power is close to 11.9 Tflops (peak, double precision).
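
These peak figures follow from a simple product: cores × clock × double-precision flops per cycle. The short sketch below is only a cross-check of the numbers above; the flops-per-cycle values are assumptions (8 for the AVX-capable E5-2650 v2 nodes, 4 for the older X5670 and E7450 nodes), not values taken from this page.

  # Rough reproduction of the peak Tflops figures quoted above (Python)
  groups = [
      ("C6220 / E5-2650 v2", 128, 2.60e9, 8),   # assumed 8 DP flops/cycle (AVX)
      ("C6100 / X5670",      528, 2.93e9, 4),   # assumed 4 DP flops/cycle (SSE)
      ("R900 / E7450",       312, 2.40e9, 4),   # assumed 4 DP flops/cycle (SSE)
  ]
  total = 0.0
  for name, cores, clock_hz, flops_per_cycle in groups:
      peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
      total += peak_tflops
      print(f"{name}: {peak_tflops:.2f} Tflops peak")
  print(f"total: {total:.1f} Tflops peak")   # ~11.8 Tflops; per-group rounding gives the ~11.9 quoted above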

Dell C6220 Cluster (2015)

Dell C6220 dual-Xeon E5-2650 v2 @ 2.60GHz (16 cores) : 8 nodes ( nef029 to nef036 )



  • RAM capacity : 256 GB
  • 1x 1TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 2x infiniband QDR cards (one connected)

Dell C6100 Cluster

Dell C6100 dual-Xeon X5670 @ 2.93GHz (12 cores) : 44 nodes ( nef097 to nef140 )



  • RAM capacity : 96 GB
  • 1x 250GB SATA hard disk drive, 7.2k RPM
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell R900 Cluster

Dell R900 quad-Xeon E7450 @ 2.4GHz (24 cores) : 13 nodes ( nef084 to nef096 )



  • RAM capacity : 64 GB
  • 2x 146GB SAS hard disk drives (RAID-1), 15K RPM
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell C6100/C410x GPU cluster

Dell C6100 nodes: mono-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu05, nefgpu06 )

  • RAM capacity: 24 GB
  • 1x250GB SATA disk
  • 2x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface (the per-card peak figures are sketched after this list)
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card
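
For reference, the per-card Fermi figures above can be reproduced from the core count and the shader clock. This is only a sketch: the 1.15 GHz shader clock is an assumption about these M2050/C2050/C2070 cards, not a value taken from this page.

  # Rough reproduction of the per-card peak figures quoted above (Python)
  cores = 448
  shader_clock_hz = 1.15e9                        # assumed Fermi shader clock
  sp_peak = cores * shader_clock_hz * 2 / 1e12    # 2 SP flops/cycle (FMA) -> ~1.03 Tflops
  dp_peak = sp_peak / 2                           # Fermi Tesla DP rate is half of SP -> ~0.515 Tflops
  print(f"single precision: {sp_peak:.3f} Tflops, double precision: {dp_peak:.3f} Tflops")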

Carri GPU cluster

Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66GHz : 1 node ( nefgpu03 )

  • RAM capacity: 72 GB
  • 2x160GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card (6GB on nefgpu03)


legacy cluster - CPU & GPU

This section describes the obsolete "legacy nef" configuration, which will be definitively shut down on 17 April 2016 : please use the new nef configuration.

Hardware from the legacy cluster will be re-installed in the new cluster.


The current platform includes:

  • 6 quad-Opteron nodes with 12-core processors
  • 6 quad-Opteron nodes with 16-core processors
  • 16 dual-Xeon nodes with 10-core processors
  • 21 dual-Xeon nodes with quad-core processors
  • 2 dual-Xeon nodes with hexa-core processors
  • 2 mono-Xeon nodes with hexa-core processors
  • 22 Nvidia Tesla GPUs

The available cores are:

  • 288 Opteron 2.2GHz cores : ~2 Tflops (2.53 Tflops peak)
  • 384 Opteron 2.3GHz cores : ~2.7 Tflops (3.5 Tflops peak)
  • 320 Xeon E5 2.8GHz cores : ~5.9 Tflops (7.17 Tflops peak)
  • 204 Xeon 2.66GHz cores : ~1.7 Tflops (2.17 Tflops peak)
  • 9024 Streaming Processor Cores (22 Nvidia GPUs) : ~9.5 Tflops peak (22.2 Tflops in single precision)


The total cumulative computing power is close to 24 Tflops (peak, double precision).
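
The same arithmetic reproduces the legacy totals. As above, this is only a sketch: the flops-per-cycle values are assumptions (4 for the Opterons and the pre-AVX Xeons, 8 for the AVX-capable E5-2680 v2 nodes), and the per-card GPU peaks are the figures quoted in the GPU sections below.

  # Rough reproduction of the legacy peak figures quoted above (Python)
  cpu_groups = [
      ("Opteron 6174",    288, 2.20e9, 4),
      ("Opteron 6376",    384, 2.30e9, 4),
      ("Xeon E5-2680 v2", 320, 2.80e9, 8),
      ("Xeon 2.66GHz",    204, 2.66e9, 4),
  ]
  cpu_peak = sum(cores * clock * fpc for _, cores, clock, fpc in cpu_groups) / 1e12  # ~15.4 Tflops
  gpu_peak = 4 * 0.078 + 18 * 0.515   # 4 S1070 GPUs (0.31 Tflops / 4) + 18 Fermi cards at 0.515 Tflops
  print(f"CPU {cpu_peak:.1f} + GPU {gpu_peak:.1f} = {cpu_peak + gpu_peak:.1f} Tflops DP peak")
  # prints ~25.0 Tflops, i.e. the "close to 24 Tflops" figure above before per-group rounding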


Dell R815 Cluster

Dell R815 quad-Opteron 6174 @ 2.2GHz (48 cores) : 6 nodes



  • RAM capacity : 256 GB
  • 2x 600GB SAS hard disk drives (RAID-0)
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card


Dell C6145 Cluster

Dell C6145 quad-Opteron 6376 @ 2.3GHz (64 cores) : 6 nodes



  • RAM capacity : 256 GB (512GB on nef011 and nef012)
  • 1x 500GB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell C6220 Cluster (2014)

Dell C6220 dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores) : 16 nodes



  • RAM capacity : 192 GB
  • 1x 2TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband FDR card (QDR used)


Dell 1950 Cluster

PowerEdge 1950 dual-Xeon 5355 @ 2.66GHz : 19 nodes

  • RAM capacity : 16 GB
  • 2x 73GB SAS disks in RAID-0, 146GB of disk space available
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card


HP GPU cluster

HP DL160 G5 nodes: dual-Xeon 5430 @ 2.66GHz : 2 nodes

  • RAM capacity: 16 GB
  • 500GB of local disk space
  • 2x gigabit ethernet interface
  • 1x 10GBit myrinet interface
  • 2 GPUs connected with a PCIe gen2 16x interface

Nvidia Tesla S1070: 1 node

  • 4 GPUs (2 per compute node)
  • Streaming Processor Cores: 960 (240 per GPU)
  • Single precision performance peak: 3.73 Tflops
  • Double precision performance peak: 0.31 Tflops
  • RAM capacity : 16 GB (4GB per GPU)


Carri GPU cluster

Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66GHz : 1 node

  • RAM capacity: 72 GB
  • 2x160GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card (6GB on nefgpu03)


new cluster - Storage

All nodes have access to common storage (a quick capacity check is sketched after the list below) :

  • common storage : /home
    • 15 TiB (8 x 4TB SAS disks, 2 x RAID-10 array), infiniband QDR, NFS access, available to all users, quotas
  • capacity distributed and scalable common storage : /data
    • 153 TiB, infiniband QDR, BeeGFS access
    • to be extended : + 55 TiB in early 2016
    • permanent storage : 1TiB quota per team + teams may buy additional quota (please contact cluster administrators)
    • scratch storage : variable size (initially ~20TiB), no quota limit, for transient storage (data may be purged)
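
Below is a minimal sketch (assuming a standard Linux node where /home and /data are mounted as described above) to check how much space is currently free on the common storage. Note that the quotas on /home and the per-team quota on /data are separate limits and are not shown by this check.

  # Report free vs. total capacity for the two common storage areas (Python)
  import os

  for mount in ("/home", "/data"):
      st = os.statvfs(mount)
      free_tib = st.f_bavail * st.f_frsize / 2**40
      total_tib = st.f_blocks * st.f_frsize / 2**40
      print(f"{mount}: {free_tib:.1f} TiB free of {total_tib:.1f} TiB")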

Legacy team storage :

  • legacy external storage (Dell MD1200) : out-of-maintenance hardware
    • 19TB for the ASCLEPIOS EPI
  • legacy external storage (Dell MD1000) : out-of-maintenance hardware
    • 1TB RAID-1 for the TROPICS EPI archive


legacy cluster - Storage

This section describes the obsolete "legacy nef" configuration, which will be definitively shut down on 17 April 2016 : please use the new nef configuration.


All nodes have access to common storage over NFS. The NFS server is reached through the 40Gb infiniband network when available.



  • common storage : /home
    • 15 TBytes (8 x 4TB SAS disks, 2 x RAID-10 array), NFS access, available to all users, quotas
  • experimental scratch storage based on distributed file system : /dfs
    • 19 TB, GlusterFS access, available to all users, no quotas (may change later)