Hardware

 
{{entete}}

= CPU & GPU =

The current platform includes:
* 6 quad-Opteron 12 cores
* 6 quad-Opteron 16 cores
* 16 dual-Xeon 10 cores
* 21 dual-Xeon quad core
* 2 dual-Xeon hexa core
* 2 mono-Xeon hexa core
* 22 Nvidia Tesla GPUs

The number of cores available is:
* '''288''' Opteron 2.2GHz cores: ~2 Tflops (2.53 Tflops peak)
* '''384''' Opteron 2.3GHz cores: ~2.7 Tflops (3.5 Tflops peak)
* '''320''' Xeon E5 2.8GHz cores: ~5.9 Tflops (7.17 Tflops peak)
* '''204''' Xeon 2.66GHz cores: ~1.7 Tflops (2.17 Tflops peak)
* '''9024''' Streaming Processor Cores (22 Nvidia GPUs): 9.5 Tflops peak (22.2 Tflops in single precision)

'''The total cumulated computing power is close to 24 Tflops (peak, double precision).'''
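
The summary above can be cross-checked with a quick back-of-the-envelope computation: the core counts follow from the node list (for example, 6 quad-Opteron 12-core nodes give 6 × 4 × 12 = 288 cores), and peak Gflops ≈ cores × clock (GHz) × FLOPs per cycle per core. The sketch below is illustrative only; the FLOPs-per-cycle values (4 for the Opterons and the older Xeons, 8 for the AVX-capable Xeon E5 v2) are assumptions, not figures taken from this page.

<pre>
# Rough cross-check of the peak figures listed above (Python).
cpu_pools = [
    # (label, cores, clock in GHz, assumed double-precision FLOPs/cycle/core)
    ("Opteron 2.2GHz", 288, 2.2,  4),
    ("Opteron 2.3GHz", 384, 2.3,  4),
    ("Xeon E5 2.8GHz", 320, 2.8,  8),
    ("Xeon 2.66GHz",   204, 2.66, 4),
]

cpu_peak = 0.0
for label, cores, ghz, flops_per_cycle in cpu_pools:
    tflops = cores * ghz * flops_per_cycle / 1000.0
    cpu_peak += tflops
    print(f"{label:15s} {tflops:5.2f} Tflops peak")

gpu_peak = 9.5  # double-precision GPU peak quoted above
print(f"Total: ~{cpu_peak + gpu_peak:.1f} Tflops (peak, double precision)")
</pre>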

== Dell R815 Cluster ==

'''Dell R815''' quad-Opteron 6174 @ 2.2GHz (48 cores): 6 nodes
* RAM capacity: 256 GB
* 2x 600GB SAS hard disk drives (RAID-0)
* 4x gigabit network ports (one connected)
* 1x InfiniBand QDR card

== Dell C6145 Cluster ==

'''Dell C6145''' quad-Opteron 6376 @ 2.3GHz (64 cores): 6 nodes
* RAM capacity: 256 GB (512 GB on nef011 and nef012)
* 1x 500GB SATA hard disk drive
* 2x gigabit network ports (one connected)
* 1x InfiniBand QDR card

== Dell C6220 Cluster ==

'''Dell C6220''' dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores): 16 nodes
* RAM capacity: 192 GB
* 1x 2TB SATA hard disk drive
* 2x gigabit network ports (one connected)
* 1x InfiniBand FDR card (QDR used)

== Dell 1950 Cluster ==

'''PowerEdge 1950''' dual-Xeon 5355 @ 2.66GHz: 19 nodes
* RAM capacity: 16 GB
* 2x 73GB SAS disks in RAID-0, 146 GB of disk space available
* 2x gigabit network ports (one connected)
* 1x InfiniBand QDR card

== HP GPU cluster ==

'''HP DL160 G5''' nodes: dual-Xeon 5430 @ 2.66GHz: 2 nodes
* RAM capacity: 16 GB
* 500GB of local disk space
* 2x gigabit ethernet interfaces
* 1x 10Gbit Myrinet interface
* 2 GPUs connected with a PCIe gen2 16x interface

'''Nvidia Tesla''' S1070: 1 node
* 4 GPUs (2 per compute node)
* Streaming Processor Cores: 960 (240 per GPU)
* Single precision performance peak: 3.73 Tflops
* Double precision performance peak: 0.31 Tflops
* RAM capacity: 16 GB (4 GB per GPU)

== Carri GPU cluster ==

'''Carri HighStation 5600 XLR8''' nodes: dual-Xeon X5650 @ 2.66GHz: 2 nodes
* RAM capacity: 72 GB
* 2x 160GB SSD disks
* 4x gigabit ethernet ports (one connected)
* 1x InfiniBand QDR card
* 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
** 448 Streaming Processor Cores per card
** Single precision performance peak: 1.03 Tflops per card
** Double precision performance peak: 0.51 Tflops per card
** 3 GB of RAM per card (6 GB on nefgpu03)

== Dell C6100/C410x GPU cluster ==

'''Dell C6100''' nodes: mono-Xeon X5650 @ 2.66GHz: 2 nodes
* RAM capacity: 24 GB
* 1x 250GB SATA disk
* 2x gigabit ethernet ports (one connected)
* 1x InfiniBand QDR card
* 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
** 448 Streaming Processor Cores per card
** Single precision performance peak: 1.03 Tflops per card
** Double precision performance peak: 0.51 Tflops per card
** 3 GB of RAM per card
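
The GPU totals quoted in the CPU & GPU summary (22 GPUs, 9024 streaming processor cores, ~9.5 Tflops double precision, ~22.2 Tflops single precision) can be recovered from the per-card figures given in the three GPU sections above. A small sketch of that sum, assuming the S1070's aggregate figures are split evenly over its 4 GPUs (small differences are rounding):

<pre>
# Sum the per-GPU figures listed in the HP, Carri and C6100/C410x sections.
gpus = [
    # (GPU count, streaming cores per GPU, SP Tflops per GPU, DP Tflops per GPU)
    (4,  240, 3.73 / 4, 0.31 / 4),  # Tesla S1070 (figures quoted for the 4-GPU unit)
    (14, 448, 1.03,     0.51),      # Tesla C2070/C2050, Carri cluster
    (4,  448, 1.03,     0.51),      # Tesla M2050, Dell C6100/C410x cluster
]

count = sum(n for n, _, _, _ in gpus)
cores = sum(n * c for n, c, _, _ in gpus)
sp    = sum(n * s for n, _, s, _ in gpus)
dp    = sum(n * d for n, _, _, d in gpus)
print(f"{count} GPUs, {cores} streaming processor cores")
print(f"~{sp:.1f} Tflops single precision, ~{dp:.1f} Tflops double precision (peak)")
</pre>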

= Storage =

All nodes have access to a common storage area over NFS. The NFS server is reached through the 40Gb InfiniBand network when available (a quick way to list the NFS mounts seen by a node is sketched after the list below).

* A common storage of 7 TB (4x 4TB SAS disks, RAID-10 array)
* External storage ([http://www.dell.com/downloads/global/products/pvaul/fr/pvaul_md1000_specs_fr.pdf Dell MD1000])
** 15 slots available, reserved for INRIA teams
** 9 slots are already used ([https://team.inria.fr/asclepios ASCLEPIOS] 4x1TB RAID-5, [http://www-sop.inria.fr/teams/tropics TROPICS] 1TB RAID-1 and [https://team.inria.fr/opale OPALE] 1TB RAID-1)
** Teams needing more storage should contact the cluster administrators
* External storage ([http://www.dell.com/fr/entreprise/p/powervault-md1200/pd Dell MD1200])
** 19TB for the [https://team.inria.fr/asclepios ASCLEPIOS] EPI
** 11TB for the [http://www-sop.inria.fr/morpheme MORPHEME] EPI
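
For reference, a minimal sketch for listing the NFS mounts visible on a node (assuming a Linux environment; the actual mount points are site-specific and not documented on this page):

<pre>
# List NFS mounts on the current node by reading /proc/mounts (Linux only).
with open("/proc/mounts") as mounts:
    for line in mounts:
        device, mountpoint, fstype = line.split()[:3]
        if fstype.startswith("nfs"):  # matches nfs and nfs4
            print(f"{mountpoint}  <-  {device}  ({fstype})")
</pre>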
