Hardware

From ClustersSophia

Version of 18 April 2016 at 17:54



CPU & GPU

The current platform includes:

  • 8 dual-Xeon 8 cores, 16GB/core RAM
  • 16 dual-Xeon 10 cores, 9.6GB/core RAM
  • 44 dual-Xeon 6 cores, 8 GB/core RAM
  • 6 quad-Opteron 16 cores, 4GB/core RAM
  • 6 quad-Opteron 12 cores, ~5.3GB/core RAM
  • 13 quad-Xeon 6 cores, ~2.6 GB/core RAM
  • 19 dual-Xeon quad core, 2GB/core RAM
  • 18 Nvidia Tesla GPU
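The per-core RAM figures above are consistent with the per-node RAM listed in the cluster sections further down; a quick sketch of the arithmetic (node names and counts taken from this page):

```python
# Per-node RAM = cores per node x RAM per core, as given in the list above.
def node_ram_gb(cores_per_node, gb_per_core):
    return cores_per_node * gb_per_core

print(node_ram_gb(16, 16))  # 256 GB: the dual-Xeon 8-core (C6220 2015) nodes
print(node_ram_gb(12, 8))   # 96 GB: the dual-Xeon 6-core (C6100) nodes
print(node_ram_gb(64, 4))   # 256 GB: the quad-Opteron 16-core (C6145) nodes
```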

The number of cores available is

  • 128 Xeon 2.6GHz cores : ~2.4 Tflops (2.7 Tflops peak)
  • 320 Xeon E5 2.8GHz cores : ~5.9 Tflops (7.17 Tflops peak)
  • 528 Xeon 2.93GHz cores : ~4.8 Tflops (6.2 Tflops peak)
  • 384 Opteron 2.3GHz cores : ~2.7 Tflops (3.5 Tflops peak)
  • 288 Opteron 2.2GHz cores : ~2 Tflops (2.53 Tflops peak)
  • 312 Xeon 2.4GHz cores : ~1.8 Tflops (3.0 Tflops peak)
  • 152 Xeon 2.66GHz cores : ~1.7 Tflops (2.17 Tflops peak)
  • 8064 Streaming Processor Cores (18 Nvidia GPU) : ~9.2 Tflops peak (18.5 Tflops in single precision)

The total cumulated computing power is close to 21.1 Tflops (peak, double precision)
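As a rough sanity check, the peak figures above follow from cores × clock × double-precision flops per cycle. The flops-per-cycle values used below (8 for the AVX-capable Xeon E5 nodes, 4 for the older SSE-era Xeon and Opteron nodes) are an assumption, not stated on this page:

```python
# Assumed model: peak DP Tflops = cores x clock (GHz) x DP flops/cycle / 1000.
def peak_tflops(cores, ghz, flops_per_cycle):
    return cores * ghz * flops_per_cycle / 1000.0

# Xeon E5 2.8GHz nodes, assuming AVX (8 DP flops/cycle/core):
print(round(peak_tflops(320, 2.8, 8), 2))   # 7.17, matching the list above
# 2.93GHz Xeon nodes, assuming SSE (4 DP flops/cycle/core):
print(round(peak_tflops(528, 2.93, 4), 1))  # 6.2
```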


Dell C6220 Cluster (2015)

Dell C6220 dual-Xeon E5-2650 v2 @ 2.60GHz (16 cores) : 8 nodes ( nef029 to nef036 )

C6220-300x191.jpg


  • RAM capacity : 256 GB RAM
  • 1x1TB SATA HardDisk drive
  • 2x gigabit network ports (one connected)
  • 2x infiniband QDR card (one connected)

Dell C6220 Cluster (2014)

Dell C6220 dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores) : 16 nodes ( nef013 to nef028 )

C6220-300x191.jpg


  • RAM capacity : 192 GB RAM
  • 1x2TB SATA HardDisk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband FDR card (QDR used)

Dell C6100 Cluster

Dell C6100 dual-Xeon X5670 @ 2.93GHz (12 cores) : 44 nodes ( nef097 to nef140 )

Dell-c6100.jpeg


  • RAM capacity : 96 GB RAM
  • 1x250GB hard disk drive SATA 7.2kRPM
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell C6145 Cluster

Dell C6145 quad-Opteron 6376 @ 2.3GHz (64 cores) : 6 nodes ( nef007 to nef012 )

Cluster-c6145.jpg


  • RAM capacity : 256 GB RAM (512GB on nef011 and nef012)
  • 1x500GB SATA HardDisk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell R815 Cluster

Dell R815 quad-Opteron 6174 @ 2.2GHz (48 cores) : 6 nodes ( nef001 to nef006 )

Cluster-dellr815.jpg


  • RAM capacity : 256 GB RAM
  • 2x600GB SAS HardDisk drive (RAID-0)
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell R900 Cluster

Dell R900 quad-Xeon E7450 @ 2.4GHz (24 cores) : 13 nodes ( nef084 to nef096 )

Dell-r900.jpeg


  • RAM capacity : 64 GB RAM
  • 2x146GB hard disk drive (RAID-1) SAS 15K RPM
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell 1950 Cluster

PowerEdge 1950 dual-Xeon 5355 @ 2.66GHz (8 cores) : 19 nodes ( nef065 to nef083 )

Cluster-dell.jpg
  • RAM capacity : 16 GB
  • 2x73GB SAS in RAID-0, 146GB disk space available
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

Dell C6100/C410x GPU nodes

Dell C6100 nodes: mono-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu05, nefgpu06 )

Dell-c410x.jpg
  • RAM capacity: 24 GB
  • 1x250GB SATA disk
  • 2x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card
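The fleet-wide GPU figures in the overview at the top of this page follow directly from these per-card specs (the 18-card count also comes from the overview):

```python
# Aggregate GPU figures from the per-card Tesla specs listed above.
CORES_PER_CARD = 448
DP_TFLOPS_PER_CARD = 0.51
SP_TFLOPS_PER_CARD = 1.03

def fleet(cards):
    return (cards * CORES_PER_CARD,
            cards * DP_TFLOPS_PER_CARD,
            cards * SP_TFLOPS_PER_CARD)

cores, dp, sp = fleet(18)   # the 18 Tesla GPUs of the current platform
print(cores)                # 8064 streaming processor cores
print(round(dp, 1))         # 9.2 Tflops peak, double precision
print(round(sp, 1))         # 18.5 Tflops, single precision
```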

Carri GPU nodes

Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu03, nefgpu04 )

Carri-tesla.jpg
  • RAM capacity: 72 GB
  • 2x160GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card (6GB on nefgpu03)


Legacy cluster - CPU & GPU

This paragraph describes the obsolete "legacy nef" configuration, which will be definitively shut down on 17 April 2016: please use the new nef configuration.

Hardware from the legacy cluster will be re-installed in the new cluster.


The legacy platform includes:

  • 6 quad-Opteron 12 cores
  • 6 quad-Opteron 16 cores
  • 16 dual-Xeon 10 cores
  • 21 dual-Xeon quad core
  • 2 dual-Xeon hexa core
  • 2 mono-Xeon hexa core
  • 22 Nvidia Tesla GPU

The number of cores available is

  • 288 Opteron 2.2GHz cores : ~2 Tflops (2.53 Tflops peak)
  • 384 Opteron 2.3GHz cores : ~2.7 Tflops (3.5 Tflops peak)
  • 320 Xeon E5 2.8GHz cores : ~5.9 Tflops (7.17 Tflops peak)
  • 204 Xeon 2.66GHz cores : ~1.7 Tflops (2.17 Tflops peak)
  • 960 Streaming Processor Cores (4 Nvidia GPU) : ~0.3 Tflops peak (3.7 Tflops in single precision)


The total cumulated computing power is close to 15.7 Tflops (peak, double precision)
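The legacy total is, to rounding, simply the sum of the peak figures listed above:

```python
# Peak Tflops figures from the legacy core list above.
peaks = [2.53, 3.5, 7.17, 2.17, 0.3]
print(round(sum(peaks), 1))  # 15.7 Tflops peak, as stated
```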


Storage

All nodes have access to common storage:

  • common storage: /home
    • 15 TiB, available to all users, with quotas
    • 1 Dell PowerEdge R520 server with 2 RAID-10 arrays of 4 x 4TB SAS 7.2 kRPM disks, infiniband QDR, NFS access
  • capacity-oriented, distributed and scalable common storage: /data
    • 153 TiB, to be extended: +55 TiB in 2016
      • permanent storage: 1 TiB quota per team + teams may buy additional quota (please contact cluster administrators)
      • scratch storage: variable size (initially ~20 TiB), no quota limit, for temporary storage (data may be purged)
    • BeeGFS filesystem on multiple hardware:
      • 2 Dell PowerEdge R730xd ; 800GB metadata: RAID-1 array of 2 x 800 GB SSD mixed-use MLC disks ; 1 or 2 x 36TB data: 1 or 2 RAID-6 arrays of 8 x 6TB SAS 7.2 kRPM disks
      • 3 Dell PowerEdge R510 ; 20TB data: RAID-6 array of 12 x 2TB SAS 7.2 kRPM disks
      • infiniband QDR
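The usable sizes quoted for the arrays above can be sanity-checked with the standard RAID capacity formulas (a sketch; filesystem and formatting overhead is ignored, and disk sizes are in raw TB):

```python
# RAID-6 keeps two disks' worth of capacity for parity.
def raid6_usable(n_disks, disk_tb):
    return (n_disks - 2) * disk_tb

# RAID-10 mirrors each disk, so half the raw capacity is usable.
def raid10_usable(n_disks, disk_tb):
    return n_disks // 2 * disk_tb

print(raid6_usable(12, 2))   # 20 TB: the R510 data arrays
print(raid6_usable(8, 6))    # 36 TB: the R730xd data arrays
print(raid10_usable(4, 4))   # 8 TB per R520 array (two arrays serve /home)
```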

Legacy team storage: