Hardware



CPU & GPU summary

The current platform includes:

  • 8 dual-Xeon 8-core nodes, 16 GB/core RAM
  • 16 dual-Xeon 10-core nodes, 9.6 GB/core RAM
  • 44 dual-Xeon 6-core nodes, 8 GB/core RAM
  • 6 quad-Opteron 16-core nodes, 4 GB/core RAM
  • 6 quad-Opteron 12-core nodes, ~5.3 GB/core RAM
  • 13 quad-Xeon 6-core nodes, ~2.6 GB/core RAM
  • 52 Nvidia GPUs

The number of CPU cores available is:

  • 128 Xeon 2.6 GHz cores: ~2.4 Tflops (2.7 Tflops peak)
  • 320 Xeon E5 2.8 GHz cores: ~5.9 Tflops (7.17 Tflops peak)
  • 528 Xeon 2.93 GHz cores: ~4.8 Tflops (6.2 Tflops peak)
  • 384 Opteron 2.3 GHz cores: ~2.7 Tflops (3.5 Tflops peak)
  • 288 Opteron 2.2 GHz cores: ~2 Tflops (2.53 Tflops peak)
  • 312 Xeon 2.4 GHz cores: ~1.8 Tflops (3.0 Tflops peak)

The total cumulative CPU computing power is close to 25.1 Tflops peak.
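
These figures follow the usual peak estimate of cores × clock × flops per cycle. The sketch below reproduces the peak numbers; the per-cycle factors (8 double-precision flops/cycle for the AVX-capable E5 Xeons, 4 for the older Xeon and Opteron cores) are an assumption consistent with the totals above, not something stated on this page.

  # Sketch: reproduce the per-group and total peak Tflops quoted above.
  # Assumed flops/cycle: 8 for the AVX Xeon E5 cores, 4 for the older cores.
  groups = [
      (128, 2.60, 8),  # Xeon E5-2650 v2 nodes
      (320, 2.80, 8),  # Xeon E5-2680 v2 nodes
      (528, 2.93, 4),  # Xeon X5670 nodes
      (384, 2.30, 4),  # Opteron 6376 nodes
      (288, 2.20, 4),  # Opteron 6174 nodes
      (312, 2.40, 4),  # Xeon E7450 nodes
  ]
  total = 0.0
  for cores, ghz, flops_per_cycle in groups:
      tflops = cores * ghz * flops_per_cycle / 1000  # Gflops -> Tflops
      total += tflops
      print(f"{cores} cores @ {ghz} GHz: {tflops:.2f} Tflops peak")
  print(f"total: {total:.1f} Tflops peak")  # ~25.1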


The number of GPU cores available is:

  • 139,968 streaming processor cores: ~385.7 Tflops peak in single precision (~18.5 Tflops peak in double precision)
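
The same kind of back-of-the-envelope check works for the GPUs: single-precision peak ≈ CUDA cores × 2 flops/cycle (fused multiply-add) × shader clock. A minimal sketch; the clock values are assumptions taken from vendor base-clock specs, not from this page:

  # Sketch: single-precision peak = CUDA cores * 2 (FMA) * clock (GHz).
  # The clock values below are assumed vendor base clocks.
  cards = [
      ("GTX 1080 Ti", 3584, 1.480),  # -> ~10.6 Tflops
      ("GTX 1080",    2560, 1.607),  # -> ~8.2 Tflops
      ("Tesla K80",   4992, 0.562),  # -> ~5.6 Tflops (per dual-GPU card)
      ("Tesla M2050",  448, 1.150),  # -> ~1.03 Tflops
  ]
  for name, cuda_cores, ghz in cards:
      print(f"{name}: {cuda_cores * 2 * ghz / 1000:.2f} Tflops SP peak")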



CPU clusters

Dell C6220 cluster (2015)

Dell C6220 dual-Xeon E5-2650 v2 @ 2.60 GHz (16 cores): 8 nodes (nef029 to nef036)



  • RAM capacity: 256 GB
  • 1x 1TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 2x InfiniBand QDR cards (one connected)
  • hyperthreading not active (a way to check this is sketched below)
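
Hyperthreading status can be verified directly on any node by comparing logical CPUs with unique physical cores; a minimal sketch for Linux, parsing /proc/cpuinfo:

  # Sketch: report whether hyperthreading is active on this Linux node.
  import os

  def physical_cores():
      cores, phys, core = set(), None, None
      with open("/proc/cpuinfo") as f:
          for line in f:
              if line.startswith("physical id"):
                  phys = line.split(":", 1)[1].strip()
              elif line.startswith("core id"):
                  core = line.split(":", 1)[1].strip()
              elif not line.strip():  # blank line closes one processor entry
                  if phys is not None and core is not None:
                      cores.add((phys, core))
                  phys, core = None, None
      return len(cores)

  logical, physical = os.cpu_count(), physical_cores()
  print(f"{logical} logical / {physical} physical cores:",
        "hyperthreading active" if logical > physical else "not active")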


Dell C6220 cluster (2014)

Dell C6220 dual-Xeon E5-2680 v2 @ 2.80 GHz (20 cores): 16 nodes (nef013 to nef028)



  • RAM capacity: 192 GB
  • 1x 2TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x InfiniBand FDR card (used at QDR rate)
  • hyperthreading not active


Dell C6100 cluster

Dell C6100 dual-Xeon X5670 @ 2.93 GHz (12 cores): 44 nodes (nef097 to nef140)



  • RAM capacity: 96 GB
  • 1x 250GB SATA 7.2kRPM hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x InfiniBand QDR card
  • hyperthreading not active


Dell C6145 cluster

Dell C6145 quad-Opteron 6376 @ 2.3 GHz (64 cores): 6 nodes (nef007 to nef012)



  • RAM capacity: 256 GB (512 GB on nef011 and nef012)
  • 1x 500GB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x InfiniBand QDR card
  • hyperthreading not supported


Dell R815 cluster

Dell R815 quad-Opteron 6174 @ 2.2 GHz (48 cores): 6 nodes (nef001 to nef006)



  • RAM capacity: 256 GB
  • 2x 600GB SAS hard disk drives (RAID-0)
  • 4x gigabit network ports (one connected)
  • 1x InfiniBand QDR card
  • hyperthreading not supported


Dell R900 cluster

Dell R900 quad-Xeon E7450 @ 2.4 GHz (24 cores): 13 nodes (nef084 to nef096)



  • RAM capacity: 64 GB
  • 2x 146GB SAS 15kRPM hard disk drives (RAID-1)
  • 4x gigabit network ports (one connected)
  • 1x InfiniBand QDR card
  • hyperthreading not supported


GPU nodes

Dell T630 GPU nodes

Dell T630 dual-Xeon E5-26xx: 11 nodes (nefgpu07 to nefgpu17)

  • GeForce GTX 1080 Ti GPU cards connected through a PCIe gen3 x16 interface
    • 3584 CUDA cores per card
    • 11GB of RAM per card
    • Single precision peak performance: 10.6 Tflops per card
    • Double precision peak performance: 0.3 Tflops per card
    • 484 GB/s GPU memory bandwidth
  • GeForce GTX 1080 GPU cards connected through a PCIe gen3 x16 interface
    • 2560 CUDA cores per card
    • 8GB of RAM per card
    • Single precision peak performance: 8.2 Tflops per card
    • Double precision peak performance: 0.3 Tflops per card
    • 320 GB/s GPU memory bandwidth
  • GeForce GTX Titan X GPU cards connected through a PCIe gen3 x16 interface
    • 3072 CUDA cores per card
    • 12GB of RAM per card
    • Single precision peak performance: 7.0 Tflops per card
    • Double precision peak performance: 0.2 Tflops per card
    • 336.5 GB/s GPU memory bandwidth
  • 4x gigabit ethernet ports (one connected)
  • 1x InfiniBand QDR card
Node details:

Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage | Hyperthreading active?
nefgpu07 | ASCLEPIOS | 2x Titan X | 2x E5-2620v3 | 32 GB | 2x 1TB SATA 7.2kRPM | no
nefgpu08 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v3 | 64 GB | 2x 300GB SAS 15kRPM + 1x 800GB SSD + 2x 1.92TB SSD read intensive | no
nefgpu09 | GRAPHDECO | 4x Titan X | 2x E5-2630v4 | 48 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | no
nefgpu10 | STARS | 4x Titan X | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no
nefgpu11 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no
nefgpu12 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | yes
nefgpu13 | GRAPHDECO | 2x GTX 1080 Ti | 2x E5-2650v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu14 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu15 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu16 | ASCLEPIOS | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes
nefgpu17 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v4 | 64 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD + 2x 1.92TB SSD read intensive | yes
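
The table can be cross-checked on a node itself; a minimal sketch that lists the visible GPUs through nvidia-smi (this assumes the NVIDIA driver tools are installed on the node, and the sample output line is only illustrative):

  # Sketch: list the GPUs visible on the current node with nvidia-smi.
  import subprocess

  out = subprocess.run(
      ["nvidia-smi", "--query-gpu=index,name,memory.total",
       "--format=csv,noheader"],
      capture_output=True, text=True, check=True,
  ).stdout
  print(out.strip())  # e.g. "0, GeForce GTX 1080 Ti, 11178 MiB" (illustrative)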


Dell R730 GPU node

Dell R730 dual-Xeon E5-26xx: 1 node (nefgpu01)

  • Tesla K80 GPU cards connected through a PCIe gen3 x16 interface
    • 2x Tesla GK210 GPUs per card
    • 4992 CUDA cores per card
    • 2x 12GB RAM per card, with ECC
    • Single precision peak performance: 5.61 Tflops per card
    • Double precision peak performance: 1.87 Tflops per card
    • 2x 240 GB/s GPU memory bandwidth
  • 4x gigabit ethernet ports (one connected)
  • 1x InfiniBand QDR card
  • hyperthreading not active
Node details:

Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage
nefgpu01 | MATHNEURO | 1x K80 | 2x E5-2623v4 | 32 GB | 2x 400GB SSD


Dell C6100/C410x GPU nodes

Dell C6100 single-Xeon X5650 @ 2.66 GHz: 2 nodes (nefgpu05, nefgpu06)

  • RAM capacity: 24 GB
  • 1x 250GB SATA disk
  • 2x gigabit ethernet ports (one connected)
  • 1x InfiniBand QDR card
  • 2x Tesla M2050 GPU cards connected through a PCIe gen2 x16 interface
    • 448 streaming processor cores per card
    • Single precision peak performance: 1.03 Tflops per card
    • Double precision peak performance: 0.51 Tflops per card
    • 3GB of RAM per card
  • hyperthreading not active


Carri GPU nodes

Carri HighStation 5600 XLR8 dual-Xeon X5650 @ 2.66 GHz: 1 node (nefgpu03)

  • RAM capacity: 72 GB
  • 2x 160GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x InfiniBand QDR card
  • 7x Tesla C2070 GPU cards connected through a PCIe gen2 x16 interface
    • 448 streaming processor cores per card
    • Single precision peak performance: 1.03 Tflops per card
    • Double precision peak performance: 0.51 Tflops per card
    • 6GB of RAM per card
  • hyperthreading not active


Storage

All nodes have access to common storage:

  • common storage: /home
    • 15 TiB, available to all users, with quotas
    • 1 Dell PowerEdge R520 server with 2 RAID-10 arrays of 4x 4TB SAS 7.2kRPM disks, InfiniBand QDR, NFS access
  • distributed, scalable common storage: /data
    • 252 TiB (08/2016)
      • permanent storage: 1 TiB quota per team; teams may buy additional quota (please contact the cluster administrators)
      • scratch storage: variable size (initially ~20 TiB), no quota limit, for temporary storage (data may be purged)
    • BeeGFS filesystem spread over multiple servers:
      • 2 Dell PowerEdge R730xd; 800GB metadata: RAID-1 array of 2x 800GB mixed-use MLC SSDs; 2x {36 or 48}TB data: 2 RAID-6 arrays of 8x {6 or 8}TB SAS 7.2kRPM disks
      • 6 Dell PowerEdge R510; 20TB data: RAID-6 array of 12x 2TB SAS 7.2kRPM disks
      • InfiniBand QDR
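
The usable capacities quoted above follow from the RAID levels in use: RAID-6 sacrifices two disks for parity, while RAID-10 and RAID-1 mirror half the disks. A quick arithmetic check:

  # Sketch: usable capacity (TB) of the storage arrays listed above.
  def raid6_tb(disks, tb_each):   return (disks - 2) * tb_each  # 2 parity disks
  def raid10_tb(disks, tb_each):  return disks // 2 * tb_each   # mirrored pairs

  print(raid10_tb(4, 4))   # /home R520 array: 8 TB (2 arrays -> 16 TB ~ 15 TiB)
  print(raid6_tb(8, 6))    # R730xd data array: 36 TB
  print(raid6_tb(8, 8))    # R730xd data array: 48 TB
  print(raid6_tb(12, 2))   # R510 data array: 20 TB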