Hardware

CPU & GPU summary

The current platform includes:

  • 16 dual-Xeon SP 10 cores, 9.6GB/core RAM
  • 1 quad-Xeon SP 20 cores, 12.8GB/core RAM
  • 8 dual-Xeon 8 cores, 16GB/core RAM
  • 16 dual-Xeon 10 cores, 9.6GB/core RAM
  • 44 dual-Xeon 6 cores, 8 GB/core RAM
  • 6 quad-Opteron 16 cores, 4GB/core RAM
  • 6 quad-Opteron 12 cores, ~5.3GB/core RAM
  • 91 Nvidia GPUs
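
The GB/core figures above follow directly from the per-node RAM sizes and core counts given in the node sections below; a minimal Python sketch of the arithmetic (all values taken from this page):

  # RAM per core for each node family: (node RAM in GB, cores per node),
  # using the figures from the node sections below
  families = {
      "dual-Xeon SP 10 cores (C6420)":    (192, 20),
      "quad-Xeon SP 20 cores (R940)":     (1024, 80),
      "dual-Xeon 8 cores (C6220 2015)":   (256, 16),
      "dual-Xeon 10 cores (C6220 2014)":  (192, 20),
      "dual-Xeon 6 cores (C6100)":        (96, 12),
      "quad-Opteron 16 cores (C6145)":    (256, 64),
      "quad-Opteron 12 cores (R815)":     (256, 48),
  }
  for name, (ram_gb, cores) in families.items():
      print(f"{name}: {ram_gb / cores:.1f} GB/core")
  # -> 9.6, 12.8, 16.0, 9.6, 8.0, 4.0 and 5.3 GB/core respectively
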

The available CPU cores are:

  • 320 Xeon SP Silver 2.2GHz cores : ~8.8 Tflops AVX2 (11.2 Tflops peak) / ~6.4 Tflops AVX-512 (9.7 Tflops peak)
  • 80 Xeon SP Gold 2.4GHz cores : ~2.4 Tflops AVX2 (3.1 Tflops peak) / ~3.8 Tflops AVX-512 (4.9 Tflops peak)
  • 128 Xeon E5 v2 2.6GHz cores : ~2.4 Tflops (2.7 Tflops peak)
  • 320 Xeon E5 v2 2.8GHz cores : ~5.9 Tflops (7.2 Tflops peak)
  • 528 Xeon 2.93GHz cores : ~4.8 Tflops (6.2 Tflops peak)
  • 384 Opteron 2.3GHz cores : ~2.7 Tflops (3.5 Tflops peak)
  • 288 Opteron 2.2GHz cores : ~2 Tflops (2.5 Tflops peak)

The total cumulative CPU computing power is close to 38.2 Tflops peak.
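
The ~38.2 Tflops figure corresponds to the sum of the per-family peaks listed above (using the AVX2 peak for the Silver nodes and the AVX-512 peak for the Gold node); a quick cross-check in Python:

  # Peak Tflops per CPU family, as listed above (each figure is essentially
  # cores x clock x floating-point operations per cycle for that architecture)
  peak_tflops = {
      "Xeon SP Silver 2.2GHz (AVX2 peak)":   11.2,
      "Xeon SP Gold 2.4GHz (AVX-512 peak)":   4.9,
      "Xeon E5 v2 2.6GHz":                    2.7,
      "Xeon E5 v2 2.8GHz":                    7.2,
      "Xeon 2.93GHz":                         6.2,
      "Opteron 2.3GHz":                       3.5,
      "Opteron 2.2GHz":                       2.5,
  }
  print(round(sum(peak_tflops.values()), 1))   # -> 38.2 Tflops peak
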


The available GPU cores are:

  • 277184 Streaming Processor Cores ~795.7 Tflops peak in single precision ( ~30.1 Tflops peak in double precision )
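
These totals can be reproduced from the GPU inventory detailed in the sections below, using the per-card figures quoted there (the card counts are tallied from the node tables; the single Tesla K80 card counts as two GPUs in the 91 GPU total since it carries two GK210 chips):

  # (card model, number of cards, SP cores/card, SP peak Tflops/card, DP peak Tflops/card)
  inventory = [
      ("GTX 1080 Ti", 62, 3584, 10.60, 0.30),  # nefgpu40 (8) + nefgpu24 (4) + T630 nodes (50)
      ("GTX Titan X",  8, 3072,  7.00, 0.20),  # nefgpu09, nefgpu10
      ("GTX 1080",     8, 2560,  8.20, 0.30),  # nefgpu11, nefgpu12
      ("Tesla K80",    1, 4992,  5.61, 1.87),  # nefgpu01 (one card = two GK210 GPUs)
      ("Tesla M2050",  4,  448,  1.03, 0.51),  # nefgpu05, nefgpu06
      ("Tesla C2070",  7,  448,  1.03, 0.51),  # nefgpu03
  ]
  cores = sum(n * c for _, n, c, _, _ in inventory)
  sp    = sum(n * s for _, n, _, s, _ in inventory)
  dp    = sum(n * d for _, n, _, _, d in inventory)
  print(cores, round(sp, 1), round(dp, 1))     # -> 277184 cores, ~795.7 / ~30.1 Tflops peak
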



CPU clusters

Dell C6420 cluster

Dell C6420 dual-Xeon Skylake SP Silver 4114 @ 2.20GHz (20 cores) : 16 nodes ( nef037 to nef052 )



  • RAM capacity : 192 GB
  • 1x600GB 10kRPM SAS hard disk drive
  • 1x gigabit network port
  • 1x infiniband FDR card
  • hyperthreading active
  • AVX-512 support, optimal performance with AVX/AVX2


Dell R940 node

Dell R940 quad-Xeon SP Gold 6148 @ 2.40GHz (80 cores) : 1 node ( nef053 )

  • RAM capacity : 1024 GB
  • storage : system 2x600 GB SATA RAID-1 + local scratch data 1.92 TB SATA SSD + controller H740P
  • 4x gigabit network ports (one connected)
  • infiniband EDR card (connected to FDR switch)
  • hyperthreading active
  • optimal performance with AVX-512, AVX/AVX2 support


Dell C6220 cluster (2015)

Dell C6220 dual-Xeon E5-2650 v2 @ 2.60GHz (16 cores) : 8 nodes ( nef029 to nef036 )



  • RAM capacity : 256 GB
  • 1x1TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 2x infiniband QDR card (one connected)
  • hyperthreading not active


Dell C6220 cluster (2014)

Dell C6220 dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores) : 16 nodes ( nef013 to nef028 )



  • RAM capacity : 192 GB
  • 1x2TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband FDR card (QDR used)
  • hyperthreading not active


Dell C6100 cluster

Dell C6100 dual-Xeon X5670 @ 2.93GHz (12 cores) : 44 nodes ( nef097 to nef140 )



  • RAM capacity : 96 GB
  • 1x250GB hard disk drive SATA 7.2kRPM
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card
  • hyperthreading not active


Dell C6145 cluster

Dell C6145 quad-Opteron 6376 @ 2.3GHz (64 cores) : 6 nodes ( nef007 to nef012 )



  • RAM capacity : 256 GB (512 GB on nef011 and nef012)
  • 1x500GB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card
  • hyperthreading not supported


Dell R815 cluster

Dell R815 quad-Opteron 6174 @ 2.2GHz (48 cores) : 6 nodes ( nef001 to nef006 )



  • RAM capacity : 256 GB
  • 2x600GB SAS hard disk drives (RAID-0)
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card
  • hyperthreading not supported


GPU nodes

Asus ESC8000 GPU node

Asus ESC8000G4 / Carri HighServer nodes : 1 node (nefgpu40 )

  • 8x GeForce GTX 1080 Ti GPU cards connected with a PCIe gen3 16x interface
    • 3584 CUDA cores per card
    • 11GB of RAM capacity per card
    • Single precision performance peak: 10.6 Tflops per card (see the sketch after this list)
    • Double precision performance peak: 0.3 Tflops per card
    • 484 GB/s GPU memory bandwidth
  • PCIe single-root topology (2 PCIe 96 lane switches)
    • the topology can be switched to dual-root in software (BIOS change & reboot) for an experiment campaign
  • CPU : 2x Xeon SP Gold 5115 @ 2.4 GHz
  • RAM capacity : 256 GB
  • storage : system 2x512 GB SATA SSD RAID-1 + local scratch data 4 TB SATA SSD RAID-0 + controller SAS 12Gb/s
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband FDR card
  • hyperthreading active
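
The per-card single-precision peak figures quoted on this page appear to follow from the usual 2 flops (one fused multiply-add) per CUDA core per clock; a minimal sketch for the GTX 1080 Ti, assuming its ~1.48 GHz base clock, which is not listed on this page:

  # Peak single precision throughput of one GTX 1080 Ti card
  cuda_cores = 3584                 # per card, as listed above
  flops_per_core_per_clock = 2      # one fused multiply-add per clock
  base_clock_ghz = 1.48             # assumed base clock (not given on this page)
  peak_tflops = cuda_cores * flops_per_core_per_clock * base_clock_ghz / 1000
  print(round(peak_tflops, 1))      # -> ~10.6 Tflops, matching the figure above
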


Dell T640 GPU nodes

Dell T640 nodes: dual-Xeon Skylake SP : 1 node ( nefgpu24)

  • GeForce GTX 1080 Ti GPU cards connected with a PCIe gen3 16x interface
    • 3584 CUDA cores per card
    • 11GB of RAM capacity per card
    • Single precision performance peak: 10.6 Tflops per card
    • Double precision performance peak: 0.3 Tflops per card
    • 484 GB/s GPU memory bandwidth
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband FDR card
  • hyperthreading active
Node details
Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage
nefgpu24 | EPIONE | 4x GTX 1080 Ti | 2x Xeon Silver 4110 | 96 GB | 2x 600GB SAS 10kRPM + 1x 960GB SSD


Dell T630 GPU nodes

Dell T630 nodes: dual-Xeon E5-26xx : 17 nodes ( nefgpu07 to nefgpu23)

  • GeForce GTX 1080 Ti GPU cards connected with a PCIe gen3 16x interface
    • 3584 CUDA cores per card
    • 11GB of RAM capacity per card
    • Single precision performance peak: 10.6 Tflops per card
    • Double precision performance peak: 0.3 Tflops per card
    • 484 GB/s GPU memory bandwidth
  • GeForce GTX 1080 GPU cards connected with a PCIe gen3 16x interface
    • 2560 CUDA cores per card
    • 8GB of RAM capacity per card
    • Single precision performance peak: 8.2 Tflops per card
    • Double precision performance peak: 0.3 Tflops per card
    • 320 GB/s GPU memory bandwidth
  • GeForce GTX Titan X GPU cards connected with a PCIe gen3 16x interface
    • 3072 CUDA cores per card
    • 12GB of RAM capacity per card
    • Single precision performance peak: 7.0 Tflops per card
    • Double precision performance peak: 0.2 Tflops per card
    • 336.5 GB/s GPU memory bandwidth
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband FDR card
Node details
Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage | Hyperthreading active?
nefgpu07 | EPIONE | 4x GTX 1080 Ti | 2x E5-2620v3 | 128 GB | 2x 1TB SATA 7.2kRPM | no
nefgpu08 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v3 | 64 GB | 2x 300GB SAS 15kRPM + 1x 800GB SSD + 2x 1.92TB SSD read intensive | no
nefgpu09 | GRAPHDECO | 4x Titan X | 2x E5-2630v4 | 48 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | no
nefgpu10 | STARS | 4x Titan X | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no
nefgpu11 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no
nefgpu12 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | yes
nefgpu13 | GRAPHDECO | 2x GTX 1080 Ti | 2x E5-2650v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu14 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu15 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes
nefgpu16 | EPIONE | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes
nefgpu17 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v4 | 64 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD + 2x 1.92TB SSD read intensive | yes
nefgpu18 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes
nefgpu19 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes
nefgpu20 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes
nefgpu21 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 480GB SSD | yes
nefgpu22 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 480GB SSD | yes
nefgpu23 | TITANE-EPITOME | 4x GTX 1080 Ti | 2x E5-2630v4 | 64 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes


Dell R730 GPU node

Dell R730 nodes: dual-Xeon E5-26xx : 1 node ( nefgpu01)

  • Tesla K80 GPU cards connected with a PCIe gen3 16x interface
    • 2x Tesla GK210 GPUs per card
    • 4992 CUDA cores per card
    • 2x 12GB RAM capacity per card with ECC
    • Single precision performance peak: 5.61 Tflops per card
    • Double precision performance peak: 1.87 Tflops per card
    • 2x 240 GB/s GPU memory bandwidth
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • hyperthreading not active
Node details
Node name | Funding team | Number of GPU cards | Node CPU | Node RAM | Node storage
nefgpu01 | MATHNEURO | 1x K80 | 2x E5-2623v4 | 32 GB | 2x 400GB SSD


Dell C6100/C410x GPU nodes

Dell C6100 nodes: single-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu05, nefgpu06 )

  • RAM capacity: 24 GB
  • 1x250GB SATA disk
  • 2x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3GB of RAM capacity per card
  • hyperthreading not active


Carri GPU nodes

Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66GHz : 1 node ( nefgpu03 )

  • RAM capacity: 72 GB
  • 2x160GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 7 Tesla C2070 GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 6GB of RAM capacity per card
  • hyperthreading not active


Storage

All nodes have access to common storage :

  • common storage : /home
    • 15 TiB, available to all users, quotas
    • 1 Dell PowerEdge R520 server with 2 RAID-10 arrays of 4 x 4TB SAS 7.2 kRPM disks, infiniband QDR, NFS access
  • capacity distributed and scalable common storage : /data
    • 252TiB (08/2016)
      • permanent storage : 1TiB quota per team + teams may buy additional quota (please contact cluster administrators)
      • scratch storage : variable size (initially ~20TiB), no quota limit, for temporary storage (data may be purged)
    • BeeGFS filesystem on multiple hardware :
      • 2 Dell PowerEdge R730xd ; 800GB metadata : RAID-1 array of 2 x 800 GB SSD mixed-use MLC disks ; 2 x {36 or 48}TB data : 2 x RAID-6 arrays of 8 x {6 or 8}TB SAS 7.2 kRPM disks
      • 6 Dell PowerEdge R510 ; 20TB data : RAID-6 array 12 x 2TB SAS 7.2 kRPM disks
      • infiniband QDR
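
The usable data capacities quoted above follow from the array geometry: RAID-6 sacrifices two disks per array for parity, and RAID-10 mirrors half of its disks. A minimal sketch of the arithmetic (function names are illustrative):

  def raid6_usable_tb(disks, disk_tb):
      # RAID-6 keeps two disks worth of parity
      return (disks - 2) * disk_tb

  def raid10_usable_tb(disks, disk_tb):
      # RAID-10 mirrors half of the disks
      return disks // 2 * disk_tb

  print(raid6_usable_tb(8, 6), raid6_usable_tb(8, 8))  # R730xd data arrays: 36 and 48 TB
  print(raid6_usable_tb(12, 2))                        # R510 data arrays: 20 TB
  print(2 * raid10_usable_tb(4, 4))                    # R520 /home: 2 arrays of 4 x 4TB -> 16 TB raw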