Hardware
CPU & GPU summary
The current platform includes:
- 1 quad-Xeon SP node (20 cores per CPU), 12.8 GB RAM per core
- 8 dual-Xeon nodes (8 cores per CPU), 16 GB RAM per core
- 16 dual-Xeon nodes (10 cores per CPU), 9.6 GB RAM per core
- 44 dual-Xeon nodes (6 cores per CPU), 8 GB RAM per core
- 6 quad-Opteron nodes (16 cores per CPU), 4 GB RAM per core
- 6 quad-Opteron nodes (12 cores per CPU), ~5.3 GB RAM per core
- 13 quad-Xeon nodes (6 cores per CPU), ~2.6 GB RAM per core
- 81 Nvidia GPUs
The number of CPU cores available is:
- 80 Xeon SP Gold 2.4 GHz cores: ~3.9 Tflops (4.9 Tflops peak)
- 128 Xeon E5 v2 2.6 GHz cores: ~2.4 Tflops (2.7 Tflops peak)
- 320 Xeon E5 v2 2.8 GHz cores: ~5.9 Tflops (7.2 Tflops peak)
- 528 Xeon 2.93 GHz cores: ~4.8 Tflops (6.2 Tflops peak)
- 384 Opteron 2.3 GHz cores: ~2.7 Tflops (3.5 Tflops peak)
- 288 Opteron 2.2 GHz cores: ~2 Tflops (2.5 Tflops peak)
- 312 Xeon 2.4 GHz cores: ~1.8 Tflops (3.0 Tflops peak)
The total cumulative CPU computing power is close to 30.0 Tflops peak.
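The quoted total is simply the sum of the per-family peak figures above; a minimal sanity-check sketch in Python:

```python
# Peak Tflops per CPU family, as listed above.
peaks = {
    "Xeon SP Gold": 4.9,
    "Xeon E5 v2 2.6 GHz": 2.7,
    "Xeon E5 v2 2.8 GHz": 7.2,
    "Xeon 2.93 GHz": 6.2,
    "Opteron 2.3 GHz": 3.5,
    "Opteron 2.2 GHz": 2.5,
    "Xeon 2.4 GHz": 3.0,
}
print(round(sum(peaks.values()), 1))  # -> 30.0 Tflops peak
```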
The number of GPU cores available is:
- 240,320 streaming processor cores: ~682.5 Tflops peak in single precision (~26.9 Tflops peak in double precision)
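A card's single-precision peak follows from cores × 2 flops per cycle (one fused multiply-add) × clock rate. A minimal sketch, assuming the GTX 1080 Ti's base clock of roughly 1.48 GHz (the cards themselves are detailed in the GPU node sections below):

```python
def sp_peak_tflops(cuda_cores: int, clock_ghz: float) -> float:
    """Single-precision peak: each CUDA core retires one FMA (2 flops) per cycle."""
    return cuda_cores * 2 * clock_ghz / 1000.0

# GTX 1080 Ti: 3584 CUDA cores at an assumed ~1.48 GHz base clock
print(round(sp_peak_tflops(3584, 1.48), 1))  # -> 10.6 Tflops, matching the figure below
```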
CPU clusters
Dell R940 node
Dell R940 quad-Xeon SP Gold 6148 @ 2.40 GHz (80 cores): 1 node (nef053)
- RAM capacity: 1024 GB
- storage: system 2x 600 GB SATA RAID-1 + local scratch data 1.92 TB SATA SSD + H740P controller
- 4x gigabit network ports (one connected)
- infiniband EDR card (connected to an FDR switch)
- hyperthreading active
Dell C6220 cluster (2015)
Dell C6220 dual-Xeon E5-2650 v2 @ 2.60 GHz (16 cores): 8 nodes (nef029 to nef036)
- RAM capacity: 256 GB
- 1x 1TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 2x infiniband QDR cards (one connected)
- hyperthreading not active
Dell C6220 cluster (2014)
Dell C6220 dual-Xeon E5-2680 v2 @ 2.80 GHz (20 cores): 16 nodes (nef013 to nef028)
- RAM capacity: 192 GB
- 1x 2TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband FDR card (used at QDR rate)
- hyperthreading not active
Dell C6100 cluster
Dell C6100 dual-Xeon X5670 @ 2.93 GHz (12 cores): 44 nodes (nef097 to nef140)
- RAM capacity: 96 GB
- 1x 250GB SATA 7.2kRPM hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
- hyperthreading not active
Dell C6145 cluster
Dell C6145 quad-Opteron 6376 @ 2.3 GHz (64 cores): 6 nodes (nef007 to nef012)
- RAM capacity: 256 GB (512 GB on nef011 and nef012)
- 1x 500GB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
- hyperthreading not supported
Dell R815 cluster
Dell R815 quad-Opteron 6174 @ 2.2 GHz (48 cores): 6 nodes (nef001 to nef006)
- RAM capacity: 256 GB
- 2x 600GB SAS hard disk drives (RAID-0)
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
- hyperthreading not supported
Dell R900 cluster
Dell R900 quad-Xeon E7450 @ 2.4 GHz (24 cores): 13 nodes (nef084 to nef096)
- RAM capacity: 64 GB
- 2x 146GB SAS 15kRPM hard disk drives (RAID-1)
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
- hyperthreading not supported
GPU nodes
Asus ESC8000 GPU node
Asus ESC8000 G4 / Carri HighServer: 1 node (nefgpu40)
- 8x GeForce GTX 1080 Ti GPU cards connected with a PCIe gen3 x16 interface
  - 3584 CUDA cores per card
  - 11GB of RAM capacity per card
  - Single precision performance peak: 10.6 Tflops per card
  - Double precision performance peak: 0.3 Tflops per card
  - 484 GB/s GPU memory bandwidth
- PCIe single-root topology (2x 96-lane PCIe switches)
  - topology can be switched in software (BIOS setting & reboot) to dual-root for an experiment campaign; see the sketch after this list
- CPU: 2x Xeon SP Gold 5115 @ 2.4 GHz
- RAM capacity: 256 GB
- storage: system 2x 512 GB SATA SSD RAID-1 + local scratch data 4 TB SATA SSD RAID-0 + SAS 12Gb/s controller
- 4x gigabit ethernet ports (one connected)
- 1x infiniband FDR card
- hyperthreading active
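The single-root wiring matters for multi-GPU jobs, since all eight cards can reach each other through the PCIe switches without crossing CPU sockets. One way to inspect the actual wiring on such a node is nvidia-smi's topology matrix; a minimal sketch, assuming the standard NVIDIA driver tools are installed:

```python
import subprocess

# 'nvidia-smi topo -m' prints a matrix of GPU-to-GPU and GPU-to-NIC
# connections (PIX/PXB for a shared PCIe switch, NODE/SYS across roots),
# which makes single-root vs dual-root wiring directly visible.
print(subprocess.run(["nvidia-smi", "topo", "-m"],
                     capture_output=True, text=True).stdout)
```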
Dell T630 GPU nodes
Dell T630 dual-Xeon E5-26xx: 16 nodes (nefgpu07 to nefgpu22); GPU models vary per node, see the table below
- GeForce GTX 1080 Ti GPU cards connected with a PCIe gen3 x16 interface
  - 3584 CUDA cores per card
  - 11GB of RAM capacity per card
  - Single precision performance peak: 10.6 Tflops per card
  - Double precision performance peak: 0.3 Tflops per card
  - 484 GB/s GPU memory bandwidth
- GeForce GTX 1080 GPU cards connected with a PCIe gen3 x16 interface
  - 2560 CUDA cores per card
  - 8GB of RAM capacity per card
  - Single precision performance peak: 8.2 Tflops per card
  - Double precision performance peak: 0.3 Tflops per card
  - 320 GB/s GPU memory bandwidth
- GeForce GTX Titan X GPU cards connected with a PCIe gen3 x16 interface
  - 3072 CUDA cores per card
  - 12GB of RAM capacity per card
  - Single precision performance peak: 7.0 Tflops per card
  - Double precision performance peak: 0.2 Tflops per card
  - 336.5 GB/s GPU memory bandwidth
- 4x gigabit ethernet ports (one connected)
- 1x infiniband FDR card
Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage | Hyperthreading active?
--- | --- | --- | --- | --- | --- | ---
nefgpu07 | ASCLEPIOS | 2x Titan X | 2x E5-2620v3 | 128 GB | 2x 1TB SATA 7.2kRPM | no |
nefgpu08 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v3 | 64 GB | 2x 300GB SAS 15kRPM + 1x 800GB SSD + 2x 1.92TB SSD read intensive | no |
nefgpu09 | GRAPHDECO | 4x Titan X | 2x E5-2630v4 | 48 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | no |
nefgpu10 | STARS | 4x Titan X | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no |
nefgpu11 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | no |
nefgpu12 | STARS | 4x GTX 1080 | 2x E5-2630v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD | yes |
nefgpu13 | GRAPHDECO | 2x GTX 1080 Ti | 2x E5-2650v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes |
nefgpu14 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes |
nefgpu15 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD | yes |
nefgpu16 | ASCLEPIOS | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes |
nefgpu17 | ZENITH | 4x GTX 1080 Ti | 2x E5-2630v4 | 64 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD + 2x 1.92TB SSD read intensive | yes |
nefgpu18 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes |
nefgpu19 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes |
nefgpu20 | common | 4x GTX 1080 Ti | 2x E5-2630v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 1.6TB SSD | yes |
nefgpu21 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 480GB SSD | yes |
nefgpu22 | STARS | 4x GTX 1080 Ti | 2x E5-2620v4 | 128 GB | 2x 600GB SAS 10kRPM + 1x 480GB SSD | yes |
Dell R730 GPU node
Dell R730 dual-Xeon E5-26xx: 1 node (nefgpu01)
- Tesla K80 GPU card connected with a PCIe gen3 x16 interface
  - 2x Tesla GK210 GPUs per card
  - 4992 CUDA cores per card
  - 2x 12GB RAM capacity per card, with ECC
  - Single precision performance peak: 5.61 Tflops per card
  - Double precision performance peak: 1.87 Tflops per card
  - 2x 240 GB/s GPU memory bandwidth
- 4x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- hyperthreading not active
Node name | Funding team | GPU cards | Node CPU | Node RAM | Node storage
--- | --- | --- | --- | --- | ---
nefgpu01 | MATHNEURO | 1x K80 | 2x E5-2623v4 | 32 GB | 2x 400GB SSD |
Dell C6100/C410x GPU nodes
Dell C6100 single-Xeon X5650 @ 2.66 GHz: 2 nodes (nefgpu05, nefgpu06)
- RAM capacity: 24 GB
- 1x 250GB SATA disk
- 2x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 2x Tesla M2050 GPUs connected with a PCIe gen2 x16 interface
  - 448 streaming processor cores per card
  - Single precision performance peak: 1.03 Tflops per card
  - Double precision performance peak: 0.51 Tflops per card
  - 3GB of RAM capacity per card
- hyperthreading not active
Carri GPU nodes
Carri HighStation 5600 XLR8 dual-Xeon X5650 @ 2.66 GHz: 1 node (nefgpu03)
- RAM capacity: 72 GB
- 2x 160GB SSDs
- 4x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 7x Tesla C2070 GPUs connected with a PCIe gen2 x16 interface
  - 448 streaming processor cores per card
  - Single precision performance peak: 1.03 Tflops per card
  - Double precision performance peak: 0.51 Tflops per card
  - 6GB of RAM capacity per card
- hyperthreading not active
Storage
All nodes have access to common storage:
- common storage: /home
  - 15 TiB, available to all users, with quotas
  - 1 Dell PowerEdge R520 server with 2 RAID-10 arrays of 4x 4TB SAS 7.2kRPM disks, infiniband QDR, NFS access
- distributed and scalable common storage: /data
  - 252 TiB (as of 08/2016)
  - permanent storage: 1 TiB quota per team; teams may buy additional quota (please contact the cluster administrators; a quota-check sketch follows this list)
  - scratch storage: variable size (initially ~20 TiB), no quota limit, for temporary storage (data may be purged)
  - BeeGFS filesystem spanning multiple servers:
    - 2 Dell PowerEdge R730xd; metadata: RAID-1 array of 2x 800 GB mixed-use MLC SSDs; data: 2x RAID-6 arrays of 8x {6 or 8}TB SAS 7.2kRPM disks (2x {36 or 48}TB per server)
    - 6 Dell PowerEdge R510; data: RAID-6 array of 12x 2TB SAS 7.2kRPM disks (20TB per server)
    - infiniband QDR
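For /data, usage against the per-team quota can be checked with BeeGFS's beegfs-ctl tool; a minimal sketch, assuming beegfs-ctl is available on the nodes and that team quotas are tracked per Unix group (an assumption here):

```python
import os
import subprocess

# Report BeeGFS space/inode usage against the quota for the caller's
# primary group ('beegfs-ctl --getquota --gid <id>' is standard usage).
subprocess.run(["beegfs-ctl", "--getquota", "--gid", str(os.getgid())])
```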