Hardware
CPU & GPU summary
The current platform includes:
- 8 dual-Xeon 8 cores, 16GB/core RAM
- 16 dual-Xeon 10 cores, 9.6GB/core RAM
- 44 dual-Xeon 6 cores, 8 GB/core RAM
- 6 quad-Opteron 16 cores, 4GB/core RAM
- 6 quad-Opteron 12 cores, ~5.3GB/core RAM
- 13 quad-Xeon 6 cores, ~2.6 GB/core RAM
- 19 dual-Xeon quad core, 2GB/core RAM
- 28 Nvidia GPUs
The number of CPU cores available is :
- 128 Xeon 2.6GHz cores : ~2.4 Tflops (2.7 Tflops peak)
- 320 Xeon E5 2.8GHz cores : ~5.9 Tflops (7.17 Tflops peak)
- 528 Xeon 2.93GHz cores : ~4.8 Tflops (6.2 Tflops peak)
- 384 Opteron 2.3GHz cores : ~2.7 Tflops (3.5 Tflops peak)
- 288 Opteron 2.2GHz cores : ~2 Tflops (2.53 Tflops peak)
- 312 Xeon 2.4GHz cores : ~1.8 Tflops (3.0 Tflops peak)
- 152 Xeon 2.66GHz cores : ~1.7 Tflops (2.17 Tflops peak)
The total cumulative CPU computing power is close to 27.3 Tflops peak
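Each per-line peak above is, to a good approximation, cores × clock (GHz) × double-precision FLOPs per cycle per core. The FLOPs-per-cycle values used here are an assumption of this sketch (8 for the AVX-capable E5 v2 Xeons, 4 for the older SSE-era Xeons and the Opterons), they are not stated on this page. A minimal check in Python:

    # Hedged sketch: peak Tflops ~= cores * GHz * DP FLOPs/cycle (FLOPs/cycle assumed per CPU generation)
    rows = [
        (320, 2.8, 8),   # Xeon E5-2680 v2 (AVX, assumed 8 FLOPs/cycle) -> ~7.17 Tflops peak
        (528, 2.93, 4),  # Xeon X5670 (SSE, assumed 4 FLOPs/cycle)      -> ~6.19 Tflops peak
    ]
    for cores, ghz, flops_per_cycle in rows:
        print(round(cores * ghz * flops_per_cycle / 1000.0, 2), "Tflops peak")

Most rows above match this formula closely; the published figures are rounded.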
The number of GPU cores available is :
- 38784 Streaming Processor cores (28 Nvidia GPUs) : ~88.5 Tflops peak in single precision (11.2 Tflops peak in double precision)
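These totals can be recomputed from the per-card figures given in the GPU nodes section below (10 Titan X on the T630 nodes, 4 M2050 on the C6100/C410x nodes, 14 C2070/C2050 on the Carri nodes). A minimal check in Python:

    # Totals recomputed from the per-card figures listed in the GPU nodes sections below
    gpus = [
        # (cards, SP cores per card, SP Tflops per card, DP Tflops per card)
        (10, 3072, 7.0, 0.2),   # GTX Titan X : nefgpu07-11, 2 cards per node
        (4,  448, 1.03, 0.51),  # Tesla M2050 : nefgpu05-06, 2 cards per node
        (14, 448, 1.03, 0.51),  # Tesla C2070/C2050 : nefgpu03-04, 7 cards per node
    ]
    print(sum(n for n, *_ in gpus), "GPUs")                                  # 28
    print(sum(n * c for n, c, *_ in gpus), "SP cores")                       # 38784
    print(round(sum(n * sp for n, _, sp, _ in gpus), 1), "Tflops SP peak")   # ~88.5
    print(round(sum(n * dp for n, _, _, dp in gpus), 1), "Tflops DP peak")   # ~11.2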
CPU clusters
Dell C6220 cluster (2015)
Dell C6220 dual-Xeon E5-2650 v2 @ 2.60GHz (16 cores) : 8 nodes ( nef029 to nef036 )
- RAM capacity : 256 GB
- 1x 1TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 2x infiniband QDR cards (one connected)
Dell C6220 cluster (2014)
Dell C6220 dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores) : 16 nodes ( nef013 to nef028 )
- RAM capacity : 192 GB
- 1x 2TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband FDR card (QDR used)
Dell C6100 cluster
Dell C6100 dual-Xeon X5670 @ 2.93GHz (12 cores) : 44 nodes ( nef097 to nef140 )
- RAM capacity : 96 GB
- 1x250GB hard disk drive SATA 7.2kRPM
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell C6145 cluster
Dell C6145 quad-Opteron 6376 @ 2.3GHz (64 cores) : 6 nodes ( nef007 to nef012 )
- RAM capacity : 256 GB (512GB on nef011 and nef012)
- 1x 500GB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell R815 cluster
Dell R815 quad-Opteron 6174 @ 2.2GHz (48 cores) : 6 nodes ( nef001 to nef006 )
- RAM capacity : 256 GB
- 2x 600GB SAS hard disk drives (RAID-0)
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell R900 cluster
Dell R900 quad-Xeon E7450 @ 2.4GHz (24 cores) : 13 nodes ( nef084 to nef096 )
- RAM capacity : 64 GB
- 2x146GB hard disk drive (RAID-1) SAS 15K RPM
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell 1950 cluster
PowerEdge 1950 dual-Xeon 5355 @ 2.66GHz (8 cores) : 19 nodes ( nef065 to nef083 )
- RAM capacity : 16 GB
- 2x73GB SAS in RAID-0, 146GB disk space available
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
GPU nodes
Dell T630 GPU nodes
Dell T630 nodes: dual-Xeon E5-26xx : 5 nodes ( nefgpu07 to nefgpu11)
- GeForce GTX Titan X GPU cards connected with a PCIe gen3 16x interface
- 3072 CUDA cores per card
- 12GB of RAM capacity per card
- Single precision performance peak: 7.0 Tflops per card
- Double precision performance peak: 0.2 Tflops per card
- 336.5 GB/s GPU memory bandwidth
- 4x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
Node name | Funding team | Number of GPU cards | Node CPU | Node RAM | Node storage |
nefgpu07 | ASCLEPIOS | 2x Titan X | 2x E5-2620v3 | 32 GB | 2x 1TB SATA 7.2kRPM |
nefgpu08 | ZENITH | 2x Titan X | 2x E5-2630v3 | 64 GB | 2x 300GB SAS 15kRPM + 1x 800GB SSD |
nefgpu09 | GRAPHDECO | 2x Titan X | 2x E5-2630v4 | 48 GB | 2x 1TB SATA 7.2kRPM + 1x 400GB SSD |
nefgpu10 | STARS | 2x Titan X | 2x E5-2630v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD |
nefgpu11 | STARS | 2x Titan X | 2x E5-2630v4 | 64 GB | 2x 1TB SATA 7.2kRPM + 1x 1.6TB SSD |
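To see which GPUs are actually visible from a job running on one of these nodes, a small check can be run. This is a sketch that assumes the nvidia-smi tool shipped with the Nvidia driver is available on the node:

    # Hedged sketch: list the GPUs visible on the current node via nvidia-smi
    import subprocess

    out = subprocess.check_output(
        ["nvidia-smi", "--query-gpu=name,memory.total", "--format=csv,noheader"],
        universal_newlines=True,
    )
    for line in out.strip().splitlines():
        print(line)   # on a T630 node, two "GeForce GTX TITAN X" lines with ~12 GB each are expected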
Dell C6100/C410x GPU nodes
Dell C6100 nodes: single-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu05, nefgpu06 )
- RAM capacity: 24 GB
- 1x250GB SATA disk
- 2x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
- 448 Streaming Processor Cores per card
- Single precision performance peak: 1.03 Tflops per card
- Double precision performance peak: 0.51 Tflops per card
- 3GB of RAM capacity per card
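The per-card peaks quoted above are consistent with the Fermi Tesla figures: 448 CUDA cores, 2 single-precision FLOPs per core per cycle, and a double-precision rate of half the single-precision one. The 1.15 GHz shader clock used below is an assumption, it is not stated on this page:

    # Hedged sketch: Fermi Tesla per-card peak, assuming a 1.15 GHz shader clock
    cores = 448
    shader_clock_ghz = 1.15                            # assumption, not stated on this page
    sp_peak = cores * 2 * shader_clock_ghz / 1000.0    # ~1.03 Tflops single precision
    dp_peak = sp_peak / 2                              # ~0.51 Tflops double precision (half the SP rate on Tesla Fermi)
    print(round(sp_peak, 3), "Tflops SP,", round(dp_peak, 3), "Tflops DP")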
Carri GPU nodes
Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66GHz : 2 nodes ( nefgpu03, nefgpu04 )
- RAM capacity: 72 GB
- 2x160GB SSD disks
- 4x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
- 448 Streaming Processor Cores per card
- Single precision performance peak: 1.03 Tflops per card
- Double precision performance peak: 0.51 Tflops per card
- 3GB of RAM capacity per card (6GB on nefgpu03)
Storage
All nodes have access to common storage (a quick free-space check is sketched after this list) :
- common storage : /home
- 15 TiB, available to all users, with quotas
- 1 Dell PowerEdge R520 server with 2 RAID-10 arrays of 4 x 4TB SAS 7.2 kRPM disks, infiniband QDR, NFS access
- high capacity, distributed and scalable common storage : /data
- 153TiB, to be extended : + 55 TiB in 2016
- permanent storage : 1 TiB quota per team ; teams may buy additional quota (please contact cluster administrators)
- scratch storage : variable size (initially ~20TiB), no quota limit, for temporary storage (data may be purged)
- BeeGFS filesystem distributed over multiple servers :
- 2 Dell PowerEdge R730xd ; 800GB metadata : RAID-1 array of 2 x 800GB mixed-use MLC SSDs ; 1 or 2 x 36TB data : 1 or 2 RAID-6 arrays of 8 x 6TB SAS 7.2 kRPM disks
- 3 Dell PowerEdge R510 ; 20TB data : RAID-6 array of 12 x 2TB SAS 7.2 kRPM disks
- infiniband QDR
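A quick way to see how much space is left on these shared filesystems from any node is sketched below (the mount points /home and /data come from the list above; per-user and per-team quotas are a separate mechanism not covered here):

    # Hedged sketch: report free space on the shared filesystems listed above
    import shutil

    for mount in ("/home", "/data"):
        usage = shutil.disk_usage(mount)
        print(mount, ":", round(usage.free / 2**40, 1), "TiB free of",
              round(usage.total / 2**40, 1), "TiB total")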
Legacy team storage :
- legacy external storage (Dell MD1200) : out-of-maintenance hardware
- legacy external storage (Dell MD1000) : out-of-maintenance hardware
- 1TB RAID-1 for the TROPICS EPI archive