Hardware
new cluster - CPU & GPU
The current platform includes:
- 8 dual-Xeon nodes (8 cores per CPU), 16 GB RAM per core
- 44 dual-Xeon nodes (6 cores per CPU), 8 GB RAM per core
- 13 quad-Xeon nodes (6 cores per CPU), ~2.6 GB RAM per core
The available cores are:
- 128 Xeon 2.6 GHz cores : ~2.4 Tflops (2.7 Tflops peak)
- 528 Xeon 2.93 GHz cores : ~4.8 Tflops (6.2 Tflops peak)
- 312 Xeon 2.4 GHz cores : ~1.8 Tflops (3.0 Tflops peak)
The total cumulative computing power is close to 11.9 Tflops (peak, double precision).
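These peak figures are simply cores x clock x floating-point operations per cycle per core. The short sketch below reproduces them; the per-cycle values (8 DP FLOPs/cycle for the AVX-capable E5-2650 v2 nodes, 4 for the older X5670 and E7450 nodes) are assumptions chosen to be consistent with the peaks quoted above, not figures taken from this page.

 # Minimal sketch: peak DP Tflops = cores x clock (GHz) x DP FLOPs/cycle / 1000.
 # The FLOPs/cycle values are assumptions consistent with the peaks quoted above.
 partitions = [
     # (name, cores, clock in GHz, assumed DP FLOPs per cycle per core)
     ("C6220 E5-2650 v2", 128, 2.60, 8),
     ("C6100 X5670",      528, 2.93, 4),
     ("R900 E7450",       312, 2.40, 4),
 ]
 total = 0.0
 for name, cores, ghz, per_cycle in partitions:
     tflops = round(cores * ghz * per_cycle / 1000.0, 1)  # GFlops -> Tflops, rounded per partition
     total += tflops
     print(f"{name}: {tflops} Tflops peak")
 print(f"total: {total:.1f} Tflops peak (double precision)")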
Dell C6220 Cluster (2015)
Dell C6220 dual-Xeon E5-2650 v2 @ 2.60GHz (16 cores) : 8 nodes ( nef029 to nef036 )
- RAM capacity : 256 GB
- 1x 1TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 2x infiniband QDR cards (one connected)
Dell C6100 Cluster
Dell C6100 dual-Xeon X5670 @ 2.93GHz (12 cores) : 44 nodes ( nef097 to nef140 )
- RAM capacity : 96 GB
- 1x 250GB SATA hard disk drive, 7.2K RPM
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell R900 Cluster
Dell R900 quad-Xeon E7450 @ 2.4GHz (24 cores) : 13 nodes ( nef084 to nef096 )
- RAM capacity : 64 GB
- 2x 146GB SAS hard disk drives (RAID-1), 15K RPM
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
legacy cluster - CPU & GPU
The current platform includes:
- 6 quad-Opteron nodes (12 cores per CPU)
- 6 quad-Opteron nodes (16 cores per CPU)
- 16 dual-Xeon nodes (10 cores per CPU)
- 21 dual-Xeon nodes (4 cores per CPU)
- 2 dual-Xeon nodes (6 cores per CPU)
- 2 mono-Xeon nodes (6 cores per CPU)
- 22 Nvidia Tesla GPUs
The available cores are:
- 288 Opteron 2.2 GHz cores : ~2 Tflops (2.53 Tflops peak)
- 384 Opteron 2.3 GHz cores : ~2.7 Tflops (3.5 Tflops peak)
- 320 Xeon E5 2.8 GHz cores : ~5.9 Tflops (7.17 Tflops peak)
- 204 Xeon 2.66 GHz cores : ~1.7 Tflops (2.17 Tflops peak)
- 9024 streaming processor cores (22 Nvidia GPUs) : ~9.5 Tflops peak (22.2 Tflops peak in single precision)
The total cumulative computing power is close to 24 Tflops (peak, double precision).
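The GPU line can be cross-checked from the per-device figures listed further down on this page: one Tesla S1070 chassis (4 GPUs, node totals given below) plus 18 Fermi-class cards (C2070, C2050, M2050, per-card figures given below). A minimal sketch of that aggregation follows; the 4 + 18 split is inferred from the node descriptions, and small differences against the summary above are rounding.

 # Minimal sketch: aggregate the legacy GPU inventory from the per-device
 # figures quoted in the sections below (4 + 18 split inferred from the node lists).
 gpus = [
     # (description, count, SP cores each, SP Tflops each, DP Tflops each)
     ("Tesla S1070 (T10)",        4, 240, 3.73 / 4, 0.31 / 4),  # chassis totals split over 4 GPUs
     ("Fermi C2070/C2050/M2050", 18, 448, 1.03,     0.51),
 ]
 cores = sum(n * c for _, n, c, _, _ in gpus)
 sp    = sum(n * s for _, n, _, s, _ in gpus)
 dp    = sum(n * d for _, n, _, _, d in gpus)
 print(f"{cores} streaming processor cores, ~{dp:.1f} Tflops DP peak, ~{sp:.1f} Tflops SP peak")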
Dell R815 Cluster
Dell R815 quad-Opteron 6174 @ 2.2 GHz (48 cores) : 6 nodes
- RAM capacity : 256 GB
- 2x 600GB SAS hard disk drives (RAID-0)
- 4x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell C6145 Cluster
Dell C6145 quad-Opteron 6376 @ 2.3 GHz (64 cores) : 6 nodes
- RAM capacity : 256 GB (512 GB on nef011 and nef012)
- 1x 500GB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
Dell C6220 Cluster (2014)
Dell C6220 dual-Xeon E5-2680 v2 @ 2.80GHz (20 cores) : 16 nodes
- RAM capacity : 192 GB
- 1x 2TB SATA hard disk drive
- 2x gigabit network ports (one connected)
- 1x infiniband FDR card (QDR used)
Dell 1950 Cluster
PowerEdge 1950 dual-Xeon 5355 @ 2.66 GHz : 19 nodes
- RAM capacity : 16 GB
- 2x 73GB SAS disks in RAID-0, 146 GB of disk space available
- 2x gigabit network ports (one connected)
- 1x infiniband QDR card
HP GPU cluster
HP DL160 G5 nodes: dual-Xeon 5430 @ 2.66 GHz : 2 nodes
- RAM capacity: 16 GB
- 500GB of local disk space
- 2x gigabit ethernet interface
- 1x 10 Gbit/s Myrinet interface
- 2 GPUs connected with a PCIe gen2 16x interface
Nvidia Tesla S1070: 1 node
- 4 GPUs (2 per compute node)
- Streaming Processor Cores: 960 (240 per GPU)
- Single precision peak performance: 3.73 Tflops
- Double precision peak performance: 0.31 Tflops
- RAM capacity : 16 GB (4GB per GPU)
Carri GPU cluster
Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66 GHz : 2 nodes
- RAM capacity: 72 GB
- 2x160GB SSD disks
- 4x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
- 448 Streaming Processor Cores per card
- Single precision peak performance: 1.03 Tflops per card
- Double precision peak performance: 0.51 Tflops per card
- 3 GB of RAM per card (6 GB on nefgpu03)
Dell C6100/C410x GPU cluster
Dell C6100 nodes: mono-Xeon X5650 @ 2.66 GHz : 2 nodes
- RAM capacity: 24 GB
- 1x250GB SATA disk
- 2x gigabit ethernet ports (one connected)
- 1x infiniband QDR card
- 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
- 448 Streaming Processor Cores per card
- Single precision peak performance: 1.03 Tflops per card
- Double precision peak performance: 0.51 Tflops per card
- 3 GB of RAM per card
new cluster - Storage
All nodes have access to common storage :
- common storage : /home
- 15 TiB (8 x 4TB SAS disks, 2 x RAID-10 array), infiniband QDR, NFS access, available to all users, quotas
- legacy experimental scratch storage (/dfs) is to be removed in late 2015
- capacity distributed and scalable common storage : /data
- 153 TiB, infiniband QDR, BeeGFS access
- to be extended : +55 TiB in early 2016
- permanent storage : 1TiB quota per team + teams may buy additional quota (please contact cluster administrators)
- scratch storage : variable size (initially ~20TiB), no quota limit, for transient storage (data may be purged)
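A quick way to see how full the shared volumes are from any node is a plain disk-usage query, as in the minimal sketch below (it assumes the /home and /data mount points described above are visible on the node; per-user and per-team quota reporting depends on the NFS/BeeGFS setup and is not covered here).

 # Minimal sketch: report used/free space on the shared mount points.
 # Assumes /home (NFS) and /data (BeeGFS) are mounted as described above.
 import shutil
 TIB = 1024 ** 4
 for mount in ("/home", "/data"):
     usage = shutil.disk_usage(mount)  # named tuple: total, used, free (bytes)
     print(f"{mount}: {usage.used / TIB:.1f} TiB used, "
           f"{usage.free / TIB:.1f} TiB free of {usage.total / TIB:.1f} TiB")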
legacy cluster - Storage
All nodes have access to common storage using NFS. The NFS server is accessed through the 40 Gb/s infiniband network when available.
- common storage : /home
- 15 TBytes (8 x 4TB SAS disks, 2 x RAID-10 array), NFS access, available to all users, quotas
- experimental scratch storage based on distributed file system : /dfs
- 19 TB, GlusterFS access, available to all users, no quotas (may change later)
- legacy external storage (Dell MD1000)
- 15 slots available, reserved for INRIA teams.
- This storage is no longer under warranty; the only storage left is a 1TB RAID-1 volume for the TROPICS team.
- Teams needing more storage should contact the cluster administrators
- legacy external storage (Dell MD1200)