Hardware




new cluster - CPU & GPU

The current platform includes:

  • 13 quad-Xeon 6-core nodes
  • 44 dual-Xeon 6-core nodes

The number of cores available is:

  • 312 Xeon 2.4 GHz cores : [to-be-completed] Tflops ([to-be-completed] Tflops peak)
  • 528 Xeon 2.93 GHz cores : [to-be-completed] Tflops ([to-be-completed] Tflops peak)

The total cumulative computing power is close to [to-be-completed] Tflops (peak, double precision).

  Dell R900 Cluster

Dell R900 quad-Xeon E7450 @ 2.4 GHz (24 cores) : 13 nodes



  • RAM capacity : 64 GB
  • 2x 146 GB SAS 15K RPM hard disk drives (RAID-1)
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card

  Dell C6100 Cluster

Dell C6100 dual-Xeon X5670 @ 2.93 GHz (12 cores) : 44 nodes



  • RAM capacity : 96 GB
  • 1x 250 GB SATA 7.2K RPM hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card


legacy cluster - CPU & GPU

The current platform includes:

  • 6 quad-Opteron 12-core nodes
  • 6 quad-Opteron 16-core nodes
  • 16 dual-Xeon 10-core nodes
  • 21 dual-Xeon quad-core nodes
  • 2 dual-Xeon hexa-core nodes
  • 2 mono-Xeon hexa-core nodes
  • 22 Nvidia Tesla GPUs

The number of cores available is:

  • 288 Opteron 2.2 GHz cores : ~2 Tflops (2.53 Tflops peak)
  • 384 Opteron 2.3 GHz cores : ~2.7 Tflops (3.5 Tflops peak)
  • 320 Xeon E5 2.8 GHz cores : ~5.9 Tflops (7.17 Tflops peak)
  • 204 Xeon 2.66 GHz cores : ~1.7 Tflops (2.17 Tflops peak)
  • 9024 Streaming Processor Cores (22 Nvidia GPUs) : 9.5 Tflops peak (22.2 Tflops in single precision)


The total cumulative computing power is close to 24 Tflops (peak, double precision).
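
The per-line peak figures above follow from core count, clock rate and double-precision flops per cycle. A minimal Python sketch reproducing them (the flops-per-cycle values are assumptions based on the CPU generation; they are not stated on this page):

  # Peak DP throughput = cores x clock (GHz) x DP flops/cycle, in Gflops.
  # Assumed flops/cycle: 4 for the SSE-era Xeons/Opterons, 8 for the
  # AVX-capable Xeon E5-2680 v2 (assumption, not stated on this page).
  partitions = [
      ("Opteron 2.2 GHz", 288, 2.2,  4),
      ("Opteron 2.3 GHz", 384, 2.3,  4),
      ("Xeon E5 2.8 GHz", 320, 2.8,  8),
      ("Xeon 2.66 GHz",   204, 2.66, 4),
  ]
  for label, cores, ghz, flops_per_cycle in partitions:
      peak_tflops = cores * ghz * flops_per_cycle / 1000.0
      print(f"{label:16s} {peak_tflops:4.2f} Tflops peak")
  # prints 2.53, 3.53, 7.17 and 2.17 Tflops (cf. the peak values listed above)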


  Dell R815 Cluster

Dell R815 quad-Opteron 6174 @ 2.2 GHz (48 cores) : 6 nodes



  • RAM capacity : 256 GB
  • 2x 600 GB SAS hard disk drives (RAID-0)
  • 4x gigabit network ports (one connected)
  • 1x infiniband QDR card


  Dell C6145 Cluster

Dell C6145 quad-Opteron 6376 @ 2.3 GHz (64 cores) : 6 nodes



  • RAM capacity : 256 GB (512 GB on nef011 and nef012)
  • 1x 500 GB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card

  Dell C6220 Cluster

Dell C6220 dual-Xeon E5-2680 v2 @ 2.80 GHz (20 cores) : 16 nodes



  • RAM capacity : 192 GB
  • 1x 2 TB SATA hard disk drive
  • 2x gigabit network ports (one connected)
  • 1x infiniband FDR card (QDR used)


  Dell 1950 Cluster

PowerEdge 1950 dual-Xeon 5355 @ 2.66 GHz (8 cores) : 19 nodes

  • RAM capacity : 16 GB
  • 2x 73 GB SAS in RAID-0, 146 GB of disk space available
  • 2x gigabit network ports (one connected)
  • 1x infiniband QDR card


  HP GPU cluster

HP DL160 G5 nodes: dual-Xeon 5430 @ 2.66 GHz (8 cores) : 2 nodes

  • RAM capacity: 16 GB
  • 500 GB of local disk space
  • 2x gigabit ethernet interfaces
  • 1x 10 Gbit Myrinet interface
  • 2 GPUs connected with a PCIe gen2 16x interface

Nvidia Tesla S1070: 1 node

  • 4 GPUs (2 per compute node)
  • Streaming Processor Cores: 960 (240 per GPU)
  • Single precision performance peak: 3.73 Tflops
  • Double precision performance peak: 0.31 Tflops
  • RAM capacity : 16 GB (4 GB per GPU)


  Carri GPU cluster

Carri HighStation 5600 XLR8 nodes: dual-Xeon X5650 @ 2.66 GHz (12 cores) : 2 nodes

  • RAM capacity: 72 GB
  • 2x 160 GB SSD disks
  • 4x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 7 Tesla C2070 (nefgpu03) / 7 Tesla C2050 (nefgpu04) GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card
    • 3 GB of RAM capacity per card (6 GB on nefgpu03)


  Dell C6100/C410x GPU cluster

Dell C6100 nodes: mono-Xeon X5650 @ 2.66 GHz (6 cores) : 2 nodes

  • RAM capacity: 24 GB
  • 1x 250 GB SATA disk
  • 2x gigabit ethernet ports (one connected)
  • 1x infiniband QDR card
  • 2 Tesla M2050 GPUs connected with a PCIe gen2 16x interface
    • 448 Streaming Processor Cores per card
    • Single precision performance peak: 1.03 Tflops per card
    • Double precision performance peak: 0.51 Tflops per card (see the worked example below)
    • 3 GB of RAM capacity per card
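
For the Fermi-class Tesla cards above (C2050/C2070/M2050, 448 cores each), the per-card peaks can be recovered from the shader clock and the fused multiply-add rate. A short sketch, assuming the usual 1.15 GHz shader clock and a 1:2 DP:SP ratio for these Tesla models (neither figure appears on this page):

  cuda_cores = 448
  shader_clock_ghz = 1.15                                   # assumption for C2050/C2070/M2050
  sp_tflops = cuda_cores * shader_clock_ghz * 2 / 1000.0    # 2 flops/cycle (FMA)
  dp_tflops = sp_tflops / 2                                 # assumed Fermi Tesla DP:SP = 1:2
  print(f"single precision: {sp_tflops:.3f} Tflops per card")   # ~1.030
  print(f"double precision: {dp_tflops:.3f} Tflops per card")   # ~0.515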

new cluster - Storage

All nodes have access to common storage :

  • common storage : /home
    • 7 TB (4x 4 TB SAS disks, RAID-10 array), infiniband QDR, NFS access, available to all users, quotas
    • to be extended : +12 TB in late 2015
  • legacy experimental scratch storage (/dfs) is to be removed in late 2015
  • capacity distributed and scalable common storage : /data
    • 60 TB, infiniband QDR, BeeGFS access
    • to be extended : + 108 TB in late 2015
    • to be extended : + 60 TB in early 2016
    • permanent storage : 1TB quota per team + teams may buy additional quota (please contact cluster administrators)
    • scratch storage : variable size (initially ~20 TB), no quota limit, for transient storage (data may be purged; see the sketch below)
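
Because the scratch part of /data has no quota but may be purged, it can be worth checking free space before writing large transient files. A minimal sketch using only the Python standard library (the 100 GB threshold is an illustrative value, not a site policy):

  import shutil

  # Free-space check on the /data scratch area described above.
  THRESHOLD_GB = 100                       # illustrative threshold, not a site policy
  free_gb = shutil.disk_usage("/data").free / 1024**3
  if free_gb < THRESHOLD_GB:
      raise SystemExit(f"only {free_gb:.0f} GB free on /data, not writing scratch data")
  print(f"{free_gb:.0f} GB free on /data")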


legacy cluster - Storage

All nodes have access to common storage using NFS. The NFS server is accessed through the infiniband 40 Gb/s network when available.



  • common storage : /home
    • 7 TBytes (4 x 4TB SAS disks, RAID-10 array), NFS access, available to all users, quotas
  • experimental scratch storage based on distributed file system : /dfs
    • 19 TB, GlusterFS access, available to all users, no quotas (may change later)
  • legacy external storage (Dell MD1000)
    • 15 slots available, reserved for INRIA teams.
    • This storage is no longer under warranty; the only storage left is 1 TB in RAID-1 for TROPICS.
    • Teams needing more storage should contact the cluster administrators
  • legacy external storage (Dell MD1200)