Clusters Home
Current version as of 21 March 2024, 17:46
Context
Nef is the Inria Sophia Antipolis Méditerranée research center's multi-domain scientific experimentation infrastructure for computation, storage and visualisation.
The platform relies on a cluster computing approach (several generations of high-performance servers combined into a heterogeneous parallel architecture), a fast network interconnect, high-capacity storage, and an interconnection to the Inria Sophia interactive visualisation platform.
Nef is a member infrastructure of the UCA OPAL distributed computation mesocentre.
Nef is operated and was originally funded by Inria. Nef now also gratefully benefits from UCA and UCAjedi IDEX funding.
Nef gratefully benefits from CPER 2015-2020 funding from Région Sud Provence Alpes Côte d'Azur, the French MESRI (Ministère de l'Enseignement Supérieur, de la Recherche et de l'Innovation) and CASA (Communauté d'Agglomération de Sophia Antipolis).
Usage
The platform is open to members of all Inria teams. Accounts can also be opened for academic and corporate partners of Inria Sophia research center teams.
It can be used for parallel computations, GPU computations, and sequential jobs.
History
- 2024-03-20
- 2 new HPE DL385 GPU nodes nefgpu{60-61} (2x A40-48GB each) with privileged access for the funding team
- 2023-11-03
- 3 new HPE DL385 GPU nodes with privileged access for the funding team: nefgpu{57-58} (2x A100-80GB each), nefgpu59 (3x A40)
- 2023-03-17
- 1 new HPE DL385 GPU node nefgpu56 (3x A100-40GB total) with privileged access for the funding team
- 2022-08-29
- Upgrade of nefgpu28 with 4x RTX A6000 to replace the previous 4x GTX 1080 Ti
- 2022-02-20
- 2 new Dell R7525 GPU nodes nefgpu54, nefgpu55 (6x A40 total) with privileged access for the funding team
- 2022-01-15
- 1 new Dell R7525 GPU node nefgpu53 (3x A40 total) with privileged access for the funding team
- 2021-12-15
- 1 new Dell R7525 GPU node nefgpu52 (3x A40 total) with privileged access for the funding team
- 2021-03-24
- 1 new Dell T640 GPU node nefgpu39 (4x RTX8000 total) with privileged access for the funding team
- 2021-03-19
- a 3rd NVIDIA Tesla V100 32GB GPU added to nefgpu46
- 2021-03-16
- 5 new Dell T640 GPU nodes nefgpu{47-51} (20x RTX8000 total) with privileged access for the funding team
- 2021-03-11
- 2 new Dell R7525 CPU nodes nef058, nef059 (64 cores each) with privileged access for the funding team
- 2021-02-09
- 1 new Dell T640 GPU node nefgpu38 (2x RTX8000) with privileged access for the funding team
- 2021-01-30
- shutdown and deprovisioning of nefgpu03 GPU node (old Carri Workstation from 2011)
- 2021-01-20
- 1 new Dell T640 GPU node nefgpu37 (4x RTX8000) with privileged access for the funding team
- 2020-01-10
- 2 new Dell T640 GPU nodes nefgpu{35-36} (8x RTX6000 total) with privileged access for the funding team
- 2019-11-22
- 1 new Dell T640 GPU node nefgpu34 (4x RTX6000) + 1 new Dell R740 GPU node nefgpu46 (2x Tesla V100 GPU) with privileged access for the funding team
- 2019-08-30
- 4 new Dell C6420 nodes nef054-nef057 (144 cores); 4 new Dell R740 GPU nodes nefgpu42-nefgpu45 (12x Tesla T4 GPU total)
- 2019-07-08
- 4 new Dell T640 GPU nodes nefgpu{30-33} (12x RTX2080 Ti + 4x Titan-RTX GPU total)
- 2019-06-27
- /data refactoring and extension (>600TiB)
- 2019-05-17
- 1 new SuperMicro4029 GPU node nefgpu41 (4x V100 32GB GPU with NVLink interconnect, 384GB CPU RAM, 4.8TB scratch)
- 2019-03-21
- 6 new Dell T640 GPU nodes nefgpu{24-29} (4x RTX2080Ti + 16x GTX1080 Ti + 2x Titan-X GPU total) with privileged access for the funding team
- 2018-04-23
- 1 new Dell R940 node nef053 (80 cores, 1TB RAM); 16 new Dell C6420 nodes nef037-nef052 (320 cores); 1 new Asus ESC8000g4 GPU node nefgpu40 (8x GTX1080 Ti); 3 new Dell T630 GPU nodes nefgpu18-nefgpu20 (12x GTX1080 Ti GPU total)
- 2018-04-09
- 5 new Dell T630 GPU nodes nefgpu{16-17,21-23} (20x GTX1080 Ti GPU total) with privileged access for the funding team
- 2018-03-13
- system upgrade to CentOS 7.4
- 2017-06-30
- 5 new Dell T630 GPU nodes nefgpu11-nefgpu15 (8x GTX1080 + 10x GTX1080 Ti GPU total) with privileged access for the funding team
- 2016-11-28
- 1 new Dell R730 GPU node nefgpu01 (1x Tesla K80) with privileged access for the funding team
- 2016-06-30
- 5 new Dell T630 GPU nodes nefgpu07-nefgpu11 (12x Titan-X GPU total) with privileged access for the funding team
- 2016-03-29
- Slides of the nef introduction presentation are available
- 2016-03-16
- new nef configuration is now the operational platform, legacy nef configuration shuts down on 2016-04-17
- 2015-10-09
- 8 new Dell C6220 nodes nef029-nef036 (128 cores with 16GB RAM/core), /home extension (now 15 TiB), /data extension with reserved space for contributing teams
- 2015-09-04
- new nef configuration is open for beta test: 840 additional cores, 60TB BeeGFS storage, OAR scheduler, nodes under CentOS 7
- 2014-07-17
- 16 new machines added nef013-028 (320 new cores)
- 2013-11-21
- new NFS server for home directories (7TB available instead of 1.3TB)
- 2013-06-18
- 6 new machines added nef007-012 (384 new cores), PGI 13.5 installed.
- 2013-04-22
- memory upgrade on nef001-006 (from 128GB to 256GB)
- 2012-12-06
- software upgrade (Fedora 16, PGI 12.6, CUDA 5.0, ...)
- 2011-07-07
- software upgrade (Fedora 14, PGI 11.5, CUDA 4.0, ...)
- 2011-02-03
- 8 new machines added, major software upgrade (Fedora 12, PGI 11.1, openmpi 1.4.3, ...). QDR InfiniBand network instead of Myrinet
- 2010-03-12
- software upgrade (openmpi 1.4.1, PGI 10.3, ...)
- 2009-06-09
- major software upgrade (from Fedora 7 to Fedora 10)
- 2009-02-20
- GPU cluster available for tests: two nodes with Fedora 10 and 4 NVIDIA GPUs
- 2008-02-28
- 19 new machines added, minor software upgrade (PGI 7.1 and openmpi 1.2.5)
- 2007-08-16
- major software upgrade (from Rocks to Fedora 7)
- 2007-01-30
- 32 new nef machines added
- 2005-06-20
- Slides on How to use Sophia's cluster are available
- 2005-03-09
- The nef cluster is online! This platform includes one frontend and 32 compute nodes (dual Opterons)