NIMP – HPC


The NIMP HPC infrastructure started in 2011 with the acquisition of an HP BladeSystem with modest specifications: 6 blade server nodes with a total of 72 cores, 96 GB of RAM per node and a 1.3 TB storage system. A QLogic InfiniBand 40 Gb/s network for internode communication ensured low-latency data transfer during parallel computing jobs.

Recently, the HPC infrastructure was redesigned and upgraded, reaching a total of 352 cores spread over 7 compute nodes with 256 GB of RAM each. A 100 Gb/s InfiniBand interconnect is used for parallel computing jobs, while a 10 Gb/s network handles data transfer to the 20 TB (combined SSD and HDD) storage system.

The new HPC setup integrates the original 2011 infrastructure, now used mainly for training and testing, alongside the new-generation servers.

On the software side, the NIMP-HPC centre runs Rocky Linux 8.5 with an OpenHPC setup, Warewulf provisioning, the Slurm job management system and the EasyBuild recipe-based software installation system. At the moment the following computing software packages are installed: Quantum ESPRESSO, SIESTA and ABINIT, with the possibility of adding many more.
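
For illustration, a calculation on such a stack might be submitted along the following lines (a minimal sketch, not the centre's documented procedure; the module name QuantumESPRESSO, the partition name and the resource numbers are assumptions):

"""
Hedged sketch of submitting a Quantum ESPRESSO job to Slurm from Python.
The module name "QuantumESPRESSO", the partition "compute" and the resource
request below are illustrative assumptions, not the actual NIMP-HPC setup.
"""
import subprocess
from pathlib import Path

BATCH_SCRIPT = """#!/bin/bash
#SBATCH --job-name=pw_scf
#SBATCH --partition=compute
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=32
#SBATCH --time=12:00:00

module load QuantumESPRESSO   # EasyBuild-provided module; exact name is an assumption
srun pw.x -in scf.in > scf.out   # MPI run across the allocated nodes
"""

def submit() -> None:
    """Write the batch script and hand it to the Slurm scheduler."""
    Path("pw_scf.sh").write_text(BATCH_SCRIPT)
    subprocess.run(["sbatch", "pw_scf.sh"], check=True)

if __name__ == "__main__":
    submit()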

The main computing tool, used almost daily since the HPC centre entered production, is Quantum ESPRESSO, a suite of codes for electronic-structure calculations based on density functional theory (DFT), plane-wave basis sets and pseudopotentials. It is designed for parallel execution and takes full advantage of many-core distributed-memory systems.
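
As a hedged illustration of what such a calculation involves, the sketch below writes a minimal SCF input deck for pw.x; the BaTiO3 lattice constant, plane-wave cutoff, k-point mesh and pseudopotential file names are illustrative assumptions rather than converged production settings:

"""
Minimal sketch of a pw.x SCF input for cubic BaTiO3, written from Python.
The pseudopotential file names (Ba.upf, Ti.upf, O.upf) are placeholders;
all numerical settings are illustrative only.
"""
from pathlib import Path

SCF_INPUT = """&CONTROL
  calculation = 'scf'
  prefix      = 'BaTiO3'
  pseudo_dir  = './pseudo'
  outdir      = './tmp'
/
&SYSTEM
  ibrav     = 1        ! simple cubic perovskite cell
  celldm(1) = 7.55     ! lattice constant in Bohr (illustrative)
  nat       = 5
  ntyp      = 3
  ecutwfc   = 50.0     ! plane-wave kinetic-energy cutoff in Ry
/
&ELECTRONS
  conv_thr = 1.0d-8
/
ATOMIC_SPECIES
  Ba 137.327 Ba.upf
  Ti  47.867 Ti.upf
  O   15.999 O.upf
ATOMIC_POSITIONS crystal
  Ba 0.00 0.00 0.00
  Ti 0.50 0.50 0.50
  O  0.50 0.50 0.00
  O  0.50 0.00 0.50
  O  0.00 0.50 0.50
K_POINTS automatic
  6 6 6 0 0 0
"""

if __name__ == "__main__":
    # File name matches the one referenced by the Slurm sketch above.
    Path("scf.in").write_text(SCF_INPUT)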

The main focus so far has been the study of the electronic properties of ferroelectric perovskites such as Pb(ZrxTi1-x)O3 and BaTiO3, and of heterostructures for ferroelectric-based applications.


