ENEA-GRID CMAST Lab :: Materials for energy

Welcome to CMAST Lab
Benchmarks

Benchmark of the CPMD code on the CRESCO clusters in ENEA-GRID environment
Link to the paper
Hydrogen desorption from a hydride matrix is still an open field of research. For this purpose, we want to set up a numerical model to perform first-principles calculations based on density functional theory, using the CPMD code. To evaluate the suitable computational demand, we performed benchmarks of the CPMD code on the HPC ENEA CRESCO computing facilities, taking into account also the energy cost issue.
  • S. Giusepponi, ENEA
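Benchmark results of this kind are usually summarized as speedup, parallel efficiency and, when the energy cost is of interest, energy to solution. A minimal sketch of these derived metrics; the wall-clock times and the per-core power draw below are purely hypothetical placeholders, not measured CRESCO values:

```python
# Hypothetical wall-clock times (seconds) for a fixed CPMD run
# at increasing core counts; illustrative numbers only.
times = {16: 1000.0, 32: 520.0, 64: 280.0}
base_cores = min(times)
base_time = times[base_cores]

POWER_PER_CORE_W = 5.0  # assumed average draw, for illustration only

for cores, t in sorted(times.items()):
    speedup = base_time / t                      # S(n) = T(base) / T(n)
    efficiency = speedup * base_cores / cores    # E(n) = S(n) * n_base / n
    energy_kwh = POWER_PER_CORE_W * cores * t / 3.6e6  # J -> kWh
    print(f"{cores:3d} cores: speedup {speedup:5.2f}, "
          f"efficiency {efficiency:5.2f}, energy {energy_kwh:6.4f} kWh")
```

Efficiency close to 1 means the extra cores are well used; the energy-to-solution column shows that a faster run on more cores is not automatically the cheaper one.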
CPMD benchmark on CRESCO3
Benchmark on CRESCO
Solid-state metal hydrides are considered useful for storing hydrogen, although materials suitable for practical use are still under development. Magnesium is an important candidate in this respect, as it can reversibly store about 7.6 wt% hydrogen, is lightweight and is a low-cost material. However, its thermodynamic parameters are not completely favorable and the reaction with hydrogen often shows sluggish kinetics. Different treatments of Mg-based materials have been proposed to overcome these drawbacks. One of these is tailoring Mg nanoparticles with a view to enhancing the reaction kinetics and thermodynamics of the Mg-MgH2 phase transformation. Because metallic nanoparticles often show size-dependent behavior different from bulk matter, a better understanding of their physical-chemical properties is necessary. For this purpose, we want to set up a numerical model to perform first-principles calculations based on density functional theory (DFT), using the CPMD code. To evaluate the suitable computational demand, we performed benchmarks of the CPMD code on the HPC ENEA CRESCO computing facilities.
  • S. Giusepponi, ENEA

AMD 6234 Interlagos vs. Intel E5-2680 Sandy Bridge. Benchmark of different computational codes
Link to the paper
In this paper we report results concerning the performance of the AMD 6234 Interlagos 2.4 GHz and Intel E5-2680 Sandy Bridge 2.7 GHz processors by running standard application benchmarks. The tests were conducted on an experimental node with 24 cores, 64 GB RAM and 16 cores, 32 GB RAM for the AMD and Intel processors, respectively. The Sandy Bridge processor supports Hyper-Threading (HT) technology, which makes a single physical processor appear as two logical processors; thus, with HT enabled, the Sandy Bridge is seen by the OS as a node with 32 cores. With HT, each logical processor maintains a complete set of the architecture state but shares all other resources of the physical processor (caches, execution units, branch predictors, control logic and buses). Instead of using a single benchmark we considered several applications that reflect actual usage cases, such as CFD (Computational Fluid Dynamics) and MD (Molecular Dynamics). We also tested the HPL (High Performance Linpack) code, a standard benchmark in the HPC (High Performance Computing) community. For CFD, the open source code OpenFOAM and the commercial code Ansys Fluent were tested. For the MD simulation we used the CPMD code. We also tested both architectures with the Intel MPI Alltoall benchmark. Usually this benchmark is used to measure network performance by testing a set of MPI functions; in this case it was used to measure shared memory access performance.
  • S. Giusepponi, A. Funel, F. Ambrosino, G. Guarnieri, G. Bracco, ENEA
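Because the two nodes have different core counts (24 vs. 16), raw wall-clock times are easier to interpret after normalizing per core. A minimal sketch of this normalization; the wall-clock times below are hypothetical placeholders, not results from the paper:

```python
# Illustrative node-level vs. per-core comparison of two architectures.
# Times are invented for the example; only the core counts match the text.
nodes = {
    "AMD 6234 Interlagos":        {"cores": 24, "time_s": 420.0},
    "Intel E5-2680 Sandy Bridge": {"cores": 16, "time_s": 360.0},
}
ref = nodes["AMD 6234 Interlagos"]

for name, n in nodes.items():
    node_perf = ref["time_s"] / n["time_s"]             # whole-node speedup vs. AMD
    per_core = node_perf * ref["cores"] / n["cores"]    # normalized per core
    print(f"{name}: node x{node_perf:.2f}, per-core x{per_core:.2f}")
```

A node-level ratio tells you which box finishes first; the per-core ratio is the fairer measure of the microarchitecture itself, and with HT enabled the same calculation can be repeated against 32 logical cores.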

Benchmark of OCCAM code on CRESCO2 and CRESCO3
We chose a large system composed of five different types of particles. The composition is as follows: 258096 particles, 225058 molecules (2812 DPPC molecules, 12 particles/DPPC; 222200 water molecules; 34 L64 molecules, 56 particles/L64; 8 trimers). The system was simulated on 144 cores in two different environments, CRESCO3 and CRESCO2. On CRESCO3 the code was compiled with the options: mpif90 -O2 -fomit-frame-pointer -funroll-loops. We also tested different combinations of compiler options, but no relevant difference was found. For the tests, we ran 10 simulations of the same system in both environments. All simulations on CRESCO2 were run with the option model==Intel_E5530. For CRESCO3 no option was used, because the environment consists only of AMD Opteron 6234 processors. The number of cores was chosen on the basis of the number of cores per node: on CRESCO3 every node counts 12 cores, so 12 nodes correspond to 144 cores. In the figure we compare the average values obtained for the two sets.
  • A.De Nicola, G.Milano, Univ. Salerno
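A minimal sketch of how such a run might be compiled and submitted. Only the compiler flags and the model==Intel_E5530 resource option come from the text above; the source and executable names, the LSF-style submission syntax, and the output file are assumptions for illustration.

```shell
# Compile OCCAM with the options reported above (CRESCO3);
# occam.f90 / occam.x are placeholder names.
mpif90 -O2 -fomit-frame-pointer -funroll-loops -o occam.x occam.f90

# Submit a 144-core run. On CRESCO2 the job is pinned to Intel E5530
# nodes via the model==Intel_E5530 resource option (LSF select[] syntax
# assumed); on CRESCO3 the -R clause can be dropped, since all nodes
# are AMD Opteron 6234 (12 cores/node, so 144 cores = 12 nodes).
bsub -n 144 -R "select[model==Intel_E5530]" -o occam.%J.out mpirun ./occam.x
```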
