ABINIT

The HMEM Cluster

Specifications

This cluster is dedicated to large shared-memory jobs (100+ GB of RAM and 24+ cores).

Configuration .ac File

Here is an example of a configuration .ac file:

with_mpi_prefix="/home/ucl/naps/ygillet/tools/openmpi-1.6.3-gcc-4.7.2"
enable_64bit_flags="yes"
enable_mpi="yes"
enable_mpi_io="yes"
enable_gw_dpc="yes"

with_fft_flavor="fftw3"
with_fft_libs="-L/opt/intel/compilerpro-12.0.0.084/mkl/lib/intel64  -Wl,--start-group -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm"

with_linalg_flavor="mkl"
with_linalg_libs="-L/opt/intel/compilerpro-12.0.0.084/mkl/lib -Wl,--start-group -lmkl_gf_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm"
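
To use this file, it is typically passed to ABINIT's configure script at build time. A minimal sketch, assuming the file is saved as hmem.ac (a placeholder name) and that your ABINIT version supports the --with-config-file option:

# Build ABINIT out of source, reading the options collected in hmem.ac
mkdir build && cd build
../configure --with-config-file=../hmem.ac
make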

Submission Scripts

HMEM uses Slurm as its job scheduler, so jobs are submitted through sbatch scripts.

Here is an example submission script (adapt it to your needs):

#!/bin/bash
#SBATCH --job-name=your_job_name
#SBATCH --mail-user=your_e_mail@blabla.com
#SBATCH --mail-type=ALL
#SBATCH --time=90:00:00
#SBATCH --ntasks=30
####SBATCH --ntasks-per-node=16
#SBATCH --cpus-per-task=1
####SBATCH --partition=High
#SBATCH --mem-per-cpu=5000

module purge
module load gcc

# Set up the Intel MKL environment (64-bit)
source /usr/local/intel/compilerpro-12.0.0.084/mkl/bin/mklvars.sh intel64

export PATH=$PATH:/home/ucl/naps/ygillet/tools/openmpi-1.6.3-gcc-4.7.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/home/ucl/naps/ygillet/tools/openmpi-1.6.3-gcc-4.7.2/lib

# Pure MPI run: disable OpenMP threading
export OMP_NUM_THREADS=1
unset SLURM_CPUS_PER_TASK

MPIRUN="mpirun"
# Adjust -n to the number of MPI tasks requested with --ntasks above
MPIOPT="--mca btl tcp,self -n 2"
ABINIT="/home/ucl/naps/sponce/Develop/7.2.0-private/build/src/98_main/abinit"

# Run ABINIT, reading the .files file on stdin and redirecting all output to a log file
${MPIRUN} ${MPIOPT} ${ABINIT} < sigma10.files >& logsigma10_DS4
echo "--"