HPC Abinit

option, parallelism,...

Moderators: fgoudreault, mcote

Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit 8 builds.
For a video explanation on how to build Abinit 7.x for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
hpc_sysadmin
Posts: 2
Joined: Wed May 25, 2011 3:43 pm

HPC Abinit

Post by hpc_sysadmin » Wed May 25, 2011 3:58 pm

Hi everyone, I'm trying to run abinit 6.6.2 with Open MPI 1.4.3 on a cluster where each node has a local filesystem.

A problem arises when my user tries to run a multi-dataset case: abinit writes the _DS1_WFK file correctly on the "master" node but an empty _DS1_WFK on the remote nodes. The calculation stops when the program tries to read back the first dataset.

I tried running abinit with single-host MPI parallelization: the _DSx_WFK files are written correctly and the case completes successfully.

I searched the documentation and found a couple of settings that could be relevant to this problem (localrdwf and accesswff); I tried various combinations, but I could not get abinit to write and read the wavefunction files from the master node only. I also tried using a centralized networked filesystem, but the performance was so awful that I had to discard that option (+45% CPU time with twice the CPUs compared to a single-host case).
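For reference, the settings I experimented with look like this in the input file (a sketch based on the abinit 6.x documentation; check your version's docs for the exact semantics, as they have changed between releases):

```
# localrdwf 1 (default): every MPI process reads its own copy of the WF file
# localrdwf 0: only the master process reads the WF file and broadcasts it
localrdwf 0

# accesswff 0 (default): plain Fortran binary I/O
# accesswff 1: MPI-IO (needs a filesystem visible to all processes)
accesswff 1
```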

Is there a way to get around this problem? I tried to search the forums but I didn't find anything relevant...

mverstra
Posts: 655
Joined: Wed Aug 19, 2009 12:01 pm

Re: HPC Abinit

Post by mverstra » Sun Jun 05, 2011 9:32 pm

You are quite correct, this is a pain. Variables such as localrdwf do not work systematically, and will not do what you are suggesting, which would be to save the WF on all of the processors. The normal possibility would be a single copy on the mother node, which reads and broadcasts to everyone, but to my knowledge this is not currently possible.

A centralized NFS disk is indeed too slow in most cases, although for this use it might work: you only need I/O at the beginning and end of each dataset. MPI-IO would work just as well as direct access if you have a high-performance parallel filesystem (GPFS or Lustre).

The simplest thing to do would be to run the datasets separately, or to make a batch script with several inputs which scp-s the needed files to the other nodes between datasets.

abinit < files1 > log1   # first input, just for DS1
for node in $(cat nodefile)
do
    scp *WFK* $node:/scratch/$USER/
done
abinit < files2 > log2   # rest of the datasets

etc., where I presumed nodefile lists your compute nodes and /scratch/$USER is your local scratch directory.
Normally there are just one or a few WF files you need to copy for the rest of the datasets.
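The copy step above can be wrapped into a small shell function. This is only a sketch with assumed names: NODEFILE (one hostname per line, e.g. generated by your scheduler), SCRATCH, and the DRYRUN switch are placeholders for your site's setup, not part of abinit itself.

```shell
#!/bin/sh
# Copy the wavefunction files to the local scratch of every compute node.
# NODEFILE, SCRATCH and DRYRUN are assumptions about your site; adapt them.
NODEFILE=${NODEFILE:-nodes.txt}       # one hostname per line
SCRATCH=${SCRATCH:-/scratch/$USER}    # local scratch directory on each node
DRYRUN=${DRYRUN:-1}                   # 1 = only print the scp commands

copy_wfk_to_nodes () {
    while read -r node; do
        if [ "$DRYRUN" -eq 1 ]; then
            echo "scp *WFK* $node:$SCRATCH/"
        else
            scp *WFK* "$node:$SCRATCH/"
        fi
    done < "$NODEFILE"
}
```

Between dataset runs you would call copy_wfk_to_nodes, then start abinit on the next input file.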


ciao

Matthieu
Matthieu Verstraete
University of Liege, Belgium

maurosgroi
Posts: 27
Joined: Wed Apr 07, 2010 12:12 pm

Re: HPC Abinit

Post by maurosgroi » Tue Jun 07, 2011 8:59 pm

Dear Matthieu,
so, if I understand correctly, the multidataset mode implemented in abinit cannot be used on a parallel cluster.
Is this correct?
Is there a way to modify the source so that the WFK files are written correctly on each node?
Best regards,
Mauro.

hpc_sysadmin
Posts: 2
Joined: Wed May 25, 2011 3:43 pm

Re: HPC Abinit

Post by hpc_sysadmin » Wed Jun 08, 2011 12:24 pm

First of all, thank you for your feedback.


mverstra wrote:
The simplest thing to do would be to run the datasets separately, or to make a batch script with several inputs which scp-s the needed files to the other nodes between datasets.


This could work if we had a fixed number of datasets to work on, but for geometry optimization runs we wouldn't know beforehand how many datasets we'll be dealing with.
Sadly, a global cluster filesystem is out of the question (our architecture won't permit such a configuration), so our best bet, besides tweaking the abinit sources to write the WFK files on every node (which I guess is way out of my league), would be to mount a shared NFS filesystem on all nodes just for optimization jobs (even though I get goosebumps just thinking about the performance...).

Do you think patching the sources in such a fashion would be a daunting process?

Bye,
Dave

mverstra
Posts: 655
Joined: Wed Aug 19, 2009 12:01 pm

Re: HPC Abinit

Post by mverstra » Wed Oct 12, 2011 11:08 am

Hello Dave,

there used to be an option, localrdwf, to choose between
* only the mother node reads the WF and distributes it to everyone
and
* every process reads its own WF (the present solution)

It never worked entirely correctly and has apparently been disowned, but the code is still there and could easily be used (and hopefully patched) by someone with a little patience. I think it also collided with some of the MPI-IO code that was being implemented. This needs to be checked with a simple test case, but the tough bit will be accounting for all the sub-possibilities of different types of jobs (phonons, ground state, GW, etc.).

cheers

Matthieu
Matthieu Verstraete
University of Liege, Belgium
