Sharing .ac config and install files on supercomputers
Posted: Fri Sep 30, 2016 11:33 pm
Does anyone have experience compiling on any of the XSEDE supercomputers, and would you be willing to share config files, etc.? I'm about to embark on installations on various machines, e.g. Stampede, Maverick, Jetstream, Comet, Gordon, OSG, Bridges, etc. Every computer will have its own challenges, so it seems like it would be great if the abinit community shared config/install files and notes for each of these, since I assume many users of those computers also use abinit. Once I get working executables, I'd be willing to share too, if there is interest.
For now, the first computer I'm looking at is Stampede. If anyone has suggestions on MPI vs. OpenMP for starters, or anything else that may be useful, I'd appreciate it. Here's a little info from the user guide:
https://portal.tacc.utexas.edu/user-guides/stampede
On Stampede nodes, MPI applications can be launched solely on the E5 processors, or solely on the Phi coprocessors, or on both in a "symmetric" heterogeneous computing mode. For heterogeneous computing, an application is compiled for each architecture and the MPI launcher ("ibrun" at TACC) is modified to launch the executables on the appropriate processors according to the resource specification for each platform (number of tasks on the E5 component and the Phi component of a node).
So, to use both the E5s and the Phi coprocessors, I will need to compile abinit for each architecture separately, and somehow get ibrun to launch both properly...
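As a starting point, a symmetric-mode batch script might look something like the sketch below. This is just my reading of the Stampede user guide: the `ibrun.symm` launcher with its `-c`/`-m` flags and the `MIC_PPN`/`MIC_OMP_NUM_THREADS` variables come from TACC's documentation, while the two abinit binary names (and the thread/task counts) are hypothetical placeholders for separately compiled host and Phi builds.

```shell
#!/bin/bash
#SBATCH -J abinit-symm        # job name
#SBATCH -p normal             # Stampede queue
#SBATCH -N 2                  # number of nodes
#SBATCH -n 32                 # total MPI tasks on the host (E5) side
#SBATCH -t 02:00:00

# Hypothetical binary names: one abinit built for the E5 host,
# one cross-compiled (with -mmic) for the Phi coprocessor.
HOST_EXE=./abinit.host
MIC_EXE=./abinit.mic

# Per the Stampede user guide: MIC_PPN sets MPI tasks per coprocessor,
# MIC_OMP_NUM_THREADS sets OpenMP threads per MIC task.
export MIC_PPN=4
export MIC_OMP_NUM_THREADS=15

# ibrun.symm launches both executables together in symmetric mode.
ibrun.symm -c $HOST_EXE -m $MIC_EXE < abinit.files > log
```

The task/thread split between host and coprocessor would need tuning per problem; I have no idea yet what balance makes sense for abinit.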
So the Phi can run either MPI or OpenMP, but perhaps I should offload a shared-memory part of the application to the coprocessor? Because, hey, 61 cores with 8 GB. I guess you could still do MPI there, but maybe OpenMP would be better.
Abinit seems to expect MPI... OpenMP is still possible, right?
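From what I can tell, abinit's build system does let you enable OpenMP threading alongside MPI via configure options, at least in recent versions. A minimal `.ac` file for an MPI+OpenMP build with the Intel compilers might look like the sketch below; the option names follow what I believe are the 8.x-era conventions and should be checked against `./configure --help` for your version, and the compiler wrappers assume the Intel MPI modules are loaded.

```shell
# stampede.ac -- hypothetical abinit build settings for Stampede.
# Option names assumed from abinit 8.x; verify with ./configure --help.

# Intel MPI compiler wrappers (after "module load intel impi" or similar)
CC="mpicc"
CXX="mpicxx"
FC="mpif90"

# Enable MPI, plus OpenMP threading on top of it
enable_mpi="yes"
enable_openmp="yes"

# Link against MKL for BLAS/LAPACK (flavor name assumed)
with_linalg_flavor="mkl"
```

Whether the threaded parts of abinit actually pay off on the Phi is exactly what I'm hoping someone here has tried.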
Any thoughts, suggestions, experience? Much thanks!
-Ryan