When trying to use MPI level 3 (--with-mpi-level=3), the build crashes.
The reason is quite simple. Look at shared/common/src/11_memory_mpi/m_profiling_abi.F90.
There you have:
#ifdef HAVE_MPI2
use mpi
#endif
...
#if defined HAVE_MPI1
include 'mpif.h'
#endif
but nowhere do you handle HAVE_MPI3.
This results in undefined symbols (MPI_COMM_WORLD, etc.).
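A minimal fix, as a sketch, would be to let the MPI3 case fall through to the mpi module as well, since any MPI3 implementation also provides the MPI2 module interface. This assumes HAVE_MPI3 is the CPP symbol that --with-mpi-level=3 defines; I have not verified that against the build system:

```fortran
! Sketch of a possible fix (untested): treat HAVE_MPI3 like HAVE_MPI2,
! because an MPI3 library also ships the "use mpi" module.
! HAVE_MPI3 as the symbol name is an assumption here.
#if defined HAVE_MPI2 || defined HAVE_MPI3
 use mpi
#endif
...
#if defined HAVE_MPI1
 include 'mpif.h'
#endif
```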
Oh, and this is for v9.2.1 and v9.2.2 (sorry)
MPI level = 3 fails
Moderators: fgoudreault, mcote
Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit 8 builds.
For a video explanation on how to build Abinit 7.x for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
IMPORTANT: when an answer solves your problem, please check the little green V-like button on its upper-right corner to accept it.
Re: MPI level = 3 fails
> When trying to use MPI level 3 (--with-mpi-level=3) the build crashes.

I don't think this option has any effect at the level of the source code, except for making the compilation abort!
The build system automatically detects whether the MPI library provides the mpi module (MPI2+)
or whether we have to fall back to MPI1 include files.
There are MPI2+ extensions used in the code (e.g. non-blocking collective communications), but the presence of these extensions is tested at configure time and CPP flags are defined accordingly.
Moreover, the MPI2+ features we presently use do not require the mpi_f08 module, so for the time being we do not require a library that is fully MPI3-compliant.
In a nutshell, there is no need to use --with-mpi-level=3 to take advantage of (part of) the MPI3 specs.
This is what you should get at configure time if you are using a recent MPI implementation:
Code:
checking whether to build MPI I/O code... yes
checking which level of MPI is supported by the Fortran compiler... 2
configure: forcing MPI-2 standard level support
checking whether the MPI library supports MPI_INTEGER16... yes
checking whether the MPI library supports MPI_CREATE_TYPE_STRUCT... yes
checking whether the MPI library supports MPI_IBCAST (MPI3)... yes
checking whether the MPI library supports MPI_IALLGATHER (MPI3)... yes
checking whether the MPI library supports MPI_IALLTOALL (MPI3)... yes
checking whether the MPI library supports MPI_IALLTOALLV (MPI3)... yes
checking whether the MPI library supports MPI_IGATHERV (MPI3)... yes
checking whether the MPI library supports MPI_IALLREDUCE (MPI3)... yes
Matteo