Compiling with MPI and Intel 19.0 [SOLVED]
Moderators: fgoudreault, mcote
Forum rules
Please have a look at ~abinit/doc/config/build-config.ac in the source package for detailed and up-to-date information about the configuration of Abinit 8 builds.
For a video explanation on how to build Abinit 7.x for Linux, please go to: http://www.youtube.com/watch?v=DppLQ-KQA68.
IMPORTANT: when an answer solves your problem, please check the little green V-like button on its upper-right corner to accept it.
Compiling with MPI and Intel 19.0
Hello,
I'm having trouble running the abinit (8.10.2) executable after compiling with the Intel 19.0 compilers and with MPI enabled (64-bit Intel).
If I compile with either the gnu tools (gcc, gfortran 7.3.0) or the Intel tools (icc, ifort), and without MPI enabled, make check shows all fast tests succeed.
If I compile with the gnu tools (gcc, gfortran) and MPI enabled, make check runs abinit without mpirun and all fast tests succeed.
If I compile with the Intel tools (mpiicc, mpiifort) and MPI enabled, make check still runs abinit without mpirun and all fast tests fail with the following error:
forrtl: severe (24): end-of-file during read, unit 5, file /proc/19230/fd/0
Image PC Routine Line Source
libifcoremt.so.5 000014A9FFCA97B6 for__io_return Unknown Unknown
libifcoremt.so.5 000014A9FFCE7C00 for_read_seq_fmt Unknown Unknown
abinit 00000000015B9312 Unknown Unknown Unknown
abinit 0000000000409DEF Unknown Unknown Unknown
abinit 0000000000409B22 Unknown Unknown Unknown
libc-2.27.so 000014A9FD6E1B97 __libc_start_main Unknown Unknown
abinit 0000000000409A0A Unknown Unknown Unknown
If I compile with the Intel tools and MPI enabled and run "runtests.py fast --force-mpirun", then abinit is run with "mpirun -np 1" and all tests succeed.
My understanding is that executables compiled with mpiifort must be run with mpirun even if np=1.
It seems that runtests.py tries to run serial tests without mpirun. This seems to work when abinit is compiled with the GNU tools, but not when compiled with the Intel tools.
Is this a known difference in behavior for MPI executables compiled with the GNU tools vs. the Intel tools? If so, why doesn't runtests.py use mpirun for the Intel-compiled executable even for serial tests? Or am I doing something wrong?
Also, when compiling with Intel and MPI, setting with_mpi_incs and with_mpi_libs has no effect. They are not used in the compilation. I assume this is because mpiifort is a wrapper that is supplying these. Is that correct?
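(As a sanity check on that assumption: if I understand correctly, Intel MPI's wrappers accept an MPICH-style -show option that just prints the underlying compile/link command, including the include paths and libraries the wrapper itself adds, without compiling anything.)
Code: Select all
# Print the ifort command that mpiifort would run, including the MPI
# include paths and libraries supplied by the wrapper itself:
mpiifort -show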
I am using Intel Parallel Studio XE and source psxevars.sh to set the environment before compiling/running with Intel.
Thanks for any suggestions.
Re: Compiling with MPI and Intel 19.0
Dear Frodo,
You can type "./runtests.py -h" to see all the available options.
For the parallel tests, you have the option "runtests.py paral -n XX", with XX the number of CPUs on which you want to run MPI. See the other options to force a specific executable, etc.
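For example (these are the invocations that come up in this thread):
Code: Select all
./runtests.py -h                    # list all available options
./runtests.py paral -n 4            # run the parallel suite on 4 CPUs
./runtests.py fast --force-mpirun   # force even serial tests through mpirun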
Best wishes,
Eric
Re: Compiling with MPI and Intel 19.0
Hi Eric,
Thank you for the reply.
Yes, I know about runtests.py and use it all the time.
My question was why running "make check" fails with the indicated error when abinit is compiled with Intel 19.0 (mpiifort), whereas "make check" does not fail when abinit is compiled with GNU (mpif90). In both cases, "make check" invokes runtests.py to run abinit directly, i.e., without mpirun. This works for the GNU-compiled version but fails with the error I indicated for the Intel-compiled version.
Let me try to make what I am saying more clear:
Compiling with GNU:
I compile abinit with the following config.ac:
Code: Select all
enable_debug="no"
enable_avx_safe_mode="no"
prefix="/usr/local/abinit"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_mpi_prefix="/usr"
enable_gpu="no"
Configure automatically sets CC=mpicc and FC=mpif90.
I run "abinit < test.stdin > test.sdtout 2> test.stderr". This executes normally.
Compiling with Intel 19.0:
I compile abinit with the following config.ac:
Code: Select all
enable_debug="no"
enable_avx_safe_mode="no"
prefix="/usr/local/abinit"
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
enable_gpu="no"
I run "abinit < test.stdin > test.stdout 2> test.stderr". This fails with the following error:
Code: Select all
forrtl: severe (24): end-of-file during read, unit 5, file /proc/19230/fd/0
Image PC Routine Line Source
libifcoremt.so.5 000014A9FFCA97B6 for__io_return Unknown Unknown
libifcoremt.so.5 000014A9FFCE7C00 for_read_seq_fmt Unknown Unknown
abinit 00000000015B9312 Unknown Unknown Unknown
abinit 0000000000409DEF Unknown Unknown Unknown
abinit 0000000000409B22 Unknown Unknown Unknown
libc-2.27.so 000014A9FD6E1B97 __libc_start_main Unknown Unknown
abinit 0000000000409A0A Unknown Unknown Unknown
However, if I run "mpirun -np 1 abinit < test.stdin > test.stdout 2> test.stderr", it executes normally.
In other words, the Intel-compiled version has to be executed with mpirun, even when np=1, but the GNU-compiled version can be executed directly, without mpirun.
I know I can force runtests.py to use mpirun to invoke abinit (--force-mpirun), but why do I have to do this even for np=1 when I am testing an Intel-compiled executable, while I do not have to do this (i.e., don't have to use --force-mpirun) for a GNU-compiled executable?
What is the reason for this difference in behavior?
Re: Compiling with MPI and Intel 19.0
Hi,
Quickly: I can't reproduce the behavior you observe.
Code: Select all
[root@yquem fast_t01]# mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 17.0.4.196 Build 20170411
Copyright (C) 1985-2017 Intel Corporation. All rights reserved.
FOR NON-COMMERCIAL USE ONLY
Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT
tail t01.out
- Comment : the original paper describing the ABINIT project.
- DOI and bibtex : see https://docs.abinit.org/theory/bibliography/#gonze2002
-
- Proc. 0 individual time (sec): cpu= 0.1 wall= 0.1
================================================================================
Calculation completed.
.Delivered 6 WARNINGs and 10 COMMENTs to log file.
+Overall time at end (sec) : cpu= 0.1 wall= 0.1
My .ac file:
Code: Select all
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
FCFLAGS_EXTRA="-g -O3 -align all"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_trio_flavor=none
with_dft_flavor=none
#I_MPI_ROOT=/opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/
with_mpi_incs="-I${I_MPI_ROOT}/include64"
with_mpi_libs="-L${I_MPI_ROOT}/lib64 -lmpi"
with_fft_flavor="fftw3-mkl"
with_fft_incs="-I${MKLROOT}/include"
with_fft_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
with_linalg_flavor="mkl"
with_linalg_incs="-I${MKLROOT}/include"
with_linalg_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
jmb
------
Jean-Michel Beuken
Computer Scientist
Re: Compiling with MPI and Intel 19.0
Hi,
Thanks for the reply.
My mpiifort:
Code: Select all
mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.1.144 Build 20181018
Copyright (C) 1985-2018 Intel Corporation. All rights reserved.
I recompiled using exactly your config.ac
Code: Select all
CC="mpiicc"
CXX="mpiicpc"
FC="mpiifort"
FCFLAGS_EXTRA="-g -O3 -align all"
enable_mpi="yes"
enable_mpi_inplace="yes"
enable_mpi_io="yes"
with_trio_flavor=none
with_dft_flavor=none
#I_MPI_ROOT=/opt/intel/compilers_and_libraries_2017.4.196/linux/mpi/
with_mpi_incs="-I${I_MPI_ROOT}/include64"
with_mpi_libs="-L${I_MPI_ROOT}/lib64 -lmpi"
with_fft_flavor="fftw3-mkl"
with_fft_incs="-I${MKLROOT}/include"
with_fft_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
with_linalg_flavor="mkl"
with_linalg_incs="-I${MKLROOT}/include"
with_linalg_libs="-L${MKLROOT}/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl"
A comment on this is that apparently Intel 19 changed the directory layout for MPI. The MPI libraries are in ${I_MPI_ROOT}/intel64/lib, not ${I_MPI_ROOT}/lib64, and the MPI library itself is now in a subdirectory of this: ${I_MPI_ROOT}/intel64/lib/release. See: https://github.com/spack/spack/issues/9913
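If one did want to pass the MPI flags explicitly for Intel 19 (probably redundant, since the mpiifort wrapper already supplies them, as shown below), the .ac lines would presumably have to follow the new layout, something like:
Code: Select all
# Illustrative only: Intel 19 directory layout; the wrapper adds these anyway
with_mpi_incs="-I${I_MPI_ROOT}/intel64/include"
with_mpi_libs="-L${I_MPI_ROOT}/intel64/lib/release -L${I_MPI_ROOT}/intel64/lib -lmpifort -lmpi"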
Also, with_mpi_incs and with_mpi_libs are not used. Here is the actual compilation line emitted by make:
Code: Select all
mpiifort -DHAVE_CONFIG_H -I. -I../../../src/98_main -I../.. -I../../src/incs -I../../../src/incs -I/home/dierker/abinit-8.10.2/build/fallbacks/exports/include -I/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/include -I/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/include -free -module /home/dierker/abinit-8.10.2/build/src/mods -O3 -g -extend-source -noaltparam -nofpscomp -g -O3 -align all -g -extend-source -noaltparam -nofpscomp -g -O3 -align all -c -o abinit-abinit.o `test -f 'abinit.F90' || echo '../../../src/98_main/'`abinit.F90
mpiifort -free -module /home/dierker/abinit-8.10.2/build/src/mods -O3 -g -extend-source -noaltparam -nofpscomp -g -O3 -align all -g -extend-source -noaltparam -nofpscomp -g -O3 -align all -static-intel -static-libgcc -static-intel -static-libgcc -o abinit abinit-abinit.o -static-intel -static-libgcc ../../src/95_drive/lib95_drive.a ../../src/94_scfcv/lib94_scfcv.a ../../src/79_seqpar_mpi/lib79_seqpar_mpi.a ../../src/78_effpot/lib78_effpot.a ../../src/78_eph/lib78_eph.a ../../src/77_ddb/lib77_ddb.a ../../src/77_suscep/lib77_suscep.a ../../src/72_response/lib72_response.a ../../src/71_bse/lib71_bse.a ../../src/71_wannier/lib71_wannier.a ../../src/70_gw/lib70_gw.a ../../src/69_wfdesc/lib69_wfdesc.a ../../src/68_dmft/lib68_dmft.a ../../src/68_recursion/lib68_recursion.a ../../src/68_rsprc/lib68_rsprc.a ../../src/67_common/lib67_common.a ../../src/66_vdwxc/lib66_vdwxc.a ../../src/66_wfs/lib66_wfs.a ../../src/66_nonlocal/lib66_nonlocal.a ../../src/65_paw/lib65_paw.a ../../src/64_psp/lib64_psp.a ../../src/62_iowfdenpot/lib62_iowfdenpot.a ../../src/62_wvl_wfs/lib62_wvl_wfs.a ../../src/62_poisson/lib62_poisson.a ../../src/62_cg_noabirule/lib62_cg_noabirule.a ../../src/62_ctqmc/lib62_ctqmc.a ../../src/61_occeig/lib61_occeig.a ../../src/59_ionetcdf/lib59_ionetcdf.a ../../src/57_iovars/lib57_iovars.a ../../src/57_iopsp_parser/lib57_iopsp_parser.a ../../src/56_recipspace/lib56_recipspace.a ../../src/56_xc/lib56_xc.a ../../src/56_mixing/lib56_mixing.a ../../src/56_io_mpi/lib56_io_mpi.a ../../src/55_abiutil/lib55_abiutil.a ../../src/54_spacepar/lib54_spacepar.a ../../src/53_ffts/lib53_ffts.a ../../src/52_fft_mpi_noabirule/lib52_fft_mpi_noabirule.a ../../src/51_manage_mpi/lib51_manage_mpi.a ../../src/49_gw_toolbox_oop/lib49_gw_toolbox_oop.a ../../src/46_diago/lib46_diago.a ../../src/45_xgTools/lib45_xgTools.a ../../src/45_geomoptim/lib45_geomoptim.a ../../src/44_abitypes_defs/lib44_abitypes_defs.a ../../src/44_abitools/lib44_abitools.a ../../src/43_wvl_wrappers/lib43_wvl_wrappers.a ../../src/43_ptgroups/lib43_ptgroups.a ../../src/42_parser/lib42_parser.a ../../src/42_nlstrain/lib42_nlstrain.a ../../src/42_libpaw/lib42_libpaw.a ../../src/41_xc_lowlevel/lib41_xc_lowlevel.a ../../src/41_geometry/lib41_geometry.a ../../src/32_util/lib32_util.a ../../src/29_kpoints/lib29_kpoints.a ../../src/28_numeric_noabirule/lib28_numeric_noabirule.a ../../src/27_toolbox_oop/lib27_toolbox_oop.a ../../src/21_hashfuncs/lib21_hashfuncs.a ../../src/18_timing/lib18_timing.a ../../src/17_libtetra_ext/lib17_libtetra_ext.a ../../src/16_hideleave/lib16_hideleave.a ../../src/14_hidewrite/lib14_hidewrite.a ../../src/12_hide_mpi/lib12_hide_mpi.a ../../src/11_memory_mpi/lib11_memory_mpi.a ../../src/10_dumpinfo/lib10_dumpinfo.a ../../src/10_defs/lib10_defs.a ../../src/02_clib/lib02_clib.a -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64 -Wl,--start-group -lmkl_intel_lp64 -lmkl_sequential -lmkl_core -Wl,--end-group -lpthread -lm -ldl -lrt -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/release -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib -L/opt/intel/clck/2019.0/lib/intel64 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin 
-L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/../lib/ -L/usr/lib/gcc/x86_64-linux-gnu/7/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/ -L/lib/x86_64-linux-gnu/ -L/lib/../lib64 -L/lib/../lib/ -L/usr/lib/x86_64-linux-gnu/ -L/usr/lib/../lib/ -L/opt/intel/clck/2019.0/lib/intel64/ -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../ -L/lib64 -L/lib/ -L/usr/lib -L/usr/lib/i386-linux-gnu -lmpifort -lmpi -ldl -lrt -lpthread -lifport -lifcoremt -limf -lsvml -lm -lipgo -lirc -lirc_s -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib/release -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/lib -L/opt/intel/clck/2019.0/lib/intel64 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7 -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4 -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/../lib/ -L/usr/lib/gcc/x86_64-linux-gnu/7/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../x86_64-linux-gnu/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../../lib/ -L/lib/x86_64-linux-gnu/ -L/lib/../lib64 -L/lib/../lib/ -L/usr/lib/x86_64-linux-gnu/ -L/usr/lib/../lib/ -L/opt/intel/clck/2019.0/lib/intel64/ -L/opt/intel//compilers_and_libraries_2019.1.144/linux/mpi/intel64/libfabric/lib/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/ipp/lib/intel64/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/compiler/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/mkl/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/tbb/lib/intel64/gcc4.7/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/lib/intel64_lin/ -L/opt/intel/compilers_and_libraries_2019.1.144/linux/daal/../tbb/lib/intel64_lin/gcc4.4/ -L/usr/lib/gcc/x86_64-linux-gnu/7/../../../ -L/lib64 -L/lib/ -L/usr/lib -L/usr/lib/i386-linux-gnu -lmpifort -lmpi -ldl -lrt -lpthread -lifport -lifcoremt -limf -lsvml -lm -lipgo -lirc -lirc_s
It appears that the mpiifort wrapper has set the libraries to link against and the with_mpi_libs specified in the config.ac file has been ignored.
Here is what I get when I run abinit directly:
Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT
forrtl: severe (24): end-of-file during read, unit 5, file /proc/33337/fd/0
Image PC Routine Line Source
libifcoremt.so.5 00007F49529B97B6 for__io_return Unknown Unknown
libifcoremt.so.5 00007F49529F7C00 for_read_seq_fmt Unknown Unknown
abinit 00000000018A6119 Unknown Unknown Unknown
abinit 0000000000407C49 Unknown Unknown Unknown
abinit 0000000000407942 Unknown Unknown Unknown
libc-2.27.so 00007F49503F1B97 __libc_start_main Unknown Unknown
abinit 000000000040782A Unknown Unknown Unknown
tail -25 OUT
ABINIT 8.10.2
Give name for formatted input file:
Here is what I get when I run abinit via mpirun:
Code: Select all
mpirun -np 1 ../../../src/98_main/abinit < t01.stdin > OUT
tail -25 OUT
- Computational Materials Science 25, 478-492 (2002). http://dx.doi.org/10.1016/S0927-0256(02)00325-7
- Comment : the original paper describing the ABINIT project.
- DOI and bibtex : see https://docs.abinit.org/theory/bibliography/#gonze2002
Proc. 0 individual time (sec): cpu= 0.1 wall= 0.1
Calculation completed.
.Delivered 6 WARNINGs and 8 COMMENTs to log file.
--- !FinalSummary
program: abinit
version: 8.10.2
start_datetime: Sun Mar 10 09:32:31 2019
end_datetime: Sun Mar 10 09:32:31 2019
overall_cpu_time: 0.1
overall_wall_time: 0.1
exit_requested_by_user: no
timelimit: 0
pseudos:
H : eb3a1fb3ac49f520fd87c87e3deb9929
usepaw: 0
mpi_procs: 1
omp_threads: 1
num_warnings: 6
num_comments: 8
...
So maybe this is a difference between Intel 19 and Intel 17? Unfortunately, I don't have Intel 17 installed on my system to test that.
Re: Compiling with MPI and Intel 19.0
I added -traceback to FCFLAGS_EXTRA and now get the file and line number where abinit is failing:
Code: Select all
../../../src/98_main/abinit < t01.stdin > OUT-traceback
forrtl: severe (24): end-of-file during read, unit 5, file /proc/26824/fd/0
Image PC Routine Line Source
libifcoremt.so.5 00007F0847FAC7B6 for__io_return Unknown Unknown
libifcoremt.so.5 00007F0847FEAC00 for_read_seq_fmt Unknown Unknown
abinit 000000000187BC1F m_dtfil_mp_iofn1_ 1363 m_dtfil.F90
abinit 0000000000407C49 MAIN__ 251 abinit.F90
abinit 0000000000407942 Unknown Unknown Unknown
libc-2.27.so 00007F08459E4B97 __libc_start_main Unknown Unknown
abinit 000000000040782A Unknown Unknown Unknown
Line 1363 in m_dtfil.F90 is just a straightforward read (the preceding write succeeds, as you can see in the OUT file I posted):
Code: Select all
! Read name of input file (std_in):
write(std_out,*,err=10,iomsg=errmsg)' Give name for formatted input file: '
read(std_in, '(a)',err=10,iomsg=errmsg ) filnam(1)
Re: Compiling with MPI and Intel 19.0
I tried upgrading from Intel Parallel Studio XE Cluster Edition 2019 Update 1 to Intel Parallel Studio XE Cluster Edition 2019 Update 3.
Now:
Code: Select all
mpiifort -V
Intel(R) Fortran Intel(R) 64 Compiler for applications running on Intel(R) 64, Version 19.0.3.199 Build 20190206
Copyright (C) 1985-2019 Intel Corporation. All rights reserved.
I also tried compiling with enable_mpi_io="no".
Neither change made any difference. I still get forrtl severe (24) unless I run abinit with mpirun.
Re: Compiling with MPI and Intel 19.0
Hi Frodo,
So, unless somebody else can comment on that, it sounds like we don't really know what's wrong here, but since it works by calling mpirun -np 1, in the meantime just run it like that (it'll probably be the same for all the other executables, e.g. anaddb, etc.)... Otherwise, recompile a separate sequential executable...
Best wishes,
Eric
Re: Compiling with MPI and Intel 19.0
I discovered that there are a couple of comments on the Intel Developer Forum from others who noticed similar behavior.
Starting with Intel 19, you need to execute a program with mpirun (even for np=1) if it reads or writes stdin AFTER calling MPI_Init, otherwise the program will fail, as I noted earlier. If you do i/o to stdin BEFORE calling MPI_Init, the program succeeds.
For example:
Code: Select all
program test_mpi_1
  use mpi              ! MPI Fortran module (provided by Intel MPI)
  implicit none
  integer :: n, ierr
  ! stdin is read BEFORE MPI_Init, so this works without mpirun
  read(5,*) n
  write(6,*) n
  call MPI_Init(ierr)
  call MPI_Finalize(ierr)
end program test_mpi_1
can be successfully run as "test_mpi_1 1"
However,
Code: Select all
program test_mpi_2
  use mpi              ! MPI Fortran module (provided by Intel MPI)
  implicit none
  integer :: n, ierr
  call MPI_Init(ierr)
  ! stdin is read AFTER MPI_Init: with Intel MPI 2019 this fails unless
  ! the program is launched through mpirun
  read(5,*) n
  write(6,*) n
  call MPI_Finalize(ierr)
end program test_mpi_2
fails if you try to run "test_mpi_2 1" but succeeds if you run "mpirun -n 1 test_mpi_2 1".
Abinit calls xmpi_init (a wrapper for MPI_Init) before doing i/o to stdin, so if it is compiled with Intel 19, it can't be run without mpirun. This breaks "make check" and requires that all tests be run with "runtests.py --force-mpirun".
I installed Intel Parallel Studio XE 2018 and recompiled abinit (with mpi enabled) and verified that it CAN be run directly (without mpirun) and does not fail when doing i/o to stdin.
In summary, Intel 19 works fine with abinit but requires that you always use mpirun.
Another, smaller point relates to the Fortran compiler flag that enables OpenMP code. Before Intel 19, either -fopenmp or -qopenmp was accepted. As of Intel 19, only -qopenmp is accepted and -fopenmp is no longer recognized.
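For example, if OpenMP were wanted with Intel 19, the Fortran flags in the .ac file would have to use the new spelling (illustrative only, based on the FCFLAGS_EXTRA used earlier in this thread):
Code: Select all
# Intel 19: -qopenmp only; -fopenmp is no longer recognized (see above)
FCFLAGS_EXTRA="-g -O3 -align all -qopenmp"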
These changes in behavior should be taken into account in future releases of abinit.
Re: Compiling with MPI and Intel 19.0
Dear Frodo,
Thanks for your reply and reporting the details regarding this problem, this will help future users and developers.
Best wishes,
Eric
Re: Compiling with MPI and Intel 19.0
In addition to making the "make check" tests fail, this problem also makes abipy scripts that invoke abinit or anaddb fail, since they expect to be able to invoke abinit or anaddb without using mpirun. Using "runtests.py --force-mpirun" is a simple workaround for not being able to use "make check". However, I couldn't see any built-in way to make the abipy scripts work.
This was a big issue since it prevents use of the excellent abipy library. So I came up with the following simple shell script as a workaround until a future abinit release that addresses this issue is available.
I renamed the abinit executable to "abinit-mpi" and created the following shell script named "abinit":
Code: Select all
#!/bin/bash
# If the parent process is Intel MPI's hydra_pmi_proxy, this script was
# already launched through mpirun, so call the real binary directly.
# Otherwise, wrap the call in "mpirun -n 1" ourselves.
PARENT=$(ps --no-heading -o %c -p $PPID)
parentstr=${PARENT:0:4}
if [ "${parentstr}" = "hydr" ]
then
    abinit-mpi "$@"
else
    mpirun -n 1 abinit-mpi "$@"
fi
With Intel 19, invoking abinit via "mpirun -n m abinit ...arguments..." passes the "abinit" command to a "hydra_pmi_proxy" process, which invokes it. The bash script above checks for this and passes "abinit-mpi" to hydra_pmi_proxy along with any command-line arguments (...arguments...). On the other hand, if abinit is invoked directly via "abinit ...arguments...", then the bash script invokes it with "mpirun -n 1 abinit-mpi ...arguments...".
anaddb suffers from the same issue, so I wrote a similar script for invoking it.
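For reference, the anaddb wrapper is just the same script with the executable name changed (the "anaddb-mpi" name here is my own choice, by analogy with "abinit-mpi": rename the real anaddb binary accordingly and save this as "anaddb"):
Code: Select all
#!/bin/bash
# Same idea as the abinit wrapper above: only prepend "mpirun -n 1" when
# we were not already launched through Intel MPI's hydra_pmi_proxy.
PARENT=$(ps --no-heading -o %c -p $PPID)
parentstr=${PARENT:0:4}
if [ "${parentstr}" = "hydr" ]
then
    anaddb-mpi "$@"
else
    mpirun -n 1 anaddb-mpi "$@"
fi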
With these scripts, "make check", and especially the abipy library, all work normally.
Perhaps these scripts will be useful to others who run into this problem with Intel 19.
Re: Compiling with MPI and Intel 19.0
Update: I received confirmation from Intel that this behavior is indeed a bug. They claim it will be fixed in one of the Intel MPI 2020 updates.
Re: Compiling with MPI and Intel 19.0
OK, good to know!
Thanks a lot,
Eric
Re: Compiling with MPI and Intel 19.0 [SOLVED]
Hi,
Intel released 2019 Update 6 for the Intel MPI library on Nov 6. I tested it and it does indeed fix the problem I reported above.
Abinit executables compiled with the Intel MPI library Update 6 can now be run without mpirun.
Intel has not yet released an update for the full Parallel Studio XE suite that includes Update 6 for the MPI library component, so you currently have to install the MPI Update 6 library separately. It gets installed in a parallel_studio_xe_2020 directory instead of a parallel_studio_xe_2019 directory, so I suspect the next update of the full suite will be a 2020 version instead of a 2019 update release. Until the 2020 version of the full suite is released, you can pick up the standalone Update 6 MPI library by sourcing mpivars.sh from its directory tree after sourcing psxevars.sh from the 2019 Parallel Studio directory tree.
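Concretely, that means sourcing something along these lines (the paths are illustrative only and depend on where the installers put things; the point is that mpivars.sh from the Update 6 tree comes after psxevars.sh from the 2019 tree):
Code: Select all
# Illustrative paths -- adjust to your installation
source /opt/intel/parallel_studio_xe_2019/psxevars.sh
# The standalone MPI 2019 Update 6 installs under a parallel_studio_xe_2020
# tree; sourcing its mpivars.sh afterwards overrides the older MPI paths
source /opt/intel/parallel_studio_xe_2020/.../mpi/intel64/bin/mpivars.sh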
Cheers
Re: Compiling with MPI and Intel 19.0
And do we have an ETA for the 2020 version, or did I miss it? Thanks in advance!
Re: Compiling with MPI and Intel 19.0
------
Jean-Michel Beuken
Computer Scientist