mpi in conducti

rangel
Posts: 45
Joined: Tue Aug 18, 2009 9:50 pm

mpi in conducti

Post by rangel » Thu Feb 18, 2010 4:09 pm

This is to follow up on a thread started on the abinit mailing list.
My problem occurs when running linear_optics_paw in serial but compiling with MPI:
I get an error message (the previous discussion is pasted below).

Following Matteo's suggestion, I should add a call to initmpi_seq at the beginning of the
program and a call to xmpi_end at the end.

The main program is "conducti". I just want to know whether all of its children are coded for serial runs only:
conducti_nc
conducti_paw
linear_optics_paw

If so, I will proceed to add these calls to the program "conducti".

What do you think?
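
For concreteness, this is roughly where the two calls would go (a sketch only, not the actual source; I still have to check the exact signatures of initmpi_seq and xmpi_end against the current tree):

Code:

program conducti
! Sketch only: placement of the two calls suggested by Matteo. It assumes
! initmpi_seq takes mpi_enreg as its only argument and xmpi_end takes none;
! both signatures have to be verified against the source.

 use defs_basis
 use defs_abitypes

 implicit none

 type(MPI_type) :: mpi_enreg

!Initialize mpi_enreg for a sequential run, before any of the MPI-dependent
!conducti_nc / conducti_paw / linear_optics_paw routines are reached.
 call initmpi_seq(mpi_enreg)

!... existing conducti body: read the input file and dispatch to
!    conducti_nc, conducti_paw or linear_optics_paw ...

!Terminate MPI cleanly at the very end of the main program.
 call xmpi_end()

 end program conducti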



> Hi,
>
> Tonatiuh Rangel <Tonatiuh.Rangel@uclouvain.be> wrote:
>
>> I am having an error with optics paw:
>> [...]
>>
>> But when running conducti the code stops with the following error:
>> *** An error occurred in MPI_Comm_f2c
>> *** before MPI was initialized
>> *** MPI_ERRORS_ARE_FATAL (your MPI job will now abort)
>>
>> Can anyone help me with this?
>
> I can just tell you what's wrong: conducti is calling routines that
> depend on MPI, but doesn't initialize MPI itself.

I would add a call to initmpi_seq in order to initialize mpi_enreg
before calling the different conducti routines.
In addition, one should call xmpi_end before the end of the main program.

>
> A proper call to mpi_init() has to be added at an appropriate place.
>
> Best regards,
>
> Yann.
>
> --
> Yann Pouillon
> European Theoretical Spectroscopy Facility (ETSF)
> Centro Joxe Mari Korta, Avenida de Tolosa, 72
> 20018 Donostia-San Sebastián (Gipuzkoa), Spain
> Tel: (+34) 943 01 83 94
> Fax: (+34) 943 01 83 90
> Web: http://www.etsf.es/
Tonatiuh Rangel

jzwanzig
Posts: 504
Joined: Mon Aug 17, 2009 9:25 am

Re: mpi in conducti

Post by jzwanzig » Thu Feb 18, 2010 5:07 pm

Hi,
linear_optics_paw definitely works in serial only; I'm pretty sure that the others do too.
Joe
Josef W. Zwanziger
Professor, Department of Chemistry
Canada Research Chair in NMR Studies of Materials
Dalhousie University
Halifax, NS B3H 4J3 Canada
jzwanzig@gmail.com

pouillon
Posts: 651
Joined: Wed Aug 19, 2009 10:08 am
Location: Spain

Re: mpi in conducti

Post by pouillon » Fri Feb 19, 2010 6:56 pm

To give you some hints, here is how I did it for mrgddb:

Code:

=== modified file src/98_main/mrgddb.F90
--- src/98_main/mrgddb.F90   2009-05-07 17:39:48 +0000
+++ src/98_main/mrgddb.F90   2009-07-02 22:35:08 +0000
@@ -46,7 +46,11 @@
 program mrgddb
 
  use defs_basis
+ use defs_abitypes
  use m_build_info
+#if defined HAVE_MPI && defined HAVE_MPI2
+ use mpi
+#endif
 
 !This section has been created automatically by the script Abilint (TD).
 !Do not modify the following lines by hand.
@@ -58,6 +62,9 @@
 !End of the abilint section
 
  implicit none
+#if defined HAVE_MPI && defined HAVE_MPI1
+ include 'mpif.h'
+#endif
 
 !Arguments -----------------------------------
 
@@ -71,7 +78,8 @@
 !Define input and output unit numbers:
  integer,parameter :: ddbun=2,unit00=1
  integer :: choice,dimekb,dimekb_tmp,fullinit,fullinit8,iblok,iblok1,iblok2
- integer :: iddb,ii,intxc,intxc8,iscf,iscf8,ixc,ixc8,lmnmax,lnmax,matom,mband
+ integer :: iddb,ierr,ii,intxc,intxc8,iscf,iscf8,ixc,ixc8,lmnmax,lnmax,matom
+ integer :: mband
  integer :: mband_tmp,mblktyp,mblok,mkpt,mpert,msize,msize_tmp,mtypat,natom
  integer :: natom8,nblok,nblok8,nblokt,nddb,nkpt,nkpt8,nline,nq,nspden,nspden8
  integer :: nspinor,nspinor8,nsppo8,nsppol,nsym,nsym8,ntypat,ntypat8,nunit
@@ -96,10 +104,62 @@
  character(len=fnlen) :: filnam(mddb+1)
  character(len=strlen) :: string
  character(len=500) :: message
+ type(MPI_type) :: mpi_enreg
 
 !******************************************************************
 !BEGIN EXECUTABLE SECTION
 
+! Initialize MPI : one should write a separate routine -init_mpi_enreg-
+! for doing that !!
+
+!Default for sequential use
+ mpi_enreg%world_comm=0
+ mpi_enreg%world_group=0
+ mpi_enreg%me=0
+ mpi_enreg%nproc=1
+ mpi_enreg%num_group_fft = 0 ! in some cases not initialized but referenced in xdef_comm.F90
+ mpi_enreg%paral_compil=0
+ mpi_enreg%paral_compil_mpio=0
+!MG080916 If we want to avoid MPI preprocessing options, %proc_distr should be always allocated and
+!set to mpi_enreg%me. In such a way we can safely test its value inside loops parallelized over k-points
+!For the time being, do not remove this line since it is needed in outkss.F90.
+ nullify(mpi_enreg%proc_distrb)
+
+!Initialize MPI
+#if defined HAVE_MPI
+           call MPI_INIT(ierr)
+           mpi_enreg%world_comm=MPI_COMM_WORLD
+           mpi_enreg%world_group=MPI_GROUP_NULL
+           call MPI_COMM_RANK(MPI_COMM_WORLD,mpi_enreg%me,ierr)
+           call MPI_COMM_SIZE(MPI_COMM_WORLD,mpi_enreg%nproc,ierr)
+!          write(6,*)' abinit : nproc,me=',mpi_enreg%nproc,mpi_enreg%me
+           mpi_enreg%paral_compil=1
+#endif
+
+!Signal MPI I/O compilation has been activated
+#if defined HAVE_MPI_IO
+           mpi_enreg%paral_compil_mpio=1
+           if(mpi_enreg%paral_compil==0)then
+            write(message,'(6a)') ch10,&
+&            ' abinit : ERROR -',ch10,&
+&            '  In order to use MPI_IO, you must compile with the MPI flag ',ch10,&
+&            '  Action : recompile your code with different CPP flags.'
+            call wrtout(06,message,'COLL')
+            call leave_new('COLL')
+           end if
+#endif
+
+!Initialize spaceComm, used in leave_test
+ mpi_enreg%spaceComm=mpi_enreg%world_comm
+!Initialize paral_compil_kpt, actually always equal to paral_compil
+!(paral_compil_kpt should be suppressed after big cleaning)
+ mpi_enreg%paral_compil_kpt=0
+ if(mpi_enreg%paral_compil==1) mpi_enreg%paral_compil_kpt=1
+
+!Other values of mpi_enreg are dataset dependent, and should NOT be initialized
+!inside mrgddb.F90.
+
+
  codename='MRGDDB'//repeat(' ',18)
  call herald(codename,abinit_version,std_out)
 !YP: calling dump_config() makes tests fail => commented
@@ -504,5 +564,9 @@
  write(message, '(a)' )'+mrgddb : the run completed successfully '
  call wrtout(6,message,'COLL')
 
- end program
+#if defined HAVE_MPI
+ call MPI_FINALIZE(ierr)
+#endif
+
+end program mrgddb
 !!***
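
Transposed to conducti, the same skeleton would look roughly like this (just an untested sketch to illustrate the pattern; only the routine names conducti_nc, conducti_paw and linear_optics_paw come from your post, everything else has to be adapted to the actual conducti source):

Code:

program conducti

 use defs_basis
 use defs_abitypes
#if defined HAVE_MPI && defined HAVE_MPI2
 use mpi
#endif

 implicit none
#if defined HAVE_MPI && defined HAVE_MPI1
 include 'mpif.h'
#endif

 integer :: ierr
 type(MPI_type) :: mpi_enreg

!Defaults for sequential use, as in mrgddb above
 mpi_enreg%world_comm=0
 mpi_enreg%me=0
 mpi_enreg%nproc=1
 mpi_enreg%paral_compil=0

!Initialize MPI when the code has been compiled with it
#if defined HAVE_MPI
 call MPI_INIT(ierr)
 mpi_enreg%world_comm=MPI_COMM_WORLD
 call MPI_COMM_RANK(MPI_COMM_WORLD,mpi_enreg%me,ierr)
 call MPI_COMM_SIZE(MPI_COMM_WORLD,mpi_enreg%nproc,ierr)
 mpi_enreg%paral_compil=1
#endif

!... existing conducti body: dispatch to conducti_nc, conducti_paw or
!    linear_optics_paw ...

!Shut MPI down at the very end of the main program
#if defined HAVE_MPI
 call MPI_FINALIZE(ierr)
#endif

end program conducti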
Yann Pouillon
Simune Atomistics
Donostia-San Sebastián, Spain

rangel
Posts: 45
Joined: Tue Aug 18, 2009 9:50 pm

Re: mpi in conducti

Post by rangel » Sun Feb 21, 2010 9:43 pm

Thanks a lot for your reply.

I have already made the change in my public branch, and the tests pass.

So the issue is now solved.

Best
Tonatiuh
Tonatiuh Rangel
