input file as command line argument?
Posted: Tue Feb 09, 2010 11:36 pm
by Christian_L
Hello,
I am trying to run abinit-5.8.3 on a Blue Gene system. The architecture prohibits passing input files via piping ('abinit < input.files' does not work).
Is there a way to pass the file as a command line argument, something like 'abinit -input.files'? If there is not, can you tell me how and where I would have to modify the source code to achieve this?
All the best,
Christian
Re: input file as command line argument?
Posted: Wed Feb 10, 2010 10:46 am
by pouillon
Such a mechanism does not exist (yet).
You may however configure Abinit with the --disable-stdin option. Then, at run-time, you'll have to make sure that the "files" file is named ab.files.
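The workflow described above can be sketched roughly as follows. Only the --disable-stdin option comes from this thread; the build steps and file names around it are the usual Abinit conventions and should be adapted to your setup:

```shell
# Rebuild Abinit with stdin reading disabled
./configure --disable-stdin
make

# At run time the "files" file must then be named ab.files,
# so abinit can be launched without any '<' redirection:
cp input.files ab.files
./abinit
```

On a Blue Gene, the last line would of course be replaced by the system's job launcher invocation.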
Re: input file as command line argument?
Posted: Wed Feb 10, 2010 1:50 pm
by Christian_L
Well, that should achieve exactly what I need. Is there an analogous --disable-stdout option to get a default log-file name?
Re: input file as command line argument?
Posted: Thu Feb 11, 2010 12:56 pm
by pouillon
Not to my knowledge. But this could be thought about, as it shouldn't be too complicated to implement.
Re: input file as command line argument?
Posted: Thu Feb 11, 2010 5:31 pm
by gonze
Dear Christian,
The --disable-stdin option is actually what you need, because it hardcodes not only the
input file name to "ab.files", but also the standard output file name to "log".
See the routine 95_drive/iofn1.F90. The --disable-stdin option activates the CPP macro READ_FROM_FILE.
Actually, you can further modify this routine to suit your needs !
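As a rough conceptual illustration only (this is not the actual iofn1.F90 source, and the variable name is invented; only the READ_FROM_FILE macro name comes from the post above), the switch inside the routine looks something like:

```fortran
#if defined READ_FROM_FILE
! --disable-stdin build: the "files" file name is hardcoded
  filnam = 'ab.files'   ! hypothetical variable name
#else
! default build: the "files" file name is read from standard input
  read(*,'(a)') filnam
#endif
```

Changing the hardcoded string in the READ_FROM_FILE branch is the kind of modification meant by "further modify this routine to suit your needs".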
X
Re: input file as command line argument?
Posted: Thu Feb 11, 2010 8:54 pm
by Christian_L
It worked fine; I can now run abinit on the Blue Gene. But there is still another problem:
The calculation runs fine for my case with ecut 20, but that cutoff is too low for me. When I raise it I get:
memory : analysis of memory needs
================================================================================
Values of the parameters that define the memory need for DATASET 1.
intxc = 0 ionmov = 2 iscf = 7 xclevel = 2
lmnmax = 4 lnmax = 4 mband = 138 mffmem = 1
P mgfft = 180 mkmem = 1 mpssoang= 4 mpw = 139296
mqgrid = 3001 natom = 32 nfft = 2624400 nkpt = 15
nloalg = 4 nspden = 1 nspinor = 1 nsppol = 1
nsym = 2 n1xccc = 0 ntypat = 4 occopt = 1
================================================================================
P This job should need less than 1098.532 Mbytes of memory.
Rough estimation (10% accuracy) of disk space for files :
WF disk file : 303.763 Mbytes ; DEN or POT disk file : 20.025 Mbytes.
================================================================================
Biggest array : f_fftgr(disk), with 320.3633 MBytes.
-P-0000 leave_test : synchronization done...
memana : allocated an array of 320.363 Mbytes, for testing purposes.
memana : allocated 1098.532 Mbytes, for testing purposes.
The job will continue.
DATASET 2 : space group P2 (# 3); Bravais mP (primitive monocl.)
...
some more lines
...
================================================================================
Values of the parameters that define the memory need for DATASET 2.
intxc = 0 ionmov = 2 iscf = 7 xclevel = 2
lmnmax = 4 lnmax = 4 mband = 138 mffmem = 1
P mgfft = 180 mkmem = 1 mpssoang= 4 mpw = 139296
mqgrid = 3001 natom = 32 nfft = 2624400 nkpt = 15
nloalg = 4 nspden = 1 nspinor = 1 nsppol = 1
nsym = 2 n1xccc = 0 ntypat = 4 occopt = 1
================================================================================
memory : COMMENT -
The determination of memory needs at this stage is meaningless,
since getcell = -1 is non-zero, while idtset= 2.
The following numbers are obtained by supposing that acell and rprim
are NOT taken from a previous dataset. You cannot rely on them.
P This job should need less than 1098.532 Mbytes of memory.
Rough estimation (10% accuracy) of disk space for files :
WF disk file : 303.763 Mbytes ; DEN or POT disk file : 20.025 Mbytes.
================================================================================
Biggest array : f_fftgr(disk), with 320.3633 MBytes.
-P-0000 leave_test : synchronization done...
-P-0000 leave_test : exiting...
Now what we see here is the following: the memory consumption for dataset 1 is checked, the arrays are allocated, and the job continues. When it reaches the same point for dataset 2, the memory test seems to fail. Yet the biggest array is estimated at only 320 MB, far below the memory of the system (4 GB per core, I believe). What could be the reason for this behaviour?