
PAW structure/positron LT calculus supercells & overlapping

Posted: Tue Oct 19, 2010 8:59 pm
by dr_sferrari
Hi, I have been using Abinit for positron lifetime calculations and have done a few bulk calculations successfully (an example is below). As far as I have researched, positron calculations "in the latest version 6 are implemented in the PAW framework", so PAW pseudopotentials have to be used for the atoms. My problem is that when I try
to do a structure optimization or a positron lifetime calculation in a 2x2x2 (zincblende) supercell with 64 atoms (the zincblende unit cell has 8), I get the following error in the log:

chkpawovlp : ERROR -
PAW SPHERES ARE OVERLAPPING !
Distance between atoms 1 and 32 is : 2.31572
PAW radius of the sphere around atom 1 is: 2.01467
PAW radius of the sphere around atom 32 is: 2.10320
This leads to a (voluminal) overlap ratio of 26.17 %

COMPENSATION DENSITIES ARE OVERLAPPING !!!!
Distance between atoms 1 and 32 is : 2.31572
Compensation radius around atom 1 is: 2.01467
Compensation radius around atom 32 is: 2.10320
This leads to a (voluminal) overlap ratio of 26.17 %
THIS IS DANGEROUS !, as PAW formalism assume non-overlapping compensation densities.
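For reference, Abinit's "voluminal overlap ratio" appears to be the volume of the lens-shaped intersection of the two PAW spheres divided by the volume of the smaller sphere; a quick check with the numbers from the error message, using the standard sphere-sphere intersection formula, reproduces the reported 26.17 % (a sketch, not Abinit's actual source code):

```python
import math

# Values taken from the chkpawovlp error message (atomic units)
d  = 2.31572   # distance between atoms 1 and 32
r1 = 2.01467   # PAW radius of the sphere around atom 1
r2 = 2.10320   # PAW radius of the sphere around atom 32

# Volume of the lens formed by two overlapping spheres at distance d
lens = (math.pi * (r1 + r2 - d)**2
        * (d*d + 2*d*r1 - 3*r1*r1 + 2*d*r2 + 6*r1*r2 - 3*r2*r2)
        / (12.0 * d))

# Ratio relative to the volume of the smaller sphere
ratio = lens / ((4.0 / 3.0) * math.pi * min(r1, r2)**3)
print(f"{100*ratio:.2f} %")  # -> 26.17 %
```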

I tried reducing the value of the pawecutdg variable to fix this problem, but either way I end up in the same situation. I should mention that I know pawecutdg
has to be greater than ecut, so the minimum value I set for pawecutdg was 8 (the same value I set for ecut). Any ideas how to work this out?




Example of bulk calculation
# Positron calculation
# Zincblende structure: ZnSe

# Datasets definition
ndtset 3
positron1 0 ! Dataset 1 is a simple electronic GS calculation
positron2 1 ! Dataset 2 is a positronic GS calculation
getden2 1 ! in presence of the previous electronic density
positron3 -1 ! Dataset 3 is an automatic electronic/positronic GS calculation
! without storage of the wave-functions
kptopt2 0 ! Use only k=gamma point
! For testing purpose, several electron-positron correlations are used
ixcpositron2 1
ixcpositron3 1
diemac 6.3 ! Dielectric constant of the material
acell 3*5.66 angstrom # Value for ZnS
rprim 1.0 0.0 0.0 #cubic structure
0.0 1.0 0.0
0.0 0.0 1.0
ntypat 2
znucl 30 34
natom 8
typat 4*1 4*2
xred
0.0 0.0 0.0
0.5 0.5 0.0
0.5 0.0 0.5
0.0 0.5 0.5
1/4 1/4 1/4
3/4 3/4 1/4
3/4 1/4 3/4
1/4 3/4 3/4
chkprim 0
! K-points and occupations
kptopt 1
ngkpt 4 4 4
occopt 1
! occopt 7
! nband 20
posocc 1.0 ! Occupation number for the positron
! Convergence parameters
ecut 8. pawecutdg 15.
iscf 17 ! iscf>=10 recommended for electron-positron calculation
! (although not mandatory)
nstep 50 tolvrs 1.d-10
postoldfe 1.d-6 ! Only used for the automatic electron-positron
posnstep 20 ! calculation (dataset 3)
! Miscellaneous
prtwf 0 prteig 0 ! To save disk space

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 11:21 am
by Alain_Jacques
The error message complains about PAW spheres overlapping ... seems fair to me as there is only a distance of 2.31 a.u. between a Zn and a Se. Looks very very small to me - there is something fishy in the atomic positions definition. But changing ecut or pawecutdg has a completely different effect and won't improve the situation. Select a suitable ecut after a convergence test (should be around 10-15Ha) and use a pawecutdg around (2.5 * ecut).

Kind regards,

Alain

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 4:39 pm
by dr_sferrari
Alain_Jacques wrote:The error message complains about PAW spheres overlapping ... seems fair to me as there is only a distance of 2.31 a.u. between a Zn and a Se. Looks very very small to me - there is something fishy in the atomic positions definition. But changing ecut or pawecutdg has a completely different effect and won't improve the situation. Select a suitable ecut after a convergence test (should be around 10-15Ha) and use a pawecutdg around (2.5 * ecut).

Kind regards,

Alain

Thanks Alain, it was my fault: in the supercell input file I forgot to scale the acell parameter (so as to get the 2x2x2 supercell). With that done I re-ran abinit, but I am afraid I have some bad news...
MEMORY: Abinit now complains that "Operating system error: Cannot allocate memory". I read on the forums that I have to modify the dilatmx input variable (I am doing a structure optimization before the positron lifetime calculation, because there is a vacancy) and reduce it (by how much?), and also reduce ecut (and then also the pawecutdg parameter). Is that so? By how much? I mean, if it is a trial-and-error process I will go nuts (each abinit run takes a long time; remember I am now dealing with a 64-atom sample).
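The reported geometry is consistent with that mistake: in zincblende the nearest-neighbour distance is a·√3/4, and leaving acell at the unit-cell value squeezes every interatomic distance of the 2x2x2 supercell to half its proper length. A quick check (Python; the 5.66 Å lattice parameter is taken from the input above):

```python
import math

ang_to_bohr = 1.8897259886
a = 5.66 * ang_to_bohr            # unit-cell lattice parameter in bohr

# Nearest-neighbour (Zn-Se) distance in zincblende: a * sqrt(3) / 4
d_ideal = a * math.sqrt(3) / 4.0  # ~4.6314 bohr with the proper cell

# With acell left unscaled, all supercell distances are halved:
d_wrong = d_ideal / 2.0
print(f"{d_wrong:.5f}")  # -> 2.31572, exactly the distance chkpawovlp reported
```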
Kind regards,
Sergio

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 6:22 pm
by dr_sferrari
I keep getting the following error about memory:

memana : ERROR -
Test failed to allocate 1138.828 Mbytes
It is not worth to continue
Action : modify input variable to fit the available memory.
or increase limit on available memory.

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 7:48 pm
by Alain_Jacques
Hello Sergio,

If you have a time-consuming calculation, optimizing ecut - i.e. analysing the convergence with the plane-wave cutoff - won't be a waste of time. If you're not the author of the pseudopotential, look in its accompanying files for ecut recommendations. And keep the test plain and simple, with a ground-state calculation on one unit cell of ZnSe as explained in http://www.abinit.org/documentation/helpfiles/for-v6.4/tutorial/lesson_paw1.html, i.e. vary ecut from 6 Ha to 20 Ha in 2 Ha steps to find convergence to within 1 mHa.

For the memory ... are you using a 32-bit or a 64-bit Abinit binary? It looks strange to me that you are exhausting your available RAM. Are you using MPI?

Alain

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 8:06 pm
by dr_sferrari
Alain_Jacques wrote:Hello Sergio,

If you have a time-consuming calculation, optimizing ecut - i.e. analysing the convergence with the plane-wave cutoff - won't be a waste of time. If you're not the author of the pseudopotential, look in its accompanying files for ecut recommendations. And keep the test plain and simple, with a ground-state calculation on one unit cell of ZnSe as explained in http://www.abinit.org/documentation/helpfiles/for-v6.4/tutorial/lesson_paw1.html, i.e. vary ecut from 6 Ha to 20 Ha in 2 Ha steps to find convergence to within 1 mHa.

For the memory ... are you using a 32-bit or a 64-bit Abinit binary? It looks strange to me that you are exhausting your available RAM. Are you using MPI?

Alain


Hi Alain, thanks for the tip. I thought (wrongly?) that the memory exhaustion problem had something to do with the large number of atoms involved. I will patiently follow your advice and see where I find convergence. Regarding memory, my main problem: I have 3 GB of RAM, but I still end up in that situation. I sincerely do not know what MPI means (something about parallel computing?). I should mention that I tried to run the calculation on my PC (the one with 3 GB of RAM) and also on a cluster, with the same result.
Would varying ecut finally get rid of the memory problem, or would I have to modify other parameters?

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Wed Oct 20, 2010 11:10 pm
by Alain_Jacques
Yes, dealing with 64 atoms will require a substantial amount of memory. I suspect that you compiled a 32-bit Abinit - it's easy to check afterwards on Linux with a

Code: Select all

file abinit
... it should return the ABI of the executable - 32- or 64-bit. A 32-bit binary is limited to 2 GB. You can set up a 64-bit compilation with the --enable-64bit-flags configure option.
The positive impact of optimizing ecut is on speed, not memory (too small and you'll have convergence problems, too high and it slows down the calculation). MPI stands for "message passing interface" and, yes, it means parallel processing. Looking at your k-point grid, you would certainly benefit from this kind of parallelization.

Kind regards,

Alain

Re: PAW structure/positron LT calculus supercells & overlap

Posted: Thu Oct 21, 2010 9:01 pm
by dr_sferrari
Alain_Jacques wrote:Yes, dealing with 64 atoms will require a substantial amount of memory. I suspect that you compiled a 32-bit Abinit - it's easy to check afterwards on Linux with a

Code: Select all

file abinit
... it should return the ABI of the executable - 32- or 64-bit. A 32-bit binary is limited to 2 GB. You can set up a 64-bit compilation with the --enable-64bit-flags configure option.
The positive impact of optimizing ecut is on speed, not memory (too small and you'll have convergence problems, too high and it slows down the calculation). MPI stands for "message passing interface" and, yes, it means parallel processing. Looking at your k-point grid, you would certainly benefit from this kind of parallelization.

Kind regards,

Alain


Dear Alain, thank you so much for continuing to help me. About which version of Abinit I am running: I was sure it was a 64-bit build, since I compiled it on my laptop, which has a 64-bit processor and 64-bit Fedora 12 installed as the OS. Either way, to check it, here is the proof:

Code: Select all

[ferrari@BlackIce ~]$ file abinit-6.0.2/src/98_main/abinit
abinit-6.0.2/src/98_main/abinit: ELF 64-bit LSB executable, x86-64, version 1 (GNU/Linux), dynamically linked (uses shared libs), for GNU/Linux 2.6.18, not stripped

Now that this part is cleared up, and it is clear that the ecut variable has nothing to do with the exhaustion of memory, I keep wondering how to avoid this memory problem. I have two comments:
1) The example input file I presented in the original post is not the one I am actually trying to run; the actual one is:

Code: Select all

# parallelization variables

paral_kgb 1
npband 1
npfft 2
npkpt 4

wfoptalg 4
fft_opt_lob 2
fftalg 401
iprcch 0
istwfk 8*1


# Datasets definition
  ndtset 2

  optcell1 0
  # modify nuclear positions but no cell shape and dimension optimization
  ionmov1 2
  ntime1 150
  kptopt1 1
  occopt1 1
  ngkpt1 4 4 4

  positron2  1  ! Dataset 2 is a positronic GS calculation
  getden2    1  !   in presence of the previous electronic density
  kptopt2    0  !   Use only k=gamma point

  ! For testing purpose, several electron-positron correlations are used
    ixcpositron2  1

#Definition of the unit cell

chkprim 0 # allows the run to continue even if the unit cell
# is not primitive

diemac 6.3  ! Dielectric constant of the material
acell   3*16.98 angstrom  # Value for ZnS
ntypat   2
znucl   30 34
natom 63
rprim 1.0 0.0 0.0
0.0 1.0 0.0
0.0 0.0 1.0
typat 31*1 32*2
xred
# 0.0000 0.0000 0.00000
# Zinc vacancy
0.7500    0.7500    0.5000
0.2500    0.0000    0.2500
0.2500    0.2500    0.0000
0.5000    0.0000    0.0000
0.5000    0.2500    0.2500
0.7500    0.0000    0.2500
0.7500    0.2500    0.0000
0.0000    0.5000    0.0000
0.0000    0.7500    0.2500
0.2500    0.5000    0.2500
0.2500    0.7500    0.0000
0.0000    0.0000    0.5000
0.0000    0.2500    0.7500
0.2500    0.0000    0.7500
0.2500    0.2500    0.5000
0.5000    0.5000    0.0000
0.5000    0.7500    0.2500
0.7500    0.5000    0.2500
0.7500    0.7500    0.0000
0.5000    0.0000    0.5000
0.5000    0.2500    0.7500
0.7500    0.0000    0.7500
0.7500    0.2500    0.5000
0.0000    0.5000    0.5000
0.0000    0.7500    0.7500
0.2500    0.5000    0.7500
0.2500    0.7500    0.5000
0.5000    0.5000    0.5000
0.5000    0.7500    0.7500
0.7500    0.5000    0.7500
0.0000    0.2500    0.2500
0.8750    0.8750    0.6250
0.6250    0.6250    0.6250
0.6250    0.8750    0.8750
0.8750    0.6250    0.8750
0.6250    0.1250    0.1250
0.6250    0.3750    0.3750
0.8750    0.1250    0.3750
0.8750    0.3750    0.1250
0.1250    0.6250    0.1250
0.1250    0.8750    0.3750
0.3750    0.6250    0.3750
0.3750    0.8750    0.1250
0.1250    0.1250    0.6250
0.1250    0.3750    0.8750
0.3750    0.1250    0.8750
0.3750    0.3750    0.6250
0.6250    0.6250    0.1250
0.6250    0.8750    0.3750
0.8750    0.6250    0.3750
0.8750    0.8750    0.1250
0.6250    0.1250    0.6250
0.6250    0.3750    0.8750
0.8750    0.1250    0.8750
0.8750    0.3750    0.6250
0.1250    0.6250    0.6250
0.1250    0.8750    0.8750
0.3750    0.6250    0.8750
0.3750    0.8750    0.6250
0.1250    0.1250    0.1250
0.1250    0.3750    0.3750
0.3750    0.1250    0.3750
0.3750    0.3750    0.1250


    posocc 1.0  ! Occupation number for the positron
! Convergence parameters
    dilatmx 1.1
toldff 1.0d-8 # Will stop when, twice in a row, the forces differ by less than toldff
    tolmxf 5.0d-7
    ecut 4. pawecutdg 10.
    iscf 17   ! iscf>=10 recommended for electron-positron calculation
              ! (although not mandatory)
 
    nstep 50 


!   Miscellaneous
    prtwf 0 prteig 0           ! To save disk space

Here I have included some parallelization input variables (just in case I decide to run the job on a cluster), and I do not know whether their presence is a problem if I run the job on my own computer (which is not part of a cluster, of course). Notice that the procedure I am trying to run is a structure optimization of a 2x2x2 zincblende ZnSe supercell with a vacancy (a zinc vacancy), followed by the usual positron lifetime calculation reading the previously computed electronic density. In the example input file I just ran an electronic density calculation without structure optimization (because it was a bulk unit cell, so of course there was no need for it).
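One caveat worth hedging: with paral_kgb 1, Abinit expects the job to be launched on a number of MPI processes matching the product of the distribution variables (here npkpt x npband x npfft), so keeping these settings in a serial run on a single PC is likely to cause an abort rather than be silently ignored. A minimal sketch of that consistency check, using the values from the input above:

```python
# Process distribution from the input file above (paral_kgb 1)
npkpt, npband, npfft = 4, 1, 2

# Total MPI processes the job must be launched with, e.g. "mpirun -np 8"
nproc = npkpt * npband * npfft
print(nproc)  # -> 8
```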

2) K-point grid
Since I have changed to working with a 2x2x2 supercell of 64 atoms instead of the 8-atom unit cell, perhaps I should of course change the k-point grid as well. The remaining question is: how? What is the optimal k-point grid that would give a good calculation and also not exhaust the memory?
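A rule of thumb (standard zone-folding reasoning, not from the thread): doubling the cell in each direction halves the Brillouin zone, so a 2x2x2 ngkpt grid in the 2x2x2 supercell samples reciprocal space with the same density as the 4x4x4 grid did in the unit cell, while cutting the number of k-points, and hence memory and CPU time, considerably. A small sketch comparing the linear k-point spacing for the two cubic cells:

```python
import math

def k_spacing(a_bohr, n):
    """Linear spacing of an n x n x n Monkhorst-Pack grid for a cubic
    cell of lattice parameter a (reciprocal-vector length 2*pi/a)."""
    b = 2.0 * math.pi / a_bohr
    return b / n

ang_to_bohr = 1.8897259886
a_unit = 5.66 * ang_to_bohr   # unit cell
a_super = 2.0 * a_unit        # 2x2x2 supercell

# Same sampling density: spacing(unit, 4) == spacing(supercell, 2)
print(math.isclose(k_spacing(a_unit, 4), k_spacing(a_super, 2)))  # -> True
```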