Restarting from CHGCAR with PAW, in parallel gets wrong augmentation charge

Problems running VASP: crashes, internal errors, "wrong" results.

dtrinkle
Newbie
Posts: 6
Joined: Tue Jan 16, 2007 3:36 pm
License Nr.: 608
Location: Urbana, IL
Contact:

Restarting from CHGCAR with PAW, in parallel gets wrong augmentation charge

#1 Post by dtrinkle » Thu Jan 18, 2007 9:25 pm

I'm using VASP 4.6.28 on an Opteron, running in parallel, and I've encountered what looks like a strange bug when restarting a job from a CHGCAR alone, with no WAVECAR file. It happens only in parallel runs, not with the serial version of the code. Specifically, it appears that the augmentation charge is either not read in correctly or not transmitted correctly to the nodes.
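For reference, a minimal INCAR sketch of the restart settings (ICHARG and ISTART are the tags that matter here; the other values are only illustrative):

    ISTART = 0    ! build the wavefunctions from scratch (no WAVECAR)
    ICHARG = 1    ! read the initial charge density from CHGCAR
    ISPIN  = 1    ! non-spin-polarized; the same thing happens with ISPIN = 2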

For example, take a single-atom bcc Fe calculation with PAW, restarted after a self-consistent charge density has been obtained (non-spin-polarized, but the same thing happens spin-polarized). When I run grep magnetization OUTCAR after the restarted run, I get:

number of electron    8.0000076 magnetization
number of electron    8.0000076 magnetization
augmentation part     8.0000076 magnetization
number of electron    8.0000076 magnetization
augmentation part     8.0000076 magnetization
number of electron    8.0000076 magnetization
augmentation part     8.0000076 magnetization
number of electron    8.0000076 magnetization
augmentation part     8.0000076 magnetization
number of electron    8.0000076 magnetization   # here's where RMM-DIIS kicks in
augmentation part     4.1432529 magnetization
number of electron    8.0000076 magnetization
augmentation part     4.1432197 magnetization
number of electron    8.0000076 magnetization
augmentation part     4.1432197 magnetization

So at the beginning the augmentation part is exactly equal to the number-of-electrons part; after the charge density is unfrozen, it changes. I originally encountered this problem with a spin-polarized system (there, the magnetization also erroneously matches). If I (a) restart with a WAVECAR, or (b) restart with the serial binary, the augmentation charge is read correctly. I don't think it's purely a reporting error at this stage of the code, because even after restarting from a self-consistent charge density, it takes a few iterations after the charge density is unfrozen to reach self-consistency. If I do the same run (single-atom bcc Fe) with the serial binary, it converges the wavefunctions in one self-consistency iteration, and the augmentation charge density is right from the beginning.
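To make the comparison concrete, a rough sketch of the two cases (the binary names and the mpirun invocation are illustrative and depend on the local setup):

    # after a converged run has written CHGCAR, remove WAVECAR and restart
    rm WAVECAR
    ./vasp                  # serial: augmentation part correct from step 1
    mpirun -np 4 ./vasp     # parallel: augmentation part mirrors the electron count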

I've looked around and haven't seen anyone else post about this specific problem, but I don't know how many people restart calculations from the CHGCAR only, with PAW, in parallel. Thanks; --d

admin
Administrator
Posts: 2922
Joined: Tue Aug 03, 2004 8:18 am
License Nr.: 458

Restarting from CHGCAR with PAW, in parallel gets wrong augmentation charge

#2 Post by admin » Wed May 16, 2007 2:38 pm

The augmentation part is correct from the very first step on only if the WAVECAR is read successfully as well (in addition to the CHGCAR).
To check, please look whether the line

the WAVECAR file was read successfully

is written in the .o file of the job.
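A quick way to check (the name of the .o file depends on the batch system; job.o12345 is only a placeholder):

    # grep on a substring, so the check is robust against spelling
    # differences in the message between VASP versions
    grep "WAVECAR file was read" job.o12345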
One of the reasons why the WAVECAR is not used (even though it is copied to the batch worker) is that the number of bands (NBANDS) does not match between the two VASP runs. This can happen if VASP is run on different numbers of CPUs, because NBANDS is automatically rounded up to an integer multiple of the number of CPUs used in the job.
--> To avoid this source of inconsistency, please set NBANDS explicitly to the same value both for the runs that generate CHGCAR and WAVECAR and for the runs that read them.
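For example (the value 16 is only illustrative; choose one that is at least the default NBANDS of your system and divisible by every CPU count you intend to use, so that VASP does not round it up):

    NBANDS = 16    ! same value in the INCAR of the generating run and of the restart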
