LTMP2 - Tutorial

From VASP Wiki

Revision as of 15:51, 27 September 2018

On this page, we explain how to perform a calculation using the LTMP2[1] algorithm. Make sure you successfully completed the preparation steps Hartree-Fock ground state and Hartree-Fock virtuals.

The INCAR file

The LTMP2 calculation can be performed with the following INCAR file:

ALGO = ACFDTRK
LMP2LT = .TRUE.
NOMEGA = # number of tau points (see below)
NBANDS = # same value as in the Hartree-Fock unoccupieds step ( = number of plane waves)
ENCUT = # same value as in the Hartree-Fock step
LORBITALREAL = .TRUE.
PRECFOCK = Fast

Make sure that VASP reads the WAVECAR file from the Hartree-Fock virtuals step.

NOMEGA flag

The number of τ-points is controlled by the NOMEGA flag. These points are needed to evaluate the Laplace-transformed energy denominator (see Ref. [1] for details).
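The role of the τ-points can be illustrated numerically: the Laplace transform replaces an energy denominator 1/x by the integral ∫₀^∞ e^(−xτ) dτ, which is then approximated by a sum over NOMEGA quadrature points. A minimal sketch, using a generic Gauss-Laguerre grid rather than the optimized τ grid VASP actually employs (the function name is ours, for illustration only):

```python
import numpy as np

# Laplace identity underlying LTMP2: 1/x = integral_0^inf exp(-x*tau) dtau
# for x > 0. VASP evaluates this with NOMEGA optimized tau points; here we
# use a plain Gauss-Laguerre grid purely to illustrate the quadrature idea.
def laplace_reciprocal(x, n_tau=8):
    # Gauss-Laguerre rule: integral_0^inf g(t) e^{-t} dt ~ sum_k w_k g(t_k)
    t, w = np.polynomial.laguerre.laggauss(n_tau)
    return float(np.sum(w * np.exp(-(x - 1.0) * t)))

x = 1.5  # stands in for a (positive) MP2 energy denominator
print(laplace_reciprocal(x), 1.0 / x)  # quadrature result vs exact 1/x
```

With only 8 quadrature points the result already agrees with 1/x to many digits for well-behaved denominators, which is why a modest NOMEGA usually suffices.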

Parallelization

The LTMP2 algorithm is a high-performance code and scales readily to many CPUs. Both OpenMP and MPI are supported. We recommend using MPI for parallelization, since the code exhibits almost ideal parallel efficiency. OpenMP should only be used to increase the available shared memory, if necessary.

To activate the efficient MPI parallelization, use the KPAR flag in the following way (note that the usual meaning of the KPAR flag does not apply in the LTMP2 algorithm): set KPAR to half the number of MPI ranks. If this results in memory issues, decrease KPAR further (such that KPAR is always a divisor of the number of MPI ranks) or increase the number of OpenMP threads.
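The KPAR rule above can be summarized in a few lines. This helper is not part of VASP, just a sketch of the recommendation (start at half the MPI ranks, then fall back to smaller divisors when memory is tight):

```python
def kpar_candidates(mpi_ranks):
    """Return KPAR values to try for LTMP2, fastest first.

    Start at half the MPI ranks; every fallback value must still
    divide the number of MPI ranks evenly.
    """
    return [k for k in range(mpi_ranks // 2, 0, -1) if mpi_ranks % k == 0]

print(kpar_candidates(512))  # [256, 128, 64, 32, 16, 8, 4, 2, 1]
```

For 512 MPI ranks this reproduces the sequence of examples below: try KPAR = 256 first, then 128, 64, and so on if memory runs short.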

Example for 512 CPUs

MPI ranks: 512
OpenMP threads per rank: 1

KPAR = 256

To decrease the memory requirement, you can alternatively set
MPI ranks: 512
OpenMP threads per rank: 1

KPAR = 128

or
MPI ranks: 512
OpenMP threads per rank: 1

KPAR = 64

and so on. Alternatively, increase the number of OpenMP threads:
MPI ranks: 256
OpenMP threads per rank: 2

KPAR = 128
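For reference, the 256-rank/2-thread variant above could be launched as follows. The launcher flags and binary name (mpirun, srun, vasp_std, ...) depend on your installation, so treat this as a hypothetical sketch rather than a prescribed command:

```shell
# Hybrid MPI/OpenMP run: 256 MPI ranks with 2 OpenMP threads each,
# matching KPAR = 128 in the INCAR file. Launcher syntax and binary
# name vary between systems (assumption).
export OMP_NUM_THREADS=2
mpirun -np 256 vasp_std
```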

References