Machine learning force field calculations: Basics

Revision as of 09:38, 20 August 2019

How to run machine learning force field calculations

In this section we describe how to run calculations using the machine-learning force-field method in VASP. Many of the tags will be covered in more detail below.

There are three modes of running a machine-learning calculation:

  1. On-the-fly force field generation from scratch. This step involves ab initio calculations.
  2. Continuing on-the-fly learning from already existing force-fields. This step involves ab initio calculations.
  3. Force field calculations without learning. This step involves no ab initio calculations.

A typical example where we want to learn a force field for multiple phases would consist of the following steps:

  • First we choose a small to medium sized supercell of phase A. Then we run on-the-fly force-field calculations until the force field for that phase is sufficiently accurate.
  • Next we would like to learn phase B. We run a continuation run using the force field learned for phase A. The machine will automatically collect the necessary reference structures from phase B and add them to the force field. In many cases only very few structures are added, since the force field from phase A is already good for phase B, but this is very much case dependent.
  • We would repeat the previous step for arbitrary phases.
  • After the force field is sufficiently learned, we apply it to our problem, typically with much larger cell sizes. Hence we switch off learning for the larger cells and use only the force field, which is orders of magnitude faster than the ab initio calculations. The force field is in many cases transferable for the same constituting elements, provided the phases used in training sample most of the relevant atomic environments. Still, one should cautiously judge whether the force field can describe rare events in the larger cell or not.
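
The three stages above differ mainly in the value of ML_FF_ISTART and in which output files of the previous run must be copied to input names (details in the numbered sections below). A minimal sketch in Python; the helper and stage names are ours, and the ML_FFNCAR → ML_FFCAR copy for the production stage is inferred by analogy with the ML_ABNCAR → ML_ABCAR copy described on this page:

```python
# Sketch of the three stages described above. Stage names and this helper
# are illustrative; the ML_FF_ISTART values and file names are from this page.

STAGES = {
    # stage          ML_FF_ISTART, files to copy before the run
    "scratch":    (0, []),                           # learn phase A from scratch
    "continue":   (1, [("ML_ABNCAR", "ML_ABCAR")]),  # add phase B to the training data
    "production": (2, [("ML_FFNCAR", "ML_FFCAR")]),  # apply the force field, no learning
}

def incar_line(stage):
    """Return the ML_FF_ISTART line to put in the INCAR for a given stage."""
    istart, _copies = STAGES[stage]
    return f"ML_FF_ISTART = {istart}"
```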

1) On-the-fly force field generation from scratch

In this calculation we need no special input files. We first set up a molecular dynamics calculation as usual (for molecular dynamics calculations see Molecular Dynamics). All the important tags for machine learning are added to the INCAR file. For a detailed description of the tags see below.

Here are the steps one needs to do to set up an on-the-fly machine-learning force-field calculation from scratch:

  • Set up usual MD.
  • Enable machine learning by adding the following line to the INCAR file:
ML_FF_LMLFF = .TRUE.
  • Select learning for scratch by adding the following line to the INCAR file:
ML_FF_ISTART = 0
  • Calculate the atomic energy (see Calculation of atoms) of all constituting atom types and add them as a list to the INCAR file (the order of the elements must coincide with the order of the elements in the POTCAR file):
ML_FF_EATOM = E_at1 E_at2 ...
This option is only used if ML_FF_ISCALE_TOTEN_MB=1 is used. For ML_FF_ISCALE_TOTEN_MB=2 the reference energy becomes the average of the total energy of the training data.
  • Possibly change the most important parameters of the machine-learning force field:
ML_FF_MB_MB = 1000
ML_FF_MCONF = 1000
ML_FF_CSF = 0.02
ML_FF_CTIFOR = 0.0
ML_FF_SION2_MB = 0.50
ML_FF_MRB2_MB = 9
ML_FF_RCUT2_MB = 5.0
  • Run the force-field generation and monitor learning. The main output files of the force-field generation are ML_LOGFILE, ML_ABNCAR and ML_FFNCAR. If the number of reference structures exceeds the maximum number of allowed reference structures ML_FF_MCONF, or the basis set exceeds the maximum number of allowed basis functions ML_FF_MB_MB, the calculation automatically stops with an error message. Then one simply needs to increase the appropriate number and continue the calculation as described under 2). MIND: ML_FF_MCONF and ML_FF_MB_MB should not be set to overly large numbers in advance, since they are the dimensions of the design matrix, which is the memory bottleneck of the calculation. This matrix has to be allocated statically before the calculation to make the algorithm efficient.
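
The scratch-run steps above amount to appending a handful of tags to an otherwise ordinary MD INCAR. A small illustrative helper (not part of VASP) that does this, using the tag values listed above:

```python
# Sketch: append the machine-learning tags from the steps above to an INCAR.
# The helper is ours; the tags and values are those listed on this page.

ML_SCRATCH_TAGS = """\
ML_FF_LMLFF = .TRUE.
ML_FF_ISTART = 0
ML_FF_MB_MB = 1000
ML_FF_MCONF = 1000
ML_FF_CSF = 0.02
ML_FF_CTIFOR = 0.0
ML_FF_SION2_MB = 0.50
ML_FF_MRB2_MB = 9
ML_FF_RCUT2_MB = 5.0
"""

def add_ml_tags(incar_path, eatom_list):
    """Append the ML tags plus the ML_FF_EATOM line (one energy per species,
    in POTCAR order) to an existing INCAR file."""
    with open(incar_path, "a") as f:
        f.write("\n" + ML_SCRATCH_TAGS)
        f.write("ML_FF_EATOM = " + " ".join(f"{e:.10f}" for e in eatom_list) + "\n")
```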

2) Continuing on-the-fly learning from already existing force-fields

In principle, the calculation steps are the same as for learning from scratch.

Here are the steps that need to be carried out to continue learning on-the-fly from already existing force-fields:

  • Copy ML_ABNCAR to ML_ABCAR. The ML_ABCAR file stores the ab initio reference data.
  • Either copy CONTCAR to POSCAR if you want to continue learning on the same phase, or provide a new POSCAR if training is continued on another phase.
  • Select continuation runs with learning by changing ML_FF_ISTART in the INCAR file to:
ML_FF_ISTART = 1
  • Run calculation.
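
The continuation steps above can be sketched as a short helper script (illustrative, not part of VASP); it assumes the current directory holds the ML_ABNCAR, CONTCAR and INCAR files from the previous on-the-fly run:

```python
import shutil, re

def prepare_continuation(same_phase=True):
    """Prepare the current directory for an on-the-fly continuation run,
    following the steps above. File names are those used on this page."""
    shutil.copyfile("ML_ABNCAR", "ML_ABCAR")   # ab initio reference data
    if same_phase:
        shutil.copyfile("CONTCAR", "POSCAR")   # continue on the same phase
    # otherwise: provide a new POSCAR for the next phase by hand
    with open("INCAR") as f:
        incar = f.read()
    incar = re.sub(r"ML_FF_ISTART\s*=\s*\d", "ML_FF_ISTART = 1", incar)
    with open("INCAR", "w") as f:
        f.write(incar)
```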

3) Force field calculations without learning

Here are the steps that need to be carried out to run the force field without learning:

  • Copy ML_FFNCAR to ML_FFCAR. The ML_FFCAR file stores the learned force field.
  • Select force-field-only runs by changing ML_FF_ISTART in the INCAR file to:
ML_FF_ISTART = 2
  • Run calculation.


Major input tags

In this section we describe the most important input tags for standard machine-learning force-field calculations. Most of the tags have defaults and should only be changed by experienced users in cases where the changes are essential. All tags controlling the method are set in the INCAR file.

Enabling machine learning

To enable machine-learned force fields, the tag ML_FF_LMLFF=.TRUE. must always be set.

Type of machine learning calculation

The second most important tag is the ML_FF_ISTART tag. This tag selects how the machine learned force field is used. The following three cases are possible for this tag:

  1. ML_FF_ISTART=0: Starting from scratch the force field is learned on the fly.
  2. ML_FF_ISTART=1: Training data was already gathered from previous runs and on-the-fly learning is continued using this data.
  3. ML_FF_ISTART=2: The force field is used to carry out calculations with learning turned off.

All three of these modes are described in more detail later on this page.

Reference total energies

For accurate calculations, the reference total energies of the isolated atoms in the system should be provided using the ML_FF_EATOM tag. These are written as a list, one entry per atom species, in the order the species appear in the POSCAR and POTCAR files. A sample input using 3 species would look like this:

ML_FF_EATOM = E_1 E_2 E_3

The energies of the atoms should be obtained from previous calculations of an isolated atom in a sufficiently large box. This step is computationally cheap. It is not mandatory to use the ML_FF_EATOM tag, but it is strongly advised (see Calculation of atoms). If the tag is not specified, default values of 0.0 eV/atom are assumed. Although sufficiently accurate force fields were obtained in most tested calculations without it, we have not tested this feature enough and hence suggest calculating the energies of the isolated atoms beforehand. The ML_FF_EATOM tag is only used if ML_FF_ISCALE_TOTEN_MB=1 is set. For ML_FF_ISCALE_TOTEN_MB=2 the reference energy becomes the average of the total energy of the training data.
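
A single atom in a sufficiently large box is all the POSCAR for such a reference calculation needs. A sketch of a generator for it; the box size of 15 Å and the helper itself are our illustrative choices, and spin polarization and k-point settings must still be chosen appropriately:

```python
def isolated_atom_poscar(element, box=15.0):
    """Return a POSCAR string for a single atom of `element` centered in a
    cubic box of side `box` (in Angstrom) -- a setup for computing the
    atomic reference energy used with ML_FF_EATOM."""
    return (
        f"{element} isolated atom\n"
        "1.0\n"
        f"{box:.1f} 0.0 0.0\n"
        f"0.0 {box:.1f} 0.0\n"
        f"0.0 0.0 {box:.1f}\n"
        f"{element}\n"
        "1\n"
        "Direct\n"
        "0.5 0.5 0.5\n"
    )
```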

Important parameters controlling basis sets and training sets

  • ML_FF_MB_MB: This tag controls the maximum number of basis functions. The default value of 1000 was tested to be sufficient in many cases, but several cases, such as e.g. MgO, need higher values. If the maximum number of basis functions is reached, the calculation is stopped by default with a warning message (ML_FF_LBASIS_DISCARD=.FALSE.). If that happens, the calculation has to be restarted with a higher value for ML_FF_MB_MB. Mind: If ML_FF_MB_MB is set too high, memory problems can occur, since the necessary arrays for this tag are allocated statically at the beginning of the calculation.
  • ML_FF_MCONF: This tag sets the maximum number of configurations used for training. The behaviour of this tag is very similar to ML_FF_MB_MB and should also be monitored. By default (ML_FF_LCONF_DISCARD=.FALSE.) the calculation stops with an error message when the number of configurations reaches the value set in the input. Mind: If ML_FF_MCONF is set too high, memory problems can occur, since the necessary arrays for this tag are allocated statically at the beginning of the calculation.
  • ML_FF_MCONF_NEW: This tag sets the number of configurations that are stored temporarily as candidates for the training data. The purpose of this set is to enable blocked operations for expensive computational steps that would otherwise be carried out sequentially. This gives faster performance at the cost of a small memory overhead. The value of ML_FF_MCONF_NEW=5 was obtained purely empirically and is not carved in stone.
  • ML_FF_RCUT1_MB and ML_FF_RCUT2_MB: These tags set the cut-off radii of the radial and angular descriptors, respectively.
  • Radial basis set:
    • ML_FF_MRB1_MB, ML_FF_MRB2_MB: These tags set the number of radial basis functions used to expand the atomic distribution of the radial and angular density. These tags depend very sensitively on the cut-off radius of the descriptor (ML_FF_RCUT1_MB and ML_FF_RCUT2_MB) and the width of the Gaussian functions used in the broadening of the atomic distributions (ML_FF_SION1_MB and ML_FF_SION2_MB). The error occurring due to the expansion in radial basis functions is monitored in the ML_LOGFILE file; look for the line "Error in radial expansion: ...". A good error threshold was determined purely empirically (by us and in reference [1]); the number of basis functions should be adjusted until the error written in the ML_LOGFILE is smaller than this threshold. A more detailed description of the basis sets is given in appendix A of reference [2].
    • ML_FF_SION1_MB, ML_FF_SION2_MB: These tags set the width of the Gaussian functions used in the broadening of the atomic distributions.
  • Angular momentum quantum numbers:
    • ML_FF_LMAX2_MB: This tag specifies the maximum angular momentum quantum number of spherical harmonics used to expand atomic distributions.
    • ML_FF_LAFILT2_MB: This tag specifies whether angular momentum filtering is active or not. By activating the angular filtering (ML_FF_LAFILT2_MB=.TRUE.) and using the filtering function from reference [3], the computation can be noticeably sped up without losing too much accuracy. With angular filtering, the maximum angular momentum cut-off ML_FF_LMAX2_MB=6 can be lowered to a value of 4, again gaining computational speed. The user is still advised to check the accuracy of the angular filtering for their application.
    • ML_FF_IAFILT2_MB: This tag selects the type of angular filtering. We advise to use the default (ML_FF_IAFILT2_MB=2).
    • ML_FF_AFILT2_MB: This parameter sets the filtering parameter of the filtering function from reference [3]. The default of ML_FF_AFILT2_MB=0.002 worked well in most tested applications, but we advise the user to check this parameter for their application.
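
The "Error in radial expansion: ..." lines mentioned above can be checked with a short script. A sketch; only the marker string is taken from this page, the exact number formatting in ML_LOGFILE is an assumption (Fortran-style D exponents are handled just in case):

```python
import re

def max_radial_expansion_error(logfile="ML_LOGFILE"):
    """Scan an ML_LOGFILE for lines of the form
    'Error in radial expansion: <number>' and return the largest error seen
    (or None if no such line is found)."""
    errors = []
    pattern = re.compile(r"Error in radial expansion:\s*([0-9.eEdD+-]+)")
    with open(logfile) as f:
        for line in f:
            m = pattern.search(line)
            if m:
                # Fortran often prints exponents with D instead of E
                errors.append(float(m.group(1).replace("D", "E").replace("d", "e")))
    return max(errors) if errors else None
```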

Example input for liquid Si

SYSTEM = Si_liquid
### Electronic structure part
PREC = FAST
ALGO = FAST
SIGMA = 0.1
ISPIN = 1
ISMEAR = 0
ENCUT = 325
NELM = 100
EDIFF = 1E-4
NELMIN = 6
LREAL = A
ISYM = -1

### MD part
IBRION = 0
ISIF = 2
NSW = 30000
POTIM = 1.0

### Output part
NBLOCK = 1
NWRITE = 1
INIWAV = 1
IWAVPR = 1
ISTART = 0
LWAVE = .FALSE.
LCHARG = .FALSE.

### Machine Learning part
### Major tags for machine learning
ML_FF_LMLFF = .TRUE.
ML_FF_ISTART = 0
ML_FF_EATOM = -0.7859510000
ML_FF_MB_MB = 1000
ML_FF_MCONF = 1000
ML_FF_MCONF_NEW = 5
ML_FF_CSF = 0.02
ML_FF_CTIFOR = 0.0
ML_FF_CSIG = 2E-1
ML_FF_WTOTEN = 1.0
ML_FF_WTIFOR = 1.0
ML_FF_WTSIF =  1.0

### Descriptor related tags for machine learning
ML_FF_W1_MB = 0D-1
ML_FF_W2_MB = 10D-1
ML_FF_LNORM1_MB = .FALSE.
ML_FF_LNORM2_MB = .TRUE.
ML_FF_RCUT1_MB = 6.0
ML_FF_RCUT2_MB = 5.0
ML_FF_MRB1_MB = 6
ML_FF_MRB2_MB = 9
ML_FF_SION1_MB = 0.50
ML_FF_SION2_MB = 0.50
ML_FF_NHYP1_MB = 1
ML_FF_NHYP2_MB = 4
ML_FF_LMAX2_MB = 4
ML_FF_LAFILT2_MB = .TRUE.
ML_FF_IAFILT2_MB = 2
ML_FF_AFILT2_MB = 2D-3

### Lesser important tags for machine learning
ML_FF_MSPL1_MB = 100
ML_FF_MSPL2_MB = 100
ML_FF_NR1_MB = 100
ML_FF_NR2_MB = 100
ML_FF_NWRITE = 2
ML_FF_NDIM_SCALAPACK = 2
ML_FF_ISAMPLE = 3
ML_FF_IERR = 3
ML_FF_LMLMB = .TRUE.
ML_FF_CDOUB = 2.0
ML_FF_LCRITERIA = .TRUE.
ML_FF_CSLOPE = 1E-1
ML_FF_NMDINT = 10
ML_FF_MHIS = 10
ML_FF_IWEIGHT = 3
ML_FF_LEATOM_MB = .FALSE.
ML_FF_LHEAT_MB = .FALSE.
ML_FF_ISOAP1_MB = 1
ML_FF_ISOAP2_MB = 1
ML_FF_ICUT1_MB = 1
ML_FF_ICUT2_MB = 1
ML_FF_IBROAD1_MB = 2
ML_FF_IBROAD2_MB = 2
ML_FF_IREG_MB = 2
ML_FF_SIGV0_MB = 1.0
ML_FF_SIGW0_MB = 1.0

References