Machine learning force field calculations: Basics
In general, to perform a machine-learning force-field calculation, you need to set
ML_FF_LMLFF = .TRUE.
in the INCAR file. Then, depending on the particular calculation, you need to set additional INCAR tags. In the first few sections, we list the tags that a user typically encounters. Most of the other input tags are set to defaults and should only be changed by experienced users in cases where the changes are essential.
In the following, most tags are shown only for the angular descriptor (tags containing a 2 in their name). Almost every such tag has an analogous counterpart for the radial descriptor (containing a 1). The usage of these tags is the same for both descriptors.
Type of machine learning calculation
In this section, we describe the three modes in which machine-learning calculations can be run in VASP and show exemplary INCAR settings. A typical example showing these modes in action is the machine learning of a force field for a material with two phases A and B. Initially, no force field for the material exists, so we choose a small- to medium-sized supercell of phase A to generate a new force field from scratch. In this step, ab initio calculations are performed whenever necessary, improving the force field on this phase until it is sufficiently accurate. When applied to phase B, the force field learned on phase A might already contain useful information about the local configurations. Hence, one would perform a continuation run, and the machine will automatically collect the necessary structure datasets from phase B to refine the force field. In many cases, only a few such structure datasets are required, but this must still be verified for every case. After the force field is sufficiently trained, one can use it to describe much larger cell sizes. Hence, one can switch off learning on larger cells and use only the force field, which is then orders of magnitude faster than the ab initio calculation. If the sampled atomic environments are similar to the structure datasets used for learning, the force field is transferable for the same constituting elements, but one should still judge cautiously whether the force field can describe rare events in the larger cell.
On-the-fly force field generation from scratch
To generate a new force field, one does not need any special input files. First, one sets up a molecular dynamics calculation as usual (see Molecular Dynamics) and adds the machine-learning-related tags to the INCAR file. To start from scratch, add
ML_FF_ISTART = 0
Continuing on-the-fly learning from already existing force-fields
To continue from a previous run, copy the training data of that run to the new working directory (the ML_ABN file written by the previous run becomes the ML_AB input file of the new one) and set
ML_FF_ISTART = 1
in the INCAR file.
The continuation can cover a very different structure than before or even new elements.
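Beyond the copied training data, a continuation run only needs the start mode switched; a minimal INCAR fragment could look like this (the remaining electronic-structure and MD tags are set as for a fresh run):

```
ML_FF_LMLFF  = .TRUE.
ML_FF_ISTART = 1
```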
Force field calculations without learning
Once a sufficiently accurate force field has been generated, one can use it to predict properties. Copy the force-field file from the training run to the new working directory (the ML_FFN file written during training becomes the ML_FF input file) and, if needed, the structure files, then set
ML_FF_ISTART = 2
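For such a pure force-field run on a larger cell, a minimal INCAR fragment might read as follows (MD tags are set as usual; no further learning takes place):

```
ML_FF_LMLFF  = .TRUE.
ML_FF_ISTART = 2
```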
Reference total energies
To obtain the force field, one needs a reference total energy. For ML_FF_ISCALE_TOTEN_MB=2 this reference energy is set to the average of the total energy of the training data. This is the default setting.
Alternatively, reference atomic calculations can be performed (see Calculation of atoms). One can then specify that the atomic energies be used and provide reference energies for all atom types by setting the following variables in the INCAR file
ML_FF_ISCALE_TOTEN_MB = 1
ML_FF_EATOM = E_at1 E_at2 ...
If the tag ML_FF_EATOM is not specified, default values of 0.0 eV/atom are assumed.
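For instance, for a system with two atomic species the settings could read as follows (the reference energies below are made-up placeholder numbers for illustration; use the values obtained from your own atomic calculations):

```
ML_FF_ISCALE_TOTEN_MB = 1
ML_FF_EATOM = -0.123 -1.456
```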
Converging a MLFF calculation
If a very fine spatial resolution is required due to small interatomic distances, or due to rapid spatial variations of the potential, the Gaussian broadening of the atomic density can be lowered by setting the parameters ML_FF_SION1_MB and ML_FF_SION2_MB.
Since more basis functions are required to describe the less smoothened density, the number of radial basis functions must be increased accordingly. This number is controlled by the tags ML_FF_MRB1_MB and ML_FF_MRB2_MB.
The number of basis functions has to be increased by the same ratio as the Gaussian broadening is decreased, and vice versa. The number of basis functions also has to be changed in the same way if the cut-off radius of the descriptor (ML_FF_RCUT1_MB and ML_FF_RCUT2_MB) is changed.
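The scaling rule above can be sketched in a few lines. The helper below is ours, not part of VASP; it merely illustrates how the number of radial basis functions (ML_FF_MRB2_MB) scales inversely with the broadening (ML_FF_SION2_MB) and proportionally with the cut-off radius (ML_FF_RCUT2_MB):

```python
import math

# Illustrative helper (not a VASP routine): estimate the rescaled number of
# radial basis functions when the Gaussian broadening or the descriptor
# cut-off radius is changed.
def rescale_mrb(n_old, sigma_old, sigma_new, rcut_old, rcut_new):
    # N scales inversely with the broadening, proportionally with the cut-off.
    return math.ceil(n_old * (sigma_old / sigma_new) * (rcut_new / rcut_old))

# Halving the broadening at fixed cut-off doubles the basis-set size:
print(rescale_mrb(8, 0.5, 0.25, 5.0, 5.0))  # -> 16
```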
Weighting of energy, forces and stress
In many cases the force field can be optimized to reproduce one target observable more accurately by weighting that quantity more strongly. At the same time, of course, the other observables are reproduced less well. Empirically, in many test cases, up to a certain weight ratio the improvement of the more strongly weighted observable was much larger than the accuracy loss of the others. The optimum ratio depends on the material and on the parameters of the force field, so it has to be determined for each case separately. The weights of the energy, the forces, and the stress are set by ML_FF_WTOTEN, ML_FF_WTIFOR, and ML_FF_WTSIF, respectively. The default value is 1.0 for each. Since these tags define only the ratio of the weights, it suffices to raise the value for a single observable.
We advise increasing ML_FF_WTOTEN (to at least 10) whenever energies are important.
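For example, to weight the energies ten times more strongly than the forces and the stress, it suffices to set:

```
ML_FF_WTOTEN = 10.0
```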
Caution: number of structures and basis functions
The maximum number of structure datasets ML_FF_MCONF and of basis functions ML_FF_MB_MB constitutes a memory bottleneck of the calculation, because the required arrays are allocated statically at the beginning of the calculation. Therefore, you must not set these input variables to overly large numbers initially. If either the number of structure datasets or the size of the basis set exceeds its respective maximum, the calculation stops automatically with an error message. One should then increase the corresponding maximum and restart the calculation. For a detailed description of what influences the size of the basis, see below.
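If such an error occurs, raise the corresponding maximum in the INCAR file and restart the calculation. The numbers below are only illustrative placeholders; the appropriate values depend on the system and on the error message:

```
ML_FF_MCONF = 4000
ML_FF_MB_MB = 8000
```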
Other important input tags
In this section, we describe other important input tags for standard machine-learning force-field calculations. Typically, a user does not need to tweak the default values.
Radial basis set
- ML_FF_MRB1_MB, ML_FF_MRB2_MB
- These tags set the number of radial basis functions used to expand the atomic distribution for the radial and angular density, respectively. Their required values depend very sensitively on the cut-off radius of the descriptor (ML_FF_RCUT1_MB and ML_FF_RCUT2_MB) and on the width of the Gaussian functions used to broaden the atomic distributions (ML_FF_SION1_MB and ML_FF_SION2_MB). The error occurring due to the expansion in radial basis functions is monitored in the ML_LOGFILE file; search for the line "Error in radial expansion: ...". A reasonable error threshold was determined empirically (by us and in reference [1]); the number of basis functions should be adjusted until the error written to the ML_LOGFILE falls below it. A more detailed description of the basis sets is given in appendix A of reference [2].
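The check described above is easy to script. The following sketch is ours (the helper name and the mock log line are illustrative; the real ML_LOGFILE is written by VASP during the run):

```python
# Illustrative helper (not part of VASP): extract the radial-expansion
# errors from ML_LOGFILE text and return the most recent one.
def last_radial_error(log_text):
    errors = [float(line.split(":", 1)[1])
              for line in log_text.splitlines()
              if "Error in radial expansion" in line]
    return errors[-1] if errors else None

# Mock log content illustrating the quoted line format:
mock_log = "some output\nError in radial expansion:  0.74E-03\nmore output\n"
print(last_radial_error(mock_log))
```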
Angular momentum quantum numbers
- ML_FF_LMAX2_MB
- This tag specifies the maximum angular momentum quantum number of the spherical harmonics used to expand the atomic distributions.
- ML_FF_LAFILT2_MB
- This tag specifies whether angular momentum filtering is active. By activating the angular filtering (ML_FF_LAFILT2_MB=.TRUE.) and using the filtering function from reference [3], the computation can be noticeably sped up without losing much accuracy. With angular filtering, the maximum angular momentum cut-off can also be lowered from ML_FF_LMAX2_MB=6 to a value of 4, again gaining computational speed. The user is still advised to check the accuracy of the angular filtering for their application.
- ML_FF_IAFILT2_MB
- This tag selects the type of angular filtering. We advise using the default (ML_FF_IAFILT2_MB=2).
- ML_FF_AFILT2_MB
- This parameter sets the filtering parameter of the filtering function from reference [3]. The default of ML_FF_AFILT2_MB=0.002 worked well in most tested applications, but we advise the user to check this parameter for their application.
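Taken together, an angular-filtering setup following the advice above might read:

```
ML_FF_LAFILT2_MB = .TRUE.
ML_FF_IAFILT2_MB = 2
ML_FF_AFILT2_MB  = 0.002
ML_FF_LMAX2_MB   = 4
```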
New structure dataset block
The tag ML_FF_MCONF_NEW specifies the number of structure datasets that are stored temporarily as candidates for the training data. The purpose of this is to block operations for expensive calculations that would otherwise be executed sequentially. In this way, faster performance is obtained at the cost of a small memory overhead. The default value of ML_FF_MCONF_NEW=5 was optimized empirically, but for other systems different choices might be more performant.
Example input for liquid Si
SYSTEM = Si_liquid

### Electronic structure part
PREC = FAST
ALGO = FAST
SIGMA = 0.1
ISPIN = 1
ISMEAR = 0
ENCUT = 325
NELM = 100
EDIFF = 1E-4
NELMIN = 6
LREAL = A
ISYM = -1

### MD part
IBRION = 0
ISIF = 2
NSW = 30000
POTIM = 1.0

### Output part
LWAVE = .FALSE.
LCHARG = .FALSE.

### Machine learning part
### Major tags for machine learning
ML_FF_LMLFF = .TRUE.
ML_FF_ISTART = 0
1. W. J. Szlachta, A. P. Bartók, and G. Csányi, Phys. Rev. B 90, 104108 (2014).
2. R. Jinnouchi, F. Karsai, and G. Kresse, Phys. Rev. B 100, 014105 (2019).
3. J. P. Boyd, Chebyshev and Fourier Spectral Methods (Dover Publications, New York, 2000).