Best practices for machine-learned force fields

Using the machine-learned force field method, VASP can construct force fields based on ab-initio simulations. There are many aspects to carefully consider while constructing, testing, retraining, and applying a force field. Here, we list some best practices, but bear in mind that the list is incomplete and the method has not yet been applied to a large number of systems. Therefore, we recommend the same rigorous checking that is necessary for any research project.

Training

We strongly advise following these guidelines for on-the-fly learning:

  • It is generally possible to train a force field on a smaller unit cell and then apply it to a larger system. Be careful to choose the training structure large enough that the phonons or collective vibrations "fit" into the supercell.
  • It is important to learn the correct forces. This requires electronic calculations that are converged with respect to the electronic parameters, e.g., the number of k points, the plane-wave cutoff (ENCUT), the electronic minimization algorithm, etc.
  • Switch off the symmetry as in molecular-dynamics runs (ISYM=0).
  • If possible, heat up the system gradually, i.e., use temperature ramping from a low temperature to a temperature approximately 30% higher than the desired application temperature. The threshold for the maximum Bayesian error of the forces (ML_CTIFOR) is determined from the Bayesian errors predicted for previously seen structures. At the beginning of on-the-fly learning, the force field is very inaccurate and so is the prediction of the Bayesian errors; in many cases this leads to an overestimation of ML_CTIFOR. If this happens, no further learning takes place, because each forthcoming structure has a predicted error smaller than the threshold, even though too few training structures and local reference configurations have been collected and the force field is still inaccurate. Heating the system greatly helps to avoid these unlucky situations, because the error increases with temperature, so the threshold is easily exceeded, leading to further learning and improvement of the force field. Conversely, since the error decreases with decreasing temperature, the threshold would remain stuck at the high-temperature errors in a cooling run; we therefore advise against using cooling runs for on-the-fly learning (see the schematic sketch after this list). Finally, when using temperature ramps, always be careful to stay in a temperature region where no phase transitions are induced.
  • If possible, prefer molecular-dynamics training runs in the NpT ensemble; the additional cell fluctuations improve the robustness of the resulting force field. However, for liquids, allow only scaling of the supercell as a whole (no change of angles, no individual lattice-vector scaling), because otherwise the cell may collapse. This can be achieved with the ICONST file, see 3) and 4) here.
  • For simulations without a fixed lattice, restart often (ML_ISTART=1) to reinitialize the PAW basis of the KS orbitals and avoid Pulay stress.
  • When the system contains different components, train them separately first. For instance, if the system consists of a crystal surface with a molecule binding on that surface, first train the bulk crystal, then the surface, next the isolated molecule, and finally the entire system. This way, a significant number of ab-initio calculations can be saved for the combined system, which is in any case the computationally most expensive one.
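
The following is a minimal schematic sketch of the threshold-based selection logic described in the heating bullet above. It is not VASP source code: the function names and the averaging-type update of the threshold are illustrative assumptions. It only serves to show why increasing errors (heating) keep triggering new first-principles steps, while decreasing errors (cooling) leave the threshold stuck at its initial high-temperature value.

# Schematic illustration only (not VASP's actual implementation): on-the-fly
# learning with a threshold that is updated from previously predicted errors.
def on_the_fly_md(n_steps, predicted_bayesian_error, ml_ctifor=None):
    """predicted_bayesian_error(step) returns the estimated force error; the
    update rule below (running mean of the collected errors) is an assumption."""
    collected = []
    for step in range(n_steps):
        err = predicted_bayesian_error(step)
        if ml_ctifor is None or err > ml_ctifor:
            # Predicted error exceeds the threshold: do an ab-initio step and
            # add the structure to the training data.
            collected.append(err)
            ml_ctifor = sum(collected) / len(collected)
        # Otherwise the force field is trusted and no learning takes place.
    return len(collected)

# Heating run (errors grow with temperature): structures keep being collected.
print(on_the_fly_md(10, lambda step: 0.01 * (step + 1)))   # -> 10
# Cooling run (errors shrink): learning stalls after the first structure.
print(on_the_fly_md(10, lambda step: 0.01 * (10 - step)))  # -> 1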

Monitoring

  • Always monitor the current prediction errors, the Bayesian error estimation, and the error threshold by searching the ML_LOGFILE for ERR (4th column), BEEF (4th column), and THRUPD (4th column), respectively; a small parsing sketch is shown below.
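
A minimal sketch for extracting these columns with Python is given below; it assumes that the respective keyword is the first token on each data line and that the value of interest is in the 4th column, as stated above. Adjust the parsing if the layout differs in your VASP version.

# Extract the 4th column of ERR, BEEF, and THRUPD lines from ML_LOGFILE.
def extract_ml_logfile(logfile="ML_LOGFILE", keys=("ERR", "BEEF", "THRUPD")):
    data = {key: [] for key in keys}
    with open(logfile) as handle:
        for line in handle:
            tokens = line.split()
            if tokens and tokens[0] in keys:
                try:
                    data[tokens[0]].append(float(tokens[3]))  # 4th column
                except (IndexError, ValueError):
                    pass  # skip headers or malformed lines
    return data

if __name__ == "__main__":
    for key, values in extract_ml_logfile().items():
        if values:
            print(key, "latest value:", values[-1])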

Tuning on-the-fly parameters

In case too many or too few training structures are selected, there are some on-the-fly parameters that can be tuned:

If too few or too many local reference configurations are extracted from the training data, there are additional tags for tuning:

Testing

  • Set up an independent test set of random configurations. Then, check the average errors by comparing the forces, stresses, and energy differences between structures obtained from DFT with those predicted by the machine-learned force field (a minimal sketch for this comparison follows after this list).
  • If you have both ab-initio reference data and a calculation using the force field, check the agreement of some physical properties. For instance, you might check the relaxed lattice parameters, phonons, relative energies of different phases, the elastic constants, defect formation energies, etc.
  • Plot blocked averages of the sampled observables to estimate their statistical errors (see the second sketch after this list).
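
As an illustration of the error check in the first bullet, the following minimal sketch computes root-mean-square errors of the forces and of energy differences; the array shapes and the removal of the mean energy are assumptions, and collecting the DFT and force-field data for the test set is left to your own workflow.

import numpy as np

def force_rmse(forces_dft, forces_mlff):
    """RMSE of the force components; inputs of shape (n_structures, n_atoms, 3)."""
    diff = np.asarray(forces_dft) - np.asarray(forces_mlff)
    return np.sqrt(np.mean(diff ** 2))

def energy_difference_rmse(energies_dft, energies_mlff):
    """RMSE of energy differences between structures; subtracting the mean of
    each data set removes the constant offset between DFT and the force field."""
    e_dft = np.asarray(energies_dft) - np.mean(energies_dft)
    e_ml = np.asarray(energies_mlff) - np.mean(energies_mlff)
    return np.sqrt(np.mean((e_dft - e_ml) ** 2))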
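
For the blocked averages mentioned in the last bullet, a minimal sketch could look as follows; the observable is assumed to be a plain 1D time series (for example, the temperature or pressure along the trajectory), and the estimated standard error should level off once the block length exceeds the correlation time.

import numpy as np

def blocked_standard_error(series, block_size):
    """Standard error of the mean estimated from non-overlapping blocks."""
    series = np.asarray(series, dtype=float)
    n_blocks = len(series) // block_size
    blocks = series[: n_blocks * block_size].reshape(n_blocks, block_size)
    return blocks.mean(axis=1).std(ddof=1) / np.sqrt(n_blocks)

if __name__ == "__main__":
    data = np.random.randn(10000)  # replace with your sampled observable
    for size in (10, 50, 100, 500, 1000):
        print(size, blocked_standard_error(data, size))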

Retraining

Application

  • Set the ab-initio parameters to small values (e.g., a low ENCUT and a single k point). VASP cannot circumvent the initialization of the KS orbitals, even though they are not used during an MD run with the machine-learned force field.

Related articles

Machine-learned force fields, Ionic minimization, Molecular dynamics