
# Machine learning force field: Theory


Here we present the theory for on-the-fly machine learning force fields. The theory is presented in a very condensed manner; for a more detailed description of the methods we refer the reader to Refs. {{cite|jinnouchi:prl:2019}}, {{cite|jinnouchi2:arx:2019}} and {{cite|jinnouchi:jcm:20}}.

== Introduction ==

Molecular dynamics is one of the most important methods for the determination of dynamic properties. The quality of a molecular dynamics simulation depends very strongly on the accuracy of the underlying method, but higher accuracy usually comes at the cost of increased computational demand. A very accurate method is ab initio molecular dynamics, where the interactions of atoms and electrons are calculated fully quantum mechanically from ab initio calculations (such as DFT). Unfortunately, this method is very often limited to small simulation cells and short simulation times. One way to speed up these calculations enormously is to use force fields, which are parametrizations of the potential energy. These parametrizations range from very simple functions with only empirical parameters to very complex functions parametrized using thousands of ab initio calculations. Constructing accurate force fields manually is usually a very time-consuming task that requires tremendous expertise and know-how.

Another way to greatly reduce the computational cost and the required human intervention is machine learning. Here, in the prediction of the target property, the method automatically interpolates between known training systems that were previously calculated ab initio. This already simplifies the generation of force fields significantly compared to classical force fields, which need manual (or even empirical) adjustment of their parameters. Nevertheless, there remains the problem of how to choose the proper (minimal) training data. One very efficient and automatic way to solve this is to adopt on-the-fly learning, where an MD calculation is used for the learning. During the MD run, ab initio data is picked out and added to the training data, and a force field is continuously built up from the existing data. At each step it is judged whether to perform an ab initio calculation and possibly add the data to the force field, or to use the force field for that step and skip learning. Hence the name "on-the-fly" learning. The crucial point here is the probability model for the estimation of errors: this model, not only the force field, is built up from newly sampled data. Hence, the more accurate the force field gets, the less sampling is needed and the more expensive ab initio steps are skipped. This way, not only can the convergence of the force field be controlled, but also a very widespread scan through phase space for the training structures can be performed. The crucial requirement for on-the-fly machine learning, which will be explained together with the rest of the methodology in the following subsections, is the ability to predict the errors of the force field on a newly sampled structure without having to perform an ab initio calculation on that structure.

== Algorithm ==

[[File:MLFF main algorithm.png|500px|thumb|Fig. 1: On-the-fly machine learning force field generation scheme.]]

To obtain the machine-learning force field, several '''structure datasets''' are required. A structure dataset defines the Bravais lattice and the atomic positions of the system and contains the total energy, the forces and the stress tensor calculated from first principles. Given these structure datasets, the machine identifies '''local configurations''' around an atom to learn what force field is appropriate. A local configuration measures the radial and angular distribution of neighboring atoms around a given site and is captured in so-called [[#Descriptors|'''descriptors''']].

The on-the-fly force field generation scheme is given by the following steps (a flowchart of the algorithm is shown in Fig. 1):

#The machine predicts the energy, the forces and the stress tensor and their uncertainties for a given structure using the existing force field.
#The machine decides whether to perform a first-principles calculation (proceed with step 3); otherwise it skips to step 5.
#Performing the first-principles calculation, we obtain a new structure dataset which may improve the force field.
#If the number of newly collected structures reaches a certain threshold, or if the predicted uncertainty becomes too large, the machine learns an improved force field by including the new structure datasets and local configurations.
#Atomic positions and velocities are updated by using the force field (if accurate enough) or the first-principles calculation.
#If the desired total number of ionic steps ({{TAG|NSW}}) is reached we are done; otherwise we proceed with step 1.
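The decision loop above can be sketched in plain Python. This is a toy illustration only: the "configuration" is a 1D random walk, <code>run_dft</code> is a hypothetical stand-in for a first-principles call, and the uncertainty is crudely estimated as the distance to the nearest training point (the actual criterion is the Bayesian error estimation described in a later section).

```python
import random

def run_dft(x):
    # Hypothetical stand-in for a first-principles calculation:
    # returns the "true" energy of configuration x (a toy function here).
    return (x - 1.0) ** 2

def on_the_fly_md(steps=200, threshold=0.3, seed=0):
    rng = random.Random(seed)
    train = {}          # training set: configuration -> ab initio energy
    dft_calls = 0
    x = 0.0             # toy 1D "configuration"
    for _ in range(steps):
        # Step 1: predict with the current "force field" and estimate the
        # uncertainty as the distance to the nearest training point.
        if train:
            nearest = min(train, key=lambda t: abs(t - x))
            uncertainty = abs(nearest - x)
        else:
            uncertainty = float("inf")
        # Steps 2-4: fall back to first principles when the error estimate
        # is too large, and add the new structure dataset to the training set.
        if uncertainty > threshold:
            train[x] = run_dft(x)
            dft_calls += 1
        # Step 5: propagate the toy configuration (a random walk here).
        x += rng.uniform(-0.1, 0.1)
    return dft_calls, len(train)

calls, n_train = on_the_fly_md()
```

The point of the sketch is the bookkeeping: as the training set densifies the region of phase space being visited, the fraction of expensive "ab initio" steps drops.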

== Local energies ==

The potential energy <math>U</math> of a structure with <math>N_{a}</math> atoms is approximated as

<math>
U = \sum\limits_{i=1}^{N_{\mathrm{a}}} U_{i}.
</math>

The local energies <math>U_{i}</math> are functionals of the probability density <math>\rho_{i}\left(\mathbf{r}\right)</math> to find another atom at the position <math>\mathbf{r}</math> around the atom <math>i</math> within a cut-off radius <math>R_{\mathrm{cut}}</math>, defined as

<math>
\rho_{i}\left(\mathbf{r}\right) = \sum\limits_{j \ne i}^{N_{\mathrm{a}}} f_{\mathrm{cut}}\left(\left|\mathbf{r}_{ij}\right|\right) g\left(\mathbf{r}-\mathbf{r}_{ij}\right).
</math>

Here <math> f_{\mathrm{cut}}</math> is a cut-off function acting outside of <math>R_{\mathrm{cut}}</math> and <math>g\left(\mathbf{r}-\mathbf{r}_{ij}\right)</math> is a delta function.

The atom distribution can also be written as a sum of individual distributions

<math>
\rho_{i}\left(\mathbf{r}\right) = \sum\limits_{j \ne i}^{N_{\mathrm{a}}} \rho_{ij}\left(\mathbf{r}\right).
</math>

Here <math> \rho_{ij}\left(\mathbf{r}\right) </math> describes the probability of finding an atom <math>j</math> at position <math>\mathbf{r}</math> with respect to atom <math>i</math>.

== Descriptors ==

Similar to the Smooth Overlap of Atomic Positions{{cite|bartok:prb:2013}} (SOAP) method, the delta function <math>g(\mathbf{r})</math> is approximated as

<math>
g\left(\mathbf{r}\right) = \left(\frac{1}{\sqrt{2\pi}\sigma_{\mathrm{atom}}}\right)^{3} \mathrm{exp}\left(-\frac{\left|\mathbf{r}\right|^{2}}{2\sigma_{\mathrm{atom}}^{2}}\right),
</math>

where <math>\sigma_{\mathrm{atom}}</math> is the width of the Gaussian broadening. Unfortunately, <math>\rho_{i}\left(\mathbf{r}\right)</math> is not rotationally invariant. To deal with this problem, intermediate functions or ''descriptors'' depending on <math>\rho_{i}</math> and possessing rotational invariance are introduced:

=== Radial descriptor ===

This is the simplest descriptor, which relies on the radial distribution function

<math>
\rho_{i}^{(2)}\left(r\right) = \frac{1}{4\pi}\int \rho_{i}\left(r\hat{\mathbf{r}}\right) d\hat{\mathbf{r}},
</math>

where <math>\hat{\mathbf{r}}</math> denotes the unit vector of the vector <math>\mathbf{r}</math> between atoms <math>i</math> and <math>j</math> [see Fig. 2 (a)]. The radial descriptor can also be regarded as a two-body descriptor.

=== Angular descriptor ===

In most cases the radial descriptor is not enough to distinguish different probability densities <math>\rho_{i}</math>, since two different <math>\rho_{i}</math> can yield the same <math>\rho_{i}^{(2)}</math>. To improve on this, angular information between two radial descriptors is also incorporated within an angular descriptor

<math>
\rho_{i}^{(3)}\left(r,s,\theta\right) = \iint d\hat{\mathbf{r}} d\hat{\mathbf{s}} \delta\left(\hat{\mathbf{r}}\cdot\hat{\mathbf{s}} - \mathrm{cos}\theta\right) \sum\limits_{j=1}^{N_{a}} \sum\limits_{k \ne j}^{N_{a}} \rho_{ik} \left(r\hat{\mathbf{r}}\right) \rho_{ij} \left(s\hat{\mathbf{s}}\right),
</math>

where <math>\theta</math> denotes the angle between the two vectors <math>\mathbf{r}_{ij}</math> and <math>\mathbf{r}_{ik}</math> [see Fig. 2 (b)]. The important difference of <math>\rho_{i}^{(3)}</math> compared to the angular distribution function (also called the power spectrum within the Gaussian Approximation Potential) used in Ref. {{cite|bartok:prl:2010}} is that no self-interaction is included, i.e. no terms where <math>j</math> and <math>k</math> have the same distance from <math>i</math> and the angle between the two is zero. It can be shown{{cite|jinnouchi:jcm:20}} that the self-interaction components are equivalent to the two-body descriptor. This means that our angular descriptor is a pure angular descriptor: it contains no two-body components and cannot be expressed as a linear combination of the power spectrum. The advantage of this descriptor is that it enables us to separately control the effects of the two- and three-body descriptors.
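As a discrete illustration of the exclusion of self-interaction pairs (a hypothetical sketch, not the smeared three-body implementation used in VASP), the <math>(r,s,\theta)</math> information for one atom can be collected by looping over distinct neighbor pairs:

```python
import math

def angular_triplets(neighbors):
    """Collect (r, s, theta) triplets for one central atom.

    neighbors: list of (x, y, z) vectors pointing from the central atom to
    its neighbors. Pairs with j == k, which would contribute the pure
    two-body self-interaction signal, are skipped explicitly.
    """
    triplets = []
    for j, rj in enumerate(neighbors):
        for k, rk in enumerate(neighbors):
            if j == k:
                continue  # exclude the self-interaction term
            r = math.sqrt(sum(c * c for c in rj))
            s = math.sqrt(sum(c * c for c in rk))
            cos_t = sum(a * b for a, b in zip(rj, rk)) / (r * s)
            # clamp to guard against rounding slightly outside [-1, 1]
            triplets.append((r, s, math.acos(max(-1.0, min(1.0, cos_t)))))
    return triplets
```

For two orthogonal unit-length neighbors this yields two triplets with <math>\theta=\pi/2</math>; no <math>\theta=0</math> self-pair appears.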

=== Basis set expansion ===

The atomic probability density can also be expanded in terms of basis functions

<math>
\rho_{i}\left(\mathbf{r}\right) = \sum\limits_{l=1}^{L_{\mathrm{max}}} \sum\limits_{m=-l}^{l} \sum\limits_{n=1}^{N^{l}_{\mathrm{R}}} c_{nlm}^{i}\chi_{nl}\left(r\right) Y_{lm}\left(\hat{\mathbf{r}}\right),
</math>

where <math>c_{nlm}^{i}</math>, <math>\chi_{nl}</math> and <math>Y_{lm}</math> denote expansion coefficients, radial basis functions and spherical harmonics, respectively. The indices <math>n</math>, <math>l</math> and <math>m</math> denote the radial number and the angular and magnetic quantum numbers, respectively.

By using the above equation, the radial and angular descriptors can be written as

<math>
\rho_{i}^{(2)}\left(r\right) = \frac{1}{\sqrt{4\pi}} \sum\limits_{n=1}^{N^{0}_{\mathrm{R}}} c_{n00}^{i}\chi_{n0}\left(r\right)
</math>

and

<math>
\rho_{i}^{(3)}\left(r,s,\theta\right) = \sum\limits_{l=1}^{L_{\mathrm{max}}} \sum\limits_{n=1}^{N^{l}_{\mathrm{R}}}\sum\limits_{\nu=1}^{N^{l}_{\mathrm{R}}} \sqrt{\frac{2l+1}{2}} p_{n\nu l}^{i}\chi_{nl}\left(r\right)\chi_{\nu l}\left(s\right)P_{l}\left(\mathrm{cos}\theta\right),
</math>

where <math>\chi_{\nu l}</math> and <math>P_{l}</math> represent normalized spherical Bessel functions and Legendre polynomials of order <math>l</math>.

The expansion coefficients for the angular descriptor are obtained as

<math>
p_{n\nu l}^{i}=\sqrt{\frac{8\pi^{2}}{2l+1}} \sum\limits_{m=-l}^{l} \left[ c_{nlm}^{i} c_{\nu lm}^{i*} - c_{nlm}^{ij} c_{\nu lm}^{ij*}\right],
</math>

where <math>c_{nlm}^{ij}</math> denotes the expansion coefficients of the distribution <math>\rho_{ij}(\mathbf{r})</math> with respect to <math>\chi_{nl}</math> and <math>Y_{lm}</math>,

<math>
\rho_{ij}(\mathbf{r}) = \sum\limits_{l=1}^{L_{\mathrm{max}}} \sum\limits_{m=-l}^{l} \sum\limits_{n=1}^{N^{l}_{ \mathrm{R}}} c_{nlm}^{ij}\chi_{nl} \left( r \right) Y_{lm} \left( \hat{\mathbf{r}} \right),
</math>

and

<math>
c_{nlm}^{i} = \sum\limits_{j}^{N_{a}} c_{nlm}^{ij}.
</math>
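The contraction over <math>m</math> can be transcribed directly. In this sketch the self-interaction term is assumed to be summed over all neighbors <math>j</math>, and the nested-dictionary storage of the coefficients is chosen purely for illustration:

```python
import math

def power_spectrum(c_i, c_ij_list, n, nu, l):
    """p_{n nu l} for one atom, following the m-contraction above.

    c_i:       dict (n, l, m) -> complex, summed coefficients c^i_{nlm}
    c_ij_list: list of dicts (n, l, m) -> complex, one per neighbor j
    """
    total = 0.0 + 0.0j
    for m in range(-l, l + 1):
        # total-density term c^i_{nlm} c^{i*}_{nu l m}
        total += c_i.get((n, l, m), 0j) * c_i.get((nu, l, m), 0j).conjugate()
        # subtract the same-neighbor (self-interaction) contributions
        for c_ij in c_ij_list:
            total -= c_ij.get((n, l, m), 0j) * c_ij.get((nu, l, m), 0j).conjugate()
    # the m-sum is real for coefficients of a real density; keep the real part
    return math.sqrt(8 * math.pi ** 2 / (2 * l + 1)) * total.real
```

With <math>c^{i}=\sum_{j}c^{ij}</math>, only the cross terms between different neighbors survive, which is exactly the "pure angular" property discussed above.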

In many cases <math>\chi_{nl}</math> is multiplied with an angular filtering function{{cite|boyd:book:2000}} <math>\eta</math> ({{TAG|ML_IAFILT2}}), which can noticeably reduce the necessary basis-set size without losing accuracy in the calculations

<math>
\eta_{l,a_{\mathrm{FILT}}}=\frac{1}{1+a_{\mathrm{FILT}} [l (l+1)]^{2}},
</math>

where <math>a_{\mathrm{FILT}}</math> ({{TAG|ML_AFILT2}}) is a parameter controlling the extent of the filtering. A larger value of the parameter provides stronger filtering.

== Potential energy fitting ==

Each local energy <math>U_{i}</math> is approximated as a weighted sum of kernel functions measuring the similarity between the local configuration <math>\mathbf{X}_{i}</math> and the local reference configurations <math>\mathbf{X}_{i_{\mathrm{B}}}</math> <math>(i_{\mathrm{B}}=1,...,N_{\mathrm{B}})</math> contained in the training data,

<math>
U_{i} = \sum\limits_{i_{\mathrm{B}}=1}^{N_{\mathrm{B}}} w_{i_{\mathrm{B}}} K\left(\mathbf{X}_{i},\mathbf{X}_{i_{\mathrm{B}}}\right),
</math>

where <math>w_{i_{\mathrm{B}}}</math> are the fitting coefficients. The kernel is defined as a polynomial of the inner products of the descriptors

<math>
K \left(\mathbf{X}_{i},\mathbf{X}_{i_\mathrm{B}}\right) = \left[ \beta \mathbf{X}_{i}^{(2)} \cdot \mathbf{X}_{i_{\mathrm{B}}}^{(2)} + (1-\beta) \mathbf{\hat{X}}_{i}^{(3)} \cdot \mathbf{\hat{X}}_{i_{\mathrm{B}}}^{(3)} \right]^{\zeta}.
</math>

Here the vectors <math>\mathbf{X}_{i}^{(2)}</math> and <math>\mathbf{\hat{X}}_{i}^{(3)}</math> contain all coefficients <math>c_{n00}^{i}</math> and <math>p_{n\nu l}^{i}</math>, respectively. The hat in <math>\mathbf{\hat{X}}_{i}^{(3)}</math> indicates that it is the normalized vector of <math>\mathbf{X}_{i}^{(3)}</math>. The parameter <math>\beta</math> ({{TAG|ML_W1}}) controls the relative weighting of the radial and angular terms, and the parameter <math>\zeta</math> ({{TAG|ML_NHYP}}) controls the sharpness of the kernel and the order of the many-body interactions.
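The kernel itself is a one-liner; the following sketch with plain lists makes the normalization of the angular part and the roles of <math>\beta</math> ({{TAG|ML_W1}}) and <math>\zeta</math> ({{TAG|ML_NHYP}}) explicit (a minimal illustration, not the production code):

```python
import math

def kernel(x2_i, x3_i, x2_b, x3_b, beta=0.1, zeta=4):
    """Polynomial similarity kernel between two local configurations.

    x2_*: radial descriptor vectors (coefficients c^i_{n00})
    x3_*: angular descriptor vectors (coefficients p^i_{n nu l})
    beta weights radial vs. angular parts; zeta controls the sharpness
    of the kernel and the order of the many-body interactions.
    """
    dot = lambda a, b: sum(u * v for u, v in zip(a, b))
    norm = lambda a: math.sqrt(dot(a, a))
    # only the angular part X^(3) is normalized, as indicated by the hat
    ang = dot(x3_i, x3_b) / (norm(x3_i) * norm(x3_b))
    return (beta * dot(x2_i, x2_b) + (1 - beta) * ang) ** zeta
```

For identical configurations with a unit-norm radial part the bracket equals 1, so the kernel is 1 regardless of <math>\zeta</math>; for less similar configurations, raising <math>\zeta</math> sharpens the falloff.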

=== Matrix vector form of linear equations ===

Collecting the equations for the energies, forces and stress tensors of all training structures, the fit can be written as a system of linear equations

<math>
\mathbf{Y}=\mathbf{\Phi}\mathbf{w},
</math>

where <math>\mathbf{Y}</math> is a super vector consisting of the sub vectors <math> \{\mathbf{y}^{\alpha}|\alpha=1,...,N_{\mathrm{st}}\}</math>. Each <math>\mathbf{y}^{\alpha}</math> contains the first-principles energy per atom, the forces and the stress tensor of structure <math>\alpha</math>, and <math> N_\mathrm{st}</math> denotes the total number of structures. The size of <math>\mathbf{Y}</math> is <math> N_{\mathrm{st}} \times (1+3N^{\alpha}_{a}+6)</math>.

The matrix <math>\mathbf{\Phi}</math> is also called the ''design matrix''{{cite|bishop:book:2006}}. Its rows are blocked for each structure <math>\alpha</math>: the first line of each block consists of the kernels used to calculate the energy, the subsequent <math>3 N^{\alpha}_{a}</math> lines consist of the derivatives of the kernels with respect to the atomic coordinates used to calculate the forces, and the final 6 lines of each block consist of the derivatives of the kernels with respect to the unit cell coordinates used to calculate the stress tensor components. The overall size of <math>\mathbf{\Phi}</math> is <math>N_{\mathrm{st}} \times (1+3 N^{\alpha}_{a}+6)\times N_\mathrm{B}</math>.

== Bayesian error estimation ==

Ultimately, for the error prediction we want the probability of observing a new structure <math>\mathbf{y}</math> on the basis of the training set <math>\mathbf{Y}</math>, denoted as <math>p \left( \mathbf{y} | \mathbf{Y} \right)</math>. To obtain it, we need to get from the distribution of the linear fitting coefficients <math>\mathbf{w}</math> given the training data, <math>p\left( \mathbf{w} | \mathbf{Y} \right)</math>, to <math>p \left( \mathbf{y} | \mathbf{Y} \right)</math>, as explained in the following.

Applying Bayes' theorem, the distribution of the fitting coefficients given the training data is written as

<math>
p\left(\mathbf{w}|\mathbf{Y}\right) = \frac{p\left(\mathbf{Y}|\mathbf{w}\right)p\left(\mathbf{w}\right)}{p\left(\mathbf{Y}\right)},
</math>

where the likelihood function and the prior distribution are modeled as Gaussians

<math>
p\left(\mathbf{Y}|\mathbf{w}\right) \propto \mathrm{exp}\left[-\frac{1}{2\sigma_{\mathrm{v}}^{2}}||\mathbf{\Phi}\mathbf{w}-\mathbf{Y}||^{2}\right], \qquad p\left(\mathbf{w}\right) \propto \mathrm{exp}\left[-\frac{1}{2\sigma_{\mathrm{w}}^{2}}||\mathbf{w}||^{2}\right].
</math>

The parameters <math>\sigma_{\mathrm{w}}^{2}</math> and <math>\sigma_{\mathrm{v}}^{2}</math> need to be optimized to balance the accuracy and robustness of the force field (see below). The normalization is obtained by

<math>
p\left(\mathbf{Y}\right) = \int p\left(\mathbf{Y}|\mathbf{w}\right)p\left(\mathbf{w}\right) d\mathbf{w}.
</math>

The resulting posterior distribution of the fitting coefficients is a Gaussian centered at the mean

<math>
\mathbf{\bar{w}}=\frac{1}{\sigma_{\mathrm{v}}^{2}}\mathbf{\Sigma}\mathbf{\Phi}^{\mathrm{T}}\mathbf{Y}
</math>

with the covariance matrix

<math>
\mathbf{\Sigma}=\left(\frac{1}{\sigma_{\mathrm{v}}^{2}}\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}+\frac{1}{\sigma_{\mathrm{w}}^{2}}\mathbf{I}\right)^{-1}.
</math>

By using the relation

<math>
p\left(\mathbf{y}|\mathbf{Y}\right) = \int p\left(\mathbf{y}|\mathbf{w}\right)p\left(\mathbf{w}|\mathbf{Y}\right) d\mathbf{w},
</math>

the distribution for the new data <math>\mathbf{y}</math> is obtained as a Gaussian centered at <math>\mathbf{\phi}\mathbf{\bar w}</math> with the covariance matrix

<math>
\mathbf{\sigma}=\sigma_{\mathrm{v}}^{2}\mathbf{I}+\mathbf{\phi}^{\mathrm{T}}\mathbf{\Sigma}\mathbf{\phi}.
</math>

Here <math>\mathbf{\phi}</math> is the analogue of <math>\mathbf{\Phi}</math>, but built only from the new structure <math>\mathbf{y}</math>.

The mean vector <math>\mathbf{\phi}\mathbf{\bar w}</math> contains the results of the predictions on the dimensionless energy per atom, forces and stress tensor. The diagonal elements of <math>\mathbf{\sigma}</math>, which correspond to the variances of the predicted results, are used as the uncertainty of the prediction.
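For a single fitting coefficient the posterior mean, the covariance and the predictive variance reduce to scalars, which makes the structure of the expressions easy to check. The following is a toy sketch under that one-basis-function assumption, not the production solver:

```python
def bayesian_1d(phi_train, y, phi_new, sigma_v2=0.01, sigma_w2=1.0):
    """Scalar Bayesian linear regression y ~ w * phi.

    phi_train, y: training design values and observations
    phi_new:      design value of the new point
    Returns the predictive mean and variance (the error estimate).
    """
    # posterior covariance: Sigma = (Phi^T Phi / s_v^2 + 1 / s_w^2)^-1
    sigma = 1.0 / (sum(p * p for p in phi_train) / sigma_v2 + 1.0 / sigma_w2)
    # posterior mean: w_bar = Sigma * Phi^T Y / s_v^2
    w_bar = sigma * sum(p * t for p, t in zip(phi_train, y)) / sigma_v2
    mean = phi_new * w_bar
    # predictive variance: sigma_v^2 + phi Sigma phi
    var = sigma_v2 + phi_new * sigma * phi_new
    return mean, var
```

The behavior matches the on-the-fly scheme's needs: with more (consistent) training data the predictive variance shrinks toward the noise floor <math>\sigma_{\mathrm{v}}^{2}</math>, and it is available without any new reference calculation.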

Finally, to get the best results and to prevent overfitting, the parameters <math>\sigma_{\mathrm{v}}^{2}</math> and <math>\sigma_{\mathrm{w}}^{2}</math> have to be optimized. To achieve this, we use the evidence approximation{{cite|gull:book:1989}}{{cite|mackay:neu:2012}}{{cite|jinnouchi:pcl:2017}} (also called empirical Bayes, type 2 maximum likelihood or generalized maximum likelihood), which maximizes the evidence function (also called model evidence) defined as

<math>
p\left(\mathbf{Y}|\sigma_{\mathrm{v}}^{2},\sigma_{\mathrm{w}}^{2}\right) =\left(\frac{1}{\sqrt{2\pi\sigma_{\mathrm{v}}^{2}}}\right)^{M} \left(\frac{1}{\sqrt{2\pi\sigma_{\mathrm{w}}^{2}}}\right)^{N_{\mathrm{B}}} \int \mathrm{exp}\left[-E\left( \mathbf{w} \right) \right] d\mathbf{w},
</math>

<math>
E\left( \mathbf{w} \right) = \frac{1}{2\sigma_{\mathrm{v}}^{2}} || \mathbf{\Phi}\mathbf{w}-\mathbf{Y}||^{2} + \frac{1}{ 2\sigma_{\mathrm{w}}^{2}}||\mathbf{w}||^{2}.
</math>

The function <math>p\left(\mathbf{Y}|\sigma_{\mathrm{v}}^{2},\sigma_{\mathrm{w}}^{2}\right)</math> corresponds to the probability that the regression model with parameters <math>\sigma_{\mathrm{v}}^{2}</math> and <math>\sigma_{\mathrm{w}}^{2}</math> provides the reference data <math>\mathbf{Y}</math>. Hence the best fit is obtained by maximizing this probability. The maximization is carried out by simultaneously solving the following equations

<math>
\sigma^{2}_{\mathrm{w}}=\frac{|\mathbf{\bar{w}}|^{2}}{\gamma},
</math>

<math>
\sigma^{2}_{\mathrm{v}}=\frac{|\mathbf{Y}-\mathbf{\Phi}\mathbf{\bar{w}}|^{2}}{M-\gamma},
</math>

<math>
\gamma=\sum\limits_{k=1}^{N_{\mathrm{B}}} \frac{\lambda_{k}}{\lambda_{k}+1/\sigma^{2}_{\mathrm{w}}},
</math>

where <math>\lambda_{k}</math> are the eigenvalues of the matrix <math>\mathbf{\Phi}^{\mathrm{T}}\mathbf{\Phi}/\sigma^{2}_{\mathrm{v}}</math> and <math>M</math> is the dimension of <math>\mathbf{Y}</math>.
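The equation for <math>\gamma</math> is easy to transcribe, and doing so illustrates its interpretation as the effective number of well-determined fit parameters (a sketch; the eigenvalues would come from a diagonalization not shown here):

```python
def effective_parameters(eigenvalues, sigma_w2):
    """gamma = sum_k lambda_k / (lambda_k + 1 / sigma_w^2).

    eigenvalues: eigenvalues lambda_k of Phi^T Phi / sigma_v^2.
    Each term approaches 1 when the data strongly determines that
    direction (large lambda_k) and 0 when the prior dominates, so
    gamma counts the well-determined parameters, 0 <= gamma <= N_B.
    """
    return sum(lam / (lam + 1.0 / sigma_w2) for lam in eigenvalues)
```

In the coupled update equations above, <math>\gamma</math> then splits the data between estimating <math>\sigma_{\mathrm{w}}^{2}</math> (per determined parameter) and <math>\sigma_{\mathrm{v}}^{2}</math> (per remaining degree of freedom).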

== Sparsification ==

Within the machine-learning force-field methods, sparsification of the local reference configurations and of the angular descriptors is supported. The sparsification of local reference configurations is used by default; its extent is mainly controlled by {{TAG|ML_EPS_LOW}}. This procedure is important to avoid overcompleteness and to damp the acquisition of new configurations in special cases. The sparsification of angular descriptors is not used by default and should be applied cautiously and only if necessary. The usage of this feature is described in {{TAG|ML_LSPARSDES}}.

=== Sparsification of local reference configurations ===

We start by defining the similarity kernel between the local configurations

<math>
\mathbf{K}_{i_{\mathrm{B}},j_{\mathrm{B}}}=K \left(\mathbf{X}_{i_\mathrm{B}},\mathbf{X}_{j_\mathrm{B}}\right).
</math>

The CUR algorithm starts from the diagonalization of this matrix

<math>
\mathbf{U}^{T}\mathbf{K}_{i_{\mathrm{B}},j_{\mathrm{B}}} \mathbf{U}= \mathbf{L} = \textrm{diag}(l_{1},l_{2},...,l_{N_{\mathrm{B}}}),
</math>

where <math>\mathbf{U}</math> is the eigenvector matrix defined as

<math>
\mathbf{U} = (\mathbf{u}_{1},\mathbf{u}_{2},...,\mathbf{u}_{N_{\mathrm{B}}}),
</math>

<math>
\mathbf{u}_{j}^{T} = (u_{1j},u_{2j},...,u_{N_{\mathrm{B}}j}).
</math>

+ | |||

+ | Using these notations the <math>j</math>th column vector of the matrix <math>\mathbf{K}_{i_{B},j_{B}}</math> is expressed as | ||

+ | |||

+ | <math> | ||

+ | \mathbf{k}_{j}=\sum\limits_{\chi=1}^{N_{B}} (u_{j\chi}l_{\chi}) \mathbf{u}_{\chi}. | ||

In contrast to the original CUR algorithm{{cite|mahoney:pnas:2009}}, which was developed to efficiently select a few significant columns from <math>\mathbf{K}_{i_{B},j_{B}}</math>, our algorithm looks for the (few) insignificant local configurations and discards them. We dispose of the columns of <math>\mathbf{K}_{i_{B},j_{B}}</math> that are correlated with the <math>N_{\mathrm{low}}</math> eigenvectors <math>\mathbf{u}_{\chi}</math> possessing the smallest eigenvalues <math>l_{\chi}</math>. The correlation is measured by the statistical leverage scoring of each column <math>j</math> of <math>\mathbf{K}_{i_{B},j_{B}}</math>

<math>
\omega_{j}= \frac{1}{N_{\mathrm{low}}} \sum\limits_{\chi=1}^{N_{B}} \gamma_{\chi,j},
</math>

<math>
\gamma_{\chi,j} = \left\{ \begin{array}{cl} u_{j\chi}^{2} & \mathrm{if} \,\, l_{\chi} > \epsilon_{\mathrm{low}} \\ 0 & \mathrm{otherwise} \end{array} \right. ,
</math>

where <math>\epsilon_{\mathrm{low}}</math> (see also {{TAG|ML_EPS_LOW}}) is a threshold for the lowest eigenvalues.
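Given the eigendecomposition, the leverage scoring is a short loop. This sketch transcribes the formulas above literally (<code>U[j][chi]</code> holds the component <math>u_{j\chi}</math> and <code>eigvals[chi]</math> the eigenvalue <math>l_{\chi}</math>); the division-by-zero guard is an addition of the sketch, not part of the text:

```python
def leverage_scores(U, eigvals, eps_low):
    """omega_j = (1 / N_low) * sum_chi gamma_{chi, j}.

    gamma_{chi, j} = u_{j chi}^2 contributes only when l_chi > eps_low,
    exactly as in the piecewise definition above; N_low counts the
    eigenvalues at or below the threshold.
    """
    n_low = max(1, sum(1 for l in eigvals if l <= eps_low))  # guard /0
    scores = []
    for row in U:
        s = sum(row[chi] ** 2 for chi, l in enumerate(eigvals) if l > eps_low)
        scores.append(s / n_low)
    return scores
```
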

=== Sparsification of angular descriptors ===

The sparsification of the angular descriptors is done in a similar manner as for the local reference configurations. We start by defining the <math> N_{D} \times N_{D}</math> square matrix

<math>
\mathbf{A}=\mathbf{X}\mathbf{X}^{T}.
</math>

Here <math>N_{D}</math> denotes the number of angular descriptors, given by

<math>
N_{D} = \sum\limits_{l=1}^{L_{\mathrm{max}}} \frac{1}{2} N_{R}^{l} \left( N_{R}^{l} + 1 \right) \left( L_{\mathrm{max}} + 1 \right).
</math>

In this equation the symmetry of the descriptors <math> p_{n \nu l}= p_{\nu n l}</math> is already taken into account. The matrix <math> \mathbf{X} </math> is constructed from the vectors of the angular descriptors <math>\left(\mathbf{x}_{1}^{(3)},\mathbf{x}_{2}^{(3)},...,\mathbf{x}_{N_{B}}^{(3)} \right)</math>. It has dimension <math>N_{D} \times N_{B}</math>, and the matrix product runs over the <math>N_{B}</math> local configurations.

In analogy to the local configurations, the <math>j</math>th column vector of the matrix <math>\mathbf{A}</math> is written as

<math>
\mathbf{a}_{j} = \sum\limits_{\chi=1}^{N_{D}} (u_{j\chi}l_{\chi})\mathbf{u}_{\chi}.
</math>

Opposite to the sparsification of the local configurations, and more in line with the original CUR method, the columns of the matrix <math>\mathbf{A}</math> are kept when they are strongly correlated with the <math>k</math> (see also {{TAG|ML_NRANK_SPARSDES}}) eigenvectors <math>\mathbf{u}_{\chi}</math> that have the largest eigenvalues <math>l_{\chi}</math>. The correlation is then measured via the leverage scoring

<math>
\omega_{j} = \frac{1}{k} \sum\limits_{\chi=1}^{k} u_{j\chi}^{2}.
</math>

From the leverage scorings, the descriptors with the highest values are selected until the ratio of the number of selected descriptors to the total number of descriptors reaches a previously chosen value <math>x_{\mathrm{spars}}</math> (see also {{TAG|ML_RDES_SPARSDES}}).
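The final selection step can be sketched as follows (<code>select_descriptors</code> is a hypothetical helper; the <code>ratio</code> argument plays the role of <math>x_{\mathrm{spars}}</math>):

```python
def select_descriptors(scores, ratio):
    """Keep the highest-scoring descriptors until the ratio of kept to
    total descriptors reaches the requested value. Returns the kept
    indices in ascending order."""
    n_keep = max(1, round(ratio * len(scores)))
    # rank descriptor indices by leverage score, highest first
    order = sorted(range(len(scores)), key=lambda j: scores[j], reverse=True)
    return sorted(order[:n_keep])
```
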

== References ==

<references/>

----

[[Category:Machine Learning]][[Category:Machine Learned Force Fields]][[Category:Theory]][[Category:Alpha]]

## Latest revision as of 10:40, 7 September 2021

Here we present the theory for on-the-fly machine learning force fields. The theory will be presented in a very condensed manner and for a more detailed description of the methods we refer the readers to Refs. ^{[1]}, ^{[2]} and ^{[3]}.

## Introduction

Molecular dynamics is one of the most important methods for the determination of dynamic properties. The quality of the molecular dynamics simulation depends very strongly on the accuracy of the calculational method, but higher accuracy usually comes at the cost of increased computational demand. A very accurate method is ab initio molecular dynamics, where the interactions of atoms and electrons are calculated fully quantum mechanically from ab initio calculations (such as e.g. DFT). Unfortunately this method is very often limited to small simulation cells and simulation times. One way to hugely speed up these calculations is by using force fields, which are parametrizations of the potential energy. These parametrizations can range frome very simple functions with only empirical parameters, to very complex functions parametrized using thousands of ab initio calculations. Usually making accurate force fields manually is a very time consuming task that needs tremendous expertise and know how.

Another way to greatly reduce computational cost and required human intervention is by machine learning. Here, in the prediction of the target property the method automatically interpolates between known training systems which were previously calculated by ab initio. This way the generation of force fields is already significantly simplified compared to classical force field which need manual (or even empirical) adjustment of the parameters. Nevertheless there is still the problem how to choose the proper (minimal) training data. One very efficient and automic way to solve that is to adapt on-the-fly learning. Here an MD calculation is used for the learning. During the run of the MD calculation ab initio data is picked out and added to the training data. From the existing data a force field is continuously built up. At each step it is judged whether to make an ab initio calculation and possibly add the data to the force field or to use the force field for that step and actually skip learning for that step. Hence the name "on the fly" learning. The crucial point here is the probability model for the estimation of errors. This model not force field is build up from newly sampled data. Hence, the more accurate the force field gets the less sampling is needed and the more expensive ab initio steps are skipped. This way not only the convergence of the force field can be controlled but also a very wide spread scan through phase space for the training structures can be performed. The crucial point for an on-the-fly machine learning which will be explained with the rest of the methodology in the following subsections, is to be able to predict errors of the force field on a newly sampled structure without the necessity to perform an ab initio calculation on that structure.

## Algorithm

To obtain the machine-learning force field several **structure datasets** are required. A structure dataset defines the Bravais lattice and the atomic positions of the system and contains the total energy, the forces and the stress tensor calculated by first principles. Given these structure datasets, the machine identifies **local configurations** around an atom to learn what force field is appropriate. The local configuration measures the radial and angular distribution of neighboring atoms around this given site and is captured in so called **descriptors**.

The on-the-fly force field generation scheme is given by the following steps (a flowchart of the algorithm is shown in Fig. 1):

- The machine predicts the energy, the forces and the stress tensor and their uncertainties for a given structure using the existing force field.
- The machine decides whether to perform a first principles calculation (proceed with step 3); otherwise we skip to step 5.
- Performing the first principles calculation, we obtain a new structure dataset which may improve the force field.
- If the number of newly collected structures reaches a certain threshold or if the predicted uncertainty becomes too large, the machine learns an improved force field by including the new structure datasets and local configurations.
- Atomic positions and velocities are updated by using the force field (if accurate enough) or the first principles calculation.
- If the desired total number of ionic steps (NSW) is reached we are done; otherwise proceed with step 1.

## Local energies

The potential energy of a structure with atoms is approximated as

The local energies are functionals of the probability density to find another atom at the position around the atom within a cut-off radius defined as

Here is a cut-off function acting outside of and is a delta function.

The atom distribution can be also be written as a sum of individual distributions

Here describes the probability of find an atom at a position with respect to atom .

## Descriptors

Similar to the Smooth Overlap of Atomic Positions^{[4]} (SOAP) method the delta function is approximated as

Unfortunately is not rotationally invariant. To deal with this problem intermediate functions or *descriptors* depending on possessing rotational invariance are introduced:

### Radial descriptor

This is the simplest descriptor which relies on the radial distribution function

where denotes the unit vector of the vector between atoms and [see Fig. 2 (a)]. The Radial descriptor can also be regarded as a two-body descriptor.

### Angular descriptor

In most cases the radial descriptor is not sufficient to distinguish different probability densities , since two different can yield the same . To improve on this, angular information between two radial descriptors is also incorporated within an angular descriptor

where denotes the angle between the two vectors and [see Fig. 2 (b)]. The important difference of the function compared to the angular distribution function (also called the power spectrum within the Gaussian Approximation Potential) used in reference ^{[5]} is that no self-interaction is included, where and have the same distance from and the angle between the two is zero. It can be shown^{[3]} that the self-interaction components are equivalent to the two-body descriptor. This means that our angular descriptor is a pure angular descriptor containing no two-body components, and it cannot be expressed as a linear combination of the power spectrum. The advantage of this descriptor is that it enables us to separately control the effects of the two- and three-body descriptors.
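The exclusion of the self-interaction term can be made concrete with a small sketch: Legendre moments of the angle between pairs of neighbors, where the j == k pair is skipped so that no two-body information enters. This is only an illustrative stand-in; the cutoff form, `l_max` and all names are hypothetical, not the VASP expansion.

```python
import numpy as np
from numpy.polynomial import legendre

def angular_descriptor(pos, center, l_max=4, r_cut=5.0):
    """Pure three-body fingerprint: cutoff-weighted Legendre moments of
    the angle between neighbor pairs, with the j == k self term excluded."""
    rel = pos - center
    d = np.linalg.norm(rel, axis=1)
    mask = (d > 1e-8) & (d < r_cut)
    rel, d = rel[mask], d[mask]
    w = 0.5 * (np.cos(np.pi * d / r_cut) + 1.0)   # cutoff weights
    u = rel / d[:, None]                          # unit vectors to neighbors
    desc = np.zeros(l_max + 1)
    for j in range(len(d)):
        for k in range(len(d)):
            if j == k:
                continue                          # drop the self-interaction term
            c = np.clip(u[j] @ u[k], -1.0, 1.0)   # cos(theta_jk)
            for l in range(l_max + 1):
                desc[l] += w[j] * w[k] * legendre.legval(c, [0] * l + [1])
    return desc

atoms = np.array([[1.0, 0.0, 0.0], [0.0, 1.5, 0.0], [0.0, 0.0, 2.0]])
a1 = angular_descriptor(atoms, np.zeros(3))
```

Like the radial fingerprint, this quantity depends only on distances and angles, so it is invariant under rigid rotations of the environment.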

### Basis set expansion

The atomic probability density can also be expanded in terms of basis functions

where , and denote expansion coefficients, radial basis functions and spherical harmonics, respectively. The indices , and denote radial numbers, angular and magnetic quantum numbers, respectively.

By using the above equation the radial descriptor and angular descriptor can be written as

and

where and represent normalized spherical Bessel functions and Legendre polynomials of order .

The expansion coefficients for the angular descriptor can be converted to

where denotes expansion coefficients of the distribution with respect to and

and

In many cases is multiplied by an angular filtering function^{[6]} (ML_IAFILT2), which can noticeably reduce the necessary basis-set size without losing accuracy in the calculations

where (ML_AFILT2) is a parameter controlling the extent of the filtering. A larger value of the parameter provides stronger filtering.

## Potential energy fitting

It is convenient to express the local potential energy of atom in structure in terms of linear coefficients and a kernel as follows

where is the basis set size. The kernel measures the similarity between a *local* configuration from the training set and basis set . Using the radial and angular descriptors it is written as

Here the vectors and contain all coefficients and , respectively. The notation indicates a normalized vector of . The parameter (ML_W1) controls the relative weighting of the radial and angular terms. The parameter (ML_NHYP) controls the sharpness of the kernel and the order of the many-body interactions.
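A minimal sketch of such a polynomial similarity kernel on normalized descriptor vectors is shown below. The weight `w1` and exponent `zeta` are hypothetical stand-ins for ML_W1 and ML_NHYP, and the exact way VASP combines the radial and angular parts may differ.

```python
import numpy as np

def similarity_kernel(rad_i, ang_i, rad_j, ang_j, w1=0.1, zeta=4):
    """Polynomial kernel on normalized descriptor vectors: w1 weights the
    radial (two-body) against the angular (three-body) part, and zeta
    sharpens the kernel, raising the order of the many-body interactions."""
    x = np.concatenate([w1 * np.asarray(rad_i), np.asarray(ang_i)])
    y = np.concatenate([w1 * np.asarray(rad_j), np.asarray(ang_j)])
    x = x / np.linalg.norm(x)
    y = y / np.linalg.norm(y)
    return float(x @ y) ** zeta

k_same = similarity_kernel([1.0, 2.0], [0.5, 0.1], [1.0, 2.0], [0.5, 0.1])
k_diff = similarity_kernel([1.0, 2.0], [0.5, 0.1], [2.0, 1.0], [0.1, 0.5])
```

Because the vectors are normalized, the kernel of a configuration with itself is 1, and raising the inner product to the power `zeta` suppresses the similarity of dissimilar environments, which is what "sharpness" means here.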

### Matrix vector form of linear equations

Similarly to the energy, the forces and the stress tensor are also described as linear functions of the coefficients . All three are fitted simultaneously, which leads to the following matrix-vector form

where is a super vector consisting of the sub vectors . Here each contains the first-principles energies per atom, forces and stress tensors for each structure . denotes the total number of structures. The size of is .

The matrix is also called the *design* matrix^{[7]}. The rows of this matrix are blocked for each structure , where the first line of each block consists of the kernel used to calculate the energy. The subsequent lines consist of the derivatives of the kernel with respect to the atomic coordinates used to calculate the forces. The final 6 lines within each structure consist of the derivatives of the kernel with respect to the unit cell coordinates used to calculate the stress tensor components. The overall structure of is as follows
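The row count of the design matrix follows directly from this blocking: one energy equation, three force components per atom, and six stress-tensor components per training structure. A trivial helper (hypothetical, for illustration only) makes this explicit:

```python
def design_matrix_rows(natoms_per_structure):
    """Rows of the design matrix: one energy equation, 3*N force
    equations and 6 stress-tensor equations per training structure."""
    return sum(1 + 3 * n + 6 for n in natoms_per_structure)

# two structures with 4 and 8 atoms: (1 + 12 + 6) + (1 + 24 + 6) = 50 rows
rows = design_matrix_rows([4, 8])
```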

## Bayesian error estimation

Ultimately, for error prediction, we want the probability of observing a new structure given the training set , denoted as . For this we need to get from the error of the linear fitting coefficients in the reproduction of the training data to , which is explained in the following.

First we obtain from Bayes' theorem

,

where we assume multivariate Gaussian distributions for the likelihood function

and the (conjugate) prior

The parameters and need to be optimized to balance the accuracy and robustness of the force field (see below). The normalization is obtained by

Using the equations above and the completing-the-square method^{[7]}, is obtained as follows

Since the prior and the likelihood function are described by multivariate Gaussian distributions, the posterior is also a multivariate Gaussian written as

where we use the following definitions for the center of the Gaussian distribution and the covariance matrix

By using the relation

and the completing-the-square method^{[7]}, the distribution of is written as

Here is similar to but only for the new structure .

The mean vector contains the results of the predictions on the dimensionless energy per atom, forces and stress tensor. The diagonal elements of , which correspond to the variances of the predicted results, are used as the uncertainty in the prediction.
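The whole construction is standard Bayesian linear regression in the sense of Bishop^{[7]}; the generic sketch below shows the posterior mean and covariance and how the predictive variance serves as the error estimate. The toy data, the hyperparameter values and the two-column design matrix are made up for the example and are not the force-field fit itself.

```python
import numpy as np

rng = np.random.default_rng(0)

def posterior(Phi, y, alpha, beta):
    """Posterior mean and covariance of the weights for a zero-mean
    Gaussian prior (precision alpha) and Gaussian noise (precision beta)."""
    S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
    w = beta * S @ Phi.T @ y
    return w, S

def predict(phi, w, S, beta):
    """Predictive mean and variance for one new design-matrix row phi;
    the variance is the uncertainty used to trigger ab initio steps."""
    return phi @ w, 1.0 / beta + phi @ S @ phi

# toy training data: y = 2x plus noise, fitted with features (x, x^2)
x = rng.uniform(-1.0, 1.0, 50)
Phi = np.stack([x, x ** 2], axis=1)
y = 2.0 * x + 0.01 * rng.normal(size=50)
w, S = posterior(Phi, y, alpha=1e-2, beta=1e4)
mean, var = predict(np.array([0.5, 0.25]), w, S, beta=1e4)
```

The key property exploited by on-the-fly learning is that `var` is available without knowing the true answer for the new input: it comes entirely from the posterior covariance and the noise level.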

Finally, to get the best results and to prevent overfitting, the parameters and have to be optimized. To achieve this, we use the evidence approximation^{[8]}^{[9]}^{[10]} (also called empirical Bayes, type 2 maximum likelihood, or generalized maximum likelihood), which maximizes the evidence function (also called model evidence) defined as

The function corresponds to the probability that the regression model with parameters and provides the reference data . Hence, the best fit is obtained by maximizing this probability. The maximization is carried out by simultaneously solving the following equations

where are the eigenvalues of the matrix .
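These coupled equations are commonly solved by MacKay's fixed-point iteration^{[9]} (Bishop^{[7]}, ch. 3.5). The sketch below is that generic scheme on toy data, not the VASP implementation; the starting values, iteration count and test problem are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def evidence_update(Phi, y, alpha=1.0, beta=1.0, n_iter=50):
    """Fixed-point updates maximizing the evidence: gamma counts the
    effective number of well-determined parameters, then alpha and beta
    are re-estimated from the current posterior mean m."""
    N = len(y)
    eig = np.linalg.eigvalsh(Phi.T @ Phi)          # eigenvalues of Phi^T Phi
    for _ in range(n_iter):
        S = np.linalg.inv(alpha * np.eye(Phi.shape[1]) + beta * Phi.T @ Phi)
        m = beta * S @ Phi.T @ y
        lam = beta * eig                           # eigenvalues of beta Phi^T Phi
        gamma = np.sum(lam / (alpha + lam))        # effective parameter count
        alpha = gamma / (m @ m)
        beta = (N - gamma) / np.sum((y - Phi @ m) ** 2)
    return alpha, beta, m

# toy data: y = 1.5 x with noise of standard deviation 0.1 (precision 100)
x = rng.uniform(-1.0, 1.0, 200)
Phi = np.stack([x, x ** 2, x ** 3], axis=1)
y = 1.5 * x + 0.1 * rng.normal(size=200)
alpha, beta, m = evidence_update(Phi, y)
```

On this toy problem the converged `beta` recovers roughly the true noise precision, which is how the evidence approximation balances data fit against regularization without a validation set.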

## Sparsification

Within the machine learning force field methods the sparsification of local reference configurations and of the angular descriptors is supported. The sparsification of local reference configurations is used by default and its extent is mainly controlled by ML_EPS_LOW. This procedure is important to avoid overcompleteness and to damp the acquisition of new configurations in special cases. The sparsification of angular descriptors is not used by default and should be applied cautiously and only if necessary. The description of the usage of this feature is given in ML_LSPARSDES.

### Sparsification of local reference configurations

We start by defining the similarity kernel for the local configurations with each other

The CUR algorithm starts out from the diagonalization of this matrix

where is an eigenvector matrix defined as

Using these notations the th column vector of the matrix is expressed as

In contrast to the original CUR algorithm^{[11]}, which was developed to efficiently select a few significant columns of , our algorithm looks for the (few) insignificant local configurations and discards them. To this end we dispose of the columns of that are correlated with the eigenvectors having the smallest eigenvalues . The correlation is measured by the statistical leverage score for each column of as

where (see also ML_EPS_LOW) is a threshold for the lowest eigenvalues.
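The leverage score itself is easy to sketch: diagonalize the kernel matrix of the local configurations, collect the eigenvectors whose eigenvalues fall below the threshold, and sum the squared eigenvector components per column. The one-shot version below is illustrative only; the actual algorithm discards high-leverage configurations iteratively, and the relative threshold used here is an assumption.

```python
import numpy as np

def leverage_scores(K, eps_low=1e-10):
    """Leverage score of each local configuration with respect to the
    eigenvectors whose eigenvalues fall below eps_low * max(eigenvalue)
    (eps_low plays the role of ML_EPS_LOW); a high score marks a
    configuration that is nearly redundant."""
    lam, V = np.linalg.eigh(K)                 # K: kernel matrix of configs
    small = lam < eps_low * lam.max()
    return np.sum(V[:, small] ** 2, axis=1)

# three configurations, the third an exact duplicate of the first
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
K = X @ X.T
omega = leverage_scores(K)
```

Here the duplicated configurations share the weight of the null-space eigenvector (score 0.5 each) while the independent one scores essentially zero, so the duplicates are the candidates for removal.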

### Sparsification of angular descriptors

The sparsification of the angular descriptors is done in a similar manner as for the local reference configurations. We start by defining the square matrix

Here denotes the number of angular descriptors given by

In this equation the symmetry of the descriptors is already taken into account. The matrix is constructed from the vectors of the angular descriptors . The matrix has dimension and the matrix product is done over the elements of the local configurations.

In analogy to the local configurations the th element of the matrix is written as

In contrast to the sparsification of the local configurations, and more in line with the original CUR method, the columns of the matrix are kept when they are strongly correlated with the (see also ML_NRANK_SPARSDES) eigenvectors which have the largest eigenvalues . The correlation is then measured via a leverage score

From the leverage scores, the ones with the highest values are selected until the ratio of the selected descriptors to the total number of descriptors reaches a previously selected value (see also ML_RDES_SPARSDES).
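The keep-the-important-columns variant can be sketched the same way, now scoring against the largest-eigenvalue eigenvectors. The parameters `n_rank` and `ratio` are hypothetical stand-ins for ML_NRANK_SPARSDES and ML_RDES_SPARSDES, and the toy matrix is made up for the example.

```python
import numpy as np

def select_descriptors(D, n_rank=2, ratio=0.5):
    """CUR-style selection sketch: keep the descriptor columns most
    correlated with the n_rank eigenvectors of D^T D that have the
    largest eigenvalues, up to the requested fraction of descriptors."""
    K = D.T @ D                                   # square matrix over descriptors
    lam, V = np.linalg.eigh(K)                    # eigenvalues in ascending order
    omega = np.sum(V[:, -n_rank:] ** 2, axis=1)   # leverage score per descriptor
    n_keep = max(1, int(ratio * K.shape[0]))
    return sorted(np.argsort(omega)[::-1][:n_keep].tolist())

# four configurations x four descriptors; descriptors 0 and 1 dominate
D = np.diag([10.0, 1.0, 0.1, 0.01])
kept = select_descriptors(D)
```

The two descriptors carrying almost all of the variance receive the high leverage scores and survive the sparsification, while the nearly inactive ones are dropped.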

## References

- ↑ R. Jinnouchi, J. Lahnsteiner, F. Karsai, G. Kresse and M. Bokdam, Phys. Rev. Lett. **122**, 225701 (2019).
- ↑ R. Jinnouchi, F. Karsai and G. Kresse, Phys. Rev. B **100**, 014105 (2019).
- ↑ ^{a}^{b} R. Jinnouchi, F. Karsai, C. Verdi, R. Asahi and G. Kresse, J. Chem. Phys. **152**, 234102 (2020).
- ↑ A. P. Bartók, R. Kondor, and G. Csányi, Phys. Rev. B **87**, 184115 (2013).
- ↑ A. P. Bartók, M. C. Payne, R. Kondor, and G. Csányi, Phys. Rev. Lett. **104**, 136403 (2010).
- ↑ J. P. Boyd, Chebyshev and Fourier Spectral Methods (Dover Publications, New York, 2000).
- ↑ ^{a}^{b}^{c} C. M. Bishop, Pattern Recognition and Machine Learning (Springer, New York, 2006).
- ↑ S. F. Gull and J. Skilling, Maximum Entropy Bayesian Methods, Fundam. Theor. Phys., 28th ed. (Springer, Dordrecht, 1989).
- ↑ D. J. C. MacKay, Neural Computation **4**, 415 (1992).
- ↑ R. Jinnouchi and R. Asahi, J. Phys. Chem. Lett. **8**, 4279 (2017).
- ↑ M. W. Mahoney and P. Drineas, Proc. Natl. Acad. Sci. USA **106**, 697 (2009).