The learning problem can be framed as finding a low-dimensional feature space in which the non-linear dynamics of atoms are represented by a linear transition matrix. Assume the atoms of interest follow some non-linear dynamics $$F: x_{t+\tau} = F(x_t)$$. We seek a mapping function $$\chi(\cdot)$$ that maps the Cartesian coordinates of the atoms $$x$$ to a low-dimensional feature space in which a linear transition matrix $$K$$ approximates the non-linear dynamics of $$F$$: $$\chi(x_{t+\tau}) \approx K\chi(x_t)$$. Effectively, each atom is mapped to a small set of “states”, and its dynamics can be understood by analyzing the transition matrix $$K$$.
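To make the linear-in-feature-space picture concrete, the sketch below (plain NumPy, not the GdyNets code; the feature trajectory `chi` and the lag `tau` are synthetic stand-ins) estimates $$K$$ by least squares from time-lagged feature pairs, the standard way a linear transition matrix is fit once $$\chi$$ is fixed:

```python
import numpy as np

def estimate_transition_matrix(chi_traj, tau=1):
    """Least-squares estimate of K such that chi(x_{t+tau}) ~= K chi(x_t)."""
    X0 = chi_traj[:-tau]   # chi(x_t),      shape (T - tau, d)
    X1 = chi_traj[tau:]    # chi(x_{t+tau}), shape (T - tau, d)
    # Solve X1 ~= X0 @ K.T in the least-squares sense.
    Kt, *_ = np.linalg.lstsq(X0, X1, rcond=None)
    return Kt.T

# Toy check: features generated by a known linear map plus small noise.
rng = np.random.default_rng(0)
d, T = 3, 2000
K_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 0.8, 0.1],
                   [0.0, 0.0, 0.7]])
chi = np.zeros((T, d))
chi[0] = rng.normal(size=d)
for t in range(T - 1):
    chi[t + 1] = K_true @ chi[t] + 0.01 * rng.normal(size=d)

K_est = estimate_transition_matrix(chi, tau=1)  # recovers K_true approximately
```

In GdyNets the feature map itself is learned, but the role of $$K$$ is the same: once the features are good, a simple linear regression over lagged pairs captures the dynamics.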
The mapping function $$\chi(\cdot)$$ can be learned from time-series MD simulation data $$\{x_t\}$$ by minimizing a dynamical loss function based on the variational approach for Markov processes (VAMP). However, learning $$\chi$$ directly from atomic coordinates is typically intractable in materials: atoms move between structurally similar yet chemically distinct environments, which makes the problem exponentially more complex. The symmetries of the atomic structure must therefore be encoded into the neural network. We used a crystal graph convolutional neural network (CGCNN) from earlier work, which represents atomic structures in a way that respects these symmetries. In GdyNets, the CGCNN is trained on time-series MD data to learn $$\chi$$ with a VAMP loss.
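As a rough illustration of what the VAMP objective measures, the NumPy sketch below (a simplified VAMP-2 score on mean-free features, not the GdyNets implementation; `X0`/`X1` stand for instantaneous and time-lagged features $$\chi(x_t)$$, $$\chi(x_{t+\tau})$$) computes the squared Frobenius norm of the whitened cross-covariance, which the network maximizes:

```python
import numpy as np

def vamp2_score(X0, X1, eps=1e-6):
    """Simplified VAMP-2 score of mean-free features X0, X1, each (T, d)."""
    X0 = X0 - X0.mean(axis=0)
    X1 = X1 - X1.mean(axis=0)
    T = X0.shape[0]
    C00 = X0.T @ X0 / T   # instantaneous covariance
    C11 = X1.T @ X1 / T   # time-lagged covariance
    C01 = X0.T @ X1 / T   # cross-covariance

    def inv_sqrt(C):
        # Symmetric inverse square root, with small eigenvalues regularized.
        w, V = np.linalg.eigh(C)
        w = np.maximum(w, eps)
        return V @ np.diag(w ** -0.5) @ V.T

    K = inv_sqrt(C00) @ C01 @ inv_sqrt(C11)
    return np.sum(K ** 2)  # sum of squared singular values

rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 4))
s_self = vamp2_score(X, X)                              # ~d: perfectly predictable
s_rand = vamp2_score(X, rng.normal(size=(5000, 4)))     # ~0: no dynamical correlation
```

Features that make the dynamics predictable over the lag time score high, while uninformative features score near zero, which is why maximizing this score drives $$\chi$$ toward a slow, nearly-linear representation of the dynamics.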