Transformer-based neural networks capture organic chemistry grammar from unsupervised learning of chemical reactions
Abstract
Humans use different domain languages to represent, explore, and communicate scientific concepts. During the last few hundred years, chemists compiled the language of chemical synthesis, inferring a series of "reaction rules" from knowing how atoms rearrange during a chemical transformation, a process called atom-mapping. Atom-mapping is a laborious experimental task and, when tackled with computational methods, requires continuous annotation of chemical reactions and the extension of logically consistent directives. This work demonstrates that Transformer neural networks learn atom-mapping information between products and reactants without supervision or human labeling. Using the Transformer attention weights, we build a chemically agnostic, attention-guided reaction mapper, called RXNMapper, and extract coherent chemical grammar from unannotated sets of reactions. Our method shows remarkable performance in both accuracy and speed, even for strongly imbalanced and chemically complex reactions with non-trivial atom-mapping. We provide the missing link between data-driven and rule-based approaches for numerous chemical reaction tasks. The open-source RXNMapper code and a demo are available at http://rxnmapper.ai.
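For readers who want to try the attention-guided mapper directly, the sketch below shows a minimal Python usage pattern, assuming the open-source package is installed as rxnmapper (pip install rxnmapper) and exposes the RXNMapper class with the get_attention_guided_atom_maps method described in the project documentation; the result field names ("mapped_rxn", "confidence") follow the public repository and should be verified against the installed version. The example reaction SMILES is illustrative.

    # Minimal usage sketch for the attention-guided atom-mapper.
    # Assumes `pip install rxnmapper` provides the RXNMapper class.
    from rxnmapper import RXNMapper

    # A reaction SMILES without atom-map numbers: reactants >> product.
    rxn = "CC(C)S.CN(C)C=O.Fc1cccnc1F.O=C([O-])[O-].[K+].[K+]>>CC(C)Sc1ncccc1F"

    mapper = RXNMapper()
    # One result per input reaction; each result carries the atom-mapped
    # reaction SMILES and a confidence score derived from the attention weights.
    results = mapper.get_attention_guided_atom_maps([rxn])
    print(results[0]["mapped_rxn"])
    print(results[0]["confidence"])

No atom-map annotations are supplied as input: the mapping is read off the attention weights of the unsupervised Transformer model, which is the central claim of the abstract above.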