Latthe, Ashvini A., Tare, Arti V., and Pande, Vijay N.
2022 IEEE Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), pp. 1-4, Dec. 2022
Lute, Shreyash D., Pande, Vijay N., Jha, Nitin, and Sanvatsarkar, Uday
2022 IEEE Conference on Interdisciplinary Approaches in Technology and Management for Social Innovation (IATMSI), pp. 1-4, Dec. 2022
2022 4th International Conference on Advances in Computing, Communication Control and Networking (ICAC3N), pp. 1672-1675, Dec. 2022
Tare, Arti V., Sathe, Vishal A., Tikle, Kshitij C., and Pande, Vijay N.
2022 IEEE International Conference on Power Electronics, Drives and Energy Systems (PEDES), pp. 1-5, Dec. 2022
Voelz, Vincent A., Pande, Vijay S., and Bowman, Gregory R.
Subjects
Quantitative Biology - Biomolecules and Physics - Chemical Physics
Abstract
Simulations of biomolecules have enormous potential to inform our understanding of biology but require extremely demanding calculations. For over twenty years, the Folding@home distributed computing project has pioneered a massively parallel approach to biomolecular simulation, harnessing the resources of citizen scientists across the globe. Here, we summarize the scientific and technical advances this perspective has enabled. As the project's name implies, the early years of Folding@home focused on driving advances in our understanding of protein folding by developing statistical methods for capturing long-timescale processes and facilitating insight into complex dynamical processes. Success laid a foundation for broadening the scope of Folding@home to address other functionally relevant conformational changes, such as receptor signaling, enzyme dynamics, and ligand binding. Continued algorithmic advances, hardware developments such as GPU-based computing, and the growing scale of Folding@home have enabled the project to focus on new areas where massively parallel sampling can be impactful. While previous work sought to expand toward larger proteins with slower conformational changes, new work focuses on large-scale comparative studies of different protein sequences and chemical compounds to better understand biology and inform the development of small molecule drugs. Progress on these fronts enabled the community to pivot quickly in response to the COVID-19 pandemic, expanding to become the world's first exascale computer and deploying this massive resource to provide insight into the inner workings of the SARS-CoV-2 virus and aid the development of new antivirals. This success provides a glimpse of what's to come as exascale supercomputers come online, and Folding@home continues its work. Comment: 24 pages, 6 figures
2020 Third International Conference on Advances in Electronics, Computers and Communications (ICAECC), pp. 1-6, Dec. 2020
Jacob, Joel A., Tare, Arti V., Vyawahare, Vishwesh A., and Pande, Vijay N.
2016 IEEE International WIE Conference on Electrical and Computer Engineering (WIECON-ECE), pp. 164-169, Dec. 2016
Gomes, Joseph, McKiernan, Keri A., Eastman, Peter, and Pande, Vijay S.
Subjects
Condensed Matter - Disordered Systems and Neural Networks, Condensed Matter - Strongly Correlated Electrons, and Quantum Physics
Abstract
The classical simulation of quantum systems typically requires exponential resources. Recently, the introduction of a machine learning-based wavefunction ansatz has led to the ability to solve the quantum many-body problem in regimes that had previously been intractable for existing exact numerical methods. Here, we demonstrate the utility of the variational representation of quantum states based on artificial neural networks for performing quantum optimization. We show empirically that this methodology achieves high approximation ratio solutions with polynomial classical computing resources for a range of instances of the Maximum Cut (MaxCut) problem whose solutions have been encoded into the ground state of quantum many-body systems up to and including 256 qubits. Comment: Second Workshop on Machine Learning and the Physical Sciences (NeurIPS 2019), Vancouver, Canada
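As a rough illustration of the approach in this abstract, the snippet below encodes a tiny MaxCut instance as the ground state of an Ising Hamiltonian and evaluates the expected cut under a positive RBM-style neural ansatz. It is a minimal sketch under stated assumptions: the toy graph, random (untrained) parameters, and exact enumeration are illustrative stand-ins for the variational optimization and 256-qubit instances described in the paper.

```python
# MaxCut as an Ising ground-state problem, evaluated under an RBM-style ansatz.
# Toy-sized graph, random parameters, exact enumeration; optimization is elided.
import itertools
import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]   # toy 4-node graph
n, n_hidden = 4, 8
rng = np.random.default_rng(0)
a = rng.normal(scale=0.1, size=n)                   # visible biases
b = rng.normal(scale=0.1, size=n_hidden)            # hidden biases
W = rng.normal(scale=0.1, size=(n_hidden, n))       # couplings

def log_psi(s):
    """Log-amplitude of a positive RBM-style ansatz for s in {-1,+1}^n."""
    return a @ s + np.sum(np.log(2 * np.cosh(b + W @ s)))

def cut_value(s):
    """Number of cut edges: sum over edges of (1 - s_i s_j) / 2."""
    return sum((1 - s[i] * s[j]) / 2 for i, j in edges)

# Encoding: minimizing H = sum_{(i,j) in E} s_i s_j maximizes the cut, so the
# MaxCut solution is the ground state of this Ising Hamiltonian.
configs = np.array(list(itertools.product([-1, 1], repeat=n)))
log_w = np.array([2 * log_psi(s) for s in configs])  # |psi|^2 up to a constant
p = np.exp(log_w - log_w.max())
p /= p.sum()
expected_cut = sum(pi * cut_value(s) for pi, s in zip(p, configs))
# A full implementation would now optimize the ansatz parameters (e.g., by
# gradient ascent on the expected cut) instead of leaving them random.
print("variational <cut> =", expected_cut, " exact optimum =", max(cut_value(s) for s in configs))
```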
Subjects
Physics - Chemical Physics and Physics - Computational Physics
Abstract
Two types of approaches to modeling molecular systems have demonstrated high practical efficiency. Density functional theory (DFT), the most widely used quantum chemical method, is a physical approach predicting energies and electron densities of molecules. Recently, numerous papers on machine learning (ML) of molecular properties have also been published. ML models greatly outperform DFT in terms of computational costs, and may even reach comparable accuracy, but they are missing physicality - a direct link to Quantum Physics - which limits their applicability. Here, we propose an approach that combines the strengths of DFT and ML, namely, physicality and low computational cost. By generalizing the famous Hohenberg-Kohn theorems, we derive general equations for exact electron densities and energies that can naturally guide applications of ML in Quantum Chemistry. Based on these equations, we build a deep neural network that can compute electron densities and energies of a wide range of organic molecules not only much faster, but also closer to exact physical values than current versions of DFT. In particular, we reached a mean absolute error in energies of molecules with up to eight non-hydrogen atoms as low as 0.9 kcal/mol relative to CCSD(T) values, noticeably lower than those of DFT (down to ~3 kcal/mol on the same set of molecules) and ML (down to ~1.5 kcal/mol) methods. A simultaneous improvement in the accuracy of predictions of electron densities and energies suggests that the proposed approach describes the physics of molecules better than DFT functionals developed by "human learning" earlier. Thus, physics-based ML offers exciting opportunities for modeling, with high-theory-level quantum chemical accuracy, of much larger molecular systems than currently possible. Comment: arXiv admin note: substantial text overlap with arXiv:1809.02723
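To make the "physics plus ML" division of labor concrete, here is a minimal delta-learning-style sketch: a cheap physics-based baseline energy is kept, and a small network learns only the residual toward a high-level reference. This illustrates the general idea rather than the paper's architecture; the synthetic features, layer sizes, and target construction are assumptions.

```python
# Illustrative "physics baseline + learned correction" sketch. Synthetic data
# stands in for real quantum-chemistry features; not the paper's model.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_mols, n_features = 512, 64
features = torch.randn(n_mols, n_features)            # descriptors from a cheap calculation
e_baseline = features.sum(dim=1, keepdim=True)        # stand-in for a DFT/HF baseline energy
e_reference = e_baseline + 0.1 * torch.tanh(features @ torch.randn(n_features, 1))

correction = nn.Sequential(nn.Linear(n_features, 128), nn.SiLU(), nn.Linear(128, 1))
opt = torch.optim.Adam(correction.parameters(), lr=1e-3)

for step in range(200):
    opt.zero_grad()
    e_pred = e_baseline + correction(features)        # physics baseline + learned residual
    loss = nn.functional.l1_loss(e_pred, e_reference) # MAE, the metric quoted in kcal/mol
    loss.backward()
    opt.step()
print(f"final MAE on synthetic data: {loss.item():.4f}")
```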
We train a neural network to predict human gene expression levels based on experimental data for rat cells. The network is trained with paired human/rat samples from the Open TG-GATES database, where paired samples were treated with the same compound at the same dose. When evaluated on a test set of held out compounds, the network successfully predicts human expression levels. On the majority of the test compounds, the list of differentially expressed genes determined from predicted expression levels agrees well with the list of differentially expressed genes determined from actual human experimental data. Comment: 12 pages, 5 figures
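The evaluation described in the last sentence can be pictured with a short sketch: call differentially expressed (DE) genes from the predicted and from the measured human profiles, then compare the two gene lists. The fold-change threshold, synthetic expression arrays, and Jaccard overlap below are illustrative simplifications, not the paper's actual DE analysis.

```python
# Compare DE-gene calls made from predicted vs. measured human expression.
# Synthetic arrays stand in for Open TG-GATEs data.
import numpy as np

rng = np.random.default_rng(0)
n_genes = 5000
control = rng.lognormal(mean=2.0, size=n_genes)               # untreated human expression
measured = control * rng.lognormal(sigma=0.3, size=n_genes)   # treated, measured in humans
predicted = measured * rng.lognormal(sigma=0.1, size=n_genes) # network output (stand-in)

def de_genes(treated, control, log2fc_cutoff=1.0):
    """Indices of genes whose |log2 fold change| exceeds the cutoff."""
    log2fc = np.log2(treated / control)
    return set(np.flatnonzero(np.abs(log2fc) > log2fc_cutoff))

pred_de = de_genes(predicted, control)
meas_de = de_genes(measured, control)
jaccard = len(pred_de & meas_de) / max(1, len(pred_de | meas_de))
print(f"DE-gene agreement (Jaccard): {jaccard:.2f}")
```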
Computer Science - Machine Learning and Statistics - Machine Learning
Abstract
Many applications of machine learning require a model to make accurate predictions on test examples that are distributionally different from training ones, while task-specific labels are scarce during training. An effective approach to this challenge is to pre-train a model on related tasks where data is abundant, and then fine-tune it on a downstream task of interest. While pre-training has been effective in many language and vision domains, it remains an open question how to effectively use pre-training on graph datasets. In this paper, we develop a new strategy and self-supervised methods for pre-training Graph Neural Networks (GNNs). The key to the success of our strategy is to pre-train an expressive GNN at the level of individual nodes as well as entire graphs so that the GNN can learn useful local and global representations simultaneously. We systematically study pre-training on multiple graph classification datasets. We find that naive strategies, which pre-train GNNs at the level of either entire graphs or individual nodes, give limited improvement and can even lead to negative transfer on many downstream tasks. In contrast, our strategy avoids negative transfer and improves generalization significantly across downstream tasks, leading up to 9.4% absolute improvements in ROC-AUC over non-pre-trained models and achieving state-of-the-art performance for molecular property prediction and protein function prediction. Comment: Accepted as a spotlight to ICLR 2020
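A condensed sketch of the node-level plus graph-level idea, kept self-contained with a plain dense-adjacency graph layer: mask some node attributes and train the network to reconstruct them, while also predicting a graph-level label from the pooled embedding. The layer definition, masking scheme, and equal loss weighting are assumptions for illustration, not the paper's exact pre-training objectives.

```python
# Joint node-level (masked attribute reconstruction) and graph-level (label
# prediction) pre-training on a toy random graph.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_nodes, n_feat, hidden = 12, 16, 32
A = (torch.rand(n_nodes, n_nodes) < 0.2).float()
A = ((A + A.T + torch.eye(n_nodes)) > 0).float()   # symmetric adjacency with self-loops
X = torch.randn(n_nodes, n_feat)

class GCNLayer(nn.Module):
    """Mean-aggregation graph convolution: average neighbors, then linear map + ReLU."""
    def __init__(self, d_in, d_out):
        super().__init__()
        self.lin = nn.Linear(d_in, d_out)
    def forward(self, A, H):
        return torch.relu(self.lin((A @ H) / A.sum(dim=1, keepdim=True)))

layer1, layer2 = GCNLayer(n_feat, hidden), GCNLayer(hidden, hidden)
node_head = nn.Linear(hidden, n_feat)    # node-level task: reconstruct masked attributes
graph_head = nn.Linear(hidden, 1)        # graph-level task: predict a whole-graph label
params = (list(layer1.parameters()) + list(layer2.parameters())
          + list(node_head.parameters()) + list(graph_head.parameters()))
opt = torch.optim.Adam(params, lr=1e-3)

mask = torch.zeros(n_nodes, dtype=torch.bool)
mask[:3] = True                          # hide the attributes of the first three nodes
X_masked = X.clone()
X_masked[mask] = 0.0
graph_label = torch.ones(1, 1)           # toy graph-level target

for step in range(100):
    opt.zero_grad()
    H = layer2(A, layer1(A, X_masked))
    node_loss = nn.functional.mse_loss(node_head(H)[mask], X[mask])
    graph_loss = nn.functional.binary_cross_entropy_with_logits(
        graph_head(H.mean(dim=0, keepdim=True)), graph_label)
    (node_loss + graph_loss).backward()  # pre-train on both levels jointly
    opt.step()
```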
Feinberg, Evan N., Sheridan, Robert, Joshi, Elizabeth, Pande, Vijay S., and Cheng, Alan C.
Subjects
Computer Science - Machine Learning and Statistics - Machine Learning
Abstract
The Absorption, Distribution, Metabolism, Elimination, and Toxicity (ADMET) properties of drug candidates are estimated to account for up to 50% of all clinical trial failures. Predicting ADMET properties has therefore been of great interest to the cheminformatics and medicinal chemistry communities in recent decades. Traditional cheminformatics approaches, whether the learner is a random forest or a deep neural network, leverage fixed fingerprint feature representations of molecules. In contrast, in this paper, we learn the features most relevant to each chemical task at hand by representing each molecule explicitly as a graph, where each node is an atom and each edge is a bond. By applying graph convolutions to this explicit molecular representation, we achieve, to our knowledge, unprecedented accuracy in prediction of ADMET properties. By challenging our methodology with rigorous cross-validation procedures and prospective analyses, we show that deep featurization better enables molecular predictors to not only interpolate but also extrapolate to new regions of chemical space. Comment: 41 pages
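The explicit molecule-as-graph representation can be sketched in a few lines: each atom contributes a node feature vector, bonds define the adjacency matrix, and one graph-convolution step mixes neighboring atoms' features before pooling. The snippet assumes RDKit is available; the feature set and the single untrained layer are illustrative, not the paper's featurization or model.

```python
# Atoms as nodes, bonds as edges, one graph-convolution step, then pooling.
# Requires RDKit; the pooled embedding would feed an ADMET prediction head.
import numpy as np
from rdkit import Chem

mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")   # aspirin
feats = np.array([[a.GetAtomicNum(), a.GetDegree(), a.GetTotalNumHs(), int(a.GetIsAromatic())]
                  for a in mol.GetAtoms()], dtype=float)          # one feature row per atom
A = np.asarray(Chem.GetAdjacencyMatrix(mol), dtype=float) + np.eye(mol.GetNumAtoms())

def graph_conv(A, H, W):
    """One graph-convolution step: average over neighbors, then a linear map + ReLU."""
    deg = A.sum(axis=1, keepdims=True)
    return np.maximum((A @ H / deg) @ W, 0.0)

rng = np.random.default_rng(0)
H1 = graph_conv(A, feats, rng.normal(size=(feats.shape[1], 16)))  # untrained, for shape only
molecule_embedding = H1.sum(axis=0)                               # pooled molecular representation
print(molecule_embedding.shape)                                   # (16,)
```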
We train a neural network to predict chemical toxicity based on gene expression data. The input to the network is a full expression profile collected either in vitro from cultured cells or in vivo from live animals. The output is a set of fine grained predictions for the presence of a variety of pathological effects in treated animals. When trained on the Open TG-GATEs database it produces good results, outperforming classical models trained on the same data. This is a promising approach for efficiently screening chemicals for toxic effects, and for more accurately evaluating drug candidates based on preclinical data. Comment: 12 pages, 2 figures, 4 tables
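A minimal sketch of the input/output structure described here: an expression profile goes in, and a vector of independent probabilities for fine-grained pathological findings comes out, trained with a multi-label loss. Tensor sizes and the random stand-in data are assumptions; the real model is trained on Open TG-GATEs profiles and pathology annotations.

```python
# Multi-label classifier: expression profile in, one logit per pathological finding out.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_genes, n_findings = 128, 3000, 40
profiles = torch.randn(n_samples, n_genes)                     # in vitro or in vivo profiles (stand-in)
findings = (torch.rand(n_samples, n_findings) < 0.1).float()   # presence/absence of each finding

model = nn.Sequential(nn.Linear(n_genes, 256), nn.ReLU(), nn.Dropout(0.3),
                      nn.Linear(256, n_findings))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for epoch in range(50):
    opt.zero_grad()
    loss = nn.functional.binary_cross_entropy_with_logits(model(profiles), findings)
    loss.backward()
    opt.step()
```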
Subjects
Physics - Chemical Physics and Physics - Computational Physics
Abstract
Density functional theory (DFT) is one of the main methods in Quantum Chemistry that offers an attractive trade-off between the cost and accuracy of quantum chemical computations. The electron density plays a key role in DFT. In this work, we explore whether machine learning - more specifically, deep neural networks (DNNs) - can be trained to predict electron densities faster than DFT. First, we choose a practically efficient combination of a DFT functional and a basis set (PBE0/pcS-3) and use it to generate a database of DFT solutions for more than 133,000 organic molecules from a previously published database QM9. Next, we train a DNN to predict electron densities and energies of such molecules. The only input to the DNN is an approximate electron density computed with a cheap quantum chemical method in a small basis set (HF/cc-pVDZ). We demonstrate that the DNN successfully learns differences in the electron densities arising both from electron correlation and small basis set artifacts in the HF computations. All qualitative features in density differences, including local minima on lone pairs, local maxima on nuclei, toroidal shapes around C-H and C-C bonds, complex shapes around aromatic and cyclopropane rings and CN group, etc. are captured by the DNN. Accuracy of energy predictions by the DNN is ~ 1 kcal/mol, on par with other models reported in the literature, while those models do not predict the electron density. Computations with the DNN, including HF computations, take much less time than DFT computations (by a factor of ~20-30 for most QM9 molecules in the current version, and it is clear how it could be further improved).
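The density-in, density-and-energy-out mapping can be sketched with a small two-headed 3-D convolutional network: the input is an approximate density on a grid (standing in for the cheap HF density in a small basis), one head predicts a density correction and the other a scalar energy. Grid size, architecture, and data below are placeholders, not the paper's model.

```python
# Two-headed 3-D CNN sketch: approximate density grid -> (corrected density, energy).
import torch
import torch.nn as nn

class DensityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1), nn.SiLU(),
            nn.Conv3d(8, 8, kernel_size=3, padding=1), nn.SiLU())
        self.density_head = nn.Conv3d(8, 1, kernel_size=1)    # predicted density correction
        self.energy_head = nn.Linear(8, 1)                     # predicted energy from pooled features

    def forward(self, rho_in):
        h = self.body(rho_in)
        delta_rho = self.density_head(h)
        energy = self.energy_head(h.mean(dim=(2, 3, 4)))       # global average pool over the grid
        return rho_in + delta_rho, energy

rho_approx = torch.rand(1, 1, 32, 32, 32)                      # cheap-method density on a grid (stand-in)
rho_pred, e_pred = DensityNet()(rho_approx)
print(rho_pred.shape, e_pred.shape)
```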
In this paper we introduce Curriculum GANs, a curriculum learning strategy for training Generative Adversarial Networks that increases the strength of the discriminator over the course of training, thereby making the learning task progressively more difficult for the generator. We demonstrate that this strategy is key to obtaining state-of-the-art results in image generation. We also show evidence that this strategy may be broadly applicable to improving GAN training in other data modalities.
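One plausible way to realize the curriculum described here is to keep a family of discriminators of increasing capacity and blend their adversarial losses with weights that shift toward the stronger ones as training proceeds. The schedule, discriminator definitions, and loss combination below are a hedged illustration of that idea, not the paper's exact recipe.

```python
# Curriculum over discriminator strength: blend losses from discriminators of
# increasing capacity, shifting weight toward the strong ones as training proceeds.
import torch
import torch.nn as nn

def curriculum_weights(step, total_steps, n_discriminators):
    """Early training favors weak discriminators, late training the strong ones."""
    progress = step / total_steps
    raw = torch.tensor([max(0.0, 1.0 - abs(progress * (n_discriminators - 1) - k))
                        for k in range(n_discriminators)])
    return raw / raw.sum()

discriminators = [nn.Sequential(nn.Linear(64, 16 * (k + 1)), nn.ReLU(), nn.Linear(16 * (k + 1), 1))
                  for k in range(3)]                           # increasing capacity

def generator_adv_loss(fake_batch, step, total_steps):
    w = curriculum_weights(step, total_steps, len(discriminators))
    losses = [nn.functional.binary_cross_entropy_with_logits(
                  D(fake_batch), torch.ones(fake_batch.shape[0], 1))
              for D in discriminators]
    return sum(wk * lk for wk, lk in zip(w, losses))

print(generator_adv_loss(torch.randn(8, 64), step=100, total_steps=1000))
```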
Sharma, Rishi, Farimani, Amir Barati, Gomes, Joe, Eastman, Peter, and Pande, Vijay
Subjects
Statistics - Machine Learning and Computer Science - Machine Learning
Abstract
In typical machine learning tasks and applications, it is necessary to obtain or create large labeled datasets in order to achieve high performance. Unfortunately, large labeled datasets are not always available and can be expensive to source, creating a bottleneck towards more widely applicable machine learning. The paradigm of weak supervision offers an alternative that allows for integration of domain-specific knowledge by enforcing constraints that a correct solution to the learning problem will obey over the output space. In this work, we explore the application of this paradigm to 2-D physical systems governed by non-linear differential equations. We demonstrate that knowledge of the partial differential equations governing a system can be encoded into the loss function of a neural network via an appropriately chosen convolutional kernel. We demonstrate this by showing that the steady-state solution to the 2-D heat equation can be learned directly from initial conditions by a convolutional neural network, in the absence of labeled training data. We also extend recent work in the progressive growing of fully convolutional networks to achieve high accuracy (< 1.5% error) at multiple scales of the heat-flow problem, including at the very large scale (1024x1024). Finally, we demonstrate that this method can be used to speed up exact calculation of the solution to the differential equations via finite difference.
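The convolutional-kernel encoding of the PDE constraint is concrete enough to sketch directly: for the steady-state 2-D heat equation the Laplacian of the field must vanish, and a fixed 5-point finite-difference stencil applied as a convolution measures the violation, giving a loss that needs no labeled solutions. The boundary-loss helper and grid sizes are illustrative assumptions; a training loop would minimize the residual of the network's predicted field plus a boundary term.

```python
# Weak supervision for the steady-state heat equation: a fixed Laplacian stencil
# applied as a convolution penalizes fields that violate the PDE.
import torch
import torch.nn.functional as F

laplacian_kernel = torch.tensor([[0., 1., 0.],
                                 [1., -4., 1.],
                                 [0., 1., 0.]]).view(1, 1, 3, 3)

def pde_residual_loss(u):
    """u: (batch, 1, H, W) predicted temperature field. Penalize nonzero interior Laplacian."""
    lap = F.conv2d(u, laplacian_kernel)          # valid convolution: interior points only
    return (lap ** 2).mean()

def boundary_loss(u, boundary):
    """Pin the predicted field to prescribed Dirichlet boundary values (added to the PDE loss)."""
    return (((u[:, :, 0, :] - boundary[:, :, 0, :]) ** 2).mean()
            + ((u[:, :, -1, :] - boundary[:, :, -1, :]) ** 2).mean()
            + ((u[:, :, :, 0] - boundary[:, :, :, 0]) ** 2).mean()
            + ((u[:, :, :, -1] - boundary[:, :, :, -1]) ** 2).mean())

# A random field has a large residual; a linear field solves Laplace's equation,
# so its residual is (numerically) zero.
x = torch.linspace(0, 1, 64)
harmonic = x.expand(64, 64).clone().view(1, 1, 64, 64)
print(pde_residual_loss(torch.rand(1, 1, 64, 64)), pde_residual_loss(harmonic))
```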
Generating novel graph structures that optimize given objectives while obeying some given underlying rules is fundamental for chemistry, biology and social science research. This is especially important in the task of molecular graph generation, whose goal is to discover novel molecules with desired properties such as drug-likeness and synthetic accessibility, while obeying physical laws such as chemical valency. However, designing models to find molecules that optimize desired properties while incorporating highly complex and non-differentiable rules remains a challenging task. Here we propose Graph Convolutional Policy Network (GCPN), a general graph convolutional network based model for goal-directed graph generation through reinforcement learning. The model is trained to optimize domain-specific rewards and adversarial loss through policy gradient, and acts in an environment that incorporates domain-specific rules. Experimental results show that GCPN can achieve 61% improvement on chemical property optimization over state-of-the-art baselines while resembling known molecules, and achieve 184% improvement on the constrained property optimization task. Comment: NeurIPS 2018, spotlight presentation
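The reward side of this kind of setup can be sketched compactly: a generated molecule earns a reward only if it parses as chemically valid (valency checks via RDKit sanitization), and is then scored on a drug-likeness proxy. The snippet assumes RDKit and uses QED as the property; the policy network, environment, and policy-gradient update are elided.

```python
# Reward sketch for goal-directed molecule generation: validity gate + property score.
from rdkit import Chem
from rdkit.Chem import QED

def reward(smiles: str) -> float:
    """Return 0 for chemically invalid molecules, otherwise a drug-likeness score in (0, 1)."""
    mol = Chem.MolFromSmiles(smiles)   # returns None if parsing or valency checks fail
    if mol is None:
        return 0.0
    return QED.qed(mol)

# A policy-gradient agent would build molecules step by step (adding atoms and
# bonds), receive this reward at the end of an episode, and update via REINFORCE.
print(reward("c1ccccc1O"), reward("C(C)(C)(C)(C)C"))  # valid phenol vs. a valency violation
```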
Farimani, Amir Barati, Feinberg, Evan N., and Pande, Vijay S.
Subjects
Quantitative Biology - Biomolecules and Quantitative Biology - Quantitative Methods
Abstract
Many important analgesics relieve pain by binding to the $\mu$-Opioid Receptor ($\mu$OR), which makes the $\mu$OR among the most clinically relevant proteins of the G Protein Coupled Receptor (GPCR) family. Despite previous studies on the activation pathways of the GPCRs, the mechanism of opiate binding and the selectivity of $\mu$OR are largely unknown. We performed extensive molecular dynamics (MD) simulation and analysis to find the selective allosteric binding sites of the $\mu$OR and the path opiates take to bind to the orthosteric site. In this study, we predicted that the allosteric site is responsible for the attraction and selection of opiates. Using Markov state models and machine learning, we traced the pathway of opiates in binding to the orthosteric site, the main binding pocket. Our results have important implications in designing novel analgesics. Comment: 25 pages, 8 figures
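For readers unfamiliar with the Markov state model (MSM) machinery used in analyses like this, the sketch below shows the core steps: discretized trajectory frames are counted into a transition matrix at a chosen lag time, and the stationary distribution and implied timescales follow from its eigendecomposition. The discrete trajectory here is synthetic; in practice it comes from clustering MD conformations, and the machine-learning components of the paper are not shown.

```python
# Minimal MSM construction: transition counts at a lag time -> transition matrix
# -> stationary distribution and implied timescales.
import numpy as np

rng = np.random.default_rng(0)
n_states, lag = 4, 5
dtraj = rng.integers(0, n_states, size=10_000)             # stand-in discretized trajectory

counts = np.zeros((n_states, n_states))
for i, j in zip(dtraj[:-lag], dtraj[lag:]):                 # count transitions at the lag time
    counts[i, j] += 1
T = counts / counts.sum(axis=1, keepdims=True)              # row-stochastic transition matrix

evals, evecs = np.linalg.eig(T.T)
order = np.argsort(-evals.real)
pi = np.abs(evecs[:, order[0]].real)
pi /= pi.sum()                                              # stationary distribution
timescales = -lag / np.log(np.clip(evals.real[order[1:]], 1e-12, None))
print("stationary distribution:", pi)
print("implied timescales (steps):", timescales)
```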