A computational framework for fast inverse design of microstructures

Physics-aware deep generative models can be used for the inverse design of material microstructures. Deep neural networks, efficiently trained with multi-fidelity training data, are ideal surrogates for evaluating the physics-based constraints within such design frameworks.

Generative adversarial networks (GANs), a class of generative techniques that model data distributions, have enjoyed massive success in areas such as computer vision for creating realistic landscapes and human faces. Translating this success to applications in engineering and science has remained challenging, to a large extent because such applications require the results of any generative model to be physically viable, i.e., to satisfy a set of physics-based constraints. An active area of research, commonly known as physics-informed machine learning, has developed to address this challenge; it aims to develop computational frameworks that leverage advances in deep learning while incorporating physics constraints.

Figure 1. (A) Invariance Network for inverse design of two-phase microstructure. (B) Deep neural network-based high-fidelity surrogate model. (C) Deep neural network-based multi-fidelity surrogate model. 

In recent work published in Nature Computational Science, we developed Invariance Networks (InvNet) as a formal approach for incorporating physics information in a generative model. Building upon a conventional Wasserstein-GAN framework, InvNet includes an additional physics invariance module that ensures that selected physics knowledge is encoded into the generative model. In other words, we want the output samples of the model to satisfy some physics constraints in addition to belonging to the same distribution as our training data. While this idea might seem simple in theory, realizing it in practical applications poses significant challenges. Here, we were interested in the inverse design problem of generating microstructure designs for photovoltaic devices with tailored properties. Specifically, we sought to generate (a family of) microstructure designs that exhibit a desired photovoltaic performance (or range of performances).
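For concreteness, here is a minimal sketch of such a constrained generator update in a PyTorch-style setup; the names (generator, critic, invariance_penalty, lambda_inv) are illustrative placeholders, not the paper's exact implementation.

```python
import torch

def generator_step(generator, critic, invariance_penalty, z, lambda_inv, opt_g):
    """One InvNet-style generator update: a Wasserstein-GAN loss plus a
    weighted invariance penalty that is zero when the generated samples
    satisfy the encoded physics constraint."""
    fake = generator(z)                       # candidate microstructures
    wgan_loss = -critic(fake).mean()          # standard WGAN generator loss
    inv_loss = invariance_penalty(fake)       # physics-constraint violation
    loss = wgan_loss + lambda_inv * inv_loss  # trade off realism vs. physics
    opt_g.zero_grad()
    loss.backward()                           # gradients flow through both terms
    opt_g.step()
    return loss.item()
```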

We evaluate every potential microstructure using a microstructure-aware photovoltaic performance simulator. This simulator solves the set of partial differential equations describing the photophysics in the active layer of an organic solar cell to compute the current-voltage characteristics. We hit a wall in trying to incorporate this complex PDE simulator as an invariance constraint: there was no straightforward way to backpropagate errors from the output through such a simulator to the generative model. This remains an active area of continuing research. Additionally, even if such an approach were developed, the time complexity of a single forward solve would have made it computationally infeasible. With each PDE solve taking ~1 hour of CPU wall-clock time, checking constraint satisfaction for every sample generated during training was simply out of reach. To put things into perspective, our model was trained for 10,000 iterations, which would have resulted in at least 10,000 hours of extra training time.
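The backpropagation obstacle is easy to see in code. In this hypothetical sketch (with a trivial placeholder standing in for the real solver), the simulator consumes and produces plain arrays, so the autograd graph is severed and no gradient path reaches the generator:

```python
import numpy as np
import torch

def pde_simulator(microstructure: np.ndarray) -> float:
    # Placeholder for the external PDE solver (~1 CPU-hour per call in
    # practice); it operates on plain arrays, outside autograd's graph.
    return float(microstructure.mean())

fake = torch.rand(1, 64, 64, requires_grad=True)  # stand-in generator output
y = pde_simulator(fake.detach().cpu().numpy())    # the autograd graph is severed here
loss = torch.tensor(y)
# loss.backward()  # would fail: no gradient path connects `loss` back to `fake`
```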

To alleviate this problem, we deployed deep neural network-based surrogate models. Rather than solving the differential equations, we trained a surrogate model to predict the photovoltaic properties of a given microstructure. In addition to a significant speed-up (~60,000X) in mapping a microstructure design to photovoltaic performance, a neural network surrogate has the benefit of being differentiable. This ensured that we could efficiently train a constrained generative model in an end-to-end fashion using gradient-based optimization. This fact has exciting implications, especially in the field of PDE-constrained optimization, where the PDE constraints are often non-differentiable (and/or very compute-intensive to solve). Another welcome implication is that, with a surrogate model, the constraints need not be expressed analytically: all we need is a set of constraint-satisfying data, and we can express these unknown constraints through a well-trained surrogate model.
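The differentiability point can be demonstrated in a few lines. Below is an illustrative sketch (the CNN architecture is a placeholder, not the one used in the paper) showing that gradients of a property-matching loss flow all the way back to the generated microstructure:

```python
import torch
import torch.nn as nn

# An illustrative CNN surrogate mapping a two-phase microstructure image
# to a scalar photovoltaic property.
surrogate = nn.Sequential(
    nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 1),
)

fake = torch.rand(8, 1, 64, 64, requires_grad=True)  # stand-in generator output
target = torch.full((8, 1), 0.7)                     # desired property value
loss = nn.functional.mse_loss(surrogate(fake), target)
loss.backward()                                      # gradients reach `fake`,
print(fake.grad.shape)                               # so the generator can learn
```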

These insights allowed us to establish a generative modeling framework that can account for any general constraints via a surrogate model. However, there remained another challenge: using a surrogate model implied the need to generate labeled data, which can be an expensive proposition. To circumvent this challenge, we relied on training a multi-fidelity surrogate model. Instead of training the surrogate on a large dataset of high-fidelity labels (which come from solving a complex PDE), we initially train it on low-fidelity labels, i.e., key descriptors of the microstructure design that are highly correlated with the high-fidelity labels. The surrogate model is then fine-tuned with a small amount of high-fidelity labels (current-voltage characteristics). This reduces the data requirement of our framework by 5X without significantly affecting the final performance of the generative model.
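A sketch of this two-stage recipe is given below; the data loaders and learning rates are hypothetical, and we assume for simplicity that both label types share the surrogate's output dimension (in practice the output head may be swapped between stages):

```python
import torch
import torch.nn as nn

def train_multifidelity(surrogate, low_fid_loader, high_fid_loader):
    """Two-stage surrogate training (sketch): pretrain on plentiful, cheap
    low-fidelity labels (microstructure descriptors), then fine-tune on a
    small set of expensive high-fidelity labels (PDE-computed current-voltage
    characteristics)."""
    loss_fn = nn.MSELoss()

    # Stage 1: large low-fidelity dataset, cheap labels.
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-3)
    for x, y_low in low_fid_loader:
        opt.zero_grad()
        loss_fn(surrogate(x), y_low).backward()
        opt.step()

    # Stage 2: small high-fidelity dataset, gentler learning rate.
    opt = torch.optim.Adam(surrogate.parameters(), lr=1e-4)
    for x, y_high in high_fid_loader:
        opt.zero_grad()
        loss_fn(surrogate(x), y_high).backward()
        opt.step()
    return surrogate
```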

Figure 2. Visual comparison of microstructures generated with two variants of InvNet and microstructures sampled from the true dataset. (A) Morphologies generated with multi-fidelity InvNet. (B) Morphologies generated with high-fidelity InvNet. (C) True morphologies sampled from the dataset.

Putting it all together, we now have a constrained generative modeling framework in which surrogate models represent the constraints, and in which the resources required to train the surrogate can be reduced by using multi-fidelity data. Our results illustrate that InvNet has fairly good generalization capabilities and can generate various candidate microstructure designs with multiple targeted properties. More importantly, the lesson learned from this study is that we can leverage data in multiple forms to learn a surrogate model in a data-efficient manner. This surrogate can then be used to encode some form of physics-based constraints when training a generative model. This renders the InvNet framework an important member of the family of physics-informed neural networks, and we believe that the framework is general-purpose and applicable to a broad range of problems.
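Once trained, deploying the framework is straightforward: sample latent codes, generate candidate microstructures, and screen them with the surrogate. The self-contained toy sketch below (with untrained stand-in networks and a hypothetical target band) illustrates the workflow:

```python
import torch
import torch.nn as nn

latent_dim = 64  # hypothetical latent size

# Stand-ins for the trained InvNet generator and surrogate; in practice,
# these would be the networks trained as sketched above.
generator = nn.Sequential(nn.Linear(latent_dim, 64 * 64), nn.Sigmoid(),
                          nn.Unflatten(1, (1, 64, 64)))
surrogate = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 1), nn.Sigmoid())

with torch.no_grad():
    z = torch.randn(256, latent_dim)
    candidates = generator(z)                 # candidate microstructures
    preds = surrogate(candidates).squeeze(1)  # predicted performance
    keep = (preds > 0.65) & (preds < 0.75)    # hypothetical target band
    selected = candidates[keep]
print(f"{selected.shape[0]} candidate designs in the target band")
```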

Finally, there are certainly some limitations to the current framework's use of data-driven surrogates. From additional experiments, we have observed that, because the surrogate is trained purely on data, its performance degrades outside the support of the training data. Nevertheless, in future work, we envision extending the current InvNet framework with the capability to incorporate explicit physics constraints in a tractable manner, enabling extrapolation beyond the support of the data, as well as introducing manufacturing constraints to generate high-performance, manufacturable microstructure designs.

For more details, please refer to our paper published in Nature Computational Science: “Fast Inverse Design of Microstructures via Generative Invariance Networks”.




