Abstract
Computational fluid dynamics (CFD) simulations are useful in engineering design because they provide deep insight into product or system performance without the need to construct and test physical prototypes. However, they can be very computationally intensive to run. Machine learning methods have been shown to reconstruct high-resolution single-phase turbulent fluid flow simulations from low-resolution inputs, offering a potential avenue for alleviating computational cost in iterative engineering design applications. However, little work thus far has explored the application of machine learning image super-resolution methods to multiphase fluid flow, which is important for emerging fields such as marine hydrokinetic energy conversion. In this work, we apply a modified version of the super-resolution generative adversarial network (SRGAN) model to a multiphase turbulent fluid flow problem, specifically to reconstruct the fluid phase fraction at a higher resolution. Two models were created, one which incorporates a physics-informed term in the loss function and one which does not, and their results are discussed and compared. We found that both models significantly outperform non-machine-learning upsampling methods and preserve a substantial amount of detail, showing the versatility of the SRGAN model for upsampling multiphase fluid simulations. However, the difference in accuracy between the two models is minimal, indicating that, in the context studied here, the additional complexity of a physics-informed approach may not be justified.
Introduction
Computational fluid dynamics (CFD) simulations have become an important tool in engineering design and computational design synthesis [1]. By significantly reducing the number of physical tests needed to iteratively refine the design of a product or system, CFD simulations can greatly reduce the total cost incurred during the design process. However, one of the most significant limitations of CFD is the tradeoff between fidelity and computation time. In general, CFD simulations model fluid flow by resolving the physical domain as a discrete mesh and creating systems of differential equations (instances of the Navier–Stokes equations), which are solved or approximated using numerical computing methods [2]. To improve the fidelity of a CFD simulation, the density of the discretized mesh must be increased. However, a denser mesh incurs significantly longer computation times, forcing engineers to make a tradeoff between required fidelity and allowable computation time. Machine learning (ML) has been applied to CFD to mitigate this fidelity/computation-time tradeoff with some success. For instance, deep neural networks can be used to increase the resolution of CFD simulations through a process known as super-resolution (SR). Such studies demonstrate that deep-learning-accelerated CFD may help to mitigate the fidelity/computation-time tradeoff.
We add to this burgeoning field of research by demonstrating that these methods can also be applied to multiphase flow. The addition of a second fluid phase to a CFD simulation can significantly complicate the problem by adding the phase fraction as a simulation quantity that must be computed across the mesh [3]. In this paper, we demonstrate the applicability of existing image super-resolution methods to multiphase CFD simulations. Specifically, we demonstrate the feasibility of using a generative adversarial network (GAN) architecture to approximate volume-of-fluid (VOF) multiphase fluid flow results. Furthermore, we compare two GAN variations, one which incorporates a physics-informed term in the loss function and one which does not, to assess the potential value of a physics-informed approach for this application area. The remainder of the paper is organized as follows. First, we provide a background section which reviews image super-resolution, generative adversarial networks, and super-resolution applications in CFD. Next, we detail the methodology used in this work, including synthetic data generation and the GAN architecture. Finally, we present results and conclusions.
Background
This work exists at the intersection of several fields of research; background on each is provided in the following sections.
Image Super-Resolution.
Image super-resolution (SR) refers to the task of reconstructing a high-resolution image from a low-resolution input. A wide variety of methods and architectures have been explored for this task, such as deep Laplacian pyramid networks [8], dense skip connections [9], and deep residual channel attention networks [10]. The most common benchmark model in this field of research is the super-resolution convolutional neural network (SRCNN). Originally proposed by Dong et al. [11], this model uses a convolutional neural network (CNN) to create a mapping between low-resolution and high-resolution images. Rather than attempting to directly capture high-level features, SRCNN instead minimizes a lower-level pixel-wise loss. Further refinements to this model include fast SRCNN [12], which dramatically decreased the runtime, and very deep super-resolution [13], which substantially improved accuracy through a deeper network architecture.
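To make the SRCNN approach concrete, the following is a minimal Keras sketch of an SRCNN-style network. The 9-1-5 kernel sizes and 64/32 filter counts follow the configuration commonly reported for SRCNN, but this is an illustrative approximation rather than a reproduction of the published model.

# Minimal SRCNN-style network (illustrative sketch, not the exact published model).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_srcnn(channels=1):
    inp = layers.Input(shape=(None, None, channels))                    # pre-upsampled low-resolution image
    x = layers.Conv2D(64, 9, padding="same", activation="relu")(inp)    # patch extraction
    x = layers.Conv2D(32, 1, padding="same", activation="relu")(x)      # non-linear mapping
    out = layers.Conv2D(channels, 5, padding="same")(x)                 # reconstruction
    return models.Model(inp, out)

model = build_srcnn()
model.compile(optimizer="adam", loss="mse")  # pixel-wise MSE loss, as in SRCNN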
Generative adversarial networks (GANs) can also be used for super-resolution, as demonstrated by the super-resolution generative adversarial network (SRGAN) created by Ledig et al. [14]. Compared to SRCNN, SRGAN can infer very high-level features, leading to much more natural and realistic-looking super-resolved images. This can be attributed to the GAN architecture's generative capabilities, as well as a powerful loss function. The increased accuracy comes at the cost of a larger number of network parameters, which slightly reduces computational efficiency during training and inference.
Generative Adversarial Networks.
The super-resolution algorithm applied in this work is based on a GAN architecture, which consists of training two separate deep neural networks against each other [15]. One network, the generator, is responsible for generating the actual output. The other network, the discriminator, is responsible for distinguishing between the generator's output and the ground truth data. During the training process, the generator becomes better at producing outputs that resemble the ground truth, and the discriminator becomes better at telling the two apart. Ideally, the generator becomes strong enough to fool the discriminator. The resulting generator network can then be used for inference, while the discriminator is discarded in the final model.
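As an illustration of this adversarial training scheme, the following is a minimal sketch of a single GAN training step in TensorFlow. The names generator, discriminator, lr_batch, and hr_batch are placeholders, and the plain binary cross-entropy objectives shown here are the generic GAN formulation rather than the exact losses used later in this work.

# Minimal sketch of one adversarial training step (generic GAN formulation).
# `generator` and `discriminator` are assumed tf.keras models; `lr_batch` and
# `hr_batch` are matching low/high-resolution batches.
import tensorflow as tf

bce = tf.keras.losses.BinaryCrossentropy()
gen_opt = tf.keras.optimizers.Adam(1e-4)
disc_opt = tf.keras.optimizers.Adam(1e-4)

@tf.function
def train_step(generator, discriminator, lr_batch, hr_batch):
    with tf.GradientTape() as g_tape, tf.GradientTape() as d_tape:
        fake_hr = generator(lr_batch, training=True)
        real_pred = discriminator(hr_batch, training=True)
        fake_pred = discriminator(fake_hr, training=True)
        # Discriminator: label real samples 1 and generated samples 0.
        d_loss = bce(tf.ones_like(real_pred), real_pred) + \
                 bce(tf.zeros_like(fake_pred), fake_pred)
        # Generator: fool the discriminator into predicting 1 for generated samples.
        g_loss = bce(tf.ones_like(fake_pred), fake_pred)
    disc_opt.apply_gradients(zip(d_tape.gradient(d_loss, discriminator.trainable_variables),
                                 discriminator.trainable_variables))
    gen_opt.apply_gradients(zip(g_tape.gradient(g_loss, generator.trainable_variables),
                                generator.trainable_variables))
    return g_loss, d_loss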
The versatility of the GAN architecture has led to its application to a multitude of different problems. GANs have been demonstrated to produce remarkably good results in novel tasks, such as generating human faces [16] and transforming photographs into paintings [17]. They have also proven useful for many practical engineering applications, such as designing airfoils [18], predicting stress distribution in structures [19], and estimating leakage parameters for liquid pipelines [20]. However, GANs can be large and difficult to train, and they are vulnerable to mode collapse and vanishing gradients [15]. Their large number of parameters also leads to slower inference, making them less suitable for time-sensitive tasks [21].
Image Super-Resolution Applied to Computational Fluid Dynamics.
The resolution of a CFD simulation lies in the density of cells in the mesh, which is functionally analogous to pixels in an image. Therefore, for rectilinear meshes, image super-resolution techniques can be applied directly to CFD simulations. The work of Fukami et al. [22] was one of the earlier applications of SR to CFD, showing that turbulent flows could be reconstructed using a relatively simple CNN as well as a hybrid downsampled skip-connection multi-scale model. Another work by Liu et al. [23] produced further results with other CNN architectures as well as a multiple temporal paths convolutional network. They were able to show that deep learning methods outperformed traditional upsampling methods like bicubic interpolation but were still limited in their ability to produce results that obey the physical constraints of fluid flow [23]. Recently, GANs have been applied to the CFD super-resolution problem as well, notably in the work by Xie et al. [24] with a novel application to 3D smoke diffusion.
Recent work has sought to imbue neural networks with explicit knowledge of physical phenomena, resulting in the creation of physics-informed neural networks [25]. Physics-informed neural networks have been shown to produce good results for some applications, such as the modeling of materials [26] and high-speed flows [27], but show limitations in their current form that are not yet well understood [27]. For instance, it has been demonstrated that physics-informed models fail to provide good solution approximations for flow in porous media when shocks are introduced [28] and that some long short-term memory (LSTM) models may not perfectly resolve turbulent conditions [29].
Two recent physics-informed SR models for single-phase CFD simulations are MeshfreeFlowNet by Jiang et al. [30] and the turbulence enrichment generative adversarial network (TEGAN) by Subramaniam et al. [31]. TEGAN was specifically trained to upsample instances of the incompressible forced isotropic homogeneous turbulence problem in 3D and is constrained by the time-dependent Navier–Stokes equations and the Poisson equation [31]. The model upsamples four fields in 3D (velocity in the x-, y-, and z-directions, as well as pressure) with substantial accuracy in each field. That work also demonstrated that adding a non-zero weight to the physics loss greatly improved the results, but that too high a weight may be detrimental. This result is important: it not only legitimizes the use of physics-based loss functions in this field but also gives other researchers a starting point for tuning the hyperparameters of their own models.
Methods
Problem Description.
The goal of the current work is to use image SR techniques to increase the resolution of the fluid phase fraction generated by a solver that uses the volume-of-fluid (VOF) method of modeling two-phase incompressible turbulent fluid flow. We make use of the interFoam solver from OpenFOAM [32], an open-source and highly performant CFD software library commonly used in both industry and research [33]. Upsampling the solver output approximates the result of increasing the density of the discretized mesh and ideally produces additional detail without incurring the computational cost of solving on the finer mesh directly. To make our CFD simulations suitable for image SR models, we discretize them with a uniform rectilinear mesh, which preserves the computational analogy to pixels typically used in SR. In this work, only the fluid phase fraction is considered. The model trained in this work specifically upsamples from a mesh resolution of 16 × 16 to 64 × 64 (see Fig. 1).
Training Data Generation.
Training data were generated using a slightly modified version of the OpenFOAM "damBreak" tutorial case, a two-dimensional, two-phase laminar flow simulation which depicts a mass of water falling through the air onto the ground, then colliding with a solid immovable dam in the center of the floor. This creates a large amount of fluid movement for the model to learn and predict. A schematic of the specific dimensions used in this case is provided in Fig. 2.
A total of 800 cases were simulated with a uniform mesh density of 64 × 64 for 3 s with 60 discrete time-steps each (0.05 s increments). The fluid data at each time-step serve as a unique training sample, resulting in 48,000 total training samples. The initial position, size, and shape of the water mass were randomized between cases, with several examples shown in Fig. 3. The rationale for this was to create a more varied set of training samples, as changing these conditions changes the amount of kinetic energy in the system, which in turn varies the fluid behavior between cases. Specifically, the shape of the water mass was randomized in a two-step process. First, the lower-left point of the mass (denoted by (x1, y1) in Fig. 2) was selected from a uniform random distribution across the cells in the simulation domain. Next, the upper-right point of the mass (denoted by (x2, y2)) was selected from a uniform random distribution across the cells above and to the right of (x1, y1).
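The following is a minimal sketch of this two-step randomization; the variable names and the 64-cell domain width are illustrative assumptions rather than the exact case-generation script used in this work.

# Sketch of the two-step random placement of the initial water mass.
import numpy as np

N = 64  # cells per side of the square simulation domain (assumed)
rng = np.random.default_rng()

def sample_water_block():
    # Step 1: lower-left corner, uniform over the domain cells.
    x1 = rng.integers(0, N - 1)
    y1 = rng.integers(0, N - 1)
    # Step 2: upper-right corner, uniform over the cells above and to the right of (x1, y1).
    x2 = rng.integers(x1 + 1, N)
    y2 = rng.integers(y1 + 1, N)
    return (x1, y1), (x2, y2)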
Each of these samples was then downsampled using linear interpolation, creating pairs of high- and low-resolution samples (see Fig. 4). This dataset was then split into two categories: pre-ground-impact and post-ground-impact (see Fig. 5). In this work, we focus on the post-ground-impact dataset, since most of the fluid deformation occurs after the fluid collides with the ground; the pre-ground-impact behavior can likely be predicted with much simpler methods (i.e., projectile motion) and is not used in this paper. The models presented here are trained entirely on the post-ground-impact data, which comprise the 19,664 of the 48,000 total samples simulated after the falling mass of water strikes the ground. Of these, 5000 samples were randomly chosen for training and testing the model.
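As a sketch of this pairing step, the snippet below downsamples the 64 × 64 phase-fraction fields to 16 × 16 by bilinear (linear) interpolation; the array name and shape are assumptions for illustration.

# Sketch of generating low-resolution inputs from the high-resolution simulation output.
# hr_fields is assumed to be an array of shape (num_samples, 64, 64, 1) holding the
# phase-fraction fields.
import tensorflow as tf

def make_lr_pairs(hr_fields, lr_size=(16, 16)):
    lr_fields = tf.image.resize(hr_fields, lr_size, method="bilinear")
    return lr_fields, hr_fields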
Network Architecture.
The network used here is based heavily on the SRGAN architecture created by Ledig et al. [14], and the full network layout is shown in Fig. 6. As a GAN, it is composed of a generator (left) and a discriminator (right).
The generator begins with a convolutional layer with a 9 × 9 kernel, 64 filters, and a PReLU activation. This is followed by 16 residual blocks with skip connections, each consisting of two convolutional layers with 64 filters and a 3 × 3 kernel, each paired with a batch normalization layer. Two upsampling blocks follow, each containing a convolutional layer with 256 filters and a 3 × 3 kernel and a 2D upsampling layer. The final layer is a convolutional layer with one filter and a 9 × 9 kernel, followed by a sigmoid activation. The sigmoid activation function is chosen for the last layer instead of the hyperbolic tangent because the fluid phase fraction values must lie between zero and one.
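A minimal Keras sketch of a generator matching this description is given below; details not stated above, such as the long skip connection around the residual blocks, are borrowed from the original SRGAN and may differ from the exact model trained here.

# Generator sketch following the description above (minimal interpretation, not the exact trained model).
import tensorflow as tf
from tensorflow.keras import layers, models

def residual_block(x):
    skip = x
    x = layers.Conv2D(64, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.PReLU(shared_axes=[1, 2])(x)
    x = layers.Conv2D(64, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    return layers.Add()([skip, x])

def upsample_block(x):
    x = layers.Conv2D(256, 3, padding="same")(x)
    x = layers.UpSampling2D(size=2)(x)
    return layers.PReLU(shared_axes=[1, 2])(x)

def build_generator(lr_shape=(16, 16, 1)):
    inp = layers.Input(shape=lr_shape)
    x = layers.Conv2D(64, 9, padding="same")(inp)
    x = skip = layers.PReLU(shared_axes=[1, 2])(x)
    for _ in range(16):
        x = residual_block(x)
    x = layers.Conv2D(64, 3, padding="same")(x)
    x = layers.BatchNormalization()(x)
    x = layers.Add()([skip, x])          # long skip connection (as in the original SRGAN)
    for _ in range(2):                   # 16 x 16 -> 32 x 32 -> 64 x 64
        x = upsample_block(x)
    out = layers.Conv2D(1, 9, padding="same", activation="sigmoid")(x)  # phase fraction in [0, 1]
    return models.Model(inp, out)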
The discriminator consists of eight convolutional layers with a 3 × 3 kernel, arranged in pairs of 64, 128, 256, and 512 filters. Each convolutional layer uses the leaky ReLU activation function. The final layers consist of two dense layers followed by a sigmoid activation.
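A corresponding discriminator sketch is shown below; the convolution strides and dense-layer width are assumptions borrowed from the original SRGAN discriminator rather than values stated in this work.

# Discriminator sketch matching the description above (strides and dense width assumed from SRGAN).
import tensorflow as tf
from tensorflow.keras import layers, models

def build_discriminator(hr_shape=(64, 64, 1)):
    inp = layers.Input(shape=hr_shape)
    x = inp
    for i, filters in enumerate((64, 64, 128, 128, 256, 256, 512, 512)):
        stride = 2 if i % 2 == 1 else 1  # downsample after each filter pair (SRGAN convention)
        x = layers.Conv2D(filters, 3, strides=stride, padding="same")(x)
        x = layers.LeakyReLU(alpha=0.2)(x)
    x = layers.Flatten()(x)
    x = layers.Dense(1024)(x)
    x = layers.LeakyReLU(alpha=0.2)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # probability that the input is a real simulation
    return models.Model(inp, out)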
Loss Function Design.
While the discriminator network was compiled and trained with the original binary cross-entropy loss function, the generator departs from the original SRGAN implementation: it was trained with a custom loss function that combines a traditional mean squared error (MSE) term with an additional physics-based term, since this dataset follows a known set of governing equations.
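The sketch below illustrates one way such a combined loss could be written, with weights λ1 and λ2 on the MSE and physics terms. The physics term shown here penalizes violations of mass conservation by comparing the mean phase fraction (proportional to the total water volume) of the prediction and the ground truth; this is an illustrative assumption rather than the exact constraint used in this work.

# Sketch of a combined generator loss: L = lambda1 * MSE + lambda2 * L_physics.
import tensorflow as tf

def generator_loss(y_true, y_pred, lambda1=0.5, lambda2=0.5):
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    # Mass-conservation term (assumed): the mean phase fraction of the prediction
    # should match that of the ground truth for each sample in the batch.
    vol_true = tf.reduce_mean(y_true, axis=[1, 2, 3])
    vol_pred = tf.reduce_mean(y_pred, axis=[1, 2, 3])
    physics = tf.reduce_mean(tf.square(vol_pred - vol_true))
    return lambda1 * mse + lambda2 * physics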
Training and Comparison.
Training was performed on a computer with an NVIDIA RTX 3070 GPU using TensorFlow 2.5.0. The network was trained using 5000 samples from the dataset, split into 80% for training and 20% for testing. The network was trained with a batch size of 16 for 1000 epochs, using the Adam optimizer with a learning rate of 1 × 10⁻⁴.
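The following sketch assembles this training configuration, reusing the build_generator, build_discriminator, and train_step sketches from earlier sections. Placeholder random arrays stand in for the actual phase-fraction dataset, and in this work the generator objective would additionally include the custom content/physics loss rather than the plain adversarial term alone.

# Sketch of the training configuration described above (placeholder data arrays).
import numpy as np
import tensorflow as tf

lr_all = np.random.rand(5000, 16, 16, 1).astype("float32")  # placeholder low-resolution inputs
hr_all = np.random.rand(5000, 64, 64, 1).astype("float32")  # placeholder high-resolution targets

split = int(0.8 * len(lr_all))                               # 80% train / 20% test
train_ds = (tf.data.Dataset.from_tensor_slices((lr_all[:split], hr_all[:split]))
            .shuffle(split)
            .batch(16))                                      # batch size 16

generator = build_generator()                                # from the earlier sketches
discriminator = build_discriminator()

for epoch in range(1000):                                    # 1000 epochs, Adam at 1e-4
    for lr_batch, hr_batch in train_ds:
        g_loss, d_loss = train_step(generator, discriminator, lr_batch, hr_batch)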
In the current work, we train and investigate two models with different combinations of λ1 and λ2 (and therefore different loss functions). Namely, we investigate the case where λ1 = 1.0 and λ2 = 0.0, corresponding to a traditional MSE-only loss function, and the case where λ1 = 0.5 and λ2 = 0.5, corresponding to a novel loss function that combines a traditional term with a physics-informed term. We refer to the former as “MSE only” and to the latter as “MSE + physics.” These models are compared against traditional, non-ML upsampling approaches.
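Expressed in terms of the generator_loss sketch above, the two configurations compared in this work correspond to the following weight settings (again a sketch, not the exact implementation).

# The two loss configurations compared in this work.
from functools import partial

mse_only_loss = partial(generator_loss, lambda1=1.0, lambda2=0.0)     # "MSE only"
mse_physics_loss = partial(generator_loss, lambda1=0.5, lambda2=0.5)  # "MSE + physics"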
Results and Discussion
Figure 7 provides several comparisons between different upsampling approaches, namely, linear upsampling, bicubic upsampling, and the two models presented in this work. These samples were not present in the training data. The leftmost column in the figure shows the low-resolution input, and the rightmost column shows the ground-truth high-resolution simulation. The GAN models were able to capture a high degree of detail, with many high-level features preserved and visible. The generator appears to have learned many of the turbulent and chaotic fluid behaviors present in the dataset. Visually, there is very little difference between the MSE-only model and the MSE + physics model.
An examination of Fig. 7 also indicates that the GAN model outputs have much more clarity and detail than the traditional upsampling techniques. Turbulent trails of fluid were captured in the upsampled results of these models that are not present in the linear or bicubic results. Some details that are almost imperceptible in the low-resolution input were inferred by the GAN models. For instance, thin, wispy trails of water splashing in the air, which are almost completely absent from the input and only a few pixels in size in the high-resolution simulation, are captured (see Figs. 7(AE) and 7(CE)). In addition, small pockets of air resulting from the collision of the water with the bottom boundary were somewhat preserved by the models (see Figs. 7(BE) and 7(DE)). In some cases, such as Figs. 7(BE) and 7(DE), these features are more pronounced in the MSE + physics model than in the MSE-only model; a more extensive qualitative analysis should be conducted to assess the extent to which this holds. These details do differ slightly from the ground truth, but the ability of the model to learn them is still noteworthy.
Figure 8 shows the MSE loss values measured across all models on a logarithmic axis. The GAN models achieve substantially lower loss than the nearest-neighbor, bicubic, and linear upsampling approaches.
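For reference, a minimal sketch of this comparison is shown below, computing the MSE of interpolation-based baselines (via tf.image.resize) and of the trained generator on a matched pair of low- and high-resolution test batches; the variable names are placeholders.

# Sketch of the baseline comparison: MSE of interpolation-based upsampling versus the GAN output.
# lr_batch and hr_batch are assumed to be matched low/high-resolution test batches.
import tensorflow as tf

def upsample_mse(lr_batch, hr_batch, method):
    up = tf.image.resize(lr_batch, hr_batch.shape[1:3], method=method)
    return tf.reduce_mean(tf.square(up - hr_batch)).numpy()

baseline_mse = {m: upsample_mse(lr_batch, hr_batch, m)
                for m in ("nearest", "bilinear", "bicubic")}
gan_mse = tf.reduce_mean(tf.square(generator(lr_batch) - hr_batch)).numpy()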
Figure 8 provides quantitative evidence to back up several of the observations from Fig. 7. For instance, there is little difference between the loss values of the MSE-only and MSE + physics models. The primary difference is that the MSE + physics model appears to have narrower error bounds than the MSE-only model; however, the MSE-only model does achieve lower loss values in some cases. Both models significantly outperform bicubic and linear upsampling. The variance is significant (crossing orders of magnitude for all upsampling approaches), which may be explained by the variability of the dataset, as demonstrated in Fig. 3.
As measured by loss values only, the MSE-only model does appear to outperform the physics-informed model. It could be that MSE alone provides enough information to guide the training process on this dataset, and the scope of the output space is not further reduced by adding the physical constraint. In other words, minimizing the MSE value may already result in fluid volume consistency. More comprehensive models that reconstruct other fluid properties like pressure and velocity have a much larger output space, and therefore may benefit more from physical constraints. A more complex physics-based constraint could yield better results for the current multiphase problem.
The capabilities of the model presented here argue for further research into super-resolution for CFD (SRCFD), because the potential savings in computational resources are substantial. Upsampling low-resolution CFD simulations to an acceptable quality with a neural network, instead of directly running high-resolution CFD simulations with traditional numerical methods, can save considerable computation time and could streamline many workflows. The results presented here are not perfect, but they nonetheless warrant further research and experimentation.
Conclusions and Future Work
The contributions described in this work are twofold. First, we demonstrate that existing image super-resolution methods can be applied to multiphase fluid flow with only minimal changes. Specifically, the SRGAN-based model demonstrated here is capable of reconstructing turbulent multiphase flow at a higher resolution and with higher accuracy than naïve upsampling approaches. Second, we evaluate the potential value of a physics-informed approach to the multiphase fluid flow super-resolution task. Specifically, we show that a mass conservation term in the loss function provides little improvement over a traditional loss function.
Future work should further investigate the application of refined and modified SRCFD models to multiphase turbulent flow. CFD simulations include a time component, and subsequent time-steps are dependent on one another. Recurrent models could therefore provide improved results by taking advantage of the information contained in consecutive simulation frames. This work also focused solely on the fluid phase fraction, but a comprehensive model that reconstructs additional fluid properties like pressure and velocity for multiphase flow would potentially benefit more significantly from physics-based constraints.
The data used in this work are limited in two specific ways. First, all simulations were conducted with the same set of boundary conditions. The integration of diverse boundary conditions directly in the data representation or loss function should be explored in future work. Second, this work relied entirely on two-dimensional CFD simulations. Three-dimensional simulations incur even greater computational penalties when resolution is increased, so super-resolution should be investigated for those cases as well. This would require an updated SRGAN architecture which uses three-dimensional convolutions rather than two-dimensional convolutions. Predictions over three-dimensional domains have been demonstrated in other super-resolution tasks [34] and should be feasible for multiphase flow as well.
Acknowledgment
This material is based upon work supported by the Defense Advanced Research Projects Agency through cooperative agreement FA8750-20-C-0002. Any opinions, findings, and conclusions or recommendations expressed in this article are those of the authors and do not necessarily reflect the views of the sponsors.
Conflict of Interest
There are no conflicts of interest.
Data Availability Statement
The datasets generated and supporting the findings of this article are obtainable from the corresponding author upon reasonable request. The data and information that support the findings of this article are freely available.