
Physics-Informed Machine Learning: Cloud-Based Deep Learning and Acoustic Patterning for Organ Cell Growth Research

By Samuel J. Raymond, Massachusetts Institute of Technology

To grow organ tissue from cells in the lab, researchers need a noninvasive way to hold the cells in place. One promising approach is acoustic patterning, which involves using acoustic energy to position and hold cells in a desired pattern as they develop into tissue. By applying acoustic waves to a microfluidic device, researchers have actuated micron-scale cells into simple patterns, such as lines and grids.

My colleagues and I have developed a combined deep learning and numerical simulation approach that enables us to arrange cells into much more complex patterns of our own design. We saved weeks of effort by conducting the entire workflow in MATLAB® and using parallel computing to accelerate key steps such as generating the training dataset from our simulator and training the deep learning neural network.

Acoustic Patterning with Microchannels

In a microfluidic device, fluid and fluid-borne particles or cells are manipulated in submillimeter-sized microchannels that can be made into different shapes. To create acoustic patterns within these microchannels, a surface acoustic wave (SAW) is generated using an interdigital transducer (IDT) and directed toward a channel wall (Figure 1a). In the fluid within the channel, the acoustic waves produce pressure minima and maxima that are aligned with the channel wall (Figure 1b). The shape of the channel walls can therefore be configured to yield specific acoustic fields within the channel [1] (Figure 1c). The acoustic fields arrange particles within the fluid into patterns that correspond to the locations where the forces from these acoustic waves are minimized (Figure 1d).

Figure 1. Acoustic patterning in microchannels. 

While it is straightforward to compute the acoustic field that will result from a particular channel shape, the reverse is far more difficult: designing a channel shape to produce a desired field is a nontrivial task for anything but simple grid-like patterns. Because the solution space is effectively unbounded, analytical approaches are not feasible.

This new workflow uses a large collection of simulated results (of randomized shapes) and deep learning to overcome this limitation. My colleagues and I first solved the forward problem by simulating pressure fields from known shapes in MATLAB. We then used the results to train a deep neural network to solve the inverse problem: identifying the microchannel shape needed to produce a desired acoustic field pattern.

Solving the Forward Problem: Simulating Pressure Fields

In earlier work, our team had developed a simulation engine in MATLAB that solves for the pressure field given a specific channel geometry using the Huygens-Fresnel principle, which holds that any point on a plane wave is a point source of spherical waves (Figure 2). 
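The core of that simulation engine can be sketched as a discrete Huygens-Fresnel summation: each point along the channel wall is treated as a point source of spherical waves, and the complex contributions are superposed on a 2D grid. The article's engine is written in MATLAB; the sketch below is an illustrative Python/NumPy version with assumed names and a simplified 1/r amplitude decay, not the authors' actual code.

```python
import numpy as np

def pressure_field(sources, grid_x, grid_y, wavelength=1.0):
    """Superpose point sources (Huygens-Fresnel principle) on a 2D grid.

    sources: (N, 2) array of point-source positions along the channel wall.
    Returns the complex pressure at each grid point; amplitudes use an
    illustrative 1/r decay for an outgoing spherical wave.
    """
    k = 2 * np.pi / wavelength                # wavenumber
    X, Y = np.meshgrid(grid_x, grid_y)        # shape (len(grid_y), len(grid_x))
    field = np.zeros(X.shape, dtype=complex)
    for sx, sy in sources:
        r = np.hypot(X - sx, Y - sy)          # distance to this source
        r = np.maximum(r, 1e-6)               # avoid division by zero
        field += np.exp(1j * k * r) / r       # outgoing wave contribution
    return field

# Point sources discretizing a straight wall segment at y = 0
wall = np.column_stack([np.linspace(-1, 1, 50), np.zeros(50)])
x = np.linspace(-1, 1, 151)
y = np.linspace(0.1, 2.1, 151)
p = np.abs(pressure_field(wall, x, y))        # pressure magnitude field
```

The interference of the per-source terms is what produces the pressure minima and maxima that trap particles.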

Figure 2. Acoustic pressure field generated for a specific channel geometry.

The simulation engine relies on a variety of matrix operations that execute efficiently in MATLAB, so each simulation takes only a fraction of a second to run. We needed to simulate tens of thousands of unique shapes and their corresponding 2D pressure fields, however, so we accelerated the process by running the simulations in parallel on a multicore workstation with Parallel Computing Toolbox™.
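The dataset-generation pattern is embarrassingly parallel: each randomized shape is simulated independently. The article uses MATLAB's Parallel Computing Toolbox for this; the Python sketch below illustrates the same map-over-shapes pattern with a thread pool and a placeholder simulation function (the names and the stand-in body are assumptions, not the authors' code).

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def simulate_one(seed):
    """Stand-in for one forward simulation: random shape -> pressure field.

    In the article this step is the MATLAB Huygens-Fresnel engine; here the
    body is a deterministic placeholder keyed on the seed.
    """
    local = np.random.default_rng(seed)
    shape_params = local.uniform(-1, 1, size=10)          # randomized shape
    field = np.outer(np.sin(shape_params), np.cos(shape_params))
    return shape_params, field

# MATLAB's parfor distributes loop iterations across workers; an executor
# map plays the same role in this sketch.
with ThreadPoolExecutor(max_workers=4) as ex:
    dataset = list(ex.map(simulate_one, range(100)))      # (shape, field) pairs
```

Because the per-shape simulations share no state, the speedup scales roughly with the number of workers until memory bandwidth becomes the bottleneck.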

Once we had the data we needed, we used it to train a deep learning network to infer a channel shape from a given pressure field, essentially reversing the roles of input and output.

Training a Deep Network for the Inverse Problem

First, we thresholded the simulated pressure field values to speed up the training process. This produced 151 x 151 2D matrices of ones and zeroes, which we flattened into 1D vectors to serve as input to the deep learning network. To minimize the number of output neurons, we used a Fourier coefficient representation that captured the channel shape outline (Figure 3).
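The input preprocessing described above can be sketched in a few lines: binarize the field against a threshold, then flatten the 151 x 151 matrix into a vector of length 22,801. This is a Python/NumPy illustration with an assumed threshold rule (a fraction of the field maximum); the article does not specify the exact threshold used.

```python
import numpy as np

def field_to_input(field, threshold=0.5):
    """Binarize a 151x151 pressure field and flatten it to a 1D vector.

    Pixels at or above threshold * max become 1, the rest 0; the assumed
    threshold rule is for illustration only.
    """
    binary = (field >= threshold * field.max()).astype(np.float32)
    return binary.ravel()                     # length 151 * 151 = 22801

# Example on a synthetic field
field = np.abs(np.random.default_rng(1).standard_normal((151, 151)))
net_input = field_to_input(field)
```

Thresholding collapses the continuous pressure values to a binary pattern, which simplifies what the network has to learn.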

Figure 3. Fourier series approximation of an equilateral triangle rotated 20° with (from left to right) 3, 10, and 20 coefficients.
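The Fourier outline representation works by describing the channel boundary as a radius function r(θ) and keeping only its first few Fourier coefficients, as in Figure 3. Below is a hedged Python/NumPy sketch of this idea, tested on an equilateral-triangle outline; the function names and sampling choices are assumptions for illustration.

```python
import numpy as np

def fourier_coeffs(radii, n_coeffs):
    """Truncated Fourier description of a closed outline sampled as r(theta)."""
    c = np.fft.rfft(radii) / len(radii)       # normalized DFT coefficients
    return c[:n_coeffs]

def reconstruct(coeffs, n_points):
    """Rebuild r(theta) from the truncated coefficients (partial Fourier sum)."""
    theta = np.linspace(0, 2 * np.pi, n_points, endpoint=False)
    r = np.full(n_points, coeffs[0].real)     # DC term (mean radius)
    for k, ck in enumerate(coeffs[1:], start=1):
        r += 2 * (ck.real * np.cos(k * theta) - ck.imag * np.sin(k * theta))
    return r

# Radius function of an equilateral triangle with inradius 1
theta = np.linspace(0, 2 * np.pi, 360, endpoint=False)
tri = 1.0 / np.cos((theta % (2 * np.pi / 3)) - np.pi / 3)

# More coefficients give a closer approximation of the outline
err3 = np.sqrt(np.mean((reconstruct(fourier_coeffs(tri, 3), 360) - tri) ** 2))
err20 = np.sqrt(np.mean((reconstruct(fourier_coeffs(tri, 20), 360) - tri) ** 2))
```

Because the truncated Fourier sum is the L2-optimal projection, the reconstruction error shrinks monotonically as coefficients are added, which is exactly the progression Figure 3 shows.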

We built the initial network using the Deep Network Designer app and refined it programmatically to balance accuracy, versatility, and training speed (Figure 4). We trained the network using an adaptive moment estimation (ADAM) solver on an NVIDIA® Titan RTX GPU.
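The ADAM solver mentioned above maintains bias-corrected moving averages of the gradient and its square, and scales each parameter's step accordingly. The article trains with MATLAB's built-in solver; the minimal NumPy sketch below shows only the update rule itself on a toy quadratic objective, not the authors' training code.

```python
import numpy as np

def adam_step(param, grad, m, v, t, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
    """One ADAM update: bias-corrected first and second gradient moments."""
    m = b1 * m + (1 - b1) * grad              # first-moment moving average
    v = b2 * v + (1 - b2) * grad ** 2         # second-moment moving average
    m_hat = m / (1 - b1 ** t)                 # bias correction (step t >= 1)
    v_hat = v / (1 - b2 ** t)
    param = param - lr * m_hat / (np.sqrt(v_hat) + eps)
    return param, m, v

# Toy example: minimize f(w) = ||w||^2, gradient 2w
w = np.ones(4)
m = np.zeros(4)
v = np.zeros(4)
for t in range(1, 501):
    w, m, v = adam_step(w, 2 * w, m, v, t, lr=0.05)
```

The per-parameter scaling by the second moment is what makes ADAM robust to the wide range of gradient magnitudes that arise when training on binarized field inputs.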

Figure 4. The fully connected, feedforward network with four hidden layers.

Verifying the Results

To verify the trained network, we used it to infer a channel geometry from a given pressure field and used that geometry as input to the simulation engine to reconstruct the pressure field. We then compared the original and generated pressure fields. The pressure minima and maxima within the two fields matched each other closely (Figure 5).
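A simple way to quantify how closely the minima of the original and reconstructed fields match is to binarize the low-pressure regions of each field and compute their overlap (a Jaccard index). The metric below is an illustrative Python sketch with an assumed quantile threshold; the article does not state which comparison metric was used.

```python
import numpy as np

def minima_overlap(field_a, field_b, quantile=0.1):
    """Fraction of low-pressure pixels shared by two fields (Jaccard index).

    Pixels below the given quantile of each field are treated as pressure
    minima, i.e., the regions where particles aggregate.
    """
    low_a = field_a <= np.quantile(field_a, quantile)
    low_b = field_b <= np.quantile(field_b, quantile)
    union = np.logical_or(low_a, low_b).sum()
    return np.logical_and(low_a, low_b).sum() / union

# Example: a field compared against a slightly perturbed copy of itself
rng = np.random.default_rng(2)
original = rng.random((151, 151))
reconstructed = original + 0.01 * rng.random((151, 151))
score = minima_overlap(original, reconstructed)   # close to 1 for a good match
```

A score near 1 indicates that the inferred geometry reproduces the trapping regions of the target field; identical fields score exactly 1.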

Figure 5. Workflow for verifying the deep learning network.

Next, we performed a number of real-world tests. To indicate the regions where we wanted particles to aggregate, we drew customized images with Microsoft® Paint. These included a variety of single- and multiline images that would have been difficult to produce without our technique. We then used the trained network to infer the channel geometries needed to produce these defined regions. Finally, with the help of our partners, we fabricated a number of microfluidic devices based on the inferred geometries. We injected 1 μm polystyrene particles suspended in fluid into the shaped channels of each device and applied a SAW. The results showed the particles aggregating along the regions indicated in our custom-made images (Figure 6).

Figure 6. Bottom: Regions drawn in Microsoft Paint (purple) superimposed on the simulated acoustic field needed to aggregate particles in those regions. Top: Resulting patterns of suspended polystyrene particles in a fabricated microfluidic device.

Transitioning to the Cloud

As we look to the next stage of this project, we are updating our deep learning network to use images of acoustic fields as input and produce images of channel shapes as output, rather than a flattened vector and Fourier coefficients, respectively. The hope is that this change will enable us to use channel shapes that are not easily defined by a Fourier series and that can vary with time. However, it will require a much larger dataset for training, a more complex network architecture, and significantly more computing resources. As a result, we are transitioning the network and its training data to the cloud.

Fortunately, the MathWorks Cloud Center provides a convenient platform to quickly spin up and close down instances of high-performance cloud computing resources. One of the more irksome aspects of conducting scientific research in the cloud is the interaction with instances, which involves moving our algorithms and data between the cloud and our local machine. MATLAB Parallel Server™ abstracts the more complex aspects of cloud computing, enabling us to run locally or in the cloud with a few simple menu clicks. This ease of use lets us focus on the scientific problem rather than on the tools needed to tackle it. 

Using MATLAB with NVIDIA GPU-enabled Amazon Web Services instances, we plan to train the updated network with data stored in Amazon® S3™ buckets. We can then use the trained network on local workstations to make inferences (which do not require high-performance computing) and experiment with different acoustic field patterns. This work will give us a baseline for other physics-informed machine learning projects.


The author would like to acknowledge the contributions of David J. Collins, Richard O’Rorke, Mahnoush Tayebi, Ye Ai, and John Williams to this project.

[1] Collins, D. J., O’Rorke, R., Devendran, C., Ma, Z., Han, J., Neild, A. & Ai, Y. “Self-Aligned Acoustofluidic Particle Focusing and Patterning in Microfluidic Channels from Channel-Based Acoustic Waveguides.” Phys. Rev. Lett. 120, 074502 (2018).

About the Author

Sam Raymond is a postdoctoral scholar at Stanford University, having completed his Ph.D. in the Center for Computational Science and Engineering (CCSE) at MIT. His research interests include physics-informed machine learning and the application of high-performance computing, deep learning, and meshfree methods to solving partial differential equations that model real-world phenomena.

Published 2020
