Deep Learning and Machine Learning for Signal Processing Applications - MATLAB

    Deep Learning and Machine Learning for Signal Processing Applications

    Overview

    Deep Learning and Machine Learning are powerful tools to build applications for signals and time-series data across a broad range of industries. These applications range from predictive maintenance and health monitoring to financial portfolio forecasting and advanced driver assistance systems.

    In this session, through detailed examples, we will showcase several techniques and apps in MATLAB for building predictive models for real-life applications. We will cover how to build your signal datasets, label your signals using apps, and preprocess the data. We will explore various feature extraction techniques that help to create robust and accurate AI models. We will also examine the key types of networks used for deep learning, how they are applied, and how the trained models can be deployed on embedded hardware.

    Highlights

    • Easily manage signal datasets using datastores
    • Using the Signal Labeler and Signal Analyzer apps for AI workflows
    • Feature extraction techniques, including automated feature extraction such as wavelet scattering, and time-frequency representations
    • Acceleration of training using GPUs and deployment on embedded hardware like Raspberry Pi

    About the Presenter

    Esha Shah is a Product Manager at MathWorks focusing on the Signal Processing and Wavelet toolboxes. She supports MATLAB users focusing on advanced signal processing and AI workflows. Before joining MathWorks, she received her Master’s in Engineering Management from Dartmouth College and Bachelor’s in Electronics and Telecommunication Engineering from Pune University, India.

    Recorded: 24 Feb 2021

    Hi, everyone. Welcome to the session on Deep Learning and Machine Learning for Signal Processing Applications. My name is Esha Shah, and I'm a Product Manager at MathWorks, focusing on Signal Processing and Wavelets. Now, deep learning is a key technology that's driving the current AI trend. Deep learning is a subset of machine learning, and both these technologies are increasingly being used for analysis of signal and time-series data. At MathWorks, we are seeing customers successfully apply AI to many different applications in research and industry, from biomedical signal processing to seismic analysis, to anomaly detection, and many others.

    In this session, I will go over the general AI workflow for signal data. And then, using three examples, I will demonstrate how you can get started with using AI for your applications. Let's talk about the AI workflow. A simplified view of the workflow results in three main stages: data preparation, which includes data collection, generation, cleaning, labeling, and preprocessing; building and training models; and ultimately, deployment into the field. I will now go through all three parts of the workflow and talk about some of the challenges within each stage and how MATLAB can help address these challenges.

    Data preparation is one of the most critical ingredients to success. However, data can be very noisy and unstructured because it may be coming in from different sources. You may also struggle with finding sufficient labeled data. All these challenges can make the process really hard and time consuming. There are various tools and apps in MATLAB that you can use to do data preparation more effectively.

    Now, data collection may not always be possible, as it may be too expensive, or you may have some data but it is not sufficient. In this case, you can look at synthetic data generation or augmentation of your existing data. You can generate synthetic data through simulations and using deep learning models. You can also create data for more specific applications like communications, radar, and others. So you can generate various wireless waveforms in MATLAB and add channel impairments to communication signals. You can generate radar returns for moving objects. And if you're working with audio data, you can create data by converting text to speech, or augment data by adding various audio effects.

    The next challenge could be finding labeled data or labeling your own data. The Signal Labeler app can be used to label signals, regions, and points manually and automatically. The Signal Analyzer app can be used to analyze, visualize, and preprocess signals without having to write any code. This is very useful for understanding your signals and finding the best preprocessing techniques.

    Another key step of data preparation is feature extraction. MATLAB and various toolboxes are very useful here. Features can be extracted in the time domain; these features could be signal patterns, signal envelopes, peaks, and others. In the frequency domain, the features may come from spectral analysis, bandwidth measurements, or other such techniques. Time-frequency maps are also an excellent way to extract important features.

    Various toolboxes also provide application-specific algorithms in audio processing, communications, navigation, sensor fusion, and others. Extracting features reduces the dimensionality and variability of the data and ensures that the AI models are learning the relevant features. Now, moving to AI model building and training. Selecting the right type of model and technique for your application requires an understanding of the different types of models. You can choose to use machine learning models; these may be supervised, like SVMs, or unsupervised, like clustering.

    Machine learning models are generally simpler to interpret and understand. You may also choose to use deep learning models and build them from scratch. Deep learning models like CNNs and LSTMs have many layers, each of which performs a specific task. Another approach to deep learning is transfer learning, where you leverage a model that has already been trained for one task and use it for other tasks. I will demonstrate all of these techniques in the examples.

    The key thing to understand is that the model and technique selection depends on many different parameters. Understanding the trade-off between these parameters can help you select the best technique for your application. So if you have a lot of data but less time to spend on building a model, you can choose to do deep learning or transfer learning instead of building a machine learning model with manual feature extraction.

    On the other hand, if interpretability and a clear understanding of the model is important to your application, you can choose to go with a Machine Learning Model that could have a slightly lower predictive power. Now, our aim is to make it easy for you to get started with and use any of these models with MATLAB. We also know that the broader deep learning community is incredibly active and prolific. And new models are coming out all the time. So you have the ability to bring in the models from other platforms like TensorFlow, PyTorch, et cetera using our importers.

    You can also leverage any computing resources that you may have access to, like multiple CPUs, GPUs, or GPUs on the cloud, without needing to rewrite any code to accelerate the training process. And finally, coming to deployment. Once your model is ready, you can deploy the code to any processor or enterprise system. We have a unique code generation framework that allows models developed in MATLAB to be deployed anywhere without having to rewrite the original model. This gives you the ability to test and deploy the entire system, from preprocessing to feature extraction to prediction. Now that we have a clear overview of machine learning and deep learning for signal processing applications in MATLAB, let's dive into the examples.

    The first example we will look at is segmentation of an ECG signal, that is, an electrocardiogram signal. This is what a typical ECG signal looks like. Each waveform in the signal has three regions of interest: the P wave, the QRS complex, and the T wave. And after segmentation, this is the output we need: the three main regions have to be identified. The values and patterns of these regions are very important because they can be used for detection of arrhythmia and other cardiac conditions.

    The data set that we are using is openly available and has 210 ECG signals, each nearly 15 minutes long and the data has been labeled by a cardiologist. In this example, we will look at the entire AI workflow. So we will look at how to prepare data and extract features and understand how this helps. We will look at how to build a Deep Learning Model from scratch. And finally, how the model can be deployed on Raspberry Pi.

    To figure out which model to use, I went through the documentation examples and some research papers. The LSTM, or Long Short-Term Memory network, which is a recurrent network, seems to be the best fit, as it works well for time-series data. When you are building your network from scratch, looking at the literature and MATLAB documentation examples can be a good way to understand which model to use and what layers are needed for it. These are the general layers that the LSTM network has.

    So now let's go to the MATLAB code. Here, we're starting with a labeled data set which we are going to use to train a network and verify with test data. But if you are working with unlabeled data, you can use the Signal Labeler to label the training data either manually or using custom functions. It helps make this process easier.

    Now, let's load the data set first. We will first see how the data is saved and whether any preprocessing is required before we can use it. Each signal here is quite long, with nearly 15 minutes of data. The labels for the data are stored in the form of ROI limits and corresponding labels. But here, as we are doing a sequence-to-sequence classification, we need a label on each sample point in each region. We do this using a signal mask, and then we plot the signal. Samples that don't belong to these regions are labeled as n/a.
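
    As an illustration, here is a minimal sketch of how ROI labels can be turned into a per-sample categorical mask with the signalMask object; the file and variable names (ecg1.mat, ecgSignal, signalRegionLabels) are placeholders, not the actual data set.

        load('ecg1.mat', 'ecgSignal', 'signalRegionLabels')  % placeholder file and variable names
        m = signalMask(signalRegionLabels);                  % table of ROI limits and region labels
        mask = catmask(m, numel(ecgSignal));                 % one categorical label per sample
        mask = addcats(mask, "n/a");                         % samples outside every region ...
        mask(isundefined(mask)) = "n/a";                     % ... get the "n/a" class
        plotsigroi(m, ecgSignal)                             % visualize the labeled regions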

    Now, we divide the data into training and test sets. Since we saw that the signals are long, we need to resize them to allow the network to train efficiently. And we also need to apply the signal mask to all the data. After transforming the data, we have signals of 5,000 sample points and corresponding labels. We can now create a network in the Deep Network Designer app. The app provides pretrained networks and templates that you can use as a starting point.

    We will use the LSTM network template. The layers are displayed here, and we can set the different values for each layer. For the input layer, as we are feeding in a single signal at a time, the input size is 1. And I'm setting the number of hidden units in the LSTM layer to 200. This is because an entire waveform in the ECG signal, with the P, QRS, and T regions, lasts about 200 samples in our data. The classification layer should have its output size set to the number of output classes, which is 4 here: P, QRS, T, and n/a.
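
    A minimal sketch of the layer stack described above, using the values from the narration (input size 1, 200 hidden units, 4 classes); the actual template in Deep Network Designer may differ slightly.

        layers = [ ...
            sequenceInputLayer(1)                      % one ECG sample per time step
            lstmLayer(200, 'OutputMode', 'sequence')   % ~200 samples span one full waveform
            fullyConnectedLayer(4)                     % P, QRS, T, and n/a
            softmaxLayer
            classificationLayer];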

    Once this is done, the Analyze Network option lets you verify and check for any errors in the network. I'm not seeing any here, so I will go ahead and export the network into the workspace for training. We then have to set the training options. I've set these values using the documentation examples again. And then we train the network. If you have a GPU available, the trainNetwork function will automatically detect and use it. I have used a GPU here to accelerate the training process, but it will still take a few minutes because of the size of the data.
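
    For reference, a hedged sketch of the training step, assuming the resized signals and label masks are stored in cell arrays trainSignals and trainLabels (placeholder names); the exact option values used in the session are not shown on screen.

        options = trainingOptions('adam', ...
            'MaxEpochs', 10, ...
            'MiniBatchSize', 50, ...
            'InitialLearnRate', 0.01, ...
            'Plots', 'training-progress', ...
            'ExecutionEnvironment', 'auto');   % uses a GPU automatically when one is available

        net = trainNetwork(trainSignals, trainLabels, layers, options);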

    So I have sped up this training progress. The training graph shows the accuracy on top, which improves as the network trains. The loss, which is a penalty for misclassification, is plotted below, and it drops through the iterations.

    Once the network is trained, we can check the accuracy on our test data. The accuracy for the P, QRS, and T waves is very poor here, so using the raw data directly in our network was not optimal. Let's see if feature extraction can help. We will use the Signal Analyzer for this, with a sample signal from our data. The signal can be easily studied in the time, spectral, and time-frequency domains in the app.

    Since I can see a trend in the data, I will use the built-in detrending first to preprocess the signal. I'll create a duplicate signal and try out various preprocessing techniques on that. The detrending helped a little. Now, before doing any further preprocessing, I looked at the characteristics of ECG signals. The P, QRS, and T regions of ECG signals generally have frequency content between 0.5 and 40 Hz. Physiological signals often have noise below 0.5 Hz that comes from the constant breathing motion of the patient as the signal is being captured. There may also be noise beyond 40 Hz that is typically caused by movement of the patient as the signal is captured.

    So removing this noise can help us focus more specifically on the waveforms of interest. So I've created a bandpass filter from 0.5 Hz to 40 Hz, and we can see the output signal here. The final signal definitely highlights the key regions more clearly. We can also use time-frequency maps, which sometimes capture the features of the signal more clearly. This is the short-time Fourier transform, but I want to use the Fourier synchrosqueezed transform, which is another time-frequency technique that gives one spectral estimate per sample, which makes it useful for sequence-to-sequence classification.
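
    A short sketch of the preprocessing described above; the 250 Hz sample rate and the variable names are assumptions for illustration.

        fs = 250;                                             % assumed sample rate
        ecgDetrended = detrend(ecgSignal);                    % remove the slow baseline trend
        ecgFiltered  = bandpass(ecgDetrended, [0.5 40], fs);  % keep the 0.5-40 Hz band of interest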

    So we will use the transform over the frequency range of interest, from 0.5 to 40 Hz. Now, we have about 40 spectral values for each sample, so we have to edit our network for this. We only change the input size and export the network again. Here, to save time, I'm going to load the network I had trained earlier, and I have the training results here. The accuracy for the training set is above 90%, which is already higher than using just the raw data.
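
    A rough sketch of the FSST feature extraction, assuming the filtered signal and 250 Hz sample rate from the previous step; the window choice and the use of magnitudes are assumptions (the features could equally be split into real and imaginary parts).

        [s, f] = fsst(ecgFiltered, fs, kaiser(128));   % one spectral estimate per sample
        features = abs(s(f >= 0.5 & f <= 40, :));      % roughly 40 values per sample in the 0.5-40 Hz band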

    And then when we use it to classify the test set, the accuracy for all the regions has improved significantly. This demonstrates the importance of feature extraction when you want to use AI for signal processing applications. Now, up to this point, we were using a data set that we had available offline. But we can also connect hardware to collect live data and perform the segmentation.

    Here, we are using the BITalino board, which is used to capture various physiological signals, and my colleague is hooked up to the sensor leads. We use the BITalino support package to bring the signal into a MATLAB app, and the algorithm performs live segmentation of the ECG signal. The last step in this workflow is deploying this example to embedded hardware.

    I chose the Raspberry Pi here for a couple of reasons: one, because the Raspberry Pi board is easily accessible, and two, because it's based on an ARM Cortex-A processor, similar to many other processors out there. MATLAB Coder enables you to generate code and deploy your application to any ARM Cortex-A based processor that supports Neon instructions. You get optimal performance because the generated code calls into ARM's Compute Library. Here, we will focus on hardware-in-the-loop testing and validation.

    This is our MATLAB algorithm that we want to deploy. Here, we take in the raw signal. We also load the pretrained network here. Then, we perform some preprocessing to resize the signals and extract features using the FSST. Then we perform the prediction and some post-processing to get the final output. And this is the script that I will use to generate and deploy the code on the Raspberry Pi. I have saved the details of the Raspberry Pi board to connect to; I use the Raspberry Pi support package to do this.
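
    A rough sketch of what such an entry-point function might look like; the function name, MAT-file name, and the omitted resizing and post-processing steps are assumptions, and coder.loadDeepLearningNetwork is used so the network loads correctly in generated code.

        function labels = segmentECG(ecgSignal, fs)   % hypothetical entry-point function
        %#codegen
        persistent net
        if isempty(net)
            net = coder.loadDeepLearningNetwork('trainedSegmentationNet.mat');  % placeholder file
        end
        [s, f]   = fsst(ecgSignal, fs, kaiser(128));   % FSST features, one column per sample
        features = abs(s(f >= 0.5 & f <= 40, :));      % keep the 0.5-40 Hz band
        labels   = classify(net, features);            % per-sample class predictions
        end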

    Here, we verify the generated code with processor-in-the-loop (PIL) testing. So we can use MATLAB as a test bench to pass the input to the application on the target and get the results back into MATLAB. So I'm setting the verification mode to PIL. You can also create a standalone executable. I will set up the configuration properties here. And then, using the codegen command, I will generate the code and deploy it on the Raspberry Pi board.
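
    A sketch of the PIL configuration and codegen call, under these assumptions: the hypothetical entry-point function from the previous sketch, the ARM Compute Library for the deep learning layers, and board credentials saved beforehand with the Raspberry Pi support package.

        cfg = coder.config('lib');                      % generate a static library
        cfg.TargetLang = 'C++';
        cfg.VerificationMode = 'PIL';                   % run processor-in-the-loop from MATLAB
        cfg.DeepLearningConfig = coder.DeepLearningConfig('arm-compute');  % ARM Compute Library
        cfg.Hardware = coder.hardware('Raspberry Pi');  % uses the saved board address and login

        % Generate code for the (hypothetical) entry point with a variable-length signal input
        codegen -config cfg segmentECG -args {coder.typeof(0, [Inf 1], [1 0]), 250} -report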

    Once codegen is complete, we get this MEX file, and I can use that to run the application on the Raspberry Pi. Using a test signal, we get the classification results, and the results look similar to what we expect here. Throughout the example, we did not have to write any C or C++ code. However, if you would like to use any custom libraries, you can always manually integrate the generated code and compile it into a bigger application.

    So with MATLAB, you can easily generate code for signal processing and AI functions. You can also generate CUDA code for deployment on GPU boards. Now, I want to take a few minutes and talk about wavelets and how they help in AI tasks. A wavelet is a waveform of limited duration that has an average value of zero. Similar to how Fourier analysis decomposes signals into sinusoidal components, wavelet analysis decomposes signals into wavelet components. However, unlike infinite sine waves, wavelets are localized in time and frequency.

    So how is this useful? Real-world signals are generally non-stationary and have slowly varying trends and some transients. Oftentimes, capturing these transients is important, as they carry key information, and wavelets are particularly good at representing such data. By scaling and shifting the wavelets, the transients in the data can be captured. This makes wavelets a very useful tool for AI-based applications like anomaly detection, health monitoring, analyzing biomedical signals, financial analysis, and many others, where the data is noisy and non-stationary.

    Many different wavelet techniques can be used for AI, like the CWT, which is a time-frequency map like the short-time Fourier transform. But the CWT provides better time resolution at higher frequencies and better frequency resolution at lower frequencies. Wavelet scattering is another useful technique to get low-variance features from real-valued data. Wavelet statistics, multiresolution techniques, and many other techniques can be used as well.

    I'll show you how we can use some of these techniques in the next example. Our next example is identifying and classifying cracked and uncracked pavements. Detecting and finding cracks in pavement is an area of a lot of research, as this can help reduce cost and increase the overall life of the pavement. Now, sometimes image analysis is used to find cracks. However, this is not easy, because it requires a lot of specialized hardware to capture the data.

    Another approach is to use accelerometers installed in cars and use the signals from those to identify when the car has crossed over a cracked pavement. We're using an open data set that has 327 samples, and the data is collected from vehicles moving at different speeds on pavements that have varying sizes of cracks. In this example, we will see feature extraction with the CWT and transfer learning.

    Let me quickly go over what transfer learning is. Here, you start with a pretrained network like AlexNet that has been trained on a very large data set of millions of images and has many layers. The initial layers in this network generally perform low-level feature extraction that can be useful for any type of data, and the last few layers learn the task-specific features. So we replace the last few layers and retrain the network with new data. The retrained network can then be used and verified using our test data. This technique proves to be a great starting point when you don't have a lot of data. Starting with transfer learning also allows the network to train faster and perform better.

    Now let's go to the code for our example. We will first load the raw data. Since 66% of the data is of uncracked pavements, there is a bias already. So even if our network marked all the signals as uncracked, we would get 66% accuracy. Also, when we look at the data itself, the signals are of a few different lengths, from 369 samples to 650 samples. So we resize the signals by padding and trimming some of them.

    Now, I'm going to plot some of the sample signals. I have plotted a cracked and an uncracked signal. In the time domain, it is difficult to know what the difference really is, and this might be especially true when the cracks are small. So now, we can perform the CWT and display the scalogram. The scalogram for the cracked signal shows some content at the low frequencies, and the uncracked signal does not show this. Differentiating between these two signals is much easier with the scalograms.
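
    A brief sketch of how one accelerometer segment could be converted into a scalogram image for the network; the variable names, colormap, output file, and image size (224-by-224, GoogLeNet's input size) are assumptions.

        fb  = cwtfilterbank('SignalLength', numel(segment), 'SamplingFrequency', fs);
        cfs = abs(wt(fb, segment));                            % CWT magnitude coefficients
        img = ind2rgb(im2uint8(rescale(cfs)), jet(128));       % map coefficients to an RGB image
        imwrite(imresize(img, [224 224]), 'cracked_001.jpg');  % placeholder file name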

    Next, we will perform the CWT on all the data and save the scalograms as images. Now, the scalograms that have been saved as images can be stored in an image datastore, and we divide the data for training and testing. Here, we use the pretrained GoogLeNet network as a starting point for transfer learning. And I'm going to plot all the layers of the network now.

    Clearly, the network has many layers, and creating such a deep network from scratch would not be simple. But here, we will focus on the last few layers, and we will replace those layers. I can do all of these steps in the Deep Network Designer app again, but I want to show what the code for these layers looks like. So we replace the dropout layer with a new one; this layer is added to prevent overfitting. Then, for the fully connected layer, we set the correct number of outputs, and we also add a new softmax and classification layer to the network, as these cannot be edited in the original network. Again, we set the training options using the Deep Learning documentation.
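
    A condensed sketch of those replacements in code, assuming the standard layer names of MATLAB's GoogLeNet model ('pool5-drop_7x7_s1', 'loss3-classifier', 'prob', 'output'); the dropout probability, training options, and datastore name are illustrative.

        net    = googlenet;
        lgraph = layerGraph(net);

        lgraph = replaceLayer(lgraph, 'pool5-drop_7x7_s1', dropoutLayer(0.6, 'Name', 'new_dropout'));
        lgraph = replaceLayer(lgraph, 'loss3-classifier', fullyConnectedLayer(2, 'Name', 'new_fc'));
        lgraph = replaceLayer(lgraph, 'prob',   softmaxLayer('Name', 'new_softmax'));
        lgraph = replaceLayer(lgraph, 'output', classificationLayer('Name', 'new_classoutput'));

        options = trainingOptions('sgdm', 'MiniBatchSize', 15, 'MaxEpochs', 20, ...
            'InitialLearnRate', 1e-4, 'Plots', 'training-progress');
        trainedNet = trainNetwork(imdsTrain, lgraph, options);   % imdsTrain: scalogram image datastore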

    Now that the network is ready, we can train it. Here, I'm going to load my trained network from earlier and show you the training results. Then we can check the accuracy on our test set: we have only one misclassification, with zero false negatives. This is a fantastic result, given that we were using a biased data set. Also, this example is very short, with very few lines of code. So the CWT and transfer learning are very easy to use if you're new to the whole area of deep learning, or if you don't have much time to develop a new model.

    Now, coming to our final example. Here, we want to perform classification of human activities like walking, climbing up stairs, lying down, et cetera. The data is captured from sensors on mobile phones, and the data set is available on mathworks.com. In this example, we will perform automatic feature extraction with wavelet scattering and find the best machine learning model.

    So let's understand how wavelet scattering works. A typical CNN, or convolutional neural network, has three types of layers: a convolutional layer, a nonlinear layer, and an averaging layer. These layers are repeated many times to create a deep network. The training process in deep learning is used to learn the weights of these layers. Researchers found that, often, after training, the filters resembled wavelets, which led to the question: why not use wavelets directly in the first place? And this is exactly what wavelet scattering is. Just like in a CNN, there are convolution, nonlinear, and averaging layers, but instead of learning the weights, the weights are fixed to wavelets. This means no training is required.

    Wavelet scattering can be performed with a couple of lines of code in MATLAB. To set up wavelet scattering, you only need to pass the signal length and sampling frequency. The features extracted from wavelet scattering can then be passed to AI models.
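
    In code, that setup amounts to something like the following sketch, where N, fs, and x (a signal segment) are placeholders.

        sf = waveletScattering('SignalLength', N, 'SamplingFrequency', fs);   % scattering network
        features = featureMatrix(sf, x);   % scattering coefficients: one row per path, columns over time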

    So let's look at the code for this example. We will first load the raw data. The unbuffered data has one long signal containing all the activities. So now, we will perform wavelet scattering. As in the pseudocode, we just need to define the scattering framework. I pass the scattering framework to a function which extracts the features and resizes the data. I'm loading the precomputed features for now, but the feature extraction process doesn't take very long.

    In the training and test sets, there are nearly 500 wavelet features for each segment; that is a lot of features. So we can now use the fscmrmr function, which ranks the 500 features for classification using the Minimum Redundancy Maximum Relevance algorithm. I have then plotted the scores of the first 50 features.

    From this plot, we can see that the score drops off after the first 14 features. So we will use just these 14 features for machine learning. We can now open the Classification Learner app and bring the data and the predictor variables into the app.
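
    A minimal sketch of that ranking and selection step, assuming the scattering features and activity labels are in trainFeatures and trainLabels (placeholder names).

        [idx, scores] = fscmrmr(trainFeatures, trainLabels);   % MRMR ranking of all ~500 features
        bar(scores(idx(1:50)))                                 % scores of the 50 highest-ranked features
        selectedFeatures = trainFeatures(:, idx(1:14));        % keep the top 14 for model training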

    We can look through the different models and choose which to apply to the data. I'm going to apply all the ensemble methods to the data here. The app can quickly train the classifiers because of the significant data reduction. The Bagged Trees model has the highest accuracy. Now, it is also possible to optimize the ensemble classifiers. To do that, I'm going to select the optimizable ensemble classifier. The various parameters that are optimized can be seen here. Then I can run the training.

    After 30 iterations, we have the best model selected with the corresponding parameters. And since the accuracy is higher than the Bagged Trees, I will export this model. I also want to point out that you can optimize across all the different types of models using the fitcauto function as well.
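
    For reference, a hedged sketch of the fitcauto alternative, with the placeholder variable names from earlier and an illustrative iteration budget.

        mdl = fitcauto(selectedFeatures, trainLabels, ...
            'Learners', 'all', ...                             % search across model families
            'HyperparameterOptimizationOptions', struct('MaxObjectiveEvaluations', 30));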

    Now, we can move to testing the model with the test data. This gives us an accuracy of over 97%, with only two misclassifications. With functions like fscmrmr and fitcauto, and apps like the Classification Learner, it is easy to perform machine learning on your data and try various models to find what works best for your application.

    And so that brings us to the end of our webinar. We have seen how MATLAB can be used for the different steps of the workflow. For data preparation, you have apps like Signal Labeler, and Signal Analyzer. We can generate and augment data if less data is available. And many different feature extraction techniques like Wavelet based techniques can be used. For model training, we saw how to build a variety of Machine Learning and Deep Learning Models, and how to use apps to create models without needing to write code.

    We saw acceleration of the training process using GPUs as well. For deployment, we saw how code generation can easily be used to deploy all the code on target hardware like the Raspberry Pi. We used all of these products in the examples shown in the webinar. There are many different resources that you can refer to. All the examples that I have shown today are part of our documentation, and we have a lot of white papers, videos, and other material to help you get started.