With just a few lines of MATLAB® code, you can apply deep learning techniques to your work, whether you're designing algorithms, preparing and labeling data, or generating code and deploying to embedded systems.
With MATLAB, you can:
- Create, modify, and analyze deep learning architectures using apps and visualization tools.
- Preprocess data and automate ground-truth labeling of image, video, and audio data using apps.
- Accelerate algorithms on NVIDIA® GPUs, in the cloud, and on data center resources without specialized programming.
- Collaborate with peers using frameworks like TensorFlow, PyTorch, and MXNet.
- Simulate dynamic systems and train their control behavior with reinforcement learning.
- Generate simulation-based training and test data from MATLAB and Simulink® models of physical systems.
See How Others Use MATLAB for Deep Learning
- Semantic segmentation for terrain recognition in hyperspectral satellite data.
- Lidar labeling to verify a radar-based automated driving system.
- Convolutional neural networks trained on CT images to reduce radiation exposure risk.
MATLAB significantly reduces the time required to preprocess and label data sets for signal, image, video, lidar, audio, and text data. Synchronize disparate time series, replace outliers with interpolated values, deblur images, and filter noisy signals. Use interactive apps to label, crop, and identify important features, and built-in algorithms to help automate the process of labeling.
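As a minimal sketch of this preprocessing workflow, the snippet below replaces outliers with interpolated values and smooths a noisy signal; the signal itself and the smoothing window are illustrative assumptions, not part of any shipped example.

```matlab
% Hedged sketch: clean a noisy signal before labeling.
% The synthetic signal and window length are assumptions for illustration.
t = linspace(0, 1, 1000);
x = sin(2*pi*5*t) + 0.2*randn(1, 1000);   % noisy 5 Hz sine (synthetic)
x(200) = 8;                                % inject an outlier

xClean  = filloutliers(x, "linear");           % replace outliers by interpolation
xSmooth = smoothdata(xClean, "gaussian", 25);  % suppress remaining noise
```

Both `filloutliers` and `smoothdata` are built into base MATLAB, so the same pattern applies to time series, audio, and other signal data before labeling.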
Start with a complete set of algorithms and prebuilt models, then create and modify deep learning models using the Deep Network Designer app. Incorporate deep learning models for domain-specific problems without having to create complex network architectures from scratch.
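The same transfer-learning workflow that Deep Network Designer performs interactively can be sketched programmatically. The layer names below match MATLAB's SqueezeNet model, but the five-class target problem is an assumption for illustration.

```matlab
% Hedged sketch: adapt a prebuilt model (SqueezeNet) to a new 5-class task.
net    = squeezenet;            % pretrained model from Deep Learning Toolbox
lgraph = layerGraph(net);

% Replace the final layers so the network outputs 5 classes instead of 1000.
newConv = convolution2dLayer(1, 5, "Name", "newConv");
lgraph  = replaceLayer(lgraph, "conv10", newConv);
lgraph  = replaceLayer(lgraph, "ClassificationLayer_predictions", ...
    classificationLayer("Name", "newOutput"));
```

From here the modified `lgraph` can be trained on your own labeled images without designing an architecture from scratch.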
Use techniques to find the optimal network hyperparameters and Parallel Computing Toolbox™ and high-performance NVIDIA GPUs to accelerate these computationally intensive algorithms. Use visualization tools in MATLAB and techniques like Grad-CAM and occlusion sensitivity to gain insights into your model. Use Simulink to evaluate the impact of your trained deep learning model on system-level performance.
- Interactively Build Experiments to Fine-Tune and Compare Deep Learning Networks (6:06)
- How to Set Up Your Own Deep Learning Experiments (4:08)
- Interactively Modify a Deep Learning Network for Transfer Learning (2:43)
- Train Networks on NVIDIA DGX Systems
- Accelerate Development and Training of Deep Learning Networks with NVIDIA GPU Cloud
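A simple hyperparameter sweep with GPU-accelerated training can be sketched as follows; the training data (`imdsTrain`), network (`lgraph`), and the learning rates tried are assumptions for illustration.

```matlab
% Hedged sketch: sweep the learning rate while letting MATLAB use a
% supported NVIDIA GPU automatically. imdsTrain and lgraph are assumed
% to exist (a labeled image datastore and a layer graph).
learnRates = [1e-2 1e-3 1e-4];
for lr = learnRates
    opts = trainingOptions("sgdm", ...
        "InitialLearnRate", lr, ...
        "MaxEpochs", 10, ...
        "ExecutionEnvironment", "auto", ...  % picks a supported GPU if present
        "Plots", "none");
    % net = trainNetwork(imdsTrain, lgraph, opts);
end
```

For larger sweeps, the Experiment Manager app and Parallel Computing Toolbox™ run these trials in parallel across multiple GPUs.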
Data for accurate models is critical, and MATLAB can generate more data when you don’t have enough of the right scenarios. For example, use synthetic images from gaming engines, such as Unreal Engine®, to incorporate more edge cases. Use generative adversarial networks (GANs) to create custom simulated images.
Test algorithms before data is available from sensors by generating synthetic data from Simulink, an approach commonly used in automated driving systems.
It’s not an either/or choice between MATLAB and open source frameworks. MATLAB allows you to access the latest research from anywhere using ONNX import capabilities, and you can also use a library of prebuilt models, including NASNet, SqueezeNet, Inception-v3, and ResNet-101, to get started quickly. The ability to call Python from MATLAB and MATLAB from Python allows you to easily collaborate with colleagues who use open-source frameworks.
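Both interoperability paths are short in practice. In the sketch below, the ONNX file name is a placeholder, and the Python call simply demonstrates the `py.` syntax with a standard-library function.

```matlab
% Hedged sketch: import an ONNX model exported from another framework.
% "model.onnx" is a placeholder file name, not a shipped asset.
net = importONNXNetwork("model.onnx", "OutputLayerType", "classification");

% Call Python directly from MATLAB with the py. prefix;
% any importable Python module works the same way.
result = py.math.sqrt(16);
```

The reverse direction, calling MATLAB from Python, goes through the MATLAB Engine API for Python.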
Deploy your trained model on embedded systems, enterprise systems, FPGA devices, or the cloud. MATLAB supports automatic CUDA® code generation for the trained network as well as for preprocessing and postprocessing to specifically target the latest NVIDIA GPUs.
When performance matters, you can generate code that leverages optimized libraries from Intel®, NVIDIA, and ARM® to create deployable models with high-performance inference speed. For edge deployment, you can prototype your network on an FPGA and then generate production-ready HDL to target any device.
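CUDA code generation for a trained network can be sketched with GPU Coder™ as below; the entry-point function `myPredict` and its 224-by-224-by-3 input size are assumptions for illustration.

```matlab
% Hedged sketch: generate CUDA code for an entry-point function that
% runs network inference. myPredict is a hypothetical user function
% that loads the trained network and calls predict on its input.
cfg = coder.gpuConfig("lib");
cfg.DeepLearningConfig = coder.DeepLearningConfig("cudnn");
codegen -config cfg myPredict -args {ones(224,224,3,"single")}
```

Swapping the deep learning configuration (for example to TensorRT) retargets the same entry point without changing the MATLAB source.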