parallel pool for deep learning
I am training a network for deep learning.
To speed up the training, I created a parpool (in this case of size 2 instead of 1).
However, the training time remains the same, so it looks like the parpool does not work for training a deep learning network.
I have Parallel Computing Toolbox R2019b (ver 7.1) and Deep Learning Toolbox R2019b (ver 13.0).
Can anyone help me?
Maksym Tymchenko on 10 Mar 2023
This example demonstrates how you can train a single deep learning network using multiple GPUs in parallel:
If you are interested in training multiple deep learning experiments in parallel instead, check out this example:
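One likely cause of the behaviour described in the question: opening a parpool by itself does not make trainNetwork use it. You also have to request parallel execution through the 'ExecutionEnvironment' option of trainingOptions. A minimal sketch, assuming XTrain, YTrain, and layers are already defined:

```matlab
% Opening a pool alone is not enough; trainNetwork defaults to
% 'auto' (single GPU if available, otherwise single CPU thread).
parpool(2);  % optional: trainNetwork opens a pool itself if needed

options = trainingOptions('sgdm', ...
    'ExecutionEnvironment', 'parallel', ...  % use the local parpool
    'MaxEpochs', 10, ...
    'Plots', 'training-progress');

% XTrain, YTrain, and layers are assumed to be defined elsewhere
net = trainNetwork(XTrain, YTrain, layers, options);
```

Note that 'parallel' splits each mini-batch across the pool workers; on a machine with multiple GPUs, 'multi-gpu' may be the simpler choice. On a CPU-only machine the per-worker overhead can cancel out the speedup for small networks, which would also explain seeing no change in training time.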