TrainingOptionsLM
Description
Use a TrainingOptionsLM object to set training options for the Levenberg–Marquardt (LM) optimizer.
The LM algorithm [1] interpolates between the gradient descent and Gauss–Newton methods and can be more robust for small neural networks. It approximates second-order derivatives using a Jacobian outer product. Use the LM algorithm for regression networks with small numbers of learnable parameters, where you can process the data set in a single batch.
Creation
Create a TrainingOptionsLM object by using the trainingOptions function and specifying "lm" as the first input argument.
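For example, this minimal call creates a TrainingOptionsLM object with the default property values:

options = trainingOptions("lm");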
Properties
LM
MaxIterations — Maximum number of iterations
1000 (default) | positive integer
Maximum number of iterations to use for training, specified as a positive integer.
The LM solver is a full-batch solver, which means that it processes the entire training set in a single iteration.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
InitialDampingFactor — Initial damping factor
0.001 (default) | positive scalar
Initial damping factor, specified as a positive scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
MaxDampingFactor — Maximum damping factor
1e10 (default) | positive scalar
Maximum damping factor, specified as a positive scalar.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
DampingIncreaseFactor — Factor for increasing damping factor
10 (default) | positive scalar greater than 1
Factor for increasing damping factor, specified as a positive scalar greater than 1.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
DampingDecreaseFactor — Factor for decreasing damping factor
0.1 (default) | positive scalar less than 1
Factor for decreasing damping factor, specified as a positive scalar less than 1.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
GradientTolerance — Relative gradient tolerance
1e-5 (default) | positive scalar
Relative gradient tolerance, specified as a positive scalar.
The software stops training when the relative gradient is less than or equal to GradientTolerance.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
StepTolerance — Step size tolerance
1e-5 (default) | positive scalar
Step size tolerance, specified as a positive scalar.
The software stops training when the step that the algorithm takes is less than or equal to StepTolerance.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
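For example, this sketch tightens both stopping tolerances and lowers the iteration cap. The specific values are illustrative only:

options = trainingOptions("lm", ...
    MaxIterations=500, ...       % stop after at most 500 full-batch iterations
    GradientTolerance=1e-6, ...  % stop when the relative gradient is this small
    StepTolerance=1e-6);         % stop when the step size is this small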
Data Formats
InputDataFormats — Description of input data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Description of the input data dimensions, specified as a string array, character vector, or cell array of character vectors.
If InputDataFormats is "auto", then the software uses the formats expected by the network input. Otherwise, the software uses the specified formats for the corresponding network input.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once each. The software ignores singleton trailing "U" dimensions after the second dimension.
For a neural network with multiple inputs net, specify an array of input data formats, where InputDataFormats(i) corresponds to the input net.InputNames(i).
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
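For example, this sketch declares the "CBT" layout described above for a hypothetical sequence array; the sizes and variable name are illustrative only:

X = rand(3,100,20);   % 3 channels, 100 observations, 20 time steps ("CBT")
options = trainingOptions("lm", InputDataFormats="CBT");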
TargetDataFormats — Description of target data dimensions
"auto" (default) | string array | cell array of character vectors | character vector
Description of the target data dimensions, specified as one of these values:
"auto"
— If the target data has the same number of dimensions as the input data, then thetrainnet
function uses the format specified byInputDataFormats
. If the target data has a different number of dimensions to the input data, then thetrainnet
function uses the format expected by the loss function.String array, character vector, or cell array of character vectors — The
trainnet
function uses the data formats you specify.
A data format is a string of characters, where each character describes the type of the corresponding data dimension.
The characters are:
"S"
— Spatial"C"
— Channel"B"
— Batch"T"
— Time"U"
— Unspecified
For example, consider an array containing a batch of sequences where the first, second, and third dimensions correspond to channels, observations, and time steps, respectively. You can specify that this array has the format "CBT" (channel, batch, time).
You can specify multiple dimensions labeled "S" or "U". You can use the labels "C", "B", and "T" at most once each. The software ignores singleton trailing "U" dimensions after the second dimension.
For more information, see Deep Learning Data Formats.
Data Types: char | string | cell
Monitoring
Plots — Plots to display during neural network training
"none" (default) | "training-progress"
Plots to display during neural network training, specified as one of these values:
"none"
— Do not display plots during training."training-progress"
— Plot training progress.
The plot shows the training and validation loss, training and validation metrics
specified by the Metrics
property, and additional information about
the training progress.
To programmatically open and close the training progress plot after training, use the show and close functions with the second output of the trainnet function. You can use the show function to view the training progress even if the Plots training option is specified as "none".
To switch the y-axis scale to logarithmic, use the axes toolbar.
For more information about the plot, see Monitor Deep Learning Training Progress.
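For example, this sketch enables the plot and then reopens it after training; the trainnet call is a placeholder for your own data, network, and loss:

options = trainingOptions("lm", Plots="training-progress");
% [net,info] = trainnet(X,T,net,"mse",options);
% show(info)    % reopen the training progress plot
% close(info)   % close it programmatically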
Metrics — Metrics to monitor
[] (default) | character vector | string array | function handle | deep.DifferentiableFunction object | cell array | metric object
Metrics to monitor, specified as one of these values:
- Built-in metric or loss function name — Specify metrics as a string scalar, character vector, or a cell array or string array of one or more of these names:
Metrics:
"accuracy"
— Accuracy (also known as top-1 accuracy)"auc"
— Area under ROC curve (AUC)"fscore"
— F-score (also known as F1-score)"precision"
— Precision"recall"
— Recall"rmse"
— Root mean squared error"mape"
— Mean absolute percentage error (MAPE)
Loss functions:
"crossentropy"
— Cross-entropy loss for classification tasks."indexcrossentropy"
— Index cross-entropy loss for classification tasks."binary-crossentropy"
— Binary cross-entropy loss for binary and multilabel classification tasks."mae"
/"mean-absolute-error"
/"l1loss"
— Mean absolute error for regression tasks."mse"
/"mean-squared-error"
/"l2loss"
— Mean squared error for regression tasks."huber"
— Huber loss for regression tasks
Note that setting the loss function as "crossentropy" and specifying "index-crossentropy" as a metric, or setting the loss function as "index-crossentropy" and specifying "crossentropy" as a metric, is not supported.
- Built-in metric object — If you need more flexibility, you can use built-in metric objects. When you create a built-in metric object, you can specify additional options such as the averaging type and whether the task is single-label or multilabel.
- Custom metric function handle — If the metric you need is not a built-in metric, then you can specify custom metrics using a function handle. The function must have the syntax metric = metricFunction(Y,T), where Y corresponds to the network predictions and T corresponds to the target responses. For networks with multiple outputs, the syntax must be metric = metricFunction(Y1,…,YN,T1,…,TM), where N is the number of outputs and M is the number of targets. For more information, see Define Custom Metric Function.
- deep.DifferentiableFunction object — Function object with custom backward function. For more information, see Define Custom Deep Learning Operations.
- Custom metric object — If you need greater customization, then you can define your own custom metric object. For an example that shows how to create a custom metric, see Define Custom Metric Object. For general information about creating custom metrics, see Define Custom Deep Learning Metric Object. Specify your custom metric as the Metrics option of the trainingOptions function.
If you specify a metric as a function handle, a deep.DifferentiableFunction object, or a custom metric object and train the neural network using the trainnet function, then the layout of the targets that the software passes to the metric depends on the data type of the targets, the loss function that you specify in the trainnet function, and the other metrics that you specify:
- If the targets are numeric arrays, then the software passes the targets to the metric directly.
- If the loss function is "index-crossentropy" and the targets are categorical arrays, then the software automatically converts the targets to numeric class indices and passes them to the metric.
- For other loss functions, if the targets are categorical arrays, then the software automatically converts the targets to one-hot encoded vectors and then passes them to the metric.
Example: Metrics=["accuracy","fscore"]
Example: Metrics=["accuracy",@myFunction,precisionObj]
ObjectiveMetricName — Name of objective metric
"loss" (default) | string scalar | character vector
Name of objective metric to use for early stopping and returning the best network, specified as a string scalar or character vector.
The metric name must be "loss" or match the name of a metric specified by the Metrics argument. Metrics specified using function handles are not supported. To specify the ObjectiveMetricName value as the name of a custom metric, the value of the Maximize property of the custom metric object must be nonempty. For more information, see Define Custom Deep Learning Metric Object.
For more information about specifying the objective metric for early stopping, see ValidationPatience. For more information about returning the best network using the objective metric, see OutputNetwork.
Data Types: char | string
Verbose — Flag to display training progress information
1 (true) (default) | 0 (false)
Flag to display training progress information in the command window, specified as 1 (true) or 0 (false).
When this property is 1 (true), the software displays this information:
Variable | Description
---|---
Iteration | Iteration number.
TimeElapsed | Time elapsed in hours, minutes, and seconds.
TrainingLoss | Training loss.
ValidationLoss | Validation loss. If you do not specify validation data, then the software does not display this information.
GradientNorm | Norm of the gradients.
StepNorm | Norm of the steps.
If you specify additional metrics in the training options, then they also appear in the verbose output. For example, if you set the Metrics training option to "accuracy", then the information includes the TrainingAccuracy and ValidationAccuracy variables.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
VerboseFrequency — Frequency of verbose printing
50 (default) | positive integer
Frequency of verbose printing, which is the number of iterations between printing to the Command Window, specified as a positive integer.
If you validate the neural network during training, then the software also prints to the command window every time validation occurs.
To enable this property, set the Verbose training option to 1 (true).
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OutputFcn — Output functions
function handle | cell array of function handles
Output functions to call during training, specified as a function handle or cell array of function handles. The software calls the functions once before the start of training, after each iteration, and once when training is complete.
The functions must have the syntax stopFlag = f(info), where info is a structure containing information about the training progress, and stopFlag is a scalar that indicates whether to stop training early. If stopFlag is 1 (true), then the software stops training. Otherwise, the software continues training.
The trainnet function passes the output function the structure info that contains these fields:
Field | Description
---|---
Iteration | Iteration number
TimeElapsed | Time elapsed in hours, minutes, and seconds
TrainingLoss | Training loss
ValidationLoss | Validation loss. If you do not specify validation data, then the software does not display this information.
GradientNorm | Norm of the gradients
StepNorm | Norm of the steps
State | Iteration training state, specified as "start", "iteration", or "done".
If you specify additional metrics in the training options, then they also appear in the training information. For example, if you set the Metrics training option to "accuracy", then the information includes the TrainingAccuracy and ValidationAccuracy fields.
If a field is not calculated or relevant for a certain call to the output functions, then that field contains an empty array.
For an example showing how to use output functions, see Custom Stopping Criteria for Deep Learning Training.
Data Types: function_handle | cell
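For example, this sketch stops training once the training loss falls below a hypothetical threshold. The empty-array check matters because a field can be empty when it is not calculated for a given call:

options = trainingOptions("lm", OutputFcn=@stopOnLowLoss);

function stopFlag = stopOnLowLoss(info)
    % Illustrative criterion: stop when the training loss drops below 1e-3.
    stopFlag = ~isempty(info.TrainingLoss) && info.TrainingLoss < 1e-3;
end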
Validation
ValidationData — Data to use for validation during training
[] (default) | datastore | table | cell array | minibatchqueue object
Data to use for validation during training, specified as [], a datastore, a table, a cell array, or a minibatchqueue object that contains the validation predictors and targets.
During training, the software uses the validation data to calculate the validation loss and metric values. To specify the validation frequency, use the ValidationFrequency training option. You can also use the validation data to stop training automatically when the validation objective metric stops improving. By default, the objective metric is set to the loss. To turn on automatic validation stopping, use the ValidationPatience training option.
If ValidationData is [], then the software does not validate the neural network during training.
If your neural network has layers that behave differently during prediction than during training (for example, dropout layers), then the validation loss can be lower than the training loss.
Specify the validation data as a datastore, minibatchqueue object, or the cell array {predictors,targets}, where predictors contains the validation predictors and targets contains the validation targets. Specify the validation predictors and targets using any of the formats supported by the trainnet function.
For more information, see the input arguments of the trainnet function.
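For example, this sketch validates every 25 iterations on held-out arrays and returns the best network; XValidation and TValidation are placeholders in any format that trainnet supports:

options = trainingOptions("lm", ...
    ValidationData={XValidation,TValidation}, ...
    ValidationFrequency=25, ...
    ValidationPatience=5, ...
    OutputNetwork="best-validation");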
ValidationFrequency — Frequency of neural network validation
50 (default) | positive integer
Frequency of neural network validation in number of iterations, specified as a positive integer.
The ValidationFrequency value is the number of iterations between evaluations of validation metrics. To specify validation data, use the ValidationData training option.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
ValidationPatience — Patience of validation stopping
Inf (default) | positive integer
Patience of validation stopping of neural network training, specified as a positive integer or Inf.
ValidationPatience specifies the number of times that the objective metric on the validation set can be worse than or equal to the previous best value before neural network training stops. If ValidationPatience is Inf, then the values of the validation metric do not cause training to stop early. The software aims to maximize or minimize the metric, as specified by the Maximize property of the metric. When the objective metric is "loss", the software aims to minimize the loss value.
The returned neural network depends on the OutputNetwork training option. To return the neural network with the best validation metric value, set the OutputNetwork training option to "best-validation".
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
OutputNetwork — Neural network to return when training completes
"auto" (default) | "last-iteration" | "best-validation"
Neural network to return when training completes, specified as one of the following:
"auto"
– Use"best-validation"
ifValidationData
is specified. Otherwise, use"last-iteration"
."best-validation"
– Return the neural network corresponding to the training iteration with the best validation metric value, where the metric to optimize is specified by theObjectiveMetricName
option. To use this option, you must specify theValidationData
training option."last-iteration"
– Return the neural network corresponding to the last training iteration.
Normalization
ResetInputNormalization — Option to reset input layer normalization
1 (true) (default) | 0 (false)
Option to reset input layer normalization, specified as one of the following:
- 1 (true) — Reset the input layer normalization statistics and recalculate them at training time.
- 0 (false) — Calculate normalization statistics at training time when they are empty.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64 | logical
BatchNormalizationStatistics — Mode to evaluate statistics in batch normalization layers
"auto" (default) | "population" | "moving"
Mode to evaluate the statistics in batch normalization layers, specified as one of the following:
"population"
— Use the population statistics. After training, the software finalizes the statistics by passing through the training data once more and uses the resulting mean and variance."moving"
— Approximate the statistics during training using a running estimate given by update stepswhere and denote the updated mean and variance, respectively, and denote the mean and variance decay values, respectively, and denote the mean and variance of the layer input, respectively, and and denote the latest values of the moving mean and variance values, respectively. After training, the software uses the most recent value of the moving mean and variance statistics. This option supports CPU and single GPU training only.
"auto"
— Use the"moving"
option.
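The following sketch applies one "moving" update step to a random layer input; the decay values, shapes, and initial statistics are illustrative, not the values the software uses:

X = randn(8,32);                               % layer input: 8 channels, 32 observations
lambdaMu = 0.9; lambdaVar = 0.9;               % illustrative decay values
muMoving = zeros(8,1); varMoving = ones(8,1);  % latest moving statistics

muHat  = mean(X,2);                            % mean of the layer input
varHat = var(X,0,2);                           % variance of the layer input
muMoving  = lambdaMu*muHat   + (1-lambdaMu)*muMoving;
varMoving = lambdaVar*varHat + (1-lambdaVar)*varMoving;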
Sequence
SequenceLength — Option to pad or truncate input sequences
"longest" (default) | "shortest"
Option to pad or truncate the input sequences, specified as one of these options:
"longest"
— Pad sequences to have the same length as the longest sequence. This option does not discard any data, though padding can introduce noise to the neural network."shortest"
— Truncate sequences to have the same length as the shortest sequence. This option ensures that the function does not add padding, at the cost of discarding data.
To learn more about the effects of padding and truncating the input sequences, see Sequence Padding and Truncation.
SequencePaddingDirection — Direction of padding or truncation
"right" (default) | "left"
Direction of padding or truncation, specified as one of these options:
"right"
— Pad or truncate sequences on the right. The sequences start at the same time step and the software truncates or adds padding to the end of each sequence."left"
— Pad or truncate sequences on the left. The software truncates or adds padding to the start of each sequence so that the sequences end at the same time step.
Because recurrent layers process sequence data one time step at a time, when the recurrent layer OutputMode property is "last", any padding in the final time steps can negatively influence the layer output. To pad or truncate sequence data on the left, set the SequencePaddingDirection argument to "left".
For sequence-to-sequence neural networks (when the OutputMode property is "sequence" for each recurrent layer), any padding in the first time steps can negatively influence the predictions for the earlier time steps. To pad or truncate sequence data on the right, set the SequencePaddingDirection option to "right".
To learn more about the effects of padding and truncating sequences, see Sequence Padding and Truncation.
SequencePaddingValue — Value by which to pad input sequences
0 (default) | scalar
Value by which to pad the input sequences, specified as a scalar.
Do not pad sequences with NaN, because doing so can propagate errors through the neural network.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
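For example, this sketch left-pads sequences with a hypothetical padding value, a setup suited to networks whose final recurrent layer has OutputMode set to "last":

options = trainingOptions("lm", ...
    SequenceLength="longest", ...
    SequencePaddingDirection="left", ...
    SequencePaddingValue=-1);   % illustrative value; avoid NaN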
Hardware and Acceleration
ExecutionEnvironment — Hardware resource
"auto" (default) | "gpu" | "cpu"
Hardware resource, specified as one of these values:
"auto"
— Use a GPU if one is available. Otherwise, use the CPU."gpu"
— Use the GPU. Using a GPU requires a Parallel Computing Toolbox™ license and a supported GPU device. For information about supported devices, see GPU Computing Requirements (Parallel Computing Toolbox). If Parallel Computing Toolbox or a suitable GPU is not available, then the software returns an error."cpu"
— Use the CPU.
Acceleration — Performance optimization
"auto" (default) | "none"
Performance optimization, specified as one of these values:
"auto"
– Automatically apply a number of optimizations suitable for the input network and hardware resources."none"
– Disable all optimizations.
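For example, this sketch forces CPU training and disables the performance optimizations:

options = trainingOptions("lm", ...
    ExecutionEnvironment="cpu", ...
    Acceleration="none");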
Checkpoints
CheckpointPath — Path for saving checkpoint neural networks
"" (default) | string scalar | character vector
Path for saving the checkpoint neural networks, specified as a string scalar or character vector.
- If you do not specify a path (that is, you use the default ""), then the software does not save any checkpoint neural networks.
- If you specify a path, then the software saves checkpoint neural networks to this path and assigns a unique name to each neural network. You can then load any checkpoint neural network and resume training from that neural network.
If the folder does not exist, then you must first create it before specifying the path for saving the checkpoint neural networks. If the path you specify does not exist, then the software throws an error.
Data Types: char | string
CheckpointFrequency — Frequency of saving checkpoint neural networks
30 (default) | positive integer
Frequency of saving checkpoint neural networks in iterations, specified as a positive integer.
This option only has an effect when CheckpointPath is nonempty.
Data Types: single | double | int8 | int16 | int32 | int64 | uint8 | uint16 | uint32 | uint64
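For example, this sketch saves a checkpoint every 10 iterations to a hypothetical folder, creating the folder first because the software errors if the path does not exist:

checkpointDir = "lm-checkpoints";   % hypothetical folder name
if ~isfolder(checkpointDir)
    mkdir(checkpointDir)
end
options = trainingOptions("lm", ...
    CheckpointPath=checkpointDir, ...
    CheckpointFrequency=10);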
Examples
Create Training Options for LM Optimizer
Create a set of options for training a neural network using the LM optimizer.
- Use an initial damping factor of 0.002.
- Use a maximum damping factor of 1e9.
- Increase the damping using a factor of 12.
- Decrease the damping using a factor of 0.2.
options = trainingOptions("lm", ... InitialDampingFactor=0.002, ... MaxDampingFactor=1e9, ... DampingIncreaseFactor=12, ... DampingDecreaseFactor=0.2)
options = 
  TrainingOptionsLM with properties:

                   MaxIterations: 1000
            InitialDampingFactor: 0.0020
                MaxDampingFactor: 1.0000e+09
           DampingDecreaseFactor: 0.2000
           DampingIncreaseFactor: 12
               GradientTolerance: 1.0000e-05
                   StepTolerance: 1.0000e-05
                  SequenceLength: 'longest'
             CheckpointFrequency: 30
                         Verbose: 1
                VerboseFrequency: 50
                  ValidationData: []
             ValidationFrequency: 50
              ValidationPatience: Inf
             ObjectiveMetricName: 'loss'
                  CheckpointPath: ''
            ExecutionEnvironment: 'auto'
                       OutputFcn: []
                         Metrics: []
                           Plots: 'none'
            SequencePaddingValue: 0
        SequencePaddingDirection: 'right'
                InputDataFormats: "auto"
               TargetDataFormats: "auto"
         ResetInputNormalization: 1
    BatchNormalizationStatistics: 'auto'
                   OutputNetwork: 'auto'
                    Acceleration: "auto"
Algorithms
Levenberg–Marquardt
The LM algorithm [1] interpolates between the gradient descent and Gauss–Newton methods and can be more robust for small neural networks. It approximates second-order derivatives using a Jacobian outer product. Use the LM algorithm for regression networks with small numbers of learnable parameters, where you can process the data set in a single batch.
The algorithm updates the learnable parameters $W$ at iteration $k+1$ using the update step

$W_{k+1} = W_k + \Delta W_k,$

where $\Delta W_k$ is the change in the weights at iteration $k$, given by

$\Delta W_k = -H_k^{-1} \nabla \mathcal{L}_k.$

Here, $H_k$ is the approximated Hessian at iteration $k$ and $\nabla \mathcal{L}_k$ is the gradient of the loss with respect to the learnable parameters at iteration $k$. The algorithm approximates the Hessian using

$H_k = J_k^{\top} J_k + \mu_k I,$

where $J_k$ is the Jacobian matrix at iteration $k$, $\mu_k$ is the damping factor at iteration $k$, and $I$ is the identity matrix.
The solver uses the damping factor to adjust the size of the step taken at each iteration and adaptively updates it each iteration. It increases and decreases the damping factor when iterations increase and decrease the loss, respectively. Because a larger damping factor shrinks the step, these adjustments make the optimizer take smaller steps when the loss is increasing and larger steps when the loss is decreasing.
When the loss increases or decreases, the solver adaptively increases or decreases the damping factor by multiplying it by DampingIncreaseFactor and DampingDecreaseFactor, respectively.
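As a numeric sketch of the equations above, this code performs one damped update for a linear least-squares problem. It illustrates the arithmetic only and is not the trainnet implementation:

X = rand(10,2);                    % inputs
t = X*[2; -1] + 0.01*randn(10,1);  % targets
W = zeros(2,1);                    % learnable parameters
mu = 0.001;                        % damping factor (InitialDampingFactor)

r = X*W - t;                       % residuals
J = X;                             % Jacobian of the residuals with respect to W
g = J.'*r;                         % gradient of the loss 0.5*sum(r.^2)
H = J.'*J + mu*eye(2);             % damped Hessian approximation
W = W - H\g;                       % update step W(k+1) = W(k) + dW(k)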
References
[1] Marquardt, Donald W. “An Algorithm for Least-Squares Estimation of Nonlinear Parameters.” Journal of the Society for Industrial and Applied Mathematics 11, no. 2 (June 1963): 431–41. https://doi.org/10.1137/0111030.
Version History
Introduced in R2024b
See Also
trainingOptions | trainnet | dlnetwork | analyzeNetwork | Deep Network Designer