# nssTrainingLBFGS

## Description

L-BFGS options set object to train an `idNeuralStateSpace` network using `nlssest`.

## Creation

Create an `nssTrainingLBFGS` object using `nssTrainingOptions` and specifying `"lbfgs"` as the input argument.

## Properties

`UpdateMethod` — Solver used to update network parameters

`"LBFGS"` (default)

Solver used to update network parameters, returned as a string. This property is read-only.

Use `nssTrainingOptions("adam")`, `nssTrainingOptions("sgdm")`, or `nssTrainingOptions("rmsprop")` to return an options set object for the Adam, SGDM, or RMSProp solver, respectively. For more information on these algorithms, see the Algorithms section of `trainingOptions` (Deep Learning Toolbox).
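As a brief sketch of the workflow described above (requires System Identification Toolbox; variable names here are illustrative, not mandated by the toolbox), each solver name returns its own solver-specific options set object:

```matlab
% Each call returns an options set object for the named solver.
% This page documents the object returned for "lbfgs".
lbfgsOpts = nssTrainingOptions("lbfgs");
adamOpts  = nssTrainingOptions("adam");
sgdmOpts  = nssTrainingOptions("sgdm");
```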

`MaxIterations` — Maximum number of iterations

`100` (default) | positive integer

Maximum number of iterations to use for training, specified as a positive integer.

The L-BFGS solver is a full-batch solver, which means that it processes the entire training set in a single iteration.

`LineSearchMethod` — Method to find suitable learning rate

`"weak-wolfe"` (default) | `"strong-wolfe"` | `"backtracking"`

Method to find a suitable learning rate, specified as one of these values:

- `"weak-wolfe"` — Search for a learning rate that satisfies the weak Wolfe conditions. This method maintains a positive definite approximation of the inverse Hessian matrix.
- `"strong-wolfe"` — Search for a learning rate that satisfies the strong Wolfe conditions. This method maintains a positive definite approximation of the inverse Hessian matrix.
- `"backtracking"` — Search for a learning rate that satisfies sufficient decrease conditions. This method does not maintain a positive definite approximation of the inverse Hessian matrix.
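For example, assuming the property name documented on this page, you can change the line search method on an existing options set with dot notation (requires System Identification Toolbox):

```matlab
% Switch from the default "weak-wolfe" line search to "strong-wolfe".
opts = nssTrainingOptions("lbfgs");
opts.LineSearchMethod = "strong-wolfe";
```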

`HistorySize` — Number of state updates to store

`10` (default) | positive integer

Number of state updates to store, specified as a positive integer. Values between 3 and 20 suit most tasks.

The L-BFGS algorithm uses a history of gradient calculations to approximate the Hessian matrix recursively. For more information, see Limited-Memory BFGS (Deep Learning Toolbox).

`InitialInverseHessianFactor` — Initial value that characterizes approximate inverse Hessian matrix

`1` (default) | positive scalar

Initial value that characterizes the approximate inverse Hessian matrix, specified as a positive scalar.

To save memory, the L-BFGS algorithm does not store and invert the dense Hessian
matrix *B*. Instead, the algorithm uses the approximation $${B}_{k-m}^{-1}\approx {\lambda}_{k}I$$, where *m* is the history size, the inverse Hessian
factor $${\lambda}_{k}$$ is a scalar, and *I* is the identity matrix. The
algorithm then stores the scalar inverse Hessian factor only. The algorithm updates
the inverse Hessian factor at each step.

The initial inverse Hessian factor is the value of $${\lambda}_{0}$$.

For more information, see Limited-Memory BFGS (Deep Learning Toolbox).

`MaxNumLineSearchIterations` — Maximum number of line search iterations

`20` (default) | positive integer

Maximum number of line search iterations to determine the learning rate, specified as a positive integer.

`GradientTolerance` — Relative gradient tolerance

`1e-6` (default) | positive scalar

Relative gradient tolerance, specified as a positive scalar.

The software stops training when the relative gradient is less than or equal to `GradientTolerance`.

`StepTolerance` — Step size tolerance

`1e-6` (default) | positive scalar

Step size tolerance, specified as a positive scalar.

The software stops training when the step that the algorithm takes is less than or equal to `StepTolerance`.

`LossFcn` — Type of function used to calculate loss

`"MeanAbsoluteError"` (default) | `"MeanSquaredError"`

Type of function used to calculate loss, specified as one of the following:

- `"MeanAbsoluteError"` — Uses the mean value of the absolute error.
- `"MeanSquaredError"` — Uses the mean value of the squared error.

`PlotLossFcn` — Option to plot the value of the loss function during training

`true` (default) | `false`

Option to plot the value of the loss function during training, specified as one of the following:

- `true` — Plot the value of the loss function during training.
- `false` — Do not plot the value of the loss function during training.

`Lambda` — Loss function regularization constant

`0` (default) | nonnegative scalar

Constant coefficient applied to the regularization term added to the loss function, specified as a nonnegative scalar.

The loss function with the regularization term is given by:

$${\widehat{V}}_{N}\left(\theta\right)=\frac{1}{N}\sum_{t=1}^{N}{\epsilon}^{2}\left(t,\theta\right)+\frac{1}{N}\lambda{\Vert\theta\Vert}^{2}$$

where *t* is the time variable, *N*
is the size of the batch, *ε* is the sum of the reconstruction loss and
autoencoder loss, *θ* is a concatenated vector of weights and biases of
the neural network, and *λ* is the regularization constant that you can
tune.

For more information, see Regularized Estimates of Model Parameters.
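The formula above can be evaluated directly in base MATLAB. This is an illustrative sketch with made-up values, not toolbox code; `err` stands in for the prediction errors *ε*(*t*, *θ*):

```matlab
% Illustrative only: regularized loss for hypothetical errors and parameters.
N      = 100;
err    = randn(N,1);    % hypothetical prediction errors eps(t,theta)
theta  = randn(50,1);   % hypothetical concatenated weights and biases
lambda = 0.01;          % regularization constant (the Lambda property)

V = (1/N)*sum(err.^2) + (1/N)*lambda*norm(theta)^2;
```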

`Beta` — Coefficient applied to tune the reconstruction loss of an autoencoder

`0` (default) | nonnegative scalar

Coefficient applied to tune the reconstruction loss of an autoencoder, specified as a nonnegative scalar.

Reconstruction loss measures the difference between the original input ($x$) and its reconstruction ($x_r$) after encoding and decoding. You calculate this loss as the L2 norm of $(x - x_r)$ divided by the batch size ($N$).
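As an illustrative sketch (not toolbox code; the batch and its reconstruction are made up), the reconstruction loss described above can be computed as:

```matlab
% Illustrative only: x is a hypothetical batch of inputs and xr its
% reconstruction after encoding and decoding.
N  = 32;
x  = randn(N,4);
xr = x + 0.01*randn(N,4);          % hypothetical autoencoder output
reconLoss = norm(x(:) - xr(:))/N;  % L2 norm of (x - xr) divided by N
```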

`WindowSize` — Size of data frames

`intmax` (default) | positive integer

Number of samples in each frame or batch when segmenting data for model training, specified as a positive integer.

`Overlap` — Size of overlap

`0` (default) | integer

Number of samples in the overlap between successive frames when segmenting data for model training, specified as an integer. A negative integer indicates that certain data samples are skipped when creating the data frames.
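To make the segmentation arithmetic concrete, here is an illustrative sketch (not toolbox code) of how a window size and overlap determine frame start indices; with overlap *v*, consecutive frames start `WindowSize - v` samples apart:

```matlab
% Illustrative only: frame start indices for N samples.
N          = 500;
windowSize = 100;
overlap    = 20;                          % a negative value would skip samples
hop        = windowSize - overlap;        % 80 samples between frame starts
starts     = 1:hop:(N - windowSize + 1);  % 1, 81, 161, 241, 321, 401
```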

`ODESolverOptions` — ODE solver options for continuous-time systems

`nssDLODE45` (default)

ODE solver options to integrate continuous-time neural state-space systems, specified as an `nssDLODE45` object.

Use dot notation to access properties such as the following:

- `Solver` — Solver type, set as `"dlode45"`. This is a read-only property.
- `InitialStepSize` — Initial step size, specified as a positive scalar. If you do not specify an initial step size, then the solver bases the initial step size on the slope of the solution at the initial time point.
- `MaxStepSize` — Maximum step size, specified as a positive scalar. It is an upper bound on the size of any step taken by the solver. The default is one tenth of the difference between the final and initial times.
- `AbsoluteTolerance` — Absolute tolerance, specified as a positive scalar. It is the largest allowable absolute error. Intuitively, when the solution approaches 0, `AbsoluteTolerance` is the threshold below which you do not worry about the accuracy of the solution, since it is effectively 0.
- `RelativeTolerance` — Relative tolerance, specified as a positive scalar. This tolerance measures the error relative to the magnitude of each solution component. Intuitively, it controls the number of significant digits in a solution (except when it is smaller than the absolute tolerance).

For more information, see `odeset`.
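For example, assuming the property names listed above, you can tighten the integration tolerances with dot notation (requires System Identification Toolbox):

```matlab
% Tighten the ODE solver tolerances on an L-BFGS options set.
opts = nssTrainingOptions("lbfgs");
opts.ODESolverOptions.AbsoluteTolerance = 1e-8;
opts.ODESolverOptions.RelativeTolerance = 1e-6;
```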

`InputInterSample` — Input interpolation method

`'foh'` (default) | `'zoh'` | `'spline'` | `'cubic'` | `'makima'` | `'pchip'`

Input interpolation method, specified as one of the following:

- `'zoh'` — Uses the zero-order hold interpolation method.
- `'foh'` — Uses the first-order hold interpolation method.
- `'cubic'` — Uses the cubic interpolation method.
- `'makima'` — Uses the modified Akima interpolation method.
- `'pchip'` — Uses the shape-preserving piecewise cubic interpolation method.
- `'spline'` — Uses the spline interpolation method.

This is the interpolation method used to interpolate the input when integrating continuous-time neural state-space systems. For more information, see the interpolation methods in `interp1`.
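As an illustrative base-MATLAB sketch of two of these methods applied with `interp1` (the correspondence of `'foh'` to `'linear'` is an assumption, based on first-order hold being linear interpolation between samples):

```matlab
% Illustrative only: interpolating a coarsely sampled input signal.
t    = 0:1:10;
u    = sin(t);
tq   = 0:0.1:10;
uFoh = interp1(t, u, tq, 'linear');  % first-order hold ~ linear (assumption)
uPch = interp1(t, u, tq, 'pchip');   % shape-preserving piecewise cubic
```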

## Object Functions

## Examples

### Create L-BFGS Option Set to Train a Neural State-Space System

Use `nssTrainingOptions` to return an options set object to train an `idNeuralStateSpace` system.

```matlab
lbfgsOpts = nssTrainingOptions("lbfgs")
```

```
lbfgsOpts =
  nssTrainingLBFGS with properties:

                   UpdateMethod: "LBFGS"
               LineSearchMethod: "weak-wolfe"
                  MaxIterations: 100
     MaxNumLineSearchIterations: 20
                    HistorySize: 10
    InitialInverseHessianFactor: 1
              GradientTolerance: 1.0000e-06
                  StepTolerance: 1.0000e-06
                         Lambda: 0
                           Beta: 0
                        LossFcn: "MeanAbsoluteError"
                    PlotLossFcn: 1
               ODESolverOptions: [1x1 idoptions.nssDLODE45]
               InputInterSample: 'spline'
                     WindowSize: 2.1475e+09
                        Overlap: 0
```

Use dot notation to access the object properties.

```matlab
lbfgsOpts.PlotLossFcn = false;
```

You can use `lbfgsOpts` as an input argument to `nlssest` to specify the training options for the state or the non-trivial output network of an `idNeuralStateSpace` object.

## Version History

**Introduced in R2024b**

## See Also

### Objects

### Functions

`nssTrainingOptions` | `nlssest` | `odeset` | `generateMATLABFunction` | `idNeuralStateSpace/evaluate` | `idNeuralStateSpace/linearize` | `sim` | `createMLPNetwork` | `setNetwork`

### Blocks
