
nlarxOptions

Option set for nlarx

Description

opt = nlarxOptions creates the default option set for nlarx. Use dot notation to modify this option set for your specific application. Any options that you do not modify retain their default values.


opt = nlarxOptions(Name,Value) creates an option set with options specified by one or more Name,Value pair arguments.


Examples


Create the default option set for nlarx.

opt = nlarxOptions;

Create a default option set for nlarx, and use dot notation to modify specific options.

opt = nlarxOptions;

Turn on the estimation progress display.

opt.Display = 'on';

Minimize the norm of the simulation error.

opt.Focus = 'simulation';

Use a subspace Gauss-Newton least squares search with a maximum of 25 iterations.

opt.SearchMethod = 'gn';
opt.SearchOptions.MaxIterations = 25;

Create an option set for nlarx specifying the following options:

  • Turn off iterative estimation for the default wavelet network estimation.

  • Turn on the estimation progress-viewer display.

opt = nlarxOptions('IterativeWavenet','off','Display','on');

Input Arguments


Name-Value Arguments

Specify optional pairs of arguments as Name1=Value1,...,NameN=ValueN, where Name is the argument name and Value is the corresponding value. Name-value arguments must appear after other arguments, but the order of the pairs does not matter.

Before R2021a, use commas to separate each name and value, and enclose Name in quotes.

Example: 'Focus','simulation','SearchMethod','grad' specifies that the norm of the simulation error is minimized using a steepest descent least squares search.

Minimization objective, specified as the comma-separated pair consisting of 'Focus' and one of the following:

  • 'prediction' — Minimize the norm of the prediction error, which is defined as the difference between the measured output and the one-step ahead predicted response of the model.

  • 'simulation' — Minimize the norm of the simulation error, which is defined as the difference between the measured output and simulated response of the model.

Estimation progress display setting, specified as the comma-separated pair consisting of 'Display' and one of the following:

  • 'off' — No progress or results information is displayed.

  • 'on' — Information on the model structure and estimation results is displayed in a progress-viewer window.

Weighting of prediction error in multi-output model estimations, specified as the comma-separated pair consisting of 'OutputWeight' and one of the following:

  • 'noise' — Optimal weighting is automatically computed as the inverse of the estimated noise variance. This weighting minimizes det(E'*E), where E is the matrix of prediction errors. This option is not available when using 'lsqnonlin' as a 'SearchMethod'.

  • A positive semidefinite matrix, W, of size equal to the number of outputs. This weighting minimizes trace(E'*E*W/N), where E is the matrix of prediction errors and N is the number of data samples.
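
For example, a sketch of weighting the outputs in a hypothetical two-output estimation, where errors in the first output are weighted ten times more heavily than errors in the second (the weight values are illustrative only):

W = diag([10 1]);                      % 2-by-2 positive semidefinite weighting matrix
opt = nlarxOptions('OutputWeight',W);  % minimizes trace(E'*E*W/N) during estimation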

Option to normalize estimation data, specified as true or false. If Normalize is true, then the algorithm uses the method specified in NormalizationOptions to normalize the data.

Option set for configuring normalization, specified as the options described below. The first option, NormalizationMethod, determines which method the algorithm uses. The default option is 'auto'. For idnlarx models, a setting of 'auto' is equivalent to a setting of 'center'. Except for 'medianiqr', each specific method in NormalizationMethod has an associated configuration option, such as CenterMethodType when you specify the 'center' method. For more information about these methods, see the MATLAB® function normalize.

  • NormalizationMethod — Normalization method, specified as one of the following values. Default: 'auto' (equivalent to 'center')

    • 'auto' — Set the method automatically.

    • 'center' — Center data to have mean 0.

    • 'zscore' — z-score with mean 0 and standard deviation 1.

    • 'norm' — 2-norm.

    • 'scale' — Scale by standard deviation.

    • 'range' — Rescale range of data to [min,max].

    • 'medianiqr' — Center and scale data to have median 0 and interquartile scale of 1.

  • CenterMethodType (applies to 'center') — Specified as one of the following values. Default: 'mean'

    • 'mean' — Center to have mean 0.

    • 'median' — Center to have median 0.

  • ZScoreType (applies to 'zscore') — Specified as one of the following values. Default: 'std'

    • 'std' — Center and scale to have mean 0 and standard deviation 1.

    • 'robust' — Center and scale to have median 0 and median absolute deviation 1.

  • ScaleMethodType (applies to 'scale') — Specified as one of the following values. Default: 'std'

    • 'std' — Scale by standard deviation.

    • 'mad' — Scale by median absolute deviation.

    • 'iqr' — Scale by interquartile range.

    • 'first' — Scale by first element of data.

  • NormValue (applies to 'norm') — Positive real value p that specifies the p-norm. Default: 2

  • Range (applies to 'range') — 2-element row vector that specifies the interval [a b], with a < b, to which the range of the data is rescaled. Default: [0 1]
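
For example, a sketch that enables normalization and selects robust z-score scaling through dot notation (the method choice is illustrative):

opt = nlarxOptions;
opt.Normalize = true;                                     % normalize the estimation data
opt.NormalizationOptions.NormalizationMethod = 'zscore';  % z-score normalization
opt.NormalizationOptions.ZScoreType = 'robust';           % center on median, scale by median absolute deviation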

Numerical search method used for iterative parameter estimation, specified as one of the following values.

'auto'

Automatic method selection

A combination of the line search algorithms, 'gn', 'lm', 'gna', and 'grad', is tried in sequence at each iteration. The first descent direction leading to a reduction in estimation cost is used.

'gn'

Subspace Gauss-Newton least-squares search

Singular values of the Jacobian matrix less than GnPinvConstant*eps*max(size(J))*norm(J) are discarded when computing the search direction. J is the Jacobian matrix. The Hessian matrix is approximated as J'*J. If this direction shows no improvement, the function tries the gradient direction.

'gna'

Adaptive subspace Gauss-Newton search

Eigenvalues less than gamma*max(sv) of the Hessian are ignored, where sv contains the singular values of the Hessian. The Gauss-Newton direction is computed in the remaining subspace. gamma has the initial value InitialGnaTolerance (see Advanced in 'SearchOptions' for more information). This value is increased by the factor LMStep each time the search fails to find a lower value of the criterion in fewer than five bisections. This value is decreased by the factor 2*LMStep each time a search is successful without any bisections.

'lm'

Levenberg-Marquardt least squares search

Each parameter value is -pinv(H+d*I)*grad from the previous value. H is the Hessian, I is the identity matrix, and grad is the gradient. d is a number that is increased until a lower value of the criterion is found.

'grad'

Steepest descent least-squares search

'lsqnonlin'

Trust-region-reflective algorithm of lsqnonlin (Optimization Toolbox)

This algorithm requires Optimization Toolbox™ software.

'fmincon'

Constrained nonlinear solvers

You can use the sequential quadratic programming (SQP) and trust-region-reflective algorithms of the fmincon (Optimization Toolbox) solver. If you have Optimization Toolbox software, you can also use the interior-point and active-set algorithms of the fmincon solver. Specify the algorithm in the SearchOptions.Algorithm option. The fmincon algorithms might result in improved estimation results in the following scenarios:

  • Constrained minimization problems when bounds are imposed on the model parameters.

  • Model structures where the loss function is a nonlinear or nonsmooth function of the parameters.

  • Multiple-output model estimation. A determinant loss function is minimized by default for multiple-output model estimation. fmincon algorithms are able to minimize such loss functions directly. The other search methods such as 'lm' and 'gn' minimize the determinant loss function by alternately estimating the noise variance and reducing the loss value for a given noise variance value. Hence, the fmincon algorithms can offer better efficiency and accuracy for multiple-output model estimations.
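
For example, a minimal sketch of selecting the 'fmincon' search method and choosing its algorithm through the SearchOptions.Algorithm option (the interior-point choice requires Optimization Toolbox software):

opt = nlarxOptions('SearchMethod','fmincon');
opt.SearchOptions.Algorithm = 'interior-point';  % default algorithm is 'sqp'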

Option set for the search algorithm, specified as the comma-separated pair consisting of 'SearchOptions' and a search option set with fields that depend on the value of SearchMethod:

SearchOptions Structure When SearchMethod Is Specified as 'gn', 'gna', 'lm', 'grad', or 'auto'

  • Tolerance — Minimum percentage difference between the current value of the loss function and its expected improvement after the next iteration, specified as a positive scalar. When the percentage of expected improvement is less than Tolerance, the iterations stop. The estimate of the expected loss-function improvement at the next iteration is based on the Gauss-Newton vector computed for the current parameter value. Default: 1e-5

  • MaxIterations — Maximum number of iterations during loss-function minimization, specified as a positive integer. The iterations stop when MaxIterations is reached or another stopping criterion is satisfied, such as Tolerance. Setting MaxIterations = 0 returns the result of the start-up procedure. Use sys.Report.Termination.Iterations to get the actual number of iterations during an estimation, where sys is an idnlarx model. Default: 20

  • Advanced — Advanced search settings, specified as a structure with the following fields:

    • GnPinvConstant — Jacobian matrix singular value threshold, specified as a positive scalar. Singular values of the Jacobian matrix that are smaller than GnPinvConstant*eps*max(size(J))*norm(J) are discarded when computing the search direction. Applicable when SearchMethod is 'gn'. Default: 10000

    • InitialGnaTolerance — Initial value of gamma, specified as a positive scalar. Applicable when SearchMethod is 'gna'. Default: 0.0001

    • LMStartValue — Starting value of search-direction length d in the Levenberg-Marquardt method, specified as a positive scalar. Applicable when SearchMethod is 'lm'. Default: 0.001

    • LMStep — Size of the Levenberg-Marquardt step, specified as a positive integer. The next value of the search-direction length d in the Levenberg-Marquardt method is LMStep times the previous one. Applicable when SearchMethod is 'lm'. Default: 2

    • MaxBisections — Maximum number of bisections used for line search along the search direction, specified as a positive integer. Default: 25

    • MaxFunctionEvaluations — Maximum number of calls to the model file, specified as a positive integer. Iterations stop if the number of calls to the model file exceeds this value. Default: Inf

    • MinParameterChange — Smallest parameter update allowed per iteration, specified as a nonnegative scalar. Default: 0

    • RelativeImprovement — Relative improvement threshold, specified as a nonnegative scalar. Iterations stop if the relative improvement of the criterion function is less than this value. Default: 0

    • StepReduction — Step reduction factor, specified as a positive scalar that is greater than 1. The suggested parameter update is reduced by the factor StepReduction after each try. This reduction continues until MaxBisections tries are completed or a lower value of the criterion function is obtained. StepReduction is not applicable for SearchMethod 'lm' (Levenberg-Marquardt method). Default: 2

SearchOptions Structure When SearchMethod Is Specified as 'lsqnonlin'

  • FunctionTolerance — Termination tolerance on the loss function that the software minimizes to determine the estimated parameter values, specified as a positive scalar. The value of FunctionTolerance is the same as that of opt.SearchOptions.Advanced.TolFun. Default: 1e-5

  • StepTolerance — Termination tolerance on the estimated parameter values, specified as a positive scalar. The value of StepTolerance is the same as that of opt.SearchOptions.Advanced.TolX. Default: 1e-6

  • MaxIterations — Maximum number of iterations during loss-function minimization, specified as a positive integer. The iterations stop when MaxIterations is reached or another stopping criterion is satisfied, such as FunctionTolerance. The value of MaxIterations is the same as that of opt.SearchOptions.Advanced.MaxIter. Default: 20
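
For example, a sketch of tightening the stopping tolerances for the 'lsqnonlin' search method (the tolerance values are illustrative, and this method requires Optimization Toolbox software):

opt = nlarxOptions('SearchMethod','lsqnonlin');
opt.SearchOptions.FunctionTolerance = 1e-7;  % stop when the loss improvement falls below this value
opt.SearchOptions.StepTolerance = 1e-8;      % stop when the parameter update falls below this value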

SearchOptions Structure When SearchMethod Is Specified as 'fmincon'

  • Algorithm — fmincon optimization algorithm, specified as one of the following. Default: 'sqp'

    • 'sqp' — Sequential quadratic programming algorithm. The algorithm satisfies bounds at all iterations, and it can recover from NaN or Inf results. It is not a large-scale algorithm. For more information, see Large-Scale vs. Medium-Scale Algorithms (Optimization Toolbox).

    • 'trust-region-reflective' — Subspace trust-region method based on the interior-reflective Newton method. It is a large-scale algorithm.

    • 'interior-point' — Large-scale algorithm that requires Optimization Toolbox software. The algorithm satisfies bounds at all iterations, and it can recover from NaN or Inf results.

    • 'active-set' — Requires Optimization Toolbox software. The algorithm can take large steps, which adds speed. It is not a large-scale algorithm.

    For more information about the algorithms, see Constrained Nonlinear Optimization Algorithms (Optimization Toolbox) and Choosing the Algorithm (Optimization Toolbox).

  • FunctionTolerance — Termination tolerance on the loss function that the software minimizes to determine the estimated parameter values, specified as a positive scalar. Default: 1e-6

  • StepTolerance — Termination tolerance on the estimated parameter values, specified as a positive scalar. Default: 1e-6

  • MaxIterations — Maximum number of iterations during loss-function minimization, specified as a positive integer. The iterations stop when MaxIterations is reached or another stopping criterion is satisfied, such as FunctionTolerance. Default: 100

To specify field values in SearchOptions, create a default nlarxOptions set and modify the fields using dot notation. Any fields that you do not modify retain their default values.

opt = nlarxOptions;
opt.SearchOptions.MaxIterations = 15;
opt.SearchOptions.Advanced.RelativeImprovement = 0.5;

Option to remove sparse regressors from the nonlinear ARX model, specified as logical 0 (false) or logical 1 (true).

Set this option to true to use a sparsification algorithm in the nlarx function to identify sparse regressors. The function removes these regressors from the nonlinear mapping function of the output idnlarx model, leaving only the optimal subset of regressors. To see which regressors are removed, enter sys.RegressorUsage at the command line, where sys is the idnlarx model.

To configure the sparsification algorithm, use the option SparsificationOptions.

The sparsification algorithm, also known as structured pruning, is similar to the lasso (least absolute shrinkage and selection operator) technique.
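
For example, a minimal sketch of enabling sparsification during estimation; the data set z, the model orders, and the idWaveletNetwork mapping are placeholders for your own estimation setup:

opt = nlarxOptions;
opt.SparsifyRegressors = true;                % prune regressors with negligible contribution
sys = nlarx(z,[4 4 1],idWaveletNetwork,opt);  % z and the orders [4 4 1] are illustrative
sys.RegressorUsage                            % shows which regressors were kept or removed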

Option set for configuring sparsification, specified as one or more of the following options.

  • SparsityMeasure — Form of the sparsification penalty added to the prediction error minimization objective, specified as one of these values. Default: "log-sum"

    • "l1" — L1 norm of the parameters that multiply the regressors

    • "l0" — L0 pseudonorm of the parameters that multiply the regressors

    • "log-sum" — Sum of the log of the absolute values of the parameters that multiply the regressors

    If several parameters multiply a certain regressor in different parts of the nonlinear mapping function, interpret the sparsification penalty in the group sense.

  • GroupNorm — Norm used to measure the contribution of each parameter group to the sparsification penalty, specified as 2 or Inf. The default of 2 implies that the 2-norm of each parameter group is used in the sparsity measure. Default: 2

  • Lambda — Sparsification penalty, λ, specified as a positive scalar. The sparsification penalty is the cost, c, of a regressor being identified as sparse in the minimization objective, defined as

    c = f + λ × p,

    where:

      • f is the fitting objective (prediction error norm).

      • p is the parameter property (l1 or l0 norm of the parameter groups multiplying the regressors, normalized by the minimum parameter group norm).

    Default: 1

  • MaxIterations — Maximum number of Alternating Direction Method of Multipliers (ADMM) iterations to run to sparsify the regressor selection, specified as a positive integer. Default: 20

When configuring the sparsification algorithm, keep these points in mind:

  • Getting the best results can require trying various solver and sparsification options. The options with the most significant effect on the results are the:

    • Choice of sparsification measure (SparsificationOptions.SparsityMeasure = 'log-sum', 'l1', or 'l0')

    • Search method (SearchMethod = 'lm', 'lsqnonlin', and so on)

    • Value of the SparsificationOptions.Lambda option, which describes the strength of the sparsification penalty in the minimization objective

  • Sparsification with the Focus property set to 'prediction' can be significantly faster than sparsification with Focus = 'simulation', but the results might not lead to good simulation models. To balance these approaches, consider performing sparsification with Focus = 'prediction' first. Then, perform a follow-up estimation using the selected regressors and Focus = 'simulation'.

Dependencies

To enable these options, set SparsifyRegressors to true.
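
For example, a sketch that configures the sparsification penalty; the Lambda value shown is only illustrative and typically requires some experimentation:

opt = nlarxOptions('Focus','prediction');          % prediction focus is typically faster for sparsification
opt.SparsifyRegressors = true;
opt.SparsificationOptions.SparsityMeasure = "l1";  % use the L1-norm penalty instead of the default "log-sum"
opt.SparsificationOptions.Lambda = 5;              % strength of the sparsification penalty (illustrative)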

Iterative idWaveletNetwork estimation setting, specified as the comma-separated pair consisting of 'IterativeWavenet' and one of the following:

  • 'auto' — First estimation is noniterative and subsequent estimations are iterative.

  • 'on' — Perform iterative estimation only.

  • 'off' — Perform noniterative estimation only.

This option applies only when using an idWaveletNetwork nonlinearity estimator.

Input-channel intersample behavior for transformations between discrete time and continuous time, specified as 'auto', 'zoh', 'foh', or 'bl'.

The definitions of the three behavior values are as follows:

  • 'zoh' — Zero-order hold maintains a piecewise-constant input signal between samples.

  • 'foh' — First-order hold maintains a piecewise-linear input signal between samples.

  • 'bl' — Band-limited behavior specifies that the continuous-time input signal has zero power above the Nyquist frequency.

iddata objects have a similar property, data.InterSample, that contains the same behavior value options. When the InputInterSample value is 'auto' and the estimation data is in an iddata object data, the software uses the data.InterSample value. When the estimation data is instead contained in a timetable or a matrix pair, with the 'auto' option, the software uses 'zoh'.

The software applies the same option value to all channels and all experiments.

Options for regularized estimation of model parameters, specified as the comma-separated pair consisting of 'Regularization' and a structure with fields:

  • Lambda — Bias versus variance trade-off constant, specified as a nonnegative scalar. Default: 0, which indicates no regularization.

  • R — Weighting matrix, specified as a vector of nonnegative scalars or a square positive semidefinite matrix. The length must be equal to the number of free parameters in the model, np. Use the nparams command to determine the number of model parameters. Default: 1, which indicates a value of eye(np).

  • Nominal — The nominal value towards which the free parameters are pulled during estimation, specified as one of the following. Default: 'zero'

    • 'zero' — Pull parameters towards zero.

    • 'model' — Pull parameters towards preexisting values in the initial model. Use this option only when you have a well-initialized idnlarx model with finite parameter values.

To specify field values in Regularization, create a default nlarxOptions set and modify the fields using dot notation. Any fields that you do not modify retain their default values.

opt = nlarxOptions;
opt.Regularization.Lambda = 1.2;
opt.Regularization.R = 0.5*eye(np);  % np is the number of free model parameters (see nparams)

Regularization is a technique for specifying model flexibility constraints, which reduce uncertainty in the estimated parameter values. For more information, see Regularized Estimates of Model Parameters.

Additional advanced options, specified as the comma-separated pair consisting of 'Advanced' and a structure with fields:

  • ErrorThreshold — Threshold for when to adjust the weight of large errors from quadratic to linear, specified as a nonnegative scalar. Errors larger than ErrorThreshold times the estimated standard deviation have a linear weight in the loss function. The standard deviation is estimated robustly as the median of the absolute deviations from the median of the prediction errors, divided by 0.7. If your estimation data contains outliers, try setting ErrorThreshold to 1.6. Default: 0, which leads to a purely quadratic loss function.

  • MaxSize — Maximum number of elements in a segment when input-output data is split into segments, specified as a positive integer. Default: 250000

To specify field values in Advanced, create a default nlarxOptions set and modify the fields using dot notation. Any fields that you do not modify retain their default values.

opt = nlarxOptions;
opt.Advanced.ErrorThreshold = 1.2;

Output Arguments


Option set for the nlarx command, returned as an nlarxOptions option set.

Version History

Introduced in R2015a


See Also