RegressionNeuralNetwork
Description
A RegressionNeuralNetwork
object is a trained neural network for
regression, such as a feedforward, fully connected network. In a feedforward, fully connected
network, the first fully connected layer has a connection from the network input (predictor
data X
), and each
subsequent layer has a connection from the previous layer. Each fully connected layer
multiplies the input by a weight matrix (LayerWeights
) and
then adds a bias vector (LayerBiases
). An
activation function follows each fully connected layer, excluding the last (Activations
and
OutputLayerActivation
). The final fully connected layer produces the network's
output, namely predicted response values. For more information, see Neural Network Structure.
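You can reproduce this forward pass from the learned parameters of a trained model. The following is a minimal sketch, not the recommended prediction workflow (use predict instead); it assumes a hypothetical model Mdl trained by fitrnet without a custom dlnetwork, with "relu" activations, without predictor standardization, and a numeric predictor row vector x.
a = x(:); % predictor values as a column vector
numLayers = numel(Mdl.LayerWeights); % includes the final fully connected layer
for k = 1:numLayers
    z = Mdl.LayerWeights{k}*a + Mdl.LayerBiases{k}; % weight matrix times input, plus bias
    if k < numLayers
        a = max(z,0); % ReLU activation for every layer except the last
    else
        a = z; % the final fully connected layer has no activation
    end
end
yPred = a; % predicted response value(s)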
Creation
Create a RegressionNeuralNetwork
object by using fitrnet
.
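For example, a minimal sketch of training calls, assuming a numeric predictor matrix X and a numeric response vector Y in the workspace:
% Train a regression neural network with default settings.
Mdl = fitrnet(X,Y);
% Specify two fully connected layers with 35 and 20 outputs, and standardize
% the predictors before training.
Mdl2 = fitrnet(X,Y,"LayerSizes",[35 20],"Standardize",true);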
Properties
Neural Network Properties
This property is read-only.
Sizes of the fully connected layers in the neural network model.
The property value depends on the method used to fit the model.
For models fit using a
dlnetwork
or layer array that specifies the neural network architecture, this property is empty. In this case, to examine the neural network architecture of the model, convert the model to a dlnetwork
object using the dlnetwork
(Deep Learning Toolbox) function. Otherwise, the property is a positive integer vector, where the ith element of
LayerSizes
is the number of outputs in the ith fully connected layer of the neural network model. In this case, LayerSizes
does not include the size of the final fully connected layer. This layer always has one output for each response variable.
Data Types: single
| double
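For example, for a hypothetical model Mdl trained by fitrnet without a custom network architecture, you can inspect this property directly; the sketch below also shows the conversion for models where the property is empty.
Mdl.LayerSizes % for example, [35 20] for two fully connected hidden layers
% For a model fit using a dlnetwork or layer array, LayerSizes is empty.
% Convert the model to examine the architecture (requires Deep Learning Toolbox).
net = dlnetwork(Mdl);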
This property is read-only.
Learned layer weights for the fully connected layers.
The property value depends on the method used to fit the model.
For models fit using a
dlnetwork
or layer array that specifies the neural network architecture, this property is empty. In this case, to examine the learnable parameters of the model, convert the model to a dlnetwork
object using the dlnetwork
(Deep Learning Toolbox) function. Otherwise, the property is a cell array, where entry i in the cell array corresponds to the layer weights for fully connected layer i. For example,
Mdl.LayerWeights{1}
returns the weights for the first fully connected layer of the model Mdl
. In this case, LayerWeights
includes the weights for the final fully connected layer.
Data Types: cell
This property is read-only.
Learned layer biases for the fully connected layers.
The property value depends on the method used to fit the model.
For models fit using a
dlnetwork
or layer array that specifies the neural network architecture, this property is empty. In this case, to examine the learnable parameters of the model, convert the model to a dlnetwork
object using the dlnetwork
(Deep Learning Toolbox) function. Otherwise, the property is a cell array, where entry i in the cell array corresponds to the layer biases for fully connected layer i. For example,
Mdl.LayerBiases{1}
returns the biases for the first fully connected layer of the model Mdl
. In this case, LayerBiases
includes the biases for the final fully connected layer.
Data Types: cell
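As a sketch covering both LayerWeights and LayerBiases, for a hypothetical model Mdl not fit with a custom dlnetwork, you can check that the parameter dimensions are consistent with LayerSizes:
W1 = Mdl.LayerWeights{1}; % LayerSizes(1)-by-(number of expanded predictors)
b1 = Mdl.LayerBiases{1};  % LayerSizes(1)-by-1
% Both cell arrays also contain the final fully connected layer,
% so they have one more entry than LayerSizes.
numel(Mdl.LayerWeights) % equals numel(Mdl.LayerSizes) + 1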
This property is read-only.
Activation functions for the fully connected layers of the neural network model.
The property value depends on the method used to fit the model.
For models fit using a
dlnetwork
or layer array that specifies the neural network architecture, this property is empty. In this case, to examine the neural network architecture of the model, convert the model to a dlnetwork
object using the dlnetwork
(Deep Learning Toolbox) function. Otherwise, the property is a character vector or cell array of character vectors.
If
Activations
contains only one activation function, then it is the activation function for every fully connected layer of the neural network model, excluding the final fully connected layer, which does not have an activation function (OutputLayerActivation
). If
Activations
is an array of activation functions, then the ith element is the activation function for the ith layer of the neural network model.
If Activations
is a character vector or a cell array of character vectors, then the values are from this table.
Value | Description |
---|---|
"relu" | Rectified linear unit (ReLU) function — Performs a threshold operation on each element of the input, where any value less than zero is set to zero, that is, |
"tanh" | Hyperbolic tangent (tanh) function — Applies the |
"sigmoid" | Sigmoid function — Performs the following operation on each input element: |
"none" | Identity function — Returns each input element without performing any transformation, that is, f(x) = x |
Data Types: char
| cell
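You set these activation functions when you train the model. A sketch, assuming a numeric predictor matrix X and response vector Y in the workspace:
% Use the same activation ("tanh") for every fully connected layer except the last.
Mdl1 = fitrnet(X,Y,"LayerSizes",[10 10],"Activations","tanh");
% Use a different activation for each fully connected layer.
Mdl2 = fitrnet(X,Y,"LayerSizes",[10 10],"Activations",["relu","sigmoid"]);
Mdl2.Activations % activation functions stored in the trained model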
This property is read-only.
Activation function for the final fully connected layer.
The property value depends on the method used to fit the model.
For models fit using a
dlnetwork
or layer array that specifies the neural network architecture, this property is empty. In this case, to examine the neural network architecture of the model, convert the model to a dlnetwork
object using the dlnetwork
(Deep Learning Toolbox) function. Otherwise, the property is
'none'
.
This property is read-only.
Parameter values used to train the RegressionNeuralNetwork
model,
returned as a NeuralNetworkParams
object.
ModelParameters
contains parameter values such as the
name-value arguments used to train the regression neural network model.
Access the properties of ModelParameters
by using dot
notation. For example, access the function used to initialize the fully connected
layer weights of a model Mdl
by using
Mdl.ModelParameters.LayerWeightsInitializer
.
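For example, a brief sketch for a hypothetical trained model Mdl:
Mdl.ModelParameters % display the stored training parameters
% Access an individual parameter, such as the layer weights initializer.
init = Mdl.ModelParameters.LayerWeightsInitializer;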
Convergence Control Properties
This property is read-only.
Convergence information, returned as a structure array.
The structure has these fields:
Field | Description |
---|---|
Iterations | Number of training iterations used to train the neural network model |
TrainingLoss | Training mean squared error (MSE) for the returned model, or
resubLoss(Mdl) for model Mdl |
Gradient | Gradient of the loss function with respect to the weights and biases at the iteration corresponding to the returned model |
Step | Step size at the iteration corresponding to the returned model |
Time | Total time spent across all iterations (in seconds) |
ValidationLoss | Validation MSE for the returned model |
ValidationChecks | Maximum number of times in a row that the validation loss was greater than or equal to the minimum validation loss |
ConvergenceCriterion | Criterion for convergence |
History | See TrainingHistory |
For models fit using a dlnetwork
or layer array that specifies
the neural network architecture, the value of the ValidationChecks
field is always NaN
.
Data Types: struct
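For example, a sketch of inspecting a few fields for a hypothetical trained model Mdl:
info = Mdl.ConvergenceInfo;
info.Iterations           % number of training iterations
info.TrainingLoss         % training MSE of the returned model, equal to resubLoss(Mdl)
info.ConvergenceCriterion % criterion that stopped training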
This property is read-only.
Training history, returned as a table.
Column | Description |
---|---|
Iteration | Training iteration |
TrainingLoss | Training mean squared error (MSE) for the model at this iteration |
Gradient | Gradient of the loss function with respect to the weights and biases at this iteration |
Step | Step size at this iteration |
Time | Time spent during this iteration (in seconds) |
ValidationLoss | Validation MSE for the model at this iteration |
ValidationChecks | Running total of times that the validation loss is greater than or equal to the minimum validation loss |
For models fit using a dlnetwork
or layer array that specifies
the neural network architecture, the table does not include the
Time
and ValidationChecks
columns.
Data Types: table
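For example, a sketch of plotting the training loss at each iteration for a hypothetical trained model Mdl:
history = Mdl.TrainingHistory;
plot(history.Iteration,history.TrainingLoss)
xlabel("Iteration")
ylabel("Training MSE")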
This property is read-only.
Solver used to train the neural network model, returned as
'LBFGS'
. To create a RegressionNeuralNetwork
model, fitrnet
uses a limited-memory
Broyden-Fletcher-Goldfarb-Shanno quasi-Newton algorithm (LBFGS) as its loss function
minimization technique, where the software minimizes the mean squared error
(MSE).
Predictor Properties
This property is read-only.
Predictor variable names, returned as a cell array of character vectors. The order of the elements of PredictorNames
corresponds to the order in which the predictor names appear in the training data.
Data Types: cell
This property is read-only.
Categorical predictor indices, returned as a vector of positive integers. Assuming that the predictor data contains observations in rows, CategoricalPredictors
contains index values corresponding to the columns of the predictor data that contain categorical predictors. If none of the predictors are categorical, then this property is empty ([]
).
Data Types: double
This property is read-only.
Expanded predictor names, returned as a cell array of character vectors. If the model uses encoding for categorical variables, then ExpandedPredictorNames
includes the names that describe the expanded variables. Otherwise, ExpandedPredictorNames
is the same as PredictorNames
.
Data Types: cell
Since R2023b
This property is read-only.
Predictor means, returned as a numeric vector. If you set Standardize
to
1
or true
when
you train the neural network model, then the length of the
Mu
vector is equal to the
number of expanded predictors (see
ExpandedPredictorNames
). The
vector contains 0
values for dummy variables
corresponding to expanded categorical predictors.
If you set Standardize
to 0
or false
when you train the neural network model, then the Mu
value is an empty vector ([]
).
Data Types: double
Since R2023b
This property is read-only.
Predictor standard deviations, returned as a numeric vector. If you set
Standardize
to 1
or true
when you train the neural network model, then the length of the
Sigma
vector is equal to the number of expanded predictors (see
ExpandedPredictorNames
). The vector contains
1
values for dummy variables corresponding to expanded
categorical predictors.
If you set Standardize
to 0
or false
when you train the neural network model, then the Sigma
value is an empty vector ([]
).
Data Types: double
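For example, a sketch of reproducing the standardized predictor values used during training, assuming a hypothetical model Mdl trained with "Standardize",true on a numeric predictor matrix X (observations in rows, no categorical predictors):
XStandardized = (X - Mdl.Mu)./Mdl.Sigma; % center and scale each predictor column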
This property is read-only.
Unstandardized predictors used to train the neural network model, returned as a
numeric matrix or table. X
retains its original orientation, with
observations in rows or columns depending on the value of the
ObservationsIn
name-value argument in the call to
fitrnet
.
Data Types: single
| double
| table
Response Properties
This property is read-only.
Names of the response variables, returned as a character vector or cell array of character vectors.
Data Types: char
| cell
This property is read-only.
Response data used to train the model, returned as a numeric vector, matrix, or
table. Each row of Y
represents the response values of the
corresponding observation in X
.
Data Types: single
| double
Response transformation function, specified as 'none'
or a
function handle. ResponseTransform
describes how the software
transforms raw response values.
For a MATLAB® function or a function that you define, enter its function handle. For
example, you can enter Mdl.ResponseTransform =
@function
, where
function
accepts the original response values
and returns an output of the same size containing the transformed responses.
Data Types: char
| function_handle
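For example, a sketch for a hypothetical model Mdl trained on log-transformed responses, where you want predictions on the original scale:
Mdl.ResponseTransform = @exp; % exponentiate predicted values
% Equivalently, assign a custom anonymous function of the same form.
Mdl.ResponseTransform = @(y) exp(y);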
Other Data Properties
This property is read-only.
Cross-validation optimization of hyperparameters, specified as a BayesianOptimization
object or a table of hyperparameters and associated
values. This property is nonempty if the 'OptimizeHyperparameters'
name-value pair argument is nonempty when you create the model. The value of
HyperparameterOptimizationResults
depends on the setting of the
Optimizer
field in the
HyperparameterOptimizationOptions
structure when you create the
model.
Value of Optimizer Option | Value of HyperparameterOptimizationResults |
---|---|
"bayesopt" (default) | Object of class BayesianOptimization |
"gridsearch" or "randomsearch" | Table of hyperparameters used, observed objective function values (cross-validation loss), and rank of observations from lowest (best) to highest (worst) |
This property is read-only.
Number of observations in the training data stored in X
and
Y
, returned as a positive numeric scalar.
Data Types: double
This property is read-only.
Observations of the original training data stored in the model, returned as a
logical vector. This property is empty if all observations are stored in
X
and Y
.
Data Types: logical
This property is read-only.
Observation weights used to train the model, returned as an
n-by-1 numeric vector. n is the number of
observations (NumObservations
).
The software normalizes the observation weights specified in the
Weights
name-value argument so that the elements of
W
sum up to 1.
Data Types: single
| double
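For example, a sketch of checking the stored weights for a hypothetical trained model Mdl:
sum(Mdl.W)                          % approximately 1
numel(Mdl.W) == Mdl.NumObservations % one weight per observation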
Object Functions
compact | Reduce size of machine learning model |
crossval | Cross-validate machine learning model |
dlnetwork (Deep Learning Toolbox) | Deep learning neural network |
lime | Local interpretable model-agnostic explanations (LIME) |
partialDependence | Compute partial dependence |
plotPartialDependence | Create partial dependence plot (PDP) and individual conditional expectation (ICE) plots |
shapley | Shapley values |
resubLoss | Resubstitution regression loss |
resubPredict | Predict responses for training data using trained regression model |
gather | Gather properties of Statistics and Machine Learning Toolbox object from GPU |
Examples
Train a neural network regression model, and assess the performance of the model on a test set.
Load the carbig
data set, which contains measurements of cars made in the 1970s and early 1980s. Create a table containing the predictor variables Acceleration
, Displacement
, and so on, as well as the response variable MPG
.
load carbig
cars = table(Acceleration,Displacement,Horsepower, ...
    Model_Year,Origin,Weight,MPG);
Remove rows of cars
where the table has missing values.
cars = rmmissing(cars);
Categorize the cars based on whether they were made in the USA.
cars.Origin = categorical(cellstr(cars.Origin));
cars.Origin = mergecats(cars.Origin,["France","Japan", ...
    "Germany","Sweden","Italy","England"],"NotUSA");
Partition the data into training and test sets. Use approximately 80% of the observations to train a neural network model, and 20% of the observations to test the performance of the trained model on new data. Use cvpartition
to partition the data.
rng("default") % For reproducibility of the data partition c = cvpartition(height(cars),"Holdout",0.20); trainingIdx = training(c); % Training set indices carsTrain = cars(trainingIdx,:); testIdx = test(c); % Test set indices carsTest = cars(testIdx,:);
Train a neural network regression model by passing the carsTrain
training data to the fitrnet
function. For better results, specify to standardize the predictor data.
Mdl = fitrnet(carsTrain,"MPG","Standardize",true)
Mdl = 
  RegressionNeuralNetwork
             PredictorNames: {'Acceleration'  'Displacement'  'Horsepower'  'Model_Year'  'Origin'  'Weight'}
               ResponseName: 'MPG'
      CategoricalPredictors: 5
          ResponseTransform: 'none'
            NumObservations: 314
                 LayerSizes: 10
                Activations: 'relu'
      OutputLayerActivation: 'none'
                     Solver: 'LBFGS'
            ConvergenceInfo: [1×1 struct]
            TrainingHistory: [708×7 table]

  Properties, Methods
Mdl
is a trained RegressionNeuralNetwork
model. You can use dot notation to access the properties of Mdl
. For example, you can specify Mdl.TrainingHistory
to get more information about the training history of the neural network model.
Evaluate the performance of the regression model on the test set by computing the test mean squared error (MSE). Smaller MSE values indicate better performance.
testMSE = loss(Mdl,carsTest,"MPG")
testMSE = 7.1092
Specify a custom neural network architecture using Deep Learning Toolbox™.
To specify a neural network of fully connected layers connected in series, use arguments such as the LayerSizes
argument to configure the neural network architecture. For neural networks with a more complex architecture (such as neural networks with skip connections), you can specify the architecture using the Network
name-value argument with a dlnetwork
object.
Load the carbig
data set.
load carbig
X = [Acceleration Cylinders Displacement Weight];
Y = MPG;
Delete rows of the data where either array has missing values.
R = rmmissing([X Y]); X = R(:,1:end-1); Y = R(:,end);
Partition the data into training data (XTrain
and YTrain
) and test data (XTest
and YTest
). Reserve approximately 20% of the observations for testing, and use the rest of the observations for training.
rng("default") % For reproducibility of the partition c = cvpartition(length(Y),Holdout=0.2); trainingIdx = training(c); % Indices for the training set XTrain = X(trainingIdx,:); YTrain = Y(trainingIdx); testIdx = test(c); % Indices for the test set XTest = X(testIdx,:); YTest = Y(testIdx);
Define a neural network architecture with these characteristics:
A feature input layer with an input size that matches the number of predictors.
Three fully connected layers connected in series, each with an output size of 12 and followed by a ReLU layer, and addition layers after the second and third fully connected layers.
Skip connections around the second and third fully connected layers using the addition layers.
A final fully connected layer with an output size that matches the number of responses.
inputSize = size(XTrain,2);
outputSize = size(YTrain,2);

net = dlnetwork;
layers = [
    featureInputLayer(inputSize)
    fullyConnectedLayer(12)
    reluLayer(Name="relu1")
    fullyConnectedLayer(12)
    additionLayer(2,Name="add2")
    reluLayer(Name="relu2")
    fullyConnectedLayer(12)
    additionLayer(2,Name="add3")
    reluLayer
    fullyConnectedLayer(outputSize)];
net = addLayers(net,layers);
net = connectLayers(net,"relu1","add2/in2");
net = connectLayers(net,"relu2","add3/in2");
Visualize the neural network architecture in a plot.
figure plot(net)
Train a neural network regression model.
Mdl = fitrnet(XTrain,YTrain,Network=net,Standardize=true)
Mdl = 
  RegressionNeuralNetwork
             ResponseName: 'Y'
    CategoricalPredictors: []
        ResponseTransform: 'none'
          NumObservations: 319
               LayerSizes: []
              Activations: ''
    OutputLayerActivation: ''
                   Solver: 'LBFGS'
          ConvergenceInfo: [1×1 struct]
          TrainingHistory: [1000×5 table]

  View network information using dlnetwork.

  Properties, Methods
Evaluate the performance of the regression model on the test set by computing the test mean squared error (MSE). Smaller values indicate better predictive accuracy.
testMSE = loss(Mdl,XTest,YTest)
testMSE = 14.3926
Extended Capabilities
Usage notes and limitations:
The
predict
object function supports code generation for models not fit using a dlnetwork
or layer array that specifies the neural network architecture.
For more information, see Introduction to Code Generation.
Usage notes and limitations:
The following object functions fully support GPU arrays:
The object functions execute on a GPU if at least one of the following applies:
The model was fitted with GPU arrays.
The predictor data that you pass to the object function is a GPU array.
The response data that you pass to the object function is a GPU array.
For more information, see Run MATLAB Functions on a GPU (Parallel Computing Toolbox).
Version History
Introduced in R2021a
Starting in R2023b, training observations with missing predictor values are included in the X
, Y
, and W
data properties. The RowsUsed
property indicates the training observations stored in the model, rather than those used for training. Observations with missing predictor values continue to be omitted from the model training process.
In previous releases, the software omitted training observations that contained missing predictor values from the data properties of the model.
Starting in R2023b, neural network models include Mu
and Sigma
properties that contain the means and standard deviations, respectively, used to standardize the predictors before training. The properties are empty when the fitting function does not perform any standardization.
See Also
fitrnet
| predict
| loss
| RegressionPartitionedNeuralNetwork
| CompactRegressionNeuralNetwork
| dlnetwork
(Deep Learning Toolbox)