

Function fitting neural network


net = fitnet(hiddenSizes)
net = fitnet(hiddenSizes,trainFcn)



net = fitnet(hiddenSizes) returns a function fitting neural network with hidden layer sizes specified by hiddenSizes.


net = fitnet(hiddenSizes,trainFcn) returns a function fitting neural network with hidden layer sizes specified by hiddenSizes and a training function specified by trainFcn.


Examples

Load the training data.

[x,t] = simplefit_dataset;

The 1-by-94 matrix x contains the input values and the 1-by-94 matrix t contains the associated target output values.

Construct a function fitting neural network with one hidden layer of size 10.

net = fitnet(10);

View the network.

view(net)

The sizes of the input and output are zero. The software adjusts these sizes during training according to the training data.

Train the network net using the training data.

net = train(net,x,t);

View the trained network.

view(net)

You can see that the sizes of the input and output are now 1.

Estimate the targets using the trained network.

y = net(x);

Assess the performance of the trained network. The default performance function is mean squared error.

perf = perform(net,y,t)

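Mean squared error is simply the average of the squared differences between the network outputs and the targets. A minimal check in Python (illustrative only, not MathWorks code; the numbers are made up):

```python
# Mean squared error: the average squared difference between
# network outputs y and targets t. Values here are made up.
import numpy as np

t = np.array([1.0, 2.0, 3.0])   # targets
y = np.array([1.1, 1.9, 3.2])   # network outputs
perf = float(np.mean((y - t) ** 2))
print(perf)                      # 0.02 (up to floating-point rounding)
```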

The default training algorithm for a function fitting network is Levenberg-Marquardt ('trainlm'). Use the Bayesian regularization training algorithm and compare the performance results.

net = fitnet(10,'trainbr');
net = train(net,x,t);
y = net(x);
perf = perform(net,y,t)


The Bayesian regularization training algorithm improves the performance of the network in terms of estimating the target values.

Input Arguments


Size of the hidden layers in the network, specified as a row vector. The length of the vector determines the number of hidden layers in the network.

Example: [10,8,5] specifies a network with three hidden layers, where the first hidden layer has size 10, the second has size 8, and the third has size 5.

The input and output sizes are set to zero. The software adjusts the sizes of these during training according to the training data.

Data Types: single | double

Training function name, specified as one of the following.

Training Function    Algorithm
'trainlm'            Levenberg-Marquardt (default)
'trainbr'            Bayesian Regularization
'trainbfg'           BFGS Quasi-Newton
'trainrp'            Resilient Backpropagation
'trainscg'           Scaled Conjugate Gradient
'traincgb'           Conjugate Gradient with Powell/Beale Restarts
'traincgf'           Fletcher-Powell Conjugate Gradient
'traincgp'           Polak-Ribiére Conjugate Gradient
'trainoss'           One Step Secant
'traingdx'           Variable Learning Rate Gradient Descent
'traingdm'           Gradient Descent with Momentum
'traingd'            Gradient Descent

Example: 'traingdx' specifies the variable learning rate gradient descent algorithm as the training algorithm.

For more information on the training functions, see Train and Apply Multilayer Shallow Neural Networks and Choose a Multilayer Neural Network Training Function.

Data Types: char

Output Arguments


Function fitting network, returned as a network object.


  • Function fitting is the process of training a neural network on a set of inputs in order to produce an associated set of target outputs. After you construct the network with the desired hidden layers and the training algorithm, you must train it using a set of training data. Once the neural network has fit the data, it forms a generalization of the input-output relationship. You can then use the trained network to generate outputs for inputs it was not trained on.
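The fitting workflow above can be sketched outside MATLAB as well. The following Python/NumPy fragment is illustrative only, not MathWorks code: the dataset, learning rate, and iteration count are made up, and it trains a single 10-unit tanh hidden layer with plain gradient descent on mean squared error, whereas fitnet's actual default optimizer is Levenberg-Marquardt.

```python
# Illustrative sketch (not MathWorks code): one hidden layer of 10 tanh
# units with a linear output, trained by plain gradient descent on mean
# squared error. fitnet's default training algorithm is Levenberg-
# Marquardt; the toy data and hyperparameters here are made up.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-by-94 dataset standing in for simplefit_dataset.
x = np.linspace(0, 1, 94).reshape(1, -1)   # inputs
t = np.sin(2 * np.pi * x) + 0.5 * x        # smooth targets

hidden = 10                                # hidden layer size, as in fitnet(10)
W1 = rng.standard_normal((hidden, 1)) * 0.5
b1 = np.zeros((hidden, 1))
W2 = rng.standard_normal((1, hidden)) * 0.5
b2 = np.zeros((1, 1))

lr, n = 0.05, x.shape[1]
for _ in range(20000):
    # Forward pass: tanh hidden layer, linear output layer.
    h = np.tanh(W1 @ x + b1)
    y = W2 @ h + b2
    e = y - t
    # Backward pass: gradients of mean squared error.
    gW2 = 2 * e @ h.T / n
    gb2 = 2 * e.mean(axis=1, keepdims=True)
    gz = (W2.T @ e) * (1 - h ** 2)
    gW1 = 2 * gz @ x.T / n
    gb1 = 2 * gz.mean(axis=1, keepdims=True)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# After fitting, the network generalizes the input-output relationship.
mse = float(((W2 @ np.tanh(W1 @ x + b1) + b2 - t) ** 2).mean())
```

The final mse plays the role of the value returned by perform; querying the trained weights at new inputs corresponds to calling net(x) on data the network was not trained on.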

Introduced in R2010b