Layer recurrent neural network
layrecnet(layerDelays,hiddenSizes,trainFcn) takes a row vector of increasing zero or positive delays (layerDelays), a row vector of one or more hidden layer sizes (hiddenSizes), and a backpropagation training function (trainFcn), and returns a layer recurrent neural network.
Layer recurrent neural networks are similar to feedforward networks, except that each layer has a recurrent connection with a tap delay associated with it. This allows the network to have an infinite dynamic response to time series input data. This network is similar to the time delay (timedelaynet) and distributed delay (distdelaynet) neural networks, which have finite input responses.
This example shows how to use a layer recurrent neural network to solve a simple time series problem.
[X,T] = simpleseries_dataset;
net = layrecnet(1:2,10);
[Xs,Xi,Ai,Ts] = preparets(net,X,T);
net = train(net,Xs,Ts,Xi,Ai);
view(net)
Y = net(Xs,Xi,Ai);
perf = perform(net,Y,Ts)

perf = 6.1239e-11
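As a follow-up, the trained network's fit can be inspected per time step. This is a minimal sketch that assumes the workspace from the example above (net, Xs, Xi, Ai, Ts); gsubtract and plotresponse are Deep Learning Toolbox utilities for cell-array time series data.

```matlab
% Recompute the network outputs on the prepared inputs
Y = net(Xs,Xi,Ai);
% Per-timestep errors between targets and outputs (cell-array-aware subtraction)
e = gsubtract(Ts,Y);
% Plot targets versus network outputs over time
plotresponse(Ts,Y)
```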
layerDelays — Layer delays
[1:2] (default) | row vector
Zero or positive layer delays, specified as an increasing row vector.
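For instance, widening the delay vector gives the recurrent connections a longer memory of past layer outputs. A minimal sketch (delay values chosen for illustration):

```matlab
% Recurrent tap delays of 1, 2, and 3 time steps, 10 hidden neurons
net = layrecnet(1:3,10);
```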
hiddenSizes — Hidden sizes
10 (default) | row vector
Sizes of the hidden layers, specified as a row vector of one or more elements.
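Passing a multi-element row vector creates one hidden layer per element. A sketch with sizes chosen for illustration:

```matlab
% Two hidden layers with 20 and 10 neurons, respectively
net = layrecnet(1:2,[20 10]);
```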
trainFcn — Training function name
Training function name, specified as one of the following.

Training Function    Algorithm
'trainscg'           Scaled Conjugate Gradient
'traincgb'           Conjugate Gradient with Powell/Beale Restarts
'traincgf'           Fletcher-Powell Conjugate Gradient
'traincgp'           Polak-Ribiére Conjugate Gradient
'trainoss'           One Step Secant
'traingdx'           Variable Learning Rate Gradient Descent
'traingdm'           Gradient Descent with Momentum
Example: net = layrecnet(1:2,10,'traingdx') specifies the variable learning rate gradient descent algorithm as the training algorithm.
For more information on the training functions, see Train and Apply Multilayer Shallow Neural Networks and Choose a Multilayer Neural Network Training Function.
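Putting the pieces together, the training function can be passed as the third argument and its parameters adjusted through net.trainParam before training. A sketch reusing the simpleseries_dataset workflow from the example above; 'trainscg' is one of the functions listed earlier, and the epoch limit is an illustrative choice:

```matlab
[X,T] = simpleseries_dataset;
net = layrecnet(1:2,10,'trainscg');   % scaled conjugate gradient training
net.trainParam.epochs = 300;          % illustrative cap on training epochs
[Xs,Xi,Ai,Ts] = preparets(net,X,T);
net = train(net,Xs,Ts,Xi,Ai);
```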