Normalized perceptron weight and bias learning function
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS)
info = learnpn('code')
learnpn is a weight and bias learning function. It can result in faster learning than learnp when input vectors have widely varying magnitudes.
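The benefit of normalization can be seen by comparing update sizes under the two rules. The sketch below (illustrative NumPy code, not the toolbox functions) applies the plain rule dw = e*p' and the normalized rule dw = e*pn' to a small input and a large one:

```python
import numpy as np

def dw_plain(p, e):
    """Plain perceptron weight change (learnp-style): dw = e * p'."""
    return e @ p.T

def dw_normalized(p, e):
    """Normalized perceptron weight change (learnpn-style): dw = e * pn'."""
    pn = p / np.sqrt(1 + np.sum(p ** 2))
    return e @ pn.T

p_small = np.array([[0.1], [0.2]])
p_big = np.array([[600.0], [800.0]])  # norm 1000: widely varying magnitude
e = np.array([[1.0]])                 # one neuron, error = 1

# Plain rule: the update size tracks the input magnitude (~0.224 vs 1000),
# so one large input can dominate learning.
print(np.linalg.norm(dw_plain(p_small, e)))    # ~0.224
print(np.linalg.norm(dw_plain(p_big, e)))      # 1000.0
# Normalized rule: every update has norm below 1, regardless of the input.
print(np.linalg.norm(dw_normalized(p_big, e)))  # ~0.9999995
```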
[dW,LS] = learnpn(W,P,Z,N,A,T,E,gW,gA,D,LP,LS) takes several inputs,
W - S-by-R weight matrix (or S-by-1 bias vector)
P - R-by-Q input vectors (or ones(1,Q))
Z - S-by-Q weighted input vectors
N - S-by-Q net input vectors
A - S-by-Q output vectors
T - S-by-Q layer target vectors
E - S-by-Q layer error vectors
gW - S-by-R gradient with respect to performance
gA - S-by-Q output gradient with respect to performance
D - S-by-S neuron distances
LP - Learning parameters, none, LP = []
LS - Learning state, initially should be = []
and returns
dW - S-by-R weight (or bias) change matrix
LS - New learning state
info = learnpn('code') returns useful information for each code character vector:
'pnames' - Names of learning parameters
'pdefaults' - Default learning parameters
'needg' - Returns 1 if this function uses gW or gA
Here you define a random input P and error E for a layer with a two-element input and three neurons.
p = rand(2,1); e = rand(3,1);
Because learnpn only needs these values to calculate a weight change (see “Algorithm” below), use them to do so.
dW = learnpn([],p,[],[],[],[],e,[],[],[],[],[])
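Because the empty matrices in the call are unused, only p and e enter the computation. The same weight change can be sketched in NumPy (an illustrative re-implementation of the rule learnpn applies, not the toolbox function):

```python
import numpy as np

rng = np.random.default_rng(0)
p = rng.random((2, 1))  # random two-element input, as in the example
e = rng.random((3, 1))  # random error for three neurons

# Normalize the input; the added 1 keeps the denominator nonzero:
pn = p / np.sqrt(1 + np.sum(p ** 2))

# Weight change dW = e * pn': one row per neuron, one column per input.
dW = e @ pn.T
print(dW.shape)  # (3, 2)
```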
Perceptrons do have one real limitation. The set of input vectors must be linearly separable if a solution is to be found. That is, if the input vectors with targets of 1 cannot be separated by a line or hyperplane from the input vectors associated with values of 0, the perceptron will never be able to classify them correctly.
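As a minimal illustration of this limitation (a NumPy sketch, not toolbox code): XOR targets are not linearly separable, so the perceptron rule never reaches zero error no matter how many passes it makes through the data.

```python
import numpy as np

# XOR: targets of 1 cannot be separated from targets of 0 by a line.
P = np.array([[0.0, 0.0, 1.0, 1.0],
              [0.0, 1.0, 0.0, 1.0]])  # 2-by-4 input vectors
T = np.array([0.0, 1.0, 1.0, 0.0])   # XOR targets

w = np.zeros(2)
b = 0.0
for epoch in range(100):
    errors = 0.0
    for q in range(4):
        a = 1.0 if w @ P[:, q] + b >= 0 else 0.0  # hardlim output
        e = T[q] - a
        w += e * P[:, q]  # perceptron rule: dw = e*p'
        b += e
        errors += abs(e)
    if errors == 0:
        break  # never taken for XOR

print(epoch, errors)  # still misclassifying after 100 passes
```

Weights change only when an error occurs, so a zero-error pass would mean a fixed line classifies all four points correctly, which is impossible for XOR.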
learnpn calculates the weight change dW for a given neuron from the neuron’s input P and error E according to the normalized perceptron learning rule:
pn = p / sqrt(1 + p(1)^2 + p(2)^2 + ... + p(R)^2)
dw = 0,    if e = 0
   = pn',  if e = 1
   = -pn', if e = -1
The expression for dW can be summarized as
dw = e*pn'
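The summary and the piecewise cases agree for e in {-1, 0, 1}, which a quick NumPy check confirms (an illustrative sketch, not the toolbox function):

```python
import numpy as np

def learnpn_dw(p, e):
    """Normalized perceptron weight change, dW = e * pn' (sketch)."""
    pn = p / np.sqrt(1 + np.sum(p ** 2))  # normalize the input vector
    return e @ pn.T                        # S-by-R weight change

p = np.array([[3.0], [4.0]])           # R = 2 input vector
pn = p / np.sqrt(1 + 3.0**2 + 4.0**2)  # = p / sqrt(26)

# e = 0 gives dw = 0, e = 1 gives dw = pn', e = -1 gives dw = -pn'.
for e_val, expected in [(0.0, 0.0 * pn.T), (1.0, pn.T), (-1.0, -pn.T)]:
    dw = learnpn_dw(p, np.array([[e_val]]))
    assert np.allclose(dw, expected)
print("piecewise rule matches dw = e*pn'")
```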
Introduced before R2006a