Feedforwardnet for XOR problem with logsig transfer function

Hi, I want to train a feedforward net for my classification problem, but the network doesn't learn anything useful, so I started checking the network settings.
I chose the XOR problem, and I found that the network doesn't learn XOR if the output transfer function is set to logsig! Sample code:
x = [0 1 0 1 0 1 0 1 0 1 0 1 0 1 0 1; 0 0 1 1 0 0 1 1 0 0 1 1 0 0 1 1];
y =[0 1 1 0 0 1 1 0 0 1 1 0 0 1 1 0];
net = feedforwardnet([2 2]);
net = configure(net,x,y);
net.layers{3}.transferFcn = 'logsig';
[net,tr] = train(net,x,y);
net(x)
The outputs are: 0.5000 0.5000 0.9985 0.5000 0.5000 0.5000 0.9985 0.5000 0.5000 0.5000 0.9985 0.5000 0.5000 0.5000 0.9985 0.5000
I tried a few times and never got any value smaller than 0.5. If I comment out the line that sets the transfer function to logsig, the net learns perfectly.
Does anyone know why the network with logsig doesn't work?
P.S. My MATLAB version is 7.12.0 (R2011a).

 Accepted Answer

Reduce confusion: Never use feedforwardnet.
Use fitnet for regression or curve-fitting.
Use patternnet for classification or pattern-recognition.
Both call feedforwardnet but have more appropriate defaults and plots.
For quick comparisons of defaults, OMIT semicolons:
net = fitnet
net = patternnet
net = feedforwardnet
For detailed comparisons of the source code:
type fitnet
type patternnet
type feedforwardnet
----------------------------------------------
net = feedforwardnet([2 2]);
1. You specified 2 hidden layers instead of 1.
net = configure(net,x,y);
2a. No need to use configure unless you are in a loop. train will automatically initialize if no weights have been assigned.
2b. Stick with the MATLAB notation of t for targets and y for outputs.
net.layers{3}.transferFcn = 'logsig';
3. Inappropriate with the default output processing function mapminmax, which scales targets to [-1,1]. logsig can only output values in (0,1), so the network can never reach the lower half of the mapped target range. That is why you are getting the 0.5.
[net,tr] = train(net,x,y);
net(x)
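For completeness, if you do want to keep a logsig output with feedforwardnet, one workaround (a minimal sketch of my own, not part of the recommendation above) is to remove the default output mapminmax processing so that the targets stay in [0,1], the range logsig can actually reach:

```matlab
x = [0 1 0 1; 0 0 1 1];            % one period of the XOR inputs is enough
t = [0 1 1 0];                     % XOR targets
net = feedforwardnet(2);           % one hidden layer of 2 neurons
net = configure(net,x,t);
net.layers{2}.transferFcn = 'logsig';
net.outputs{2}.processFcns = {};   % drop the default mapminmax to [-1,1]
[net,tr] = train(net,x,t);
y = net(x)                         % outputs can now fall below 0.5
```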
4. Use an extended LHS and the MATLAB t, y notation (semicolons omitted for clarity):
t = y % the target vector from the question; MATLAB convention uses t for targets
N = length(t)
trueclass = 1 + round(t)
net = patternnet(2) % One hidden layer
rng(0) % initialize RNG
[ net tr y ] = train(net,x,t);
assignedclass = 1+round(y)
err = assignedclass~=trueclass
Nerr = sum(err)
PctErr = 100*Nerr/N
To mitigate the possibility that rng(0) may not produce a good set of initial weights, loop over multiple weight initializations after the single rng(0) call, and begin each pass of the loop with configure to reinitialize the weights. Be careful with indexing.
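Such a loop might look like this (a sketch, assuming the net, x, and t from above are in scope; it counts errors per trial and keeps the best-performing net):

```matlab
numtrials = 10;
rng(0)                               % reproducible sequence of initializations
Nerr = zeros(1,numtrials);
bestNerr = Inf;
for i = 1:numtrials
    net = configure(net,x,t);        % assigns fresh random initial weights
    [net,tr,y] = train(net,x,t);
    Nerr(i) = sum(round(y) ~= t);    % misclassifications for this trial
    if Nerr(i) < bestNerr
        bestNerr = Nerr(i);
        bestnet = net;               % keep the best net found so far
    end
end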
Thank you for formally accepting my answer.
Greg

More Answers (0)

Asked by P on 12 Sep 2013
