srchcha
1-D minimization using Charalambous' method
Syntax
[a,gX,perf,retcode,delta,tol] = srchcha(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
Description
srchcha is a linear search routine. It searches in a given direction to locate the minimum of the performance function in that direction. It uses a technique based on Charalambous' method.
[a,gX,perf,retcode,delta,tol] = srchcha(net,X,Pd,Tl,Ai,Q,TS,dX,gX,perf,dperf,delta,tol,ch_perf)
takes these inputs,
| Input | Description |
| --- | --- |
| net | Neural network |
| X | Vector containing current values of weights and biases |
| Pd | Delayed input vectors |
| Tl | Layer target vectors |
| Ai | Initial input delay conditions |
| Q | Batch size |
| TS | Time steps |
| dX | Search direction vector |
| gX | Gradient vector |
| perf | Performance value at current X |
| dperf | Slope of performance value at current X |
| delta | Initial step size |
| tol | Tolerance on search |
| ch_perf | Change in performance on previous step |
and returns
| Output | Description |
| --- | --- |
| a | Step size that minimizes performance |
| gX | Gradient at new minimum point |
| perf | Performance value at new minimum point |
| retcode | Return code with three elements. The first two elements are the number of function evaluations in the two stages of the search. The third element is a return code: 0 (normal), 1 (minimum step taken), 2 (maximum step taken), 3 (beta condition not met). These codes have different meanings for different search algorithms, and some might not be used in this function. |
| delta | New initial step size, based on the current step size |
| tol | New tolerance on search |
Parameters used for the Charalambous algorithm are

| Parameter | Description |
| --- | --- |
| alpha | Scale factor that determines sufficient reduction in perf |
| beta | Scale factor that determines a sufficiently large step size |
| gama | Parameter to avoid small reductions in performance, usually set to 0.1 |
| scale_tol | Parameter that relates the tolerance tol to the initial step size delta |
The defaults for these parameters are set in the training function that calls them. See traincgf, traincgb, traincgp, trainbfg, and trainoss.
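As a sketch of typical usage: srchcha is normally invoked by a training function rather than called directly, so the natural way to use it is to select it as the search function of a conjugate-gradient trainer. The example below assumes the standard toolbox functions simplefit_dataset, feedforwardnet, and train; the parameter value shown is illustrative, not a recommendation.

```matlab
% Sketch: select Charalambous' line search for conjugate-gradient training.
% simplefit_dataset and feedforwardnet ship with the toolbox; the
% parameter value below is illustrative.
[x,t] = simplefit_dataset;              % sample regression data
net = feedforwardnet(10,'traincgf');    % traincgf calls the line search
net.trainParam.searchFcn = 'srchcha';   % use srchcha for the 1-D search
net.trainParam.gama = 0.1;              % avoid small performance reductions
net = train(net,x,t);                   % srchcha runs inside each epoch
y = net(x);                             % simulate the trained network
```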
Dimensions for these variables are

| Variable | Dimensions |
| --- | --- |
| Pd | Cell array of delayed inputs; each element Pd{i,j,ts} is a Dij x Q matrix |
| Tl | Nl x TS cell array; each element Tl{i,ts} is a Vi x Q matrix |
| Ai | Nl x LD cell array; each element Ai{i,k} is an Si x Q matrix |
where

Ni  = net.numInputs
Nl  = net.numLayers
LD  = net.numLayerDelays
Ri  = net.inputs{i}.size
Si  = net.layers{i}.size
Vi  = net.targets{i}.size
Dij = Ri * length(net.inputWeights{i,j}.delays)
More About
Algorithms
srchcha locates the minimum of the performance function in the search direction dX, using an algorithm based on the method described in Charalambous (see reference below).
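The sufficient-decrease idea at the heart of such a line search can be shown with a small sketch. This is a simplified illustration under assumed values, not the toolbox source: a trial step a along dX is halved until the drop in performance satisfies an alpha-scaled slope condition.

```matlab
% Illustration only, not the toolbox source: accept a step a along dX
% once performance shows sufficient decrease relative to the slope at 0.
perf1d = @(a) (a - 0.7).^2;   % assumed 1-D performance profile along dX
dperf0 = -1.4;                % slope of perf1d at a = 0
alpha  = 0.001;               % sufficient-decrease scale factor
a      = 2;                   % initial trial step (delta)
while perf1d(a) > perf1d(0) + alpha*a*dperf0
    a = a/2;                  % step too long: halve and retest
end
% a now satisfies the sufficient-decrease condition
```

A full Charalambous-style search also checks a slope (beta) condition and adapts the bracketing interval, but the acceptance test above is the core mechanism.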
References
Charalambous, C., "Conjugate gradient algorithm for efficient training of artificial neural networks," IEE Proceedings G (Circuits, Devices and Systems), Vol. 139, No. 3, June 1992, pp. 301–310.
Version History
Introduced before R2006a