Design radial basis network
net = newrb(P,T,goal,spread,MN,DF)
Radial basis networks can be used to approximate functions.
newrb adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.
net = newrb(P,T,goal,spread,MN,DF) takes two of these arguments,

P        R-by-Q matrix of Q input vectors
T        S-by-Q matrix of Q target class vectors
goal     Mean squared error goal (default = 0.0)
spread   Spread of radial basis functions (default = 1.0)
MN       Maximum number of neurons (default is Q)
DF       Number of neurons to add between displays (default = 25)
and returns a new radial basis network.
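For illustration, here is a minimal sketch of a call that sets every optional argument explicitly; the data and settings below are arbitrary, not part of the reference example.

P = [1 2 3; 4 5 6];              % R-by-Q input matrix (R = 2, Q = 3)
T = [1.5 3.0 4.5];               % S-by-Q target matrix (S = 1, Q = 3)
net = newrb(P,T,0.01,1.0,3,1);   % goal = 0.01, spread = 1.0, MN = 3, DF = 1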
The larger spread is, the smoother the function approximation. Too large a spread means a lot of neurons are required to fit a fast-changing function. Too small a spread means many neurons are required to fit a smooth function, and the network might not generalize well. Call newrb with different spreads to find the best value for a given problem.
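One way to compare spreads is a simple sweep. The sketch below uses an arbitrary sine-wave data set and judges each fit on a finer test grid; the values are only illustrative.

P = -1:0.2:1;  T = sin(pi*P);                      % training data (arbitrary)
Ptest = -1:0.05:1;  Ttest = sin(pi*Ptest);         % finer grid for checking generalization
for spread = [0.1 0.5 1 2]
    net = newrb(P,T,0.001,spread);
    testErr = mean((sim(net,Ptest) - Ttest).^2);   % test-set mean squared error
    fprintf('spread = %.2f, test MSE = %.4g\n', spread, testErr)
end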
Here you design a radial basis network, given inputs P and targets T.
P = [1 2 3];
T = [2.0 4.1 5.9];
net = newrb(P,T);
The network is simulated for a new input.
P = 1.5;
Y = sim(net,P)
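To see how the network interpolates between and beyond the training points, a sketch like the following can be used; the plotting range is arbitrary, and Ptrain/Ttrain simply repeat the training data, since P was reassigned above.

Ptrain = [1 2 3];  Ttrain = [2.0 4.1 5.9];     % training data from the example above
X = 0:0.1:4;                                   % arbitrary plotting range
plot(Ptrain,Ttrain,'o', X,sim(net,X),'-')      % training points vs. network response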
newrb creates a two-layer network. The first layer has radbas neurons, and calculates its weighted inputs with dist and its net input with netprod. The second layer has purelin neurons, and calculates its weighted input with dotprod and its net inputs with netsum. Both layers have biases.
Initially the radbas layer has no neurons. The following steps are repeated until the network's mean squared error falls below goal (a rough MATLAB sketch of this loop follows the list).

1. The network is simulated.
2. The input vector with the greatest error is found.
3. A radbas neuron is added with weights equal to that vector.
4. The purelin layer weights are redesigned to minimize error.
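The sketch below is only a rough illustration of that loop, not the toolbox implementation; it uses an arbitrary data set and redesigns the second (purelin) layer with a plain least-squares fit on each pass.

P = -1:0.2:1;  T = sin(pi*P);              % example data (arbitrary)
goal = 1e-3;  spread = 1;  MN = numel(P);
b1 = 0.8326/spread;                        % first-layer bias implied by the spread
C = [];                                    % centers, i.e. first-layer weight rows
while true
    if isempty(C)
        Y = zeros(size(T));                % no neurons yet, so the output is zero
    else
        A1 = radbas(dist(C',P) * b1);      % hidden-layer outputs for all inputs
        X  = [A1; ones(1,numel(P))];       % append a row of ones for the output bias
        w  = X' \ T';                      % least-squares purelin weights and bias
        Y  = w' * X;                       % 1. the network is simulated
    end
    E = T - Y;
    if mean(E.^2) <= goal || size(C,2) >= MN, break, end
    [~,k] = max(abs(E));                   % 2. input vector with the greatest error
    C = [C, P(:,k)];                       % 3. add a radbas neuron centered on it
end                                        % 4. the purelin fit is redone each pass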