# newrb

Design radial basis network

## Description

`net = newrb(P,T,goal,spread,MN,DF)` takes two of these arguments:

- `P` — `R`-by-`Q` matrix of `Q` input vectors
- `T` — `S`-by-`Q` matrix of `Q` target class vectors
- `goal` — Mean squared error goal
- `spread` — Spread of radial basis functions
- `MN` — Maximum number of neurons
- `DF` — Number of neurons to add between displays

and returns a new radial basis network.

Radial basis networks can be used to approximate functions. `newrb` adds neurons to the hidden layer of a radial basis network until it meets the specified mean squared error goal.

The larger `spread` is, the smoother the function approximation. Too large a spread means a lot of neurons are required to fit a fast-changing function. Too small a spread means many neurons are required to fit a smooth function, and the network might not generalize well. Call `newrb` with different spreads to find the best value for a given problem.
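One way to compare spreads is to design two networks on the same data and plot their responses side by side. The data, error goal, and spread values below are illustrative, not taken from the toolbox documentation:

```matlab
% Design the same fit with a small and a large spread (illustrative values).
X = -1:0.1:1;
T = sin(2*pi*X);

net1 = newrb(X,T,0.01,0.1);   % small spread: more neurons, risk of overfitting
net2 = newrb(X,T,0.01,1.0);   % large spread: smoother approximation

% Compare the two approximations against the targets.
plot(X,T,'o', X,net1(X),'-', X,net2(X),'--');
legend('targets','spread = 0.1','spread = 1.0');
```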

## Examples
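A minimal design-and-simulate sketch, with made-up input and target values:

```matlab
% Design a radial basis network from three input/target pairs.
X = [1 2 3];
T = [2.0 4.1 5.9];
net = newrb(X,T);

% Simulate the network for a new input.
Y = sim(net,1.5)
```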

## Input Arguments

## Output Arguments

## Algorithms

`newrb` creates a two-layer network. The first layer has `radbas` neurons, and calculates its weighted inputs with `dist` and its net input with `netprod`. The second layer has `purelin` neurons, and calculates its weighted input with `dotprod` and its net input with `netsum`. Both layers have biases.

Initially the `radbas` layer has no neurons. The following steps are repeated until the network's mean squared error falls below `goal`.

1. The network is simulated.
2. The input vector with the greatest error is found.
3. A `radbas` neuron is added with weights equal to that vector.
4. The `purelin` layer weights are redesigned to minimize error.
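The steps above can be sketched in plain MATLAB. This is a simplified illustration, not the toolbox implementation: `newrb_sketch` and the struct it returns are hypothetical, and the hidden layer is computed directly as a Gaussian of the input-to-center distance, with bias `0.8326/spread` chosen so a neuron outputs 0.5 when the input is `spread` away from its center:

```matlab
function net = newrb_sketch(P, T, goal, spread, MN)
% Greedy radial-basis design loop (sketch). P: R-by-Q inputs, T: S-by-Q targets.
[~, Q] = size(P);
C = zeros(size(P,1), 0);          % centers of the radbas neurons (none yet)
E = T;                            % current error: with no neurons, output is 0
b = 0.8326 / spread;              % radbas bias: exp(-(b*spread)^2) = 0.5
for k = 1:min(MN, Q)
    % Step 2: find the input vector with the greatest error ...
    [~, i] = max(sum(E.^2, 1));
    % Step 3: ... and add a radbas neuron centered on it.
    C = [C, P(:,i)];
    % Hidden-layer outputs: Gaussian of distance from each input to each center.
    D = zeros(k, Q);
    for j = 1:k
        D(j,:) = exp(-(b * sqrt(sum((P - C(:,j)).^2, 1))).^2);
    end
    % Step 4: redesign the linear layer (weights + bias) by least squares.
    W = T / [D; ones(1, Q)];
    E = T - W * [D; ones(1, Q)];  % Step 1 (next pass): simulate, get new error
    if mean(E(:).^2) < goal, break; end
end
net = struct('centers', C, 'W', W, 'spread', spread);  % placeholder result
end
```

Recomputing the whole hidden-layer matrix each pass keeps the sketch short; an efficient implementation would only append the new neuron's row.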

## Version History

**Introduced before R2006a**