Nearest neighbor search using *K*d-tree

`KDTreeSearcher` model objects store the results of a nearest neighbor search that uses the *K*d-tree algorithm. Results that you can store include the training data, the distance metric and its parameters, and the maximal number of data points in each leaf node (that is, the bucket size). The *K*d-tree algorithm partitions an *n*-by-*K* data set by recursively splitting *n* points in *K*-dimensional space into a binary tree. To find the nearest neighbors of a query observation, `KDTreeSearcher` restricts the search to the training observations in the leaf node that the query observation belongs to.
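
For instance, a searcher can be grown from a numeric training matrix, with the metric and bucket size stored in the object. This is an illustrative sketch; the data and parameter values here are arbitrary:

```matlab
% Grow a Kd-tree from a random n-by-K training set.
rng(1);                        % for reproducibility
X = rand(100,3);               % 100 observations, K = 3 dimensions

% Store the Euclidean metric and a bucket size of 20 in the model object.
Mdl = KDTreeSearcher(X,'Distance','euclidean','BucketSize',20);

Mdl.Distance                   % stored distance metric
Mdl.BucketSize                 % maximal number of points per leaf node
```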

Once you create or train a `KDTreeSearcher` model object, you can search the stored tree to find all neighboring points of the query data by performing a nearest neighbors search using `knnsearch` or a radius search using `rangesearch`. The *K*d-tree algorithm is particularly useful when:

- *K* is relatively small (that is, *K* < 10).
- The training and query sets are not sparse.
- The training and query sets have many observations.

| Function | Description |
| --- | --- |
| `knnsearch` | *k*-nearest neighbors search using *K*d-tree or exhaustive search |
| `rangesearch` | Find all neighbors within specified distance using exhaustive search or *K*d-tree |
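
Both searches take the model object as the first argument. The sketch below assumes a searcher built from a random training set; the radius and number of neighbors are arbitrary illustrative values:

```matlab
% Build a searcher and query it two ways.
rng(1);
X = rand(100,3);               % training set
Mdl = KDTreeSearcher(X);
Y = rand(5,3);                 % 5 query observations

% Indices and distances of the 2 nearest training points per query row.
[idx,d] = knnsearch(Mdl,Y,'K',2);

% All training points within radius 0.3 of each query row.
% rangesearch returns cell arrays, one cell per query observation.
[ridx,rd] = rangesearch(Mdl,Y,0.3);
```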

Train a `KDTreeSearcher` model object using `KDTreeSearcher` or `createns`.
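
As a sketch of the second route, `createns` returns a `KDTreeSearcher` object when you request the *K*d-tree method (the data and bucket size below are illustrative):

```matlab
% createns with 'NSMethod','kdtree' produces a KDTreeSearcher object.
X = rand(50,4);
Mdl = createns(X,'NSMethod','kdtree','BucketSize',10);
class(Mdl)                     % 'KDTreeSearcher'
```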
