Radial basis function

A radial basis function (RBF) is a real-valued function whose value depends only on the distance from the origin, so that \phi(\mathbf{x}) = \phi(\|\mathbf{x}\|); or alternatively on the distance from some other point \mathbf{c}, called a center, so that \phi(\mathbf{x}, \mathbf{c}) = \phi(\|\mathbf{x}-\mathbf{c}\|). Any function φ that satisfies the property \phi(\mathbf{x}) = \phi(\|\mathbf{x}\|) is a radial function. The norm is usually the Euclidean distance.

Radial basis functions are typically used to build up function approximations of the form

y(\mathbf{x}) = \sum_{i=1}^N w_i \, \phi(\|\mathbf{x} - \mathbf{c}_i\|),

where the approximating function y(\mathbf{x}) is represented as a sum of N radial basis functions, each associated with a different center \mathbf{c}_i and weighted by an appropriate coefficient w_i. Approximation schemes of this kind have been used particularly in time series prediction, in the control of nonlinear systems exhibiting sufficiently simple chaotic behaviour, and in 3D reconstruction in computer graphics (for example, hierarchical RBFs).
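As a concrete illustration, the following is a minimal Python sketch of a sum of this form, assuming NumPy and Gaussian basis functions; the width parameter beta and the example values are illustrative choices, not prescribed by the text above:

    import numpy as np

    def rbf_sum(x, centers, weights, beta=1.0):
        # Evaluate y(x) = sum_i w_i * phi(||x - c_i||), here with a Gaussian
        # phi (an assumed choice of basis function; beta is illustrative).
        r = np.linalg.norm(centers - x, axis=1)   # distances ||x - c_i||
        return weights @ np.exp(-beta * r**2)

    # Example: the two centers from the figure below, with unit weights.
    centers = np.array([[0.75], [3.25]])          # shape (N, d) with d = 1
    weights = np.array([1.0, 1.0])
    y = rbf_sum(np.array([2.0]), centers, weights)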

The sum can also be interpreted as a rather simple single-layer type of artificial neural network called a radial basis function network, with the radial basis functions taking on the role of the activation functions of the network. It can be shown that any continuous function on a compact interval can in principle be interpolated with arbitrary accuracy by a sum of this form, provided a sufficiently large number N of radial basis functions is used.

[Figure: two unnormalized Gaussian radial basis functions in one input dimension, with centers located at c_1 = 0.75 and c_2 = 3.25.]

RBF types

Commonly used types of radial basis functions include (writing r = \|\mathbf{x} - \mathbf{c}_i\|; a short code sketch of these functions follows the list):

Gaussian: \varphi(r) = \exp(-\beta r^2)\, for some β > 0
Multiquadric: \varphi(r) = \sqrt{r^2 + \beta^2}\, for some β > 0
Polyharmonic spline: \varphi(r) = r^k,\; k=1,3,5,\dots
Polyharmonic spline: \varphi(r) = r^k \ln(r),\; k=2,4,6,\dots
Thin plate spline (a special case of the polyharmonic spline): \varphi(r) = r^2 \ln(r)\;
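As a reference, these basis functions could be written in Python as follows (assuming NumPy; beta and k follow the definitions above, and the handling of r = 0 in the logarithmic cases is an implementation choice, since r^k \ln(r) → 0 as r → 0):

    import numpy as np

    def gaussian(r, beta):
        return np.exp(-beta * r**2)            # φ(r) = exp(-βr²), β > 0

    def multiquadric(r, beta):
        return np.sqrt(r**2 + beta**2)         # φ(r) = sqrt(r² + β²), β > 0

    def polyharmonic(r, k):
        # φ(r) = r^k for odd k; φ(r) = r^k ln(r) for even k, with φ(0) = 0.
        if k % 2 == 1:
            return r**k
        return r**k * np.log(np.maximum(r, np.finfo(float).tiny))

    def thin_plate_spline(r):
        return polyharmonic(r, 2)              # φ(r) = r² ln(r)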

Estimating the weights

The approximant y(\mathbf{x}) is differentiable with respect to the weights w_i, so the weights could be learned using any of the standard iterative methods for neural networks. Such iterative schemes are not necessary, however: because the approximating function is linear in the weights w_i, they can be estimated directly using the matrix methods of linear least squares.
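As an illustration, here is a minimal sketch of this direct estimate in Python, assuming NumPy and Gaussian basis functions; the sample inputs xs, targets ys, centers, and beta below are all hypothetical values chosen for the example:

    import numpy as np

    # Hypothetical 1-D training data (illustrative values only).
    xs = np.linspace(0.0, 4.0, 20)        # sample inputs x_j
    ys = np.sin(xs)                       # sample targets y_j
    centers = np.array([0.75, 3.25])      # centers c_i (from the figure)
    beta = 1.0                            # Gaussian width (assumed)

    # Design matrix: Phi[j, i] = φ(|x_j - c_i|) = exp(-β (x_j - c_i)²).
    Phi = np.exp(-beta * (xs[:, None] - centers[None, :])**2)

    # y is linear in the weights, so solve min_w ||Phi w - ys||² directly.
    weights, *_ = np.linalg.lstsq(Phi, ys, rcond=None)

    y_fit = Phi @ weights                 # fitted values at the sample points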
