Because Bj,k is nonzero only on the interval (tj..tj+k), the linear system for the B-spline coefficients of the spline to be determined, whether by interpolation, by least-squares approximation, or even as the approximate solution of some differential equation, is banded, which makes solving that linear system particularly easy. For example, to construct a spline s of order k with knot sequence t1 ≤ t2 ≤ ··· ≤ tn+k so that s(xi) = yi for i = 1, ..., n, one solves the linear system

∑j aj Bj,k(xi) = yi,   i = 1, ..., n,

for the unknown B-spline coefficients aj, in which each equation has at most k nonzero entries.
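As a minimal sketch of this banded collocation setup, the following uses scipy (the data, the clamped knot sequence built from averaged data sites, and all variable names are illustrative, not from the original text; note that scipy indexes B-splines by degree k – 1 rather than order k):

```python
import numpy as np
from scipy.interpolate import BSpline

# Interpolate data (x_i, y_i) by a cubic spline (order k = 4,
# i.e. scipy degree 3) written in B-form.
x = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * x)

deg = 3  # scipy uses degree = order - 1
# One simple knot choice: clamped ends plus interior knots at averages
# of the data sites, so the Schoenberg-Whitney conditions hold.
t = np.concatenate((
    np.full(deg + 1, x[0]),
    [(x[i] + x[i + 1] + x[i + 2]) / 3 for i in range(1, len(x) - 3)],
    np.full(deg + 1, x[-1]),
))

# Row i of the collocation matrix holds B_{j,k}(x_i); each row has at
# most k nonzero entries, so the system is banded.
B = BSpline.design_matrix(x, t, deg).toarray()
a = np.linalg.solve(B, y)      # the B-spline coefficients a_j
s = BSpline(t, a, deg)

print(np.max(np.abs(s(x) - y)))  # interpolation error at the data sites
```

With a true banded solver (e.g. scipy.linalg.solve_banded) the cost of the solve grows only linearly in the number of data points.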
Also, many theoretical facts concerning splines are most easily stated and/or proved in terms of B-splines. For example, it is possible to match arbitrary data at sites x1 < x2 < ··· < xn uniquely by a spline of order k with knot sequence (t1, ..., tn+k) if and only if Bj,k(xj) ≠ 0 for all j (the Schoenberg-Whitney conditions). Computations with B-splines are facilitated by the stable recurrence relations

Bj,k(t) = ((t – tj)/(tj+k–1 – tj)) Bj,k–1(t) + ((tj+k – t)/(tj+k – tj+1)) Bj+1,k–1(t).
The dual functional

aj(s) := ∑ν<k (–D)k–1–ν ψj(τ) Dν s(τ)

provides a useful expression for the jth B-spline coefficient of the spline s in terms of its value and derivatives at an arbitrary site τ between tj and tj+k; here D denotes differentiation, and ψj(t) := (tj+1–t)···(tj+k–1–t)/(k–1)!. It can be used to show that aj(s) is closely related to s on the interval [tj..tj+k], and it seems the most efficient means for converting from ppform to B-form.
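The stable recurrence mentioned above can be sketched directly; this toy evaluator (not the toolbox's implementation) builds an order-k B-spline from the order-1 piecewise constants and checks the partition-of-unity property:

```python
import numpy as np

def bspline(j, k, t, x):
    """Evaluate B_{j,k}(x) for knot sequence t via the Cox-de Boor recurrence."""
    if k == 1:
        # Order-1 B-spline: characteristic function of [t_j, t_{j+1}).
        return 1.0 if t[j] <= x < t[j + 1] else 0.0
    left = right = 0.0
    if t[j + k - 1] > t[j]:        # skip terms with zero-length denominators
        left = (x - t[j]) / (t[j + k - 1] - t[j]) * bspline(j, k - 1, t, x)
    if t[j + k] > t[j + 1]:
        right = (t[j + k] - x) / (t[j + k] - t[j + 1]) * bspline(j + 1, k - 1, t, x)
    return left + right

# Order-4 (cubic) B-splines on a uniform knot sequence: wherever a full
# complement of B-splines is nonzero, their values sum to 1.
t = np.arange(8.0)                 # knots 0, 1, ..., 7
k = 4
x = 3.5
vals = [bspline(j, k, t, x) for j in range(len(t) - k)]
print(sum(vals))                   # partition of unity on [t_3, t_4]
```

Because each step only forms convex-like combinations with nonnegative weights for x inside the support, no cancellation occurs, which is what makes the recurrence numerically stable.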
The above constructive approach is not the only avenue to splines. In the variational approach, a spline is obtained as a best interpolant, e.g., as the function with smallest mth derivative among all those matching prescribed function values at certain sites. As it turns out, among the many such splines available, only those that are piecewise-polynomial or, perhaps, piecewise-exponential have found much use. Of particular practical interest is the smoothing spline s = sp which, for given data (xi, yi) with xi∊[a..b], all i, given corresponding positive weights wi, and given smoothing parameter p, minimizes

p E(f) + (1–p) F(Dmf)

over all functions f with m derivatives, where E(f) := ∑i wi |yi – f(xi)|² is the error measure and F(Dmf) := ∫ab |Dmf(t)|² dt is the roughness measure. It turns out that the smoothing spline s is a spline of order 2m with a break at every data site. The smoothing parameter, p, is chosen artfully to strike the right balance between wanting the error measure E(f)
small and wanting the roughness measure F(Dmf) small. The hope is that s contains as much of the information, and as little of the supposed noise, in the data as possible. One approach to this (used in spaps) is to make F(Dmf) as small as possible subject to the condition that E(f) be no bigger than a prescribed tolerance. For computational reasons, spaps uses the (equivalent) smoothing parameter ρ = p/(1–p), i.e., it minimizes ρE(f) + F(Dmf). Also, it is useful at times to use the more flexible roughness measure

Fλ(Dmf) := ∫ab λ(t)|Dmf(t)|² dt,

with λ a suitable positive weight function.
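The trade-off governed by the smoothing parameter can be illustrated with scipy's make_smoothing_spline (a sketch, assuming scipy ≥ 1.10; this covers the cubic m = 2 case, and its parameter lam weights the roughness term, so it plays a role analogous to 1/ρ above; the data here are synthetic):

```python
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 60)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

# m = 2: minimize  sum_i w_i |y_i - f(x_i)|^2 + lam * integral |D^2 f|^2.
# Small lam -> near-interpolation; large lam -> close to a straight line.
rough = make_smoothing_spline(x, y, lam=1e-6)
smooth = make_smoothing_spline(x, y, lam=1.0)

err_rough = np.sqrt(np.mean((rough(x) - y) ** 2))
err_smooth = np.sqrt(np.mean((smooth(x) - y) ** 2))
print(err_rough < err_smooth)  # lighter penalty tracks the data more closely
```

The spaps approach described above differs in interface: rather than picking the penalty weight directly, it asks for a tolerance on E(f) and finds the corresponding parameter automatically.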