Multivariate Normal Regression

Introduction

This section focuses on using likelihood-based methods for multivariate normal regression. The parameters of the regression model are estimated via maximum likelihood estimation. For multiple series, this requires iteration until convergence. The complication due to the possibility of missing data is incorporated into the analysis with a variant of the EM algorithm known as the expectation conditional maximization (ECM) algorithm.

The underlying theory of maximum likelihood estimation and the definition and significance of the Fisher information matrix can be found in Caines [1] and Cramér [2]. The underlying theory of the ECM algorithm can be found in Meng and Rubin [8] and Sexton and Swensen [9].

Multivariate Normal Linear Regression

Suppose that you have a multivariate normal linear regression model in the form

$$\begin{bmatrix} Z_1 \\ \vdots \\ Z_m \end{bmatrix} \sim N\left( \begin{bmatrix} H_1 b \\ \vdots \\ H_m b \end{bmatrix}, \begin{bmatrix} C & \cdots & 0 \\ \vdots & \ddots & \vdots \\ 0 & \cdots & C \end{bmatrix} \right),$$

where the model has m observations of the n-dimensional random variables Z1, ..., Zm and a linear regression with a p-dimensional model parameter vector b. In addition, the model has a sequence of m design matrices H1, ..., Hm, where each design matrix is a known n-by-p matrix.

Given a parameter vector b and a collection of design matrices, the m random variables Zk are assumed to be independent, with identically distributed multivariate normal residual errors Zk − Hk b that have n-vector mean 0 and n-by-n covariance matrix C for each k = 1, ..., m.

A concise way to write this model is

$$Z_k \sim N(H_k b, C)$$

for k = 1, ..., m.

The goal of multivariate normal regression is to obtain maximum likelihood estimates for b and C given a collection of m observations z1, ..., zm of the random variables Z1, ..., Zm. The estimated parameters are the p distinct elements of b and the n (n + 1)/2 distinct elements of C (the lower-triangular elements of C).
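To make the dimensions concrete, the following sketch simulates data from this model with NumPy. The sizes m, n, and p and the parameter values are arbitrary choices for illustration, not values from the toolbox documentation.

    # Simulate z_k ~ N(H_k b, C) for k = 1, ..., m (illustrative values only).
    import numpy as np

    rng = np.random.default_rng(0)
    m, n, p = 200, 3, 4                      # observations, series dimension, parameters

    b_true = rng.normal(size=p)              # p-dimensional model parameter vector b
    A = rng.normal(size=(n, n))
    C_true = A @ A.T + n * np.eye(n)         # n-by-n positive-definite covariance C

    H = rng.normal(size=(m, n, p))           # m known n-by-p design matrices H_k
    Z = np.stack([rng.multivariate_normal(H[k] @ b_true, C_true) for k in range(m)])

The arrays H (m-by-n-by-p) and Z (m-by-n) defined here are reused by the later sketches in this section.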

Note

Quasi-maximum likelihood estimation works with the same models but with a relaxation of the assumption of normally distributed residuals. In this case, however, the parameter estimates are not, in general, asymptotically optimal.

Maximum Likelihood Estimation

To estimate the parameters of the multivariate normal linear regression model using maximum likelihood estimation, it is necessary to maximize the log-likelihood function over the estimation parameters given observations z1, ... , zm.

Given the multivariate normal model to characterize residual errors in the regression model, the log-likelihood function is

$$L(z_1, \ldots, z_m; b, C) = -\frac{1}{2} m n \log(2\pi) - \frac{1}{2} m \log(\det(C)) - \frac{1}{2} \sum_{k=1}^{m} (z_k - H_k b)^T C^{-1} (z_k - H_k b).$$

Although the cross-sectional residuals must be independent, you can use this log-likelihood function for quasi-maximum likelihood estimation. In this case, the estimates for the parameters b and C provide estimates to characterize the first and second moments of the residuals. See Caines [1] for details.
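As a check on the formula, here is a minimal NumPy sketch of the log-likelihood (the helper name mvn_regression_loglik is illustrative, not a toolbox function). It assumes Z is an m-by-n array of observations and H is an m-by-n-by-p array of design matrices, as in the simulation sketch above.

    # Log-likelihood L(z_1, ..., z_m; b, C) for candidate parameters b and C.
    import numpy as np

    def mvn_regression_loglik(Z, H, b, C):
        m, n = Z.shape
        resid = Z - np.einsum("knp,p->kn", H, b)            # z_k - H_k b for every k
        _, logdet = np.linalg.slogdet(C)                    # log(det(C)), C assumed positive definite
        quad = np.einsum("ki,ij,kj->", resid, np.linalg.inv(C), resid)
        return -0.5 * m * n * np.log(2 * np.pi) - 0.5 * m * logdet - 0.5 * quad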

Except for a special case (see Special Case of Multiple Linear Regression Model), if both the model parameters in b and the covariance parameters in C are to be estimated, the estimation problem is intractably nonlinear and a solution must use iterative methods. Denote estimates for the parameters b and C for iteration t = 0, 1, ... with the superscript notation b(t) and C(t).

Given initial estimates b(0) and C(0) for the parameters, the maximum likelihood estimates for b and C are obtained using a two-stage iterative process with

$$b^{(t+1)} = \left( \sum_{k=1}^{m} H_k^T \left(C^{(t)}\right)^{-1} H_k \right)^{-1} \left( \sum_{k=1}^{m} H_k^T \left(C^{(t)}\right)^{-1} z_k \right)$$

and

$$C^{(t+1)} = \frac{1}{m} \sum_{k=1}^{m} \left(z_k - H_k b^{(t+1)}\right) \left(z_k - H_k b^{(t+1)}\right)^T$$

for t = 0, 1, ... .
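A minimal NumPy sketch of this two-stage iteration (not the toolbox implementation) is shown below. Z is m-by-n, H is m-by-n-by-p, and b0, C0 are the initial estimates b(0) and C(0); max_iter and tol are illustrative settings.

    # Two-stage iteration for the maximum likelihood estimates of b and C.
    import numpy as np

    def mvn_regression_mle(Z, H, b0, C0, max_iter=200, tol=1e-8):
        m = Z.shape[0]
        b, C = np.array(b0, dtype=float), np.array(C0, dtype=float)
        for _ in range(max_iter):
            Cinv = np.linalg.inv(C)
            # b^(t+1): generalized least squares given C^(t)
            lhs = np.einsum("kia,ij,kjb->ab", H, Cinv, H)   # sum_k H_k^T C^-1 H_k
            rhs = np.einsum("kia,ij,kj->a", H, Cinv, Z)     # sum_k H_k^T C^-1 z_k
            b_new = np.linalg.solve(lhs, rhs)
            # C^(t+1): average outer product of the residuals at b^(t+1)
            resid = Z - np.einsum("kia,a->ki", H, b_new)
            C_new = resid.T @ resid / m
            done = (np.linalg.norm(b_new - b) + np.linalg.norm(C_new - C)) < tol
            b, C = b_new, C_new
            if done:
                break
        return b, C

With Z and H from the simulation sketch, mvn_regression_mle(Z, H, np.zeros(p), np.eye(n)) should converge to estimates close to b_true and C_true.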

Special Case of Multiple Linear Regression Model

The special case mentioned in Maximum Likelihood Estimation occurs if n = 1 so that the sequence of observations is a sequence of scalar observations. This model is known as a multiple linear regression model. In this case, the covariance matrix C is a 1-by-1 matrix that drops out of the maximum likelihood iterates so that a single-step estimate for b and C can be obtained with converged estimates b(1) and C(1).
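For the n = 1 case, the scalar covariance cancels out of the b update, so a single pass of the iteration amounts to ordinary least squares. A minimal sketch (names are illustrative), assuming z is an m-vector of scalar observations and X is the m-by-p matrix whose k-th row is H_k:

    # Multiple linear regression: one-step estimates of b and the 1-by-1 C.
    import numpy as np

    def multiple_linear_regression(z, X):
        b = np.linalg.solve(X.T @ X, X.T @ z)     # OLS estimate of b
        resid = z - X @ b
        C = np.array([[resid @ resid / z.size]])  # residual variance as a 1-by-1 matrix
        return b, C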

Least-Squares Regression

Another simplification of the general model is called least-squares regression. If b(0) = 0 and C(0) = I, then b(1) and C(1) from the two-stage iterative process are least-squares estimates for b and C, where

$$b_{LS} = \left( \sum_{k=1}^{m} H_k^T H_k \right)^{-1} \left( \sum_{k=1}^{m} H_k^T z_k \right)$$

and

$$C_{LS} = \frac{1}{m} \sum_{k=1}^{m} \left(z_k - H_k b_{LS}\right) \left(z_k - H_k b_{LS}\right)^T.$$
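Equivalently, the least-squares estimates can be computed directly, as in this NumPy sketch (illustrative helper name; Z and H as in the earlier sketches):

    # Least-squares estimates b_LS and C_LS.
    import numpy as np

    def mvn_regression_ls(Z, H):
        m = Z.shape[0]
        lhs = np.einsum("kia,kib->ab", H, H)          # sum_k H_k^T H_k
        rhs = np.einsum("kia,ki->a", H, Z)            # sum_k H_k^T z_k
        b_ls = np.linalg.solve(lhs, rhs)
        resid = Z - np.einsum("kia,a->ki", H, b_ls)
        C_ls = resid.T @ resid / m                    # 1/m times the sum of outer products
        return b_ls, C_ls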

Mean and Covariance Estimation

A final simplification of the general model is to estimate the mean and covariance of a sequence of n-dimensional observations z1, ..., zm. In this case, the number of series is equal to the number of model parameters with n = p, and the design matrices are identity matrices with Hk = I for k = 1, ..., m, so that b is an estimate for the mean and C is an estimate of the covariance of the collection of observations z1, ..., zm.
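In this special case the estimates reduce to the sample mean and the 1/m-normalized sample covariance, as in this short sketch (Z is m-by-n; the helper name is illustrative):

    # Mean and covariance estimation with H_k = I for all k.
    import numpy as np

    def mvn_mean_cov(Z):
        m = Z.shape[0]
        b = Z.mean(axis=0)          # estimate of the mean
        resid = Z - b
        C = resid.T @ resid / m     # estimate of the covariance (1/m normalization)
        return b, C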

Convergence

If the iterative process continues until the log-likelihood function increases by no more than a specified amount, the resultant estimates are said to be maximum likelihood estimates bML and CML.

If n = 1 (which implies a single data series), convergence occurs after only one iterative step, which, in turn, implies that the least-squares and maximum likelihood estimates are identical. If, however, n > 1, the least-squares and maximum likelihood estimates are usually distinct.

In Financial Toolbox™ software, both the changes in the log-likelihood function and the norm of the change in parameter estimates are monitored. Whenever both changes fall below specified tolerances (which should be something between machine precision and its square root), the toolbox functions terminate under an assumption that convergence has been achieved.
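A sketch of such a dual termination test (the tolerance values are illustrative, not the toolbox defaults):

    # Stop when both the log-likelihood change and the parameter change are small.
    import numpy as np

    def has_converged(loglik_old, loglik_new, theta_old, theta_new,
                      tol_obj=1e-8, tol_param=1e-8):
        return (abs(loglik_new - loglik_old) < tol_obj and
                np.linalg.norm(np.asarray(theta_new) - np.asarray(theta_old)) < tol_param)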

Fisher Information

Since maximum likelihood estimates are formed from samples of random variables, the estimators are themselves random variables, so an estimate derived from such samples has an uncertainty associated with it. To characterize these uncertainties, which are called standard errors, two quantities are derived from the total log-likelihood function.

The Hessian of the total log-likelihood function is

$$\nabla^2 L(z_1, \ldots, z_m; \theta)$$

and the Fisher information matrix is

$$I(\theta) = -E\left[ \nabla^2 L(z_1, \ldots, z_m; \theta) \right],$$

where the partial derivatives of the ∇² operator are taken with respect to the combined parameter vector θ that contains the distinct components of b and C, with a total of q = p + n (n + 1)/2 parameters.

Since maximum likelihood estimation is concerned with large-sample estimates, the central limit theorem applies to the estimates and the Fisher information matrix plays a key role in the sampling distribution of the parameter estimates. Specifically, maximum likelihood parameter estimates are asymptotically normally distributed such that

$$\left(\theta^{(t)} - \theta\right) \sim N\left(0,\, I^{-1}\left(\theta^{(t)}\right)\right) \quad \text{as } t \to \infty,$$

where θ is the combined parameter vector and θ(t) is the estimate for the combined parameter vector at iteration t = 0, 1, ... .

The inverse of the Fisher information matrix provides a lower bound, called the Cramér-Rao lower bound, for the variances (and hence the standard errors) of the estimates of the model parameters.
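For the regression parameters b in this model, the corresponding block of the Fisher information works out to the sum of the terms H_k^T C^{-1} H_k, which gives Cramér-Rao standard errors for the elements of b. The sketch below (NumPy, illustrative helper names) computes only this b block and ignores the covariance-parameter block of I(θ).

    # b block of the Fisher information and the implied standard errors for b.
    import numpy as np

    def fisher_info_b(H, C):
        Cinv = np.linalg.inv(C)
        return np.einsum("kia,ij,kjb->ab", H, Cinv, H)   # sum_k H_k^T C^{-1} H_k

    def std_errors_b(H, C):
        return np.sqrt(np.diag(np.linalg.inv(fisher_info_b(H, C))))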

Statistical Tests

Given an estimate for the combined parameter vector θ, the squared standard errors are the diagonal elements of the inverse of the Fisher information matrix

$$s^2\left(\hat{\theta}_i\right) = \left( I^{-1}\left(\hat{\theta}\right) \right)_{ii}$$

for i = 1, ..., q.

Since the standard errors are estimates for the standard deviations of the parameter estimates, you can construct confidence intervals so that, for example, a 95% interval for each parameter estimate is approximately

$$\hat{\theta}_i \pm 1.96 \, s\left(\hat{\theta}_i\right)$$

for i = 1, ..., q.
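A one-line sketch of these intervals, given arrays theta_hat and se holding the q parameter estimates and their standard errors (names are illustrative):

    # Approximate 95% confidence intervals theta_hat_i +/- 1.96 * s(theta_hat_i).
    import numpy as np

    def confidence_intervals(theta_hat, se, z=1.96):
        theta_hat, se = np.asarray(theta_hat), np.asarray(se)
        return np.column_stack((theta_hat - z * se, theta_hat + z * se))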

Error ellipses at a level of significance α ∈ [0, 1] for the parameter estimates satisfy the inequality

$$\left(\theta - \hat{\theta}\right)^T I\left(\hat{\theta}\right) \left(\theta - \hat{\theta}\right) \leq \chi^2_{1-\alpha,\, q}$$

in which the quadratic form follows a χ² distribution with q degrees of freedom. Similar inequalities can be formed for any subcollection of the parameters.
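A sketch of the corresponding test (SciPy supplies the chi-square quantile; the helper name and the inputs theta, theta_hat, and I_hat are illustrative):

    # Check whether a candidate theta lies inside the level-alpha error ellipse.
    import numpy as np
    from scipy.stats import chi2

    def in_error_ellipse(theta, theta_hat, I_hat, alpha=0.05):
        d = np.asarray(theta) - np.asarray(theta_hat)
        return float(d @ I_hat @ d) <= chi2.ppf(1 - alpha, df=d.size)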

In general, given parameter estimates, the computed Fisher information matrix, and the log-likelihood function, you can perform numerous statistical tests on the parameters, the model, and the regression.
