What Is a Linear Regression Model?

A linear regression model describes the relationship between a dependent variable, y, and one or more independent variables, X. The dependent variable is also called the response variable. Independent variables are also called explanatory or predictor variables. Continuous predictor variables are also called covariates, and categorical predictor variables are also called factors. The matrix X of observations on predictor variables is usually called the design matrix.

A multiple linear regression model is

$$y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \cdots + \beta_p X_{ip} + \varepsilon_i, \qquad i = 1, \ldots, n,$$

where

  • n is the number of observations.

  • yi is the ith response.

  • βk is the kth coefficient, where β0 is the constant term in the model. Sometimes, design matrices might include information about the constant term. However, fitlm and stepwiselm include a constant term in the model by default, so you must not enter a column of 1s into your design matrix X (as shown in the sketch below).

  • Xij is the ith observation on the jth predictor variable, j = 1, ..., p.

  • εi is the ith noise term, that is, random error.

If a model includes only one predictor variable (p = 1), then the model is called a simple linear regression model.
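The following is a minimal MATLAB sketch of the fitlm default described in the list above, using simulated data; the sample size, the true coefficients 3, 2, and -1, and the noise level are invented for illustration, and fitlm requires the Statistics and Machine Learning Toolbox.

```matlab
% A minimal sketch: fit a multiple linear regression with fitlm.
% X holds only the two predictors; fitlm adds the constant term
% beta_0 by default, so no column of 1s is included.
rng(1)                                        % reproducible example
n = 100;                                      % number of observations
X = rand(n,2);                                % design matrix, no column of 1s
y = 3 + 2*X(:,1) - X(:,2) + 0.5*randn(n,1);   % true coefficients plus noise

mdl = fitlm(X, y)                             % displays estimated coefficients
```

With this simulated data, the displayed estimates should land near the true values 3, 2, and -1.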

In general, a linear regression model can be a model of the form

$$y_i = \beta_0 + \sum_{k=1}^{K} \beta_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip}) + \varepsilon_i, \qquad i = 1, \ldots, n,$$

where fk(·) is a scalar-valued function of the independent variables Xij. The functions fk(X) can take any form, including nonlinear functions and polynomials. The linearity of a linear regression model refers to linearity in the coefficients βk. That is, the response variable y is a linear function of the coefficients βk.

Some examples of linear models are:

$$y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i3} + \varepsilon_i$$

$$y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i1}^{3} + \beta_4 X_{i2}^{2} + \varepsilon_i$$

$$y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \beta_3 X_{i1} X_{i2} + \beta_4 \log X_{i3} + \varepsilon_i$$
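As a sketch of why all three count as linear models, the third example can be fit with fitlm once the interaction and log terms are supplied; the variable names (X1, X2, logX3), the simulated data, and the true coefficients below are assumptions made for illustration.

```matlab
% A minimal sketch: the third example is linear in the betas, so fitlm
% handles it once the transformed predictor columns are computed.
rng(2)
n  = 200;
X1 = randn(n,1);
X2 = randn(n,1);
X3 = rand(n,1) + 1;                 % kept positive so log(X3) is defined
y  = 1 + 0.5*X1 - 2*X2 + 0.3*X1.*X2 + 1.2*log(X3) + 0.5*randn(n,1);

tbl = table(X1, X2, log(X3), y, 'VariableNames', {'X1','X2','logX3','y'});
mdl = fitlm(tbl, 'y ~ X1 + X2 + X1:X2 + logX3')   % Wilkinson notation
```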

The following, however, are not linear models since they are not linear in the unknown coefficients, βk.

$$\log y_i = \beta_0 + \beta_1 X_{i1} + \beta_2 X_{i2} + \varepsilon_i$$

$$y_i = \beta_0 + \beta_1 X_{i1} + \frac{1}{\beta_2 X_{i2}} + e^{\beta_3 X_{i1} X_{i2}} + \varepsilon_i$$
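Fitting such models requires nonlinear estimation, for example with fitnlm. Below is a minimal sketch for a simplified variant of the second model above (the exponential term is dropped); the simulated data, the starting values, and the simplification itself are assumptions made for illustration.

```matlab
% A minimal sketch: a model that is nonlinear in its coefficients cannot
% be fit with fitlm. fitnlm takes the model as an explicit function of
% the coefficient vector b.
rng(3)
n = 150;
X = rand(n,2) + 0.5;                          % keep predictors away from 0
y = 2 + 1.5*X(:,1) + 1./(0.8*X(:,2)) + 0.1*randn(n,1);

modelfun = @(b,X) b(1) + b(2)*X(:,1) + 1./(b(3)*X(:,2));
beta0    = [1 1 1];                           % starting point for the search
mdl = fitnlm(X, y, modelfun, beta0)
```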

The usual assumptions for linear regression models are:

  • The noise terms, εi, are uncorrelated.

  • The noise terms, εi, are independent and identically distributed normal random variables with mean zero and constant variance, σ². Thus,

    $$E(y_i) = E\left(\sum_{k=0}^{K} \beta_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip}) + \varepsilon_i\right) = \sum_{k=0}^{K} \beta_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip}) + E(\varepsilon_i) = \sum_{k=0}^{K} \beta_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip})$$

    and

    $$V(y_i) = V\left(\sum_{k=0}^{K} \beta_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip}) + \varepsilon_i\right) = V(\varepsilon_i) = \sigma^2$$

    So the variance of yi is the same for all levels of Xij.

  • The responses yi are uncorrelated.
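These assumptions can be checked informally with residual plots. A minimal sketch, assuming mdl is a fitted model returned by fitlm as in the earlier examples:

```matlab
% A minimal sketch of checking the assumptions on a fitted model mdl.
plotResiduals(mdl, 'histogram')    % residuals roughly normal, centered at 0?
plotResiduals(mdl, 'fitted')       % spread constant across fitted values?
plotResiduals(mdl, 'lagged')       % successive residuals uncorrelated?
```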

The fitted linear function is

$$\hat{y}_i = \sum_{k=0}^{K} b_k f_k(X_{i1}, X_{i2}, \ldots, X_{ip}), \qquad i = 1, \ldots, n,$$

where ŷi is the estimated response and the bk are the fitted coefficients. The coefficients are estimated so as to minimize the mean squared difference between the prediction vector ŷ and the true response vector y, that is, to make ŷ − y as small as possible in the squared-norm sense. This method is called the method of least squares. Under the assumptions on the noise terms, these coefficients also maximize the likelihood of the prediction vector.
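A minimal sketch of this minimization computed directly: the backslash operator solves the same least-squares problem, provided the constant column is added by hand (unlike fitlm, it does not add one). The data and coefficients are invented for illustration.

```matlab
% A minimal sketch: least-squares coefficients via the backslash operator.
rng(4)
n = 100;
X = rand(n,2);
y = 3 + 2*X(:,1) - X(:,2) + 0.5*randn(n,1);

Xd   = [ones(n,1) X];    % design matrix with an explicit constant column
b    = Xd \ y            % minimizes the squared norm of Xd*b - y
yhat = Xd * b;           % the fitted responses, y-hat
```

fitlm performs the same minimization internally and additionally reports standard errors and other diagnostics.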

In a linear regression model of the form y = β1X1 + β2X2 + ... + βpXp, the coefficient βk expresses the impact of a one-unit change in the predictor variable Xk on the mean of the response, E(y), provided that all other variables are held constant. The sign of the coefficient gives the direction of the effect. For example, if the linear model is E(y) = 1.8 - 2.35X1 + X2, then -2.35 indicates a 2.35-unit decrease in the mean response for a one-unit increase in X1, given that X2 is held constant. If the model is E(y) = 1.1 + 1.5X1² + X2, the coefficient of X1² indicates a 1.5-unit increase in the mean of y for a one-unit increase in X1², given that all else is held constant. However, in the case of E(y) = 1.1 + 2.1X1 + 1.5X1², it is difficult to interpret the coefficients in the same way, since it is not possible to hold X1 constant while X1² changes, or vice versa.
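As a quick numeric check of the first interpretation, the example mean function can be evaluated at two points that differ by one unit in X1; the evaluation points (X1 = 2 then 3, X2 fixed at 5) are arbitrary choices.

```matlab
% A minimal sketch: the -2.35 coefficient as a one-unit effect for the
% example model E(y) = 1.8 - 2.35*X1 + X2.
Ey = @(x1,x2) 1.8 - 2.35*x1 + x2;
Ey(3,5) - Ey(2,5)        % returns -2.35: X1 up by one unit, X2 held fixed
```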

