The Linear regression reference article from the English Wikipedia on 24-Apr-2004
(provided by Fixed Reference: snapshots of Wikipedia from wikipedia.org)

Linear regression

In statistics, linear regression is a method of estimating the conditional expected value of one variable y given the values of some other variable or variables x. The variable of interest, y, is conventionally called the "dependent variable". The terms "endogenous variable" and "output variable" are also used. The other variables x are called the "independent variables". The terms "exogenous variables" and "input variables" are also used. The dependent and independent variables may be scalars or vectors. If there are several independent variables, one speaks of multiple linear regression.

The term independent variable suggests that its value can be chosen at will, and the dependent variable is an effect, i.e., causally dependent on the independent variable, as in a stimulus-response model. Although many linear regression models are formulated as models of cause and effect, the direction of causation may just as well go the other way, or indeed there need not be any causal relation at all.

Regression, in general, is the problem of estimating a conditional expected value. Linear regression is called "linear" because the relation of the dependent to the independent variables is a linear function of some parameters. Regression models which are not a linear function of the parameters are called nonlinear regression models. A neural network is an example of a nonlinear regression model.

Still more generally, regression may be viewed as a special case of density estimation. The joint distribution of the dependent and independent variables can be constructed from the conditional distribution of the dependent variable and the marginal distribution of the independent variables. In some problems, it is convenient to work in the other direction: from the joint distribution, the conditional distribution of the dependent variable can be derived.

Table of contents
1 Historical remarks
2 Statement of the linear regression model
3 Parameter estimation
4 Summarizing the data
5 Estimating beta
6 Estimating alpha
7 Displaying the residuals
8 Ancillary statistics

Historical remarks

The earliest form of linear regression was the method of least squares, which was published by Legendre in 1805, and by Gauss in 1809. The term "least squares" is from Legendre's term, moindres quarrés. However, Gauss claimed that he had known the method since 1795.

Legendre and Gauss both applied the method to the problem of determining, from astronomical observations, the orbits of bodies about the sun. Euler had worked on the same problem (1748) without success. Gauss published a further development of the theory of least squares in 1821, including a version of the Gauss-Markov theorem.

The term "reversion" was used in the nineteenth century to describe a biological phenomenon, namely that the progeny of exceptional individuals tend on average to be less exceptional than their parents, and more like their more distant ancestors. Francis Galton studied this phenomenon, and applied the slightly misleading term "regression towards mediocrity" to it (parents of exceptional individuals also tend on average to be less exceptional than their children). For Galton, regression had only this biological meaning, but his work (1877, 1885) was extended by Karl Pearson and G.U. Yule to a more general statistical context (1897, 1903). In the work of Pearson and Yule, the joint distribution of the dependent and independent variables is assumed to be Gaussian. This assumption was weakened by R.A. Fisher in his works of 1922 and 1925. Fisher assumed that the conditional distribution of the dependent variable is Gaussian, but the joint distribution need not be. In this respect, Fisher's assumption is closer to Gauss's formulation of 1821.

Statement of the linear regression model

A linear regression model is typically stated in the form

y = α + βx + ε.

The right hand side may take other forms, but generally comprises a linear combination of the parameters, here denoted α and β. The term ε represents the unpredicted or unexplained variation in the dependent variable; it is conventionally called the "error" whether it is really a measurement error or not. The error term is conventionally assumed to have expected value equal to zero, as a nonzero expected value could be absorbed into α. See also errors and residuals in statistics; the difference between an error and a residual is also dealt with below.

An equivalent formulation, which explicitly shows the linear regression as a model of conditional expectation, is

E(y | x) = α + βx,

with the conditional distribution of y given x being the distribution of the error term shifted by α + βx.

A linear regression model need not be affine, let alone linear, in the independent variables x. For example,

y = α + βx + γx² + ε

is a linear regression model, for the right-hand side is a linear combination of the parameters α, β, and γ. In this case it is useful to think of x² as a new independent variable, formed by modifying the original variable x. Indeed, any linear combination of functions f(x), g(x), h(x), ..., is a linear regression model, so long as these functions do not have any free parameters (otherwise the model is generally a nonlinear regression model). The least-squares estimates of α, β, and γ are linear in the response variable y, and nonlinear in x (they are nonlinear in x even if the γ and α terms are absent; if only the β term were present, then doubling all observed x values would multiply the least-squares estimate of β by 1/2).
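As a concrete illustration, here is a minimal sketch in Python (assuming NumPy, with invented data) that fits such a model by ordinary least squares, simply treating x and x² as two columns of a design matrix:

import numpy as np

# Illustrative data only: y = 1 + 2x - 0.5x^2 plus noise.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 40)
y = 1.0 + 2.0 * x - 0.5 * x**2 + rng.normal(0, 0.3, x.size)

# Design matrix with columns 1, x and x^2; each column is a fixed function of x,
# so the model remains linear in the parameters alpha, beta, gamma.
X = np.column_stack([np.ones_like(x), x, x**2])

# Least-squares estimates of (alpha, beta, gamma).
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)   # roughly [1.0, 2.0, -0.5]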

Parameter estimation

Often in linear regression problems statisticians rely on the Gauss-Markov assumptions:

The observations yi satisfy yi = α + βxi + εi for i = 1, ..., n.
The errors εi have expected value zero: E(εi) = 0.
The errors all have the same finite variance: Var(εi) = σ².
The errors are uncorrelated with one another: Cov(εi, εj) = 0 for i ≠ j.

(See also the Gauss-Markov theorem. That result says that under the assumptions above, least-squares estimators are optimal in the sense that, among linear unbiased estimators, they have the smallest variance.)

Sometimes stronger assumptions are relied on:

The errors εi are independent and identically distributed, each having a normal distribution with expected value 0 and variance σ².

If xi is a vector we can take the product βxi to be a "dot-product".

A statistician will usually estimate the unobservable values of the parameters α and β by the method of least squares, which consists of finding the values of a and b that minimize the sum of squares of the residuals

ei = yi − (a + bxi),   i = 1, ..., n,

i.e., the sum Σi (yi − a − bxi)².

Those values of a and b are the "least-squares estimates." The residuals may be regarded as estimates of the errors; see also errors and residuals in statistics.

Notice that, whereas the errors are independent, the residuals cannot be independent, because the use of least-squares estimates implies that the sum of the residuals must be 0, and the dot product of the vector of residuals with the vector of x-values must be 0, i.e., we must have

Σi ei = 0

and

Σi xi ei = 0.

These two linear constraints imply that the vector of residuals must lie within a certain (n − 2)-dimensional subspace of Rⁿ; hence we say that there are "n − 2 degrees of freedom for error". If one assumes the errors are normally distributed and independent, then it can be shown to follow that 1) the sum of squares of residuals

Σi ei² = Σi (yi − a − bxi)²

is distributed as

σ²χ²(n − 2),

i.e., the sum of squares divided by the error variance σ² has a chi-square distribution with n − 2 degrees of freedom, and 2) the sum of squares of residuals is actually probabilistically independent of the estimates a, b of the parameters α and β.

These facts make it possible to use Student's t-distribution with n − 2 degrees of freedom (so named in honor of the pseudonymous "Student") to find confidence intervals for α and β.
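The following Python sketch (assuming NumPy and SciPy, with simulated data) carries out these calculations: it forms the least-squares estimates, checks the two linear constraints on the residuals, and computes 95% confidence intervals from Student's t-distribution with n − 2 degrees of freedom:

import numpy as np
from scipy import stats

# Simulated data for illustration.
rng = np.random.default_rng(2)
n = 25
x = rng.uniform(0, 10, n)
y = 1.0 + 0.8 * x + rng.normal(0, 1.0, n)

# Least-squares estimates a and b.
xbar, ybar = x.mean(), y.mean()
ssx = ((x - xbar) ** 2).sum()                 # sum of squared deviations of x
b = ((x - xbar) * (y - ybar)).sum() / ssx     # estimate of beta
a = ybar - b * xbar                           # estimate of alpha

# The two linear constraints on the residuals.
e = y - (a + b * x)
print(np.isclose(e.sum(), 0.0))               # sum of residuals is 0
print(np.isclose((x * e).sum(), 0.0))         # dot product with the x-values is 0

# 95% confidence intervals using Student's t with n - 2 degrees of freedom.
s2 = (e ** 2).sum() / (n - 2)                 # estimate of the error variance sigma^2
se_b = np.sqrt(s2 / ssx)
se_a = np.sqrt(s2 * (1.0 / n + xbar ** 2 / ssx))
tcrit = stats.t.ppf(0.975, df=n - 2)
print("beta :", b - tcrit * se_b, b + tcrit * se_b)
print("alpha:", a - tcrit * se_a, a + tcrit * se_a)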

Denote by capital Y the column vector whose ith entry is yi, and by capital X the n × 2 matrix whose second column contains the xi as its ith entry, and whose first column contains n 1s. Let ε be the column vector containing the errors εi. Let δ and d be respectively the 2 × 1 column vector containing α and β and the 2 × 1 column vector containing the estimates a and b. Then the model can be written as

Y = Xδ + ε,

where ε is normally distributed with expected value 0 (i.e., a column vector of 0s) and variance σ²In, where In is the n × n identity matrix. The vector Xd (where, recall, d is the vector of estimates) is then the orthogonal projection of Y onto the column space of X.

Then it can be shown that

d = (X'X)⁻¹X'Y

(where X' is the transpose of X) and the sum of squares of residuals is

Y'(In − X(X'X)⁻¹X')Y.

The fact that the matrix X(X'X)⁻¹X' is a symmetric idempotent matrix is relied on constantly, both in computations and in proofs of theorems. The linearity of d as a function of the vector Y, expressed above by writing d = (X'X)⁻¹X'Y, is the reason why this is called "linear" regression. Nonlinear regression uses nonlinear methods of estimation.

The matrix In − X(X'X)⁻¹X' that appears above is a symmetric idempotent matrix of rank n − 2. Here is an example of the use of that fact in the theory of linear regression. The finite-dimensional spectral theorem of linear algebra says that any real symmetric matrix M can be diagonalized by an orthogonal matrix G, i.e., the matrix G'MG is a diagonal matrix. If the matrix M is also idempotent, then the diagonal entries in G'MG must be idempotent numbers. Only two real numbers are idempotent: 0 and 1. So In − X(X'X)⁻¹X', after diagonalization, has n − 2 1s and two 0s on the diagonal. That is most of the work in showing that the sum of squares of residuals has a chi-square distribution with n − 2 degrees of freedom.
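A short numerical sketch in Python (assuming NumPy, with simulated data) illustrates this matrix formulation: it computes d = (X'X)⁻¹X'Y and checks that X(X'X)⁻¹X' is symmetric and idempotent and that In − X(X'X)⁻¹X' has trace, and hence rank, n − 2:

import numpy as np

# Simulated data for illustration.
rng = np.random.default_rng(3)
n = 20
x = rng.uniform(0, 5, n)
Y = 2.0 - 1.0 * x + rng.normal(0, 0.5, n)

X = np.column_stack([np.ones(n), x])      # first column all 1s, second column the x's
d = np.linalg.solve(X.T @ X, X.T @ Y)     # the estimates a, b, i.e. (X'X)^(-1) X'Y

H = X @ np.linalg.inv(X.T @ X) @ X.T      # X(X'X)^(-1)X'
M = np.eye(n) - H                         # In - X(X'X)^(-1)X'

print(np.allclose(H, H.T))                              # symmetric
print(np.allclose(H @ H, H))                            # idempotent
print(np.isclose(np.trace(M), n - 2))                   # trace = rank = n - 2
print(np.isclose(Y @ M @ Y, ((Y - X @ d) ** 2).sum()))  # Y'MY is the sum of squared residuals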

Note: A useful alternative to least-squares linear regression is robust regression, in which the mean absolute error is minimized instead of the mean squared error. Robust regression is computationally much more intensive than linear regression and is somewhat more difficult to implement as well.
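As a rough illustration of that extra cost, the Python sketch below (assuming NumPy and SciPy, with invented data containing a few outliers) fits a line by minimizing the mean absolute error with a general-purpose numerical optimizer, since no simple closed-form solution exists:

import numpy as np
from scipy.optimize import minimize

# Invented data with a few outliers that would drag a least-squares line.
rng = np.random.default_rng(4)
x = rng.uniform(0, 10, 50)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 50)
y[:3] += 20.0

def mean_abs_error(params):
    a, b = params
    return np.abs(y - (a + b * x)).mean()

# Minimize the mean absolute error numerically (least absolute deviations).
fit = minimize(mean_abs_error, x0=[0.0, 0.0], method="Nelder-Mead")
print(fit.x)   # estimates of alpha and beta, much less affected by the outliers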

Summarizing the data

We sum the observations, the squares of the Xs and Ys, and the products of the Xs and Ys to obtain the following quantities:

SX = x1 + x2 + ... + xn, and SY similarly,

SXX = x1² + x2² + ... + xn², and SYY similarly,

SXY = x1y1 + x2y2 + ... + xnyn.

Estimating beta

We use the summary statistics above to calculate b, the estimate of beta:

b = (n SXY − SX SY) / (n SXX − (SX)²).

Estimating alpha

We use the estimate of beta and the other statistics to estimate alpha by:

a = (SY − b SX) / n.
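The following Python sketch (assuming NumPy, with invented data) carries out these summary-statistic calculations and cross-checks them against a library fit:

import numpy as np

# Invented data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

# The summary statistics defined above.
SX, SY = x.sum(), y.sum()
SXX, SXY = (x * x).sum(), (x * y).sum()

b = (n * SXY - SX * SY) / (n * SXX - SX ** 2)   # estimate of beta
a = (SY - b * SX) / n                           # estimate of alpha
print(a, b)

# Cross-check: np.polyfit returns [slope, intercept] for a degree-1 fit.
print(np.polyfit(x, y, 1))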

Displaying the residuals

The first method of displaying the residuals uses the histogram or cumulative distribution to depict the similarity (or lack thereof) to a normal distribution. Non-normality suggests that the model may not be a good summary description of the data.

We plot the residuals,

ei = yi − (a + bxi),

against the independent variable, x. There should be no discernible trend or pattern if the model is satisfactory for these data. Some of the possible problems are:

Residuals whose spread increases or decreases with x, indicating non-constant error variance (heteroscedasticity).
A curved pattern in the residuals, suggesting that a straight line is the wrong form for the model.
Individual points lying far from the rest, i.e., possible outliers.

Studentized residuals can be used in outlier detection.
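A brief Python sketch (assuming NumPy and Matplotlib, with simulated data) produces both displays, a histogram of the residuals and a plot of the residuals against x:

import numpy as np
import matplotlib.pyplot as plt

# Simulated data and a least-squares fit.
rng = np.random.default_rng(5)
x = rng.uniform(0, 10, 100)
y = 1.0 + 2.0 * x + rng.normal(0, 1.0, 100)
b, a = np.polyfit(x, y, 1)          # slope, intercept
e = y - (a + b * x)                 # residuals

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.hist(e, bins=15)                # should look roughly normal
ax1.set_title("Histogram of residuals")
ax2.scatter(x, e, s=10)
ax2.axhline(0.0, color="grey")      # no trend or pattern expected about this line
ax2.set_xlabel("x")
ax2.set_title("Residuals vs. x")
plt.show()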

Ancillary statistics

The sum of squared deviations can be partitioned as in ANOVA to indicate what part of the dispersion of the dependent variable is explained by the independent variable.

The correlation coefficient, r, can be calculated by

r = (n SXY − SX SY) / √((n SXX − (SX)²)(n SYY − (SY)²)).

This statistic is a measure of how well a straight line describes the data. Values near zero suggest that the model is ineffective. r² is frequently interpreted as the fraction of the variability in y explained by the independent variable X.
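The same summary statistics give r directly, as in this Python sketch (assuming NumPy, with invented data), cross-checked against NumPy's built-in correlation:

import numpy as np

# Invented data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8])
n = len(x)

SX, SY = x.sum(), y.sum()
SXX, SYY, SXY = (x * x).sum(), (y * y).sum(), (x * y).sum()

r = (n * SXY - SX * SY) / np.sqrt((n * SXX - SX ** 2) * (n * SYY - SY ** 2))
print(r, r ** 2)                    # r and the fraction of variability explained
print(np.corrcoef(x, y)[0, 1])      # agrees with the summary-statistic formula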
