Ridge regression
The theory was first introduced by Hoerl and Kennard in 1970 in their Technometrics papers "Ridge Regression: Biased Estimation for Nonorthogonal Problems" and "Ridge Regression: Applications to Nonorthogonal Problems".[5][6][1]
Ridge regression was developed as a possible solution to the imprecision of least-squares estimators when linear regression models have some multicollinear (highly correlated) independent variables, by creating a ridge regression estimator (RR). This provides a more precise estimate of the ridge parameters, as its variance and mean squared error are often smaller than those of the least-squares estimators previously derived.[7][2]
Overview
In the simplest case, the problem of a near-singular moment matrix $X^\mathsf{T} X$ is alleviated by adding positive elements to the diagonals, thereby decreasing its condition number. Analogous to the ordinary least squares estimator, the simple ridge estimator is then given by

$$\hat{\beta}_R = (X^\mathsf{T} X + \lambda I)^{-1} X^\mathsf{T} y,$$

where $y$ is the regressand, $X$ is the design matrix, $I$ is the identity matrix, and the ridge parameter $\lambda \geq 0$ serves as the constant shifting the diagonals of the moment matrix.[8] It can be shown that this estimator is the solution to the least squares problem subject to the constraint $\beta^\mathsf{T}\beta = c$, which can be expressed as a Lagrangian minimization:

$$\hat{\beta}_R = \operatorname{argmin}_{\beta} \; (y - X\beta)^\mathsf{T}(y - X\beta) + \lambda (\beta^\mathsf{T}\beta - c),$$

which shows that $\lambda$ is nothing but the Lagrange multiplier of the constraint.[9] In fact, there is a one-to-one relationship between $\lambda$ and $c$, and since, in practice, we do not know $c$, we define $\lambda$ heuristically or find it via additional data-fitting strategies; see Determination of the Tikhonov factor.

Note that, when $\lambda = 0$, in which case the constraint is non-binding, the ridge estimator reduces to ordinary least squares. A more general approach to Tikhonov regularization is discussed below.
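As a concrete illustration, the following is a minimal NumPy sketch of the closed-form ridge estimator above; the data, the variable names, and the value of the ridge parameter are purely illustrative.

```python
import numpy as np

def ridge_estimate(X, y, lam):
    """Simple ridge estimator: (X'X + lam*I)^(-1) X'y."""
    n_features = X.shape[1]
    # Solve the shifted normal equations rather than forming an explicit inverse.
    return np.linalg.solve(X.T @ X + lam * np.eye(n_features), X.T @ y)

# Hypothetical data with two highly correlated predictors.
rng = np.random.default_rng(0)
x1 = rng.normal(size=100)
x2 = x1 + 0.01 * rng.normal(size=100)   # nearly collinear with x1
X = np.column_stack([x1, x2])
y = x1 + 0.5 * x2 + rng.normal(scale=0.1, size=100)

beta_ols = ridge_estimate(X, y, 0.0)     # lam = 0 recovers ordinary least squares
beta_ridge = ridge_estimate(X, y, 1.0)   # lam > 0 shrinks the coefficients
```

Setting `lam = 0.0` reproduces the ordinary least squares solution, in line with the non-binding constraint case discussed above.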
History
Tikhonov regularization was invented independently in many different contexts. It became widely known
through its application to integral equations in the works of Andrey Tikhonov[10][11][12][13][14] and David
L. Phillips.[15] Some authors use the term Tikhonov–Phillips regularization. The finite-dimensional
case was expounded by Arthur E. Hoerl, who took a statistical approach,[16] and by Manus Foster, who
interpreted this method as a Wiener–Kolmogorov (Kriging) filter.[17] Following Hoerl, it is known in the
statistical literature as ridge regression,[18] named after ridge analysis ("ridge" refers to the path from the
constrained maximum).[19]
Tikhonov regularization
Suppose that for a known real matrix $A$ and vector $b$, we wish to find a vector $x$ such that

$$A x = b.$$

The standard approach is ordinary least squares linear regression. However, if no $x$ satisfies the equation, or more than one $x$ does (that is, the solution is not unique), the problem is said to be ill posed. In such cases, ordinary least squares estimation leads to an overdetermined, or more often an underdetermined, system of equations. Most real-world phenomena have the effect of low-pass filters in the forward direction where $A$ maps $x$ to $b$. Therefore, in solving the inverse problem, the inverse mapping operates as a high-pass filter that has the undesirable tendency of amplifying noise (eigenvalues / singular values are largest in the reverse mapping where they were smallest in the forward mapping). In addition, ordinary least squares implicitly nullifies every element of the reconstructed version of $x$ that is in the null space of $A$, rather than allowing for a model to be used as a prior for $x$. Ordinary least squares seeks to minimize the sum of squared residuals, which can be compactly written as

$$\|A x - b\|_2^2,$$

where $\|\cdot\|_2$ is the Euclidean norm.

In order to give preference to a particular solution with desirable properties, a regularization term can be included in this minimization:

$$\|A x - b\|_2^2 + \|\Gamma x\|_2^2$$

for some suitably chosen Tikhonov matrix $\Gamma$. In many cases, this matrix is chosen as a scalar multiple of the identity matrix ($\Gamma = \alpha I$), giving preference to solutions with smaller norms; this is known as L2 regularization.[20] In other cases, high-pass operators (e.g., a difference operator or a weighted Fourier operator) may be used to enforce smoothness if the underlying vector is believed to be mostly continuous. This regularization improves the conditioning of the problem, thus enabling a direct numerical solution. An explicit solution, denoted by $\hat{x}$, is given by

$$\hat{x} = (A^\mathsf{T} A + \Gamma^\mathsf{T} \Gamma)^{-1} A^\mathsf{T} b.$$
The effect of regularization may be varied by the scale of the matrix $\Gamma$. For $\Gamma = 0$ this reduces to the unregularized least-squares solution, provided that $(A^\mathsf{T} A)^{-1}$ exists. Note that in the case of a complex matrix $A$, as usual, the transpose has to be replaced by the conjugate transpose $A^\mathsf{H}$.
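As an illustration of the explicit solution above, a minimal NumPy sketch follows; the matrix sizes, the value of $\alpha$, and the choice of a first-difference operator as $\Gamma$ are illustrative assumptions.

```python
import numpy as np

def tikhonov_solve(A, b, Gamma):
    """Explicit Tikhonov solution: (A'A + Gamma'Gamma)^(-1) A'b."""
    return np.linalg.solve(A.T @ A + Gamma.T @ Gamma, A.T @ b)

rng = np.random.default_rng(1)
n = 50
A = rng.random((30, n))          # under-determined: fewer equations than unknowns
b = rng.random(30)

alpha = 0.1
Gamma_l2 = alpha * np.eye(n)     # Gamma = alpha*I: plain L2 regularization
Gamma_diff = alpha * (np.eye(n, k=1) - np.eye(n))[: n - 1]   # first-difference operator, favours smooth solutions

x_l2 = tikhonov_solve(A, b, Gamma_l2)
x_smooth = tikhonov_solve(A, b, Gamma_diff)
```

The two choices of `Gamma` correspond to the two cases mentioned above: a scalar multiple of the identity penalizes the norm of the solution, while a high-pass (difference) operator penalizes roughness.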
L2 regularization is used in many contexts aside from linear regression, such as classification with
logistic regression or support vector machines,[21] and matrix factorization.[22]
If the parameter fit comes with a covariance matrix $C$ of the estimated parameter uncertainties, then the regularisation matrix will be

$$\Gamma^\mathsf{T} \Gamma = C^{-1}.$$

In the context of arbitrary likelihood fits, this is valid as long as the quadratic approximation of the likelihood function is valid. This means that, as long as the perturbation from the unregularised result is small, one can regularise any result that is presented as a best-fit point with a covariance matrix. No detailed knowledge of the underlying likelihood function is needed.[23]
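A small sketch of this idea under the quadratic (Gaussian) approximation described above: a result given as a best-fit point with a covariance matrix is regularized towards a reference point. The numbers, the reference point, and the strength of the added penalty are hypothetical.

```python
import numpy as np

# Hypothetical best-fit point and covariance from an earlier, unregularised fit.
theta_fit = np.array([1.2, -0.7, 0.3])
cov_fit = np.array([[0.30, 0.25, 0.00],
                    [0.25, 0.30, 0.00],
                    [0.00, 0.00, 0.05]])

# Quadratic approximation of the likelihood: (theta - theta_fit)' C^(-1) (theta - theta_fit).
# Add a penalty lam * ||theta - theta_0||^2 and minimise the sum.
lam = 1.0
theta_0 = np.zeros(3)
prec = np.linalg.inv(cov_fit)            # inverse covariance of the fit
theta_reg = np.linalg.solve(prec + lam * np.eye(3),
                            prec @ theta_fit + lam * theta_0)
```

Because only `theta_fit` and `cov_fit` enter the computation, no detailed knowledge of the underlying likelihood is needed, as stated above.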
Generalized Tikhonov regularization
For general multivariate normal distributions for $x$ and the data error, one can apply a transformation of the variables to reduce to the case above. Equivalently, one can seek an $x$ to minimize

$$\|A x - b\|_P^2 + \|x - x_0\|_Q^2,$$

where we have used $\|x\|_Q^2$ to stand for the weighted norm squared $x^\mathsf{T} Q x$ (compare with the Mahalanobis distance). In the Bayesian interpretation $P$ is the inverse covariance matrix of $b$, $x_0$ is the expected value of $x$, and $Q$ is the inverse covariance matrix of $x$. The Tikhonov matrix is then given as a factorization of the matrix $Q = \Gamma^\mathsf{T}\Gamma$ (e.g. the Cholesky factorization) and is considered a whitening filter.

This generalized problem has an optimal solution $x^*$ which can be written explicitly using the formula

$$x^* = (A^\mathsf{T} P A + Q)^{-1} (A^\mathsf{T} P b + Q x_0).$$
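A minimal NumPy sketch of this closed-form solution follows; the shapes of the arrays and the particular choices of $P$, $Q$, and $x_0$ are illustrative only.

```python
import numpy as np

def generalized_tikhonov(A, b, P, Q, x0):
    """Minimiser of ||Ax - b||_P^2 + ||x - x0||_Q^2 via the explicit formula above."""
    lhs = A.T @ P @ A + Q
    rhs = A.T @ P @ b + Q @ x0
    return np.linalg.solve(lhs, rhs)

rng = np.random.default_rng(2)
m, n = 40, 10
A = rng.random((m, n))
b = rng.random(m)
P = np.eye(m)           # inverse covariance of the data (unit weights here)
Q = 0.5 * np.eye(n)     # inverse covariance of x (isotropic prior here)
x0 = np.zeros(n)        # prior mean of x

x_star = generalized_tikhonov(A, b, P, Q, x0)
```

With $P = I$, $x_0 = 0$ and $Q = \Gamma^\mathsf{T}\Gamma$ this reduces to the ordinary Tikhonov solution given earlier.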
Lavrentyev regularization
In some situations, one can avoid using the transpose $A^\mathsf{T}$, as proposed by Mikhail Lavrentyev.[24] For example, if $A$ is symmetric positive definite, i.e. $A = A^\mathsf{T} > 0$, so is its inverse $A^{-1}$, which can thus be used to set up the weighted norm squared $\|x\|_P^2 = x^\mathsf{T} A^{-1} x$ in the generalized Tikhonov regularization, leading to minimizing

$$\|A x - b\|_{A^{-1}}^2 + \|x - x_0\|_Q^2 .$$

This minimization problem has an optimal solution $x^*$ which can be written explicitly using the formula

$$x^* = (A + Q)^{-1} (b + Q x_0),$$

which is nothing but the solution of the generalized Tikhonov problem where $A = A^\mathsf{T} = P^{-1}$.
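A short sketch of the Lavrentyev form, using the explicit formula above and a symmetric positive definite $A$ constructed purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 8
M = rng.random((n, n))
A = M @ M.T + n * np.eye(n)     # symmetric positive definite by construction
b = rng.random(n)

Q = 0.1 * np.eye(n)             # weight on the deviation from x0
x0 = np.zeros(n)

# Lavrentyev-regularized solution: note that no transpose of A is required.
x_star = np.linalg.solve(A + Q, b + Q @ x0)
```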
Relation to singular-value decomposition and Wiener filter
With $\Gamma = \alpha I$, the regularized solution can be analyzed using the singular-value decomposition of $A$,

$$A = U \Sigma V^\mathsf{T},$$

with singular values $\sigma_i$. The Tikhonov-regularized solution can then be written as

$$\hat{x} = V D U^\mathsf{T} b,$$

where $D$ has diagonal values

$$D_{ii} = \frac{\sigma_i}{\sigma_i^2 + \alpha^2}$$

and is zero elsewhere. This demonstrates the effect of the Tikhonov parameter on the condition number of the regularized problem. For the generalized case, a similar representation can be derived using a generalized singular-value decomposition.[25]

Finally, it is related to the Wiener filter:

$$\hat{x} = \sum_{i=1}^{q} f_i \frac{u_i^\mathsf{T} b}{\sigma_i} v_i,$$

where the Wiener weights are $f_i = \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2}$ and $q$ is the rank of $A$.
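The filter-factor view lends itself to a compact implementation; the following sketch (with arbitrary test data) computes the regularized solution through the SVD and checks it against the direct formula.

```python
import numpy as np

def ridge_via_svd(A, b, alpha):
    """Tikhonov solution with Gamma = alpha*I, written via the SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    d = s / (s**2 + alpha**2)          # D_ii = sigma_i / (sigma_i^2 + alpha^2)
    return Vt.T @ (d * (U.T @ b))

rng = np.random.default_rng(4)
A = rng.random((30, 10))
b = rng.random(30)
alpha = 0.5

x_svd = ridge_via_svd(A, b, alpha)
# Agrees with the direct formula (A'A + alpha^2 I)^(-1) A'b.
x_direct = np.linalg.solve(A.T @ A + alpha**2 * np.eye(10), A.T @ b)
assert np.allclose(x_svd, x_direct)
```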
Determination of the Tikhonov factor
The optimal regularization parameter $\alpha$ is usually unknown and often in practical problems is determined by an ad hoc method. One common approach is generalized cross-validation,[28] which chooses $\alpha$ to minimize

$$G = \frac{\operatorname{RSS}}{\tau^2} = \frac{\left\| X \hat{\beta} - y \right\|^2}{\left[ \operatorname{Tr} \left( I - X (X^\mathsf{T} X + \alpha^2 I)^{-1} X^\mathsf{T} \right) \right]^2},$$

where $\operatorname{RSS}$ is the residual sum of squares, and $\tau$ is the effective number of degrees of freedom. Using the previous SVD decomposition, we can simplify the above expression:

$$\operatorname{RSS} = \left\| y - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2} (u_i^\mathsf{T} b)\, u_i \right\|^2$$

and

$$\tau = m - \sum_{i=1}^{q} \frac{\sigma_i^2}{\sigma_i^2 + \alpha^2},$$

where $m$ is the number of observations.
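A brief sketch of generalized cross-validation along these lines; the grid of candidate values and the test data are arbitrary, and the hat matrix is formed explicitly only for clarity.

```python
import numpy as np

def gcv_score(A, b, alpha):
    """Generalized cross-validation score G = RSS / tau^2 for Gamma = alpha*I."""
    m, n = A.shape
    H = A @ np.linalg.solve(A.T @ A + alpha**2 * np.eye(n), A.T)   # hat matrix
    resid = b - H @ b
    rss = resid @ resid
    tau = m - np.trace(H)          # effective residual degrees of freedom
    return rss / tau**2

rng = np.random.default_rng(5)
A = rng.random((50, 10))
b = rng.random(50)

alphas = np.logspace(-3, 1, 40)
best_alpha = min(alphas, key=lambda a: gcv_score(A, b, a))
```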
Bayesian interpretation
Although at first the choice of the solution to this regularized problem may look artificial, and indeed the matrix $\Gamma$ seems rather arbitrary, the process can be justified from a Bayesian point of view.[32] Note that for an ill-posed problem one must necessarily introduce some additional assumptions in order to get a unique solution. Statistically, the prior probability distribution of $x$ is sometimes taken to be a multivariate normal distribution.[33] For simplicity here, the following assumptions are made: the means are zero; their components are independent; the components have the same standard deviation $\sigma_x$. The data are also subject to errors, and the errors in $b$ are also assumed to be independent with zero mean and standard deviation $\sigma_b$. Under these assumptions the Tikhonov-regularized solution is the most probable solution given the data and the a priori distribution of $x$, according to Bayes' theorem.[34]
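This interpretation can be checked numerically: with a zero-mean Gaussian prior of standard deviation $\sigma_x$ on the components of $x$ and independent Gaussian errors of standard deviation $\sigma_b$ on $b$, the most probable (MAP) solution coincides with the Tikhonov solution whose regularization strength is $\alpha^2 = \sigma_b^2 / \sigma_x^2$ (a standard result; the numbers below are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(6)
sigma_x, sigma_b = 2.0, 0.5
A = rng.random((30, 5))
b = rng.random(30)

# MAP estimate: minimise ||Ax - b||^2 / sigma_b^2 + ||x||^2 / sigma_x^2.
x_map = np.linalg.solve(A.T @ A / sigma_b**2 + np.eye(5) / sigma_x**2,
                        A.T @ b / sigma_b**2)

# Tikhonov solution with the matching regularization strength alpha^2 = sigma_b^2 / sigma_x^2.
alpha2 = sigma_b**2 / sigma_x**2
x_tik = np.linalg.solve(A.T @ A + alpha2 * np.eye(5), A.T @ b)
assert np.allclose(x_map, x_tik)
```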
See also
LASSO estimator is another regularization method in statistics.
Elastic net regularization
Matrix regularization
Notes
a. In statistics, the method is known as ridge regression, in machine learning it and its
modifications are known as weight decay, and with multiple independent discoveries, it is
also variously known as the Tikhonov–Miller method, the Phillips–Twomey method, the
constrained linear inversion method, L2 regularization, and the method of linear
regularization. It is related to the Levenberg–Marquardt algorithm for non-linear least-
squares problems.
References
1. Hilt, Donald E.; Seegrist, Donald W. (1977). Ridge, a computer program for calculating ridge
regression estimates (https://www.biodiversitylibrary.org/bibliography/68934).
doi:10.5962/bhl.title.68934 (https://doi.org/10.5962%2Fbhl.title.68934).
2. Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James--Stein and Ridge
Regression Estimators (https://books.google.com/books?id=wmA_R3ZFrXYC&pg=PA2).
CRC Press. p. 2. ISBN 978-0-8247-0156-7.
3. Kennedy, Peter (2003). A Guide to Econometrics (https://books.google.com/books?id=B8I5
SP69e4kC&pg=PA205) (Fifth ed.). Cambridge: The MIT Press. pp. 205–206. ISBN 0-262-
61183-X.
4. Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge
Regression Estimators (https://books.google.com/books?id=wmA_R3ZFrXYC&pg=PA7).
Boca Raton: CRC Press. pp. 7–15. ISBN 0-8247-0156-9.
5. Hoerl, Arthur E.; Kennard, Robert W. (1970). "Ridge Regression: Biased Estimation for
Nonorthogonal Problems". Technometrics. 12 (1): 55–67. doi:10.2307/1267351 (https://doi.o
rg/10.2307%2F1267351). JSTOR 1267351 (https://www.jstor.org/stable/1267351).
6. Hoerl, Arthur E.; Kennard, Robert W. (1970). "Ridge Regression: Applications to
Nonorthogonal Problems". Technometrics. 12 (1): 69–82. doi:10.2307/1267352 (https://doi.o
rg/10.2307%2F1267352). JSTOR 1267352 (https://www.jstor.org/stable/1267352).
7. Jolliffe, I. T. (2006). Principal Component Analysis (https://books.google.com/books?id=6ZU
MBwAAQBAJ&pg=PA178). Springer Science & Business Media. p. 178. ISBN 978-0-387-
22440-4.
8. For the choice of in practice, see Khalaf, Ghadban; Shukur, Ghazi (2005). "Choosing
Ridge Parameter for Regression Problems". Communications in Statistics – Theory and
Methods. 34 (5): 1177–1182. doi:10.1081/STA-200056836 (https://doi.org/10.1081%2FSTA-
200056836). S2CID 122983724 (https://api.semanticscholar.org/CorpusID:122983724).
9. van Wieringen, Wessel (2021-05-31). "Lecture notes on ridge regression". arXiv:1509.09169
(https://arxiv.org/abs/1509.09169) [stat.ME (https://arxiv.org/archive/stat.ME)].
10. Tikhonov, Andrey Nikolayevich (1943). "Об устойчивости обратных задач" (https://web.arc
hive.org/web/20050227163812/http://a-server.math.nsc.ru/IPP/BASE_WORK/tihon_en.html)
[On the stability of inverse problems]. Doklady Akademii Nauk SSSR. 39 (5): 195–198.
Archived from the original (http://a-server.math.nsc.ru/IPP/BASE_WORK/tihon_en.html) on
2005-02-27.
11. Tikhonov, A. N. (1963). "О решении некорректно поставленных задач и методе
регуляризации". Doklady Akademii Nauk SSSR. 151: 501–504.. Translated in "Solution of
incorrectly formulated problems and the regularization method". Soviet Mathematics. 4:
1035–1038.
12. Tikhonov, A. N.; V. Y. Arsenin (1977). Solution of Ill-posed Problems. Washington: Winston &
Sons. ISBN 0-470-99124-0.
13. Tikhonov, Andrey Nikolayevich; Goncharsky, A.; Stepanov, V. V.; Yagola, Anatolij Grigorevic
(30 June 1995). Numerical Methods for the Solution of Ill-Posed Problems (https://www.spri
nger.com/us/book/9780792335832). Netherlands: Springer Netherlands. ISBN 0-7923-
3583-X. Retrieved 9 August 2018.
14. Tikhonov, Andrey Nikolaevich; Leonov, Aleksandr S.; Yagola, Anatolij Grigorevic (1998).
Nonlinear ill-posed problems (https://www.springer.com/us/book/9789401751698). London:
Chapman & Hall. ISBN 0-412-78660-5. Retrieved 9 August 2018.
15. Phillips, D. L. (1962). "A Technique for the Numerical Solution of Certain Integral Equations
of the First Kind" (https://doi.org/10.1145%2F321105.321114). Journal of the ACM. 9: 84–
97. doi:10.1145/321105.321114 (https://doi.org/10.1145%2F321105.321114).
S2CID 35368397 (https://api.semanticscholar.org/CorpusID:35368397).
16. Hoerl, Arthur E. (1962). "Application of Ridge Analysis to Regression Problems". Chemical
Engineering Progress. 58 (3): 54–59.
17. Foster, M. (1961). "An Application of the Wiener-Kolmogorov Smoothing Theory to Matrix
Inversion". Journal of the Society for Industrial and Applied Mathematics. 9 (3): 387–392.
doi:10.1137/0109031 (https://doi.org/10.1137%2F0109031).
18. Hoerl, A. E.; R. W. Kennard (1970). "Ridge regression: Biased estimation for nonorthogonal
problems". Technometrics. 12 (1): 55–67. doi:10.1080/00401706.1970.10488634 (https://do
i.org/10.1080%2F00401706.1970.10488634).
19. Hoerl, Roger W. (2020-10-01). "Ridge Regression: A Historical Context" (https://www.tandfo
nline.com/doi/full/10.1080/00401706.2020.1742207). Technometrics. 62 (4): 420–425.
doi:10.1080/00401706.2020.1742207 (https://doi.org/10.1080%2F00401706.2020.174220
7). ISSN 0040-1706 (https://search.worldcat.org/issn/0040-1706).
20. Ng, Andrew Y. (2004). Feature selection, L1 vs. L2 regularization, and rotational invariance
(https://icml.cc/Conferences/2004/proceedings/papers/354.pdf) (PDF). Proc. ICML.
21. R.-E. Fan; K.-W. Chang; C.-J. Hsieh; X.-R. Wang; C.-J. Lin (2008). "LIBLINEAR: A library for
large linear classification". Journal of Machine Learning Research. 9: 1871–1874.
22. Guan, Naiyang; Tao, Dacheng; Luo, Zhigang; Yuan, Bo (2012). "Online nonnegative matrix
factorization with robust stochastic approximation". IEEE Transactions on Neural Networks
and Learning Systems. 23 (7): 1087–1099. doi:10.1109/TNNLS.2012.2197827 (https://doi.or
g/10.1109%2FTNNLS.2012.2197827). PMID 24807135 (https://pubmed.ncbi.nlm.nih.gov/24
807135). S2CID 8755408 (https://api.semanticscholar.org/CorpusID:8755408).
23. Koch, Lukas (2022). "Post-hoc regularisation of unfolded cross-section measurements".
Journal of Instrumentation. 17 (10): 10021. arXiv:2207.02125 (https://arxiv.org/abs/2207.021
25). Bibcode:2022JInst..17P0021K (https://ui.adsabs.harvard.edu/abs/2022JInst..17P0021
K). doi:10.1088/1748-0221/17/10/P10021 (https://doi.org/10.1088%2F1748-0221%2F17%2
F10%2FP10021).
24. Lavrentiev, M. M. (1967). Some Improperly Posed Problems of Mathematical Physics. New
York: Springer.
25. Hansen, Per Christian (Jan 1, 1998). Rank-Deficient and Discrete Ill-Posed Problems:
Numerical Aspects of Linear Inversion (1st ed.). Philadelphia, USA: SIAM. ISBN 978-0-
89871-403-6.
26. P. C. Hansen, "The L-curve and its use in the numerical treatment of inverse problems", [1]
(https://www.sintef.no/globalassets/project/evitameeting/2005/lcurve.pdf)
27. Wahba, G. (1990). "Spline Models for Observational Data". CBMS-NSF Regional
Conference Series in Applied Mathematics. Society for Industrial and Applied Mathematics.
Bibcode:1990smod.conf.....W (https://ui.adsabs.harvard.edu/abs/1990smod.conf.....W).
28. Golub, G.; Heath, M.; Wahba, G. (1979). "Generalized cross-validation as a method for
choosing a good ridge parameter" (http://www.stat.wisc.edu/~wahba/ftp1/oldie/golub.heath.
wahba.pdf) (PDF). Technometrics. 21 (2): 215–223. doi:10.1080/00401706.1979.10489751
(https://doi.org/10.1080%2F00401706.1979.10489751).
29. Tarantola, Albert (2005). Inverse Problem Theory and Methods for Model Parameter
Estimation (http://www.ipgp.jussieu.fr/~tarantola/Files/Professional/SIAM/index.html)
(1st ed.). Philadelphia: Society for Industrial and Applied Mathematics (SIAM). ISBN 0-
89871-792-2. Retrieved 9 August 2018.
30. Huang, Yunfei.; et al. (2019). "Traction force microscopy with optimized regularization and
automated Bayesian parameter selection for comparing cells" (https://www.ncbi.nlm.nih.gov/
pmc/articles/PMC6345967). Scientific Reports. 9 (1): 537. arXiv:1810.05848 (https://arxiv.or
g/abs/1810.05848). Bibcode:2019NatSR...9..539H (https://ui.adsabs.harvard.edu/abs/2019
NatSR...9..539H). doi:10.1038/s41598-018-36896-x (https://doi.org/10.1038%2Fs41598-018
-36896-x). PMC 6345967 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6345967).
PMID 30679578 (https://pubmed.ncbi.nlm.nih.gov/30679578).
31. Huang, Yunfei; Gompper, Gerhard; Sabass, Benedikt (2020). "A Bayesian traction force
microscopy method with automated denoising in a user-friendly software package".
Computer Physics Communications. 256: 107313. arXiv:2005.01377 (https://arxiv.org/abs/2
005.01377). doi:10.1016/j.cpc.2020.107313 (https://doi.org/10.1016%2Fj.cpc.2020.107313).
32. Greenberg, Edward; Webster, Charles E. Jr. (1983). Advanced Econometrics: A Bridge to
the Literature. New York: John Wiley & Sons. pp. 207–213. ISBN 0-471-09077-8.
33. Huang, Yunfei.; et al. (2019). "Traction force microscopy with optimized regularization and
automated Bayesian parameter selection for comparing cells" (https://www.ncbi.nlm.nih.gov/
pmc/articles/PMC6345967). Scientific Reports. 9 (1): 537. arXiv:1810.05848 (https://arxiv.or
g/abs/1810.05848). Bibcode:2019NatSR...9..539H (https://ui.adsabs.harvard.edu/abs/2019
NatSR...9..539H). doi:10.1038/s41598-018-36896-x (https://doi.org/10.1038%2Fs41598-018
-36896-x). PMC 6345967 (https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6345967).
PMID 30679578 (https://pubmed.ncbi.nlm.nih.gov/30679578).
34. Vogel, Curtis R. (2002). Computational methods for inverse problems. Philadelphia: Society
for Industrial and Applied Mathematics. ISBN 0-89871-550-4.
35. Amemiya, Takeshi (1985). Advanced Econometrics (https://archive.org/details/advancedeco
nomet00amem/page/60). Harvard University Press. pp. 60–61 (https://archive.org/details/ad
vancedeconomet00amem/page/60). ISBN 0-674-00560-0.
Further reading
Gruber, Marvin (1998). Improving Efficiency by Shrinkage: The James–Stein and Ridge
Regression Estimators (https://books.google.com/books?id=wmA_R3ZFrXYC). Boca Raton:
CRC Press. ISBN 0-8247-0156-9.
Kress, Rainer (1998). "Tikhonov Regularization" (https://books.google.com/books?id=Jv_ZB
wAAQBAJ&pg=PA86). Numerical Analysis. New York: Springer. pp. 86–90. ISBN 0-387-
98408-9.
Press, W. H.; Teukolsky, S. A.; Vetterling, W. T.; Flannery, B. P. (2007). "Section 19.5. Linear
Regularization Methods" (http://apps.nrbook.com/empanel/index.html#pg=1006). Numerical
Recipes: The Art of Scientific Computing (3rd ed.). New York: Cambridge University Press.
ISBN 978-0-521-88068-8.
Saleh, A. K. Md. Ehsanes; Arashi, Mohammad; Kibria, B. M. Golam (2019). Theory of Ridge
Regression Estimation with Applications (https://books.google.com/books?id=v0KCDwAAQ
BAJ). New York: John Wiley & Sons. ISBN 978-1-118-64461-4.
Taddy, Matt (2019). "Regularization" (https://books.google.com/books?id=yPOUDwAAQBAJ
&pg=PA69). Business Data Science: Combining Machine Learning and Economics to
Optimize, Automate, and Accelerate Business Decisions. New York: McGraw-Hill. pp. 69–
104. ISBN 978-1-260-45277-8.