Abstract
The generalized linear model (GLM) plays a key role in regression analysis. For high-dimensional data, sparse GLMs have been used, but they are not robust against outliers. Recently, robust methods have been proposed for specific examples of the sparse GLM. Among them, we focus on the robust and sparse linear regression based on the \(\gamma\)-divergence, whose estimator has strong robustness under heavy contamination. In this paper, we extend this robust and sparse linear regression to a robust and sparse GLM based on the \(\gamma\)-divergence and obtain the estimate with a stochastic optimization approach. We adopt the randomized stochastic projected gradient descent and extend its established convergence property to the classical first-order necessary condition. By virtue of the stochastic optimization approach, parameters can be estimated efficiently even for very large problems. In particular, we describe linear regression, logistic regression, and Poisson regression with \(L_1\) regularization in detail as specific examples of the robust and sparse GLM. In numerical experiments and real data analysis, the proposed method outperformed comparative methods.
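As a rough illustration of the optimization approach summarized above, the following is a minimal sketch (not the authors' implementation): a randomized stochastic projected gradient loop with Euclidean projection onto an \(L_1\)-ball in the style of Duchi et al. (2008). The stochastic gradient of the \(\gamma\)-divergence loss is left as a user-supplied function, and the function names, batch size, and step size are illustrative assumptions.

```python
import numpy as np

def project_l1_ball(v, radius=1.0):
    """Euclidean projection of v onto {w : ||w||_1 <= radius} (Duchi et al., 2008)."""
    if np.abs(v).sum() <= radius:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]                      # sorted magnitudes, descending
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > css - radius)[0][-1]
    tau = (css[rho] - radius) / (rho + 1.0)
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def randomized_spg(grad_fn, theta0, X, y, radius, step=1e-2,
                   n_iter=1000, batch=32, seed=0):
    """Minimal randomized stochastic projected gradient sketch.

    grad_fn(theta, Xb, yb) must return a stochastic gradient of the loss
    (here, the gamma-divergence loss) on the mini-batch (Xb, yb).
    """
    rng = np.random.default_rng(seed)
    theta = theta0.copy()
    iterates = []
    for _ in range(n_iter):
        idx = rng.choice(len(y), size=batch, replace=False)
        theta = project_l1_ball(theta - step * grad_fn(theta, X[idx], y[idx]),
                                radius)
        iterates.append(theta.copy())
    # "randomized" output rule: return an iterate drawn uniformly at random
    return iterates[rng.integers(n_iter)]
```

In this constrained formulation, the \(L_1\)-ball radius plays the role of the tuning parameter associated with the \(L_1\) regularization and would in practice be chosen by a model-selection criterion or cross-validation.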
References
Alfons, A., Croux, C., & Gelper, S. (2013). Sparse least trimmed squares regression for analyzing high-dimensional large data sets. The Annals of Applied Statistics, 7(1), 226–248.
Beck, A., & Teboulle, M. (2009). A fast iterative shrinkage-thresholding algorithm for linear inverse problems. SIAM Journal on Imaging Sciences, 2(1), 183–202. https://doi.org/10.1137/080716542.
Bootkrajang, J., & Kabán, A. (2013). Classification of mislabelled microarrays using robust sparse logistic regression. Bioinformatics, 29(7), 870–877. https://doi.org/10.1093/bioinformatics/btt078.
Borwein, J., & Lewis, A. S. (2010). Convex analysis and nonlinear optimization: Theory and examples. Berlin: Springer Science & Business Media.
Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT’2010, Springer, pp. 177–186.
Bubeck, S. (2015). Convex optimization: Algorithms and complexity. Foundations and Trends in Machine Learning, 8(3–4), 231–357. https://doi.org/10.1561/2200000050.
Chi, E. C., & Scott, D. W. (2014). Robust parametric classification and variable selection by a minimum distance criterion. Journal of Computational and Graphical Statistics, 23(1), 111–128. https://doi.org/10.1080/10618600.2012.737296.
Dean, C., & Lawless, J. F. (1989). Tests for detecting overdispersion in Poisson regression models. Journal of the American Statistical Association, 84(406), 467–472. https://doi.org/10.1080/01621459.1989.10478792.
Duchi, J., & Singer, Y. (2009). Efficient online and batch learning using forward backward splitting. Journal of Machine Learning Research, 10, 2899–2934. http://dl.acm.org/citation.cfm?id=1577069.1755882.
Duchi, J., Shalev-Shwartz, S., Singer, Y., & Chandra, T. (2008). Efficient projections onto the l1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, ICML ’08, ACM, New York, NY, USA, pp 272–279, https://doi.org/10.1145/1390156.1390191.
Duchi, J. C., Shalev-Shwartz, S., Singer, Y., & Tewari, A. (2010). Composite objective mirror descent. In COLT 2010 - The 23rd Conference on Learning Theory, pp 14–26. http://colt2010.haifa.il.ibm.com/papers/COLT2010proceedings.pdf#page=22.
Duchi, J. C., Hazan, E., & Singer, Y. (2011). Adaptive subgradient methods for online learning and stochastic optimization. Journal of Machine Learning Research, 12, 2121–2159. http://dblp.uni-trier.de/db/journals/jmlr/jmlr12.html#DuchiHS11.
Fernandes, K., Vinagre, P., & Cortez, P. (2015). A proactive intelligent decision support system for predicting the popularity of online news. In F. Pereira, P. Machado, E. Costa, & A. Cardoso (Eds.), Progress in artificial intelligence (pp. 535–546). Cham: Springer International Publishing.
Fujisawa, H., & Eguchi, S. (2008). Robust parameter estimation with a small bias against heavy contamination. Journal of Multivariate Analysis, 99(9), 2053–2081.
Ghadimi, S., & Lan, G. (2016). Accelerated gradient methods for nonconvex nonlinear and stochastic programming. Mathematical Programming, 156(1), 59–99. https://doi.org/10.1007/s10107-015-0871-8.
Ghadimi, S., Lan, G., & Zhang, H. (2016). Mini-batch stochastic approximation methods for nonconvex stochastic composite optimization. Mathematical Programming, 155(1–2), 267–305. https://doi.org/10.1007/s10107-014-0846-1.
Hunter, D. R., & Lange, K. (2004). A tutorial on MM algorithms. The American Statistician, 58(1), 30–37.
Kanamori, T., & Fujisawa, H. (2015). Robust estimation under heavy contamination using unnormalized models. Biometrika, 102(3), 559–572.
Kawashima, T., & Fujisawa, H. (2017). Robust and sparse regression via \(\gamma\)-divergence. Entropy, 19, 608. https://doi.org/10.3390/e19110608.
Khan, J. A., Van Aelst, S., & Zamar, R. H. (2007). Robust linear model selection based on least angle regression. Journal of the American Statistical Association, 102(480), 1289–1299.
Kivinen, J., & Warmuth, M. K. (1995). Exponentiated gradient versus gradient descent for linear predictors. Information and Computation, 132, 1–63.
Lambert, D. (1992). Zero-inflated Poisson regression, with an application to defects in manufacturing. Technometrics, 34(1), 1–14. http://www.jstor.org/stable/1269547.
McCullagh, P., & Nelder, J. (1989). Generalized linear models (2nd ed.). Chapman and Hall/CRC Monographs on Statistics and Applied Probability. London: Chapman & Hall. http://books.google.com/books?id=h9kFH2_FfBkC.
Nelder, J. A., & Wedderburn, R. W. M. (1972). Generalized linear models. Journal of the Royal Statistical Society Series A (General), 135(3), 370–384. http://www.jstor.org/stable/2344614.
Nesterov, Y. (2007). Gradient methods for minimizing composite objective function. CORE Discussion Papers 2007076, Université catholique de Louvain, Center for Operations Research and Econometrics (CORE). https://EconPapers.repec.org/RePEc:cor:louvco:2007076.
Rockafellar, R. T. (1970). Convex analysis. Princeton Mathematical Series. Princeton: Princeton University Press.
Tibshirani, R. (1996). Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society: Series B, 58(1), 267–288.
Xiao, L. (2010). Dual averaging methods for regularized stochastic learning and online optimization. Journal of Machine Learning Research, 11, 2543–2596.
Zou, H., & Hastie, T. (2005). Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society: Series B, 67(2), 301–320.
Acknowledgements
This work was partially supported by JSPS KAKENHI Grant Number 17K00065.
Appendices
Appendix 1
Here, we prove the convergence of \(\sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) and \(\sum _{y=0}^\infty (y- y_{t,i} ) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\). First, consider \(\sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) and denote its \(n\)-th term by \(S_n = f(n|x_{t,i};\theta ^{(t)})^{1+\gamma }\). We apply d'Alembert's ratio test to \(S_n\):
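For concreteness, a sketch of this computation, under the assumption that \(f(y|x_{t,i};\theta ^{(t)})\) is the Poisson probability mass function with mean \(\lambda _{t,i} = \exp (x_{t,i}^\top \theta ^{(t)})\) (the intercept absorbed into \(\theta ^{(t)}\)):
\[
\frac{S_{n+1}}{S_n}
= \left( \frac{f(n+1|x_{t,i};\theta ^{(t)})}{f(n|x_{t,i};\theta ^{(t)})} \right)^{1+\gamma }
= \left( \frac{\lambda _{t,i}}{n+1} \right)^{1+\gamma }
\longrightarrow 0 < 1 \quad (n \rightarrow \infty ).
\]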
Therefore, \(\sum _{y=0}^\infty f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) converges.
Next, consider \(\sum _{y=0}^\infty (y - y_{t,i} ) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) and denote its \(n\)-th term by \(S^{'}_n =(n - y_{t,i} ) f(n|x_{t,i};\theta ^{(t)})^{1+\gamma }\). We apply d'Alembert's ratio test to \(S^{'}_n\):
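Under the same assumption on \(f\), and for all \(n\) large enough that \(n > y_{t,i}\), the corresponding ratio can be sketched as
\[
\left| \frac{S^{'}_{n+1}}{S^{'}_{n}} \right|
= \left| \frac{n+1-y_{t,i}}{n-y_{t,i}} \right|
\left( \frac{\lambda _{t,i}}{n+1} \right)^{1+\gamma }
\longrightarrow 1 \cdot 0 = 0 < 1 \quad (n \rightarrow \infty ).
\]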
Therefore, \(\sum _{y=0}^\infty (y-y_{t,i}) f(y|x_{t,i};\theta ^{(t)})^{1+\gamma }\) converges.
Appendix 2
Proof of Theorem 2.
The directional derivative of a differentiable function always exists and is given by the inner product of the gradient and the direction:
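In generic notation, with \(f\) differentiable at \(\theta \) and \(d\) a direction (both symbols used here only as placeholders), this standard identity reads
\[
f'(\theta ; d) = \lim _{t \downarrow 0} \frac{f(\theta + t d) - f(\theta )}{t} = \nabla f(\theta )^\top d.
\]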
Moreover, the directional derivative of a (proper) convex function exists at any relative interior point of its domain and is bounded below by the inner product of a subgradient and the direction (Rockafellar 1970):
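In the same placeholder notation, for a proper convex function \(h\), a point \(\theta \) in the relative interior of \(\mathrm{dom}\, h\), and any subgradient \(g \in \partial h(\theta )\), this standard inequality reads
\[
h'(\theta ; d) \ge g^\top d \quad \text{for every direction } d.
\]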
Then, by the optimality condition of (16), we have the following equation:
Therefore, we can obtain (21) from \(P_{X,R} \approx 0\), (22), (23), (24) and (25) as follows: