Robust Losses in Deep Regression

Published: 05 September 2023

Abstract

The noise distribution of a given regression problem is not known in advance and, since the assumed noise model determines the loss to be used, the choice of loss should not be fixed beforehand either. In this work we address this issue by examining seven regression losses, some of them proposed in the field of robust linear regression, over twelve problems. While in our experiments some losses appear better suited to most of the problems, we find it more appropriate to conclude that the choice of a “best loss” is problem dependent and should perhaps be handled similarly to hyperparameter selection.
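For reference, beyond MSE and MAE the author tags name three of the robust losses studied. A minimal stdlib sketch of those three (not the paper's code; the `delta` and `c` scale parameters and their defaults are illustrative assumptions) might look like:

```python
import math

def huber(r, delta=1.0):
    """Huber loss on a residual r: quadratic near zero, linear in the tails."""
    a = abs(r)
    return 0.5 * r * r if a <= delta else delta * (a - 0.5 * delta)

def log_cosh(r):
    """Log-cosh loss: a smooth approximation of the absolute error."""
    return math.log(math.cosh(r))

def cauchy(r, c=1.0):
    """Cauchy loss: grows only logarithmically, heavily down-weighting outliers."""
    return 0.5 * c * c * math.log1p((r / c) ** 2)

# For a large residual, the three losses penalize far less than the squared error.
r = 3.0
print(0.5 * r * r, huber(r), log_cosh(r), cauchy(r))
```

All three agree with the squared error for small residuals but flatten out for large ones, which is precisely what makes them robust to heavy-tailed noise.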



Published In

cover image Guide Proceedings
Hybrid Artificial Intelligent Systems: 18th International Conference, HAIS 2023, Salamanca, Spain, September 5–7, 2023, Proceedings
Sep 2023
788 pages
ISBN:978-3-031-40724-6
DOI:10.1007/978-3-031-40725-3

Publisher

Springer-Verlag

Berlin, Heidelberg

Author Tags

  1. Deep Neural Networks
  2. Robust Regression
  3. MSE
  4. MAE
  5. Huber loss
  6. log cosh loss
  7. Cauchy loss

Qualifiers

  • Article
