Learning From Mistakes: Machine Learning Enhanced Human Expert Effort Estimates

Published: 01 June 2022

Abstract

In this paper, we introduce a novel approach to predictive modeling for software engineering, named Learning From Mistakes (LFM). The core idea underlying our proposal is to automatically learn from past estimation errors made by human experts in order to predict the characteristics of their future misestimates, thereby yielding improved future estimates. We show the feasibility of LFM by investigating whether it is possible to predict the type, severity and magnitude of errors made by human experts when estimating the development effort of software projects, and whether these predictions can be used to enhance future estimates. To this end, we conduct a thorough empirical study of 402 industrial software projects covering both maintenance and new development. The results of our study reveal that the type, severity and magnitude of errors are all, indeed, predictable. Moreover, we find that by exploiting these predictions, we can obtain significantly better estimates than those provided by random guessing, human experts and traditional machine learners in 31 out of the 36 cases considered (86 percent), with large and very large effect sizes in the majority of these cases (81 percent). This empirical evidence opens the door to the development of techniques that use the power of machine learning, coupled with the observation that human errors are predictable, to support engineers in estimation tasks rather than replacing them with machine-provided estimates.
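The core LFM idea described above can be sketched in a few lines. This is a minimal illustration with hypothetical data, not the paper's actual pipeline: the paper trains standard learners (e.g., CART, kNN) on project features to predict the type, severity and magnitude of expert misestimates; here a 1-nearest-neighbour lookup on a single size feature stands in for that learner.

```python
# Hypothetical history of past projects:
# (size_in_function_points, expert_estimate_hours, actual_effort_hours)
history = [
    (100, 400, 500),   # expert under-estimated by 20% of the actual effort
    (200, 900, 1100),
    (50,  250, 240),   # slight over-estimate
    (150, 700, 840),
]

def relative_error(estimate, actual):
    """Signed relative error of the expert estimate: (actual - estimate) / actual."""
    return (actual - estimate) / actual

def predict_error(size):
    """Predict the expert's relative error for a new project.

    Stand-in learner: 1-nearest-neighbour on project size, returning the
    error the expert made on the most similar past project.
    """
    nearest = min(history, key=lambda p: abs(p[0] - size))
    return relative_error(nearest[1], nearest[2])

def lfm_estimate(size, expert_estimate):
    """Correct the expert's estimate using the predicted misestimate.

    From err = (actual - estimate) / actual it follows that
    actual = estimate / (1 - err), which we use as the corrected estimate.
    """
    err = predict_error(size)
    return expert_estimate / (1 - err)

# A new 110-FP project the expert estimates at 450 hours: the most similar
# past project (100 FP) was under-estimated by 20%, so LFM adjusts upward.
print(lfm_estimate(110, 450))  # → 562.5
```

The adjustment direction falls out of the learned error sign: a predicted under-estimate inflates the expert's number, a predicted over-estimate deflates it, which is how "learning from mistakes" turns error predictions into improved estimates.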


Cited By

  • (2024) Fine-SE: Integrating Semantic Features and Expert Features for Software Effort Estimation. Proceedings of the IEEE/ACM 46th International Conference on Software Engineering, pp. 1–12, 20 May 2024. doi: 10.1145/3597503.3623349
  • (2024) A random forest model for early-stage software effort estimation for the SEERA dataset. Information and Software Technology, vol. 169, 1 May 2024. doi: 10.1016/j.infsof.2024.107413
  • (2023) Agile Effort Estimation: Have We Solved the Problem Yet? Insights From a Replication Study. IEEE Transactions on Software Engineering, vol. 49, no. 4, pp. 2677–2697, 1 Apr 2023. doi: 10.1109/TSE.2022.3228739

Published In

IEEE Transactions on Software Engineering, Volume 48, Issue 6, June 2022, 355 pages

Publisher

IEEE Press

Qualifiers

  • Research-article
