Abstract
Application of machine learning techniques for analysis and prediction in healthcare relies heavily on patient data. Lab tests, medical instruments, and other sources generate a significant portion of patient healthcare data, which may carry imprecision ranges of up to 20%. However, previous studies have shown that prediction models built from such imprecise data tend to be "brittle" or unstable: even a minor deviation within the normal imprecision range can change the prediction. Measuring the stability of such models is therefore a crucial challenge for public health agencies in large cities in China. In this paper, we report preliminary results on measuring stability; specifically, we develop a "voting"-based metric to assess the predictive stability of a model. We also formulate an effective method to calculate this metric, which lets us observe, understand, and compare a model's predictive stability at a finer granularity. We conducted experiments on the MIMIC dataset and a predefined dataset to test a baseline method and two commonly used improved methods for handling noisy and imprecise data. The improved methods showed lower instability than the original method, supporting the soundness of our evaluation metric. Notably, when two models achieved similar accuracy on the same task, our metric identified different levels of stability among them, suggesting that the proposed method offers a valuable perspective for model selection that deserves further exploration.
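The abstract does not spell out the metric's exact definition, but the "voting" idea can be illustrated as follows: perturb a sample within its plausible imprecision range, collect the model's predictions over many perturbed copies, and score instability by how far the majority vote falls short of unanimity. The Python sketch below is a minimal illustration under these assumptions; the function name voting_instability, the uniform multiplicative noise model, and the majority-vote score are our own illustrative choices, not the paper's definition.

```python
import numpy as np

def voting_instability(model_predict, x, imprecision=0.20, n_votes=100, seed=0):
    """Illustrative voting-based instability score for one sample.

    Perturbs each feature uniformly within +/- `imprecision` of its
    value, collects the model's predicted labels over `n_votes`
    perturbed copies, and returns 1 - (majority-vote fraction):
    0 means every perturbation yields the same prediction (stable),
    while values near 1 - 1/k (k classes) indicate high instability.
    """
    rng = np.random.default_rng(seed)
    # n_votes perturbed copies of x; each feature scaled by U(1-eps, 1+eps),
    # mimicking an assumed up-to-20% measurement imprecision
    noise = rng.uniform(1 - imprecision, 1 + imprecision, size=(n_votes, x.size))
    votes = model_predict(x * noise)          # predicted labels, shape (n_votes,)
    _, counts = np.unique(votes, return_counts=True)
    return 1.0 - counts.max() / n_votes

# Usage with any classifier exposing a scikit-learn style predict():
#   score = voting_instability(clf.predict, x_sample)
# Averaging scores over a test set gives a dataset-level stability estimate.
```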