
Multiclass Benchmarking Framework for Automated Acute Leukaemia Detection and Classification Based on BWM and Group-VIKOR

  • Systems-Level Quality Improvement
  • Published in: Journal of Medical Systems

Abstract

This paper aims to assist the administration departments of medical organisations in selecting a suitable multiclass classification model for acute leukaemia. We propose a framework that helps these departments evaluate, benchmark and rank the available multiclass classification models so that the best one can be selected. Medical organisations continually face evaluation and benchmarking challenges in this endeavour, especially when no single model is superior to all others. Moreover, the improper selection of a multiclass classification model for acute leukaemia can be costly: when a patient dies because the model fails to fulfil its intended purpose, the organisation may face legal or financial liability. Evaluating and benchmarking multiclass classification models is difficult because multiple, often conflicting, evaluation criteria are involved. This study structured a decision matrix (DM) based on the crossover of two groups of evaluation criteria and 22 multiclass classification models. The matrix was then evaluated with a dataset comprising 72 samples of acute leukaemia, which include 5327 genes. Subsequently, multi-criteria decision-making (MCDM) techniques were used to benchmark and rank the multiclass classification models. The MCDM techniques integrate the best-worst method (BWM) and VIKOR: BWM was applied to calculate the weights of the evaluation criteria, whereas VIKOR was used to benchmark and rank the classification models. VIKOR was also employed in two decision-making contexts: individual and group decision making, and internal and external group aggregation. The results showed the following: (1) the integration of BWM and VIKOR is effective at solving the benchmarking/selection problem for multiclass classification models.
(2) The ranks of the classification models obtained from internal and external VIKOR group decision making were almost identical; on both bases, the best multiclass classification model was ‘Bayes.NaiveBayesUpdateable’ and the worst was ‘Trees.LMT’. (3) Significant differences were identified among the group scores in the objective validation, indicating that the ranking results of internal and external VIKOR group decision making were valid.
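The two MCDM stages described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' implementation: the pairwise comparison vectors, the criterion weights and the small decision matrix are all hypothetical, BWM weights are obtained from Rezaei's linear model via `scipy.optimize.linprog`, and VIKOR follows the standard S/R/Q formulation with compromise parameter `v`.

```python
import numpy as np
from scipy.optimize import linprog

def bwm_weights(a_best, a_worst, best, worst):
    """Rezaei's linear BWM model: minimise xi subject to
    |w[best] - a_best[j]*w[j]| <= xi and |w[j] - a_worst[j]*w[worst]| <= xi,
    with sum(w) = 1 and w >= 0. Returns (weights, xi)."""
    n = len(a_best)
    A_ub, b_ub = [], []
    for j in range(n):
        if j != best:  # best-to-others comparison constraints
            r1 = np.zeros(n + 1); r1[best], r1[j], r1[-1] = 1, -a_best[j], -1
            r2 = np.zeros(n + 1); r2[best], r2[j], r2[-1] = -1, a_best[j], -1
            A_ub += [r1, r2]; b_ub += [0, 0]
        if j != worst:  # others-to-worst comparison constraints
            r1 = np.zeros(n + 1); r1[j], r1[worst], r1[-1] = 1, -a_worst[j], -1
            r2 = np.zeros(n + 1); r2[j], r2[worst], r2[-1] = -1, a_worst[j], -1
            A_ub += [r1, r2]; b_ub += [0, 0]
    c = np.zeros(n + 1); c[-1] = 1          # objective: minimise xi (consistency)
    A_eq = [np.append(np.ones(n), 0)]       # weights must sum to 1
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1],
                  bounds=[(0, None)] * (n + 1))
    return res.x[:n], res.x[-1]

def vikor(X, w, benefit, v=0.5):
    """Standard VIKOR: group utility S, individual regret R,
    compromise index Q (lower Q = better rank)."""
    X = np.asarray(X, float)
    f_star = np.where(benefit, X.max(0), X.min(0))    # ideal value per criterion
    f_minus = np.where(benefit, X.min(0), X.max(0))   # anti-ideal value
    d = w * (f_star - X) / (f_star - f_minus)         # weighted normalised distances
    S, R = d.sum(1), d.max(1)
    Q = (v * (S - S.min()) / (S.max() - S.min())
         + (1 - v) * (R - R.min()) / (R.max() - R.min()))
    return Q, np.argsort(Q)                           # ranking: best model first

# Hypothetical comparisons: 3 criteria, criterion 0 most important, criterion 2 least.
w, xi = bwm_weights(a_best=[1, 2, 4], a_worst=[4, 2, 1], best=0, worst=2)
# Hypothetical DM, 3 models x 2 criteria: accuracy (benefit) and error rate (cost).
Q, rank = vikor([[0.90, 0.1], [0.80, 0.3], [0.95, 0.4]],
                w=np.array([0.6, 0.4]), benefit=np.array([True, False]))
```

In the paper the same pipeline is applied at full scale: BWM-derived criterion weights feed a VIKOR evaluation of the decision matrix over the 22 classification models and the two groups of evaluation criteria; the toy matrix here merely shows the mechanics.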



  142. Horng, J.-T., Wu, L.-C., Liu, B.-J., Kuo, J.-L., Kuo, W.-H., and Zhang, J.-J., An expert system to classify microarray gene expression data using gene selection by decision tree.Expert Syst. Appl. 36:9072–9081, 2009.

    Google Scholar 

  143. Garro, B. A., Rodríguez, K., and Vazquez, R. A., Designing artificial neural networks using differential evolution for classifying DNA microarrays. In:2017 IEEE Congress on Evolutionary Computation (CEC), 2017, 2767–2774.

  144. Al-Sahaf, H., Song, A., and Zhang, M., Hybridisation of genetic programming and nearest neighbour for classification. In:2013 IEEE Congress on Evolutionary Computation, 2013, 2650–2657.

  145. Deegalla, S., and Boström, H., Improving fusion of dimensionality reduction methods for nearest neighbor classification. In:2009 International Conference on Machine Learning and Applications, 2009, 771–775.

  146. Hasan, A., and Akhtaruzzaman, A. M., High dimensional microarray data classification using correlation based feature selection. In:2012 International Conference on Biomedical Engineering (ICoBE), 2012, 319–321.

  147. Huang, P. H., and Moh, T.-t., A non-linear non-weight method for multi-criteria decision making.Ann. Oper. Res. 248:239–251, 2017.

    Google Scholar 

  148. Aboutorab, H., Saberi, M., Asadabadi, M. R., Hussain, O., and Chang, E., ZBWM: The Z-number extension of best worst method and its application for supplier development.Expert Syst. Appl. 107:115–125, 2018.

    Google Scholar 

  149. Almahdi, E. M. et al., Based mobile patient monitoring systems: A prioritization framework using multi-criteria decision making techniques. J. Med. Syst. 43, 2019.

  150. Almahdi, E. M. et al., Mobile patient monitoring systems from a benchmarking aspect: Challenges, open issues and recommended solutions. J. Med. Syst. 43, 2019.

Author information


Corresponding author

Correspondence to A. A. Zaidan.

Ethics declarations

Conflict of interest

The authors declare no conflict of interest.

Ethical approval

All procedures performed in studies involving human participants were in accordance with the ethical standards of the institution and/or national research committee and with the 1964 Helsinki declaration and its later amendments or comparable ethical standards.

Informed consent

Informed consent was obtained from all individual participants included in the study.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the Topical Collection on Systems-Level Quality Improvement

Appendices

Appendix 1 pairwise comparisons

Section 1: Expert questionnaire

Dear Dr.,

The aim of this questionnaire is to compare preferences between the evaluation metrics of multiclass classification models of acute leukaemia in order to determine the importance of each metric. This questionnaire is part of the research activities at Universiti Pendidikan Sultan Idris (UPSI), Malaysia.

Background:

Name:

Years of experience:

E-Mail:

Position:

Prior to answering the questions, understanding the criteria assessed is important in arriving at a decision.

These criteria are used to measure the performance of a trained model on the test dataset. The evaluation criteria of acute leukaemia were divided into two main groups, namely, (1) reliability and (2) time complexity.

The reliability group includes four subgroups of criteria, namely, (1) matrix of parameters, which has four metrics (i.e., the confusion matrix: true positive, true negative, false negative, false positive); (2) relationship of parameters, which has four metrics (i.e., average accuracy, precision (micro), precision (macro) and recall (macro)); (3) behaviour of parameters (F-score); and (4) error rate. Fig. 6 illustrates the levels:

Fig. 6 The levels of evaluation criteria for multiclass classification models

Comparison questions

Comparison measurement scale

The comparisons (relative importance) between criteria are measured on a numerical scale from 1 to 9, as shown in Table 8. Please use this scale in your comparisons.

Table 8 Comparison measurement scale
  1. 1.

    Main Criteria

  1. A.

Reliability: the degree of quality, or the state of being fit to provide a reliable value for any parameter. It is considered one of the main criteria in our study. This criterion includes four subsections, which will be discussed in the next stage.

  2. B.

Time Complexity: the time consumed in processing the input and output sample images, i.e., the time the algorithm requires to complete the classification task.

Questions

  1. 1.1.

    Could you indicate which of these two criteria you find the MOST important and which you find the LEAST important by marking the box? In Table 9, please mark the cell in front of the MOST important criterion and the cell in front of the LEAST important criterion.

Table 9 Comparison to determine the most and least important criteria

You have selected X criterion as the most important criterion.

  1. 1.2.

    Please determine your preference of this criterion (X) over the least important criterion by using the 1 to 9 measurement scale.

    Please write the X criterion that you selected as the most important criterion in the green cell and the least important criterion in the grey cell in Table 10, and then write your preference value.

Table 10 Comparison to determine the preference of most important criterion over other criteria
  1. 2.

    The sub-criteria (Level 2)

  1. A.

    Matrix of parameter:

    It provides statistics on the number of correct and incorrect predictions made by a classification system compared with the actual classifications of the samples in the test data.

  2. B.

    Relationship of parameter:

    Relationship of parameters includes further important criteria that are typically used to measure the quality ratio for any case; these will be discussed in the next stage.

  3. C.

    Behaviour of parameter:

    Behaviour of parameters (F-score): measures the harmonic mean of the precision and recall parameters; this will be discussed in the next stage.

  4. D.

    Error rate

    Error rate within dataset: basically, the aim is to obtain the minimum error rate on the data during the training and validation process applied in machine learning.
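The last two subgroups above can be illustrated with a minimal Python sketch (not from the paper; the values and labels below are made up): the F-score as the harmonic mean of precision and recall, and the error rate as the fraction of misclassified samples.

```python
def f_score(precision: float, recall: float) -> float:
    """Behaviour of parameters: harmonic mean of precision and recall (F1)."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

def error_rate(y_true, y_pred) -> float:
    """Fraction of samples whose predicted class differs from the true class."""
    wrong = sum(t != p for t, p in zip(y_true, y_pred))
    return wrong / len(y_true)

print(round(f_score(0.8, 0.5), 4))             # 0.6154
print(error_rate([0, 1, 2, 1], [0, 2, 2, 1]))  # 0.25
```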

Questions

  1. 2.1.

    Could you indicate which of these criteria (sub-criteria (Level 2)) you consider the MOST important and which you find the LEAST important? In Table 11, please mark the cell in front of the MOST important criterion and the cell in front of the LEAST important criterion.

Table 11 Comparison to determine the most and least important criteria in level 2 of criteria

You have selected X criterion as the MOST important criterion and Y criterion as the LEAST important criterion

  1. 2.2.

    Please determine your preference of the criterion (X) over the other criteria by using the 1 to 9 measurement scale.

    Please write the X criterion that you selected as the most important criterion in the green cell and the other criteria in the grey cells in Table 12, and then write your preference values.

Table 12 Comparison to determine the preference of most important criterion over the other criteria in level 2 of criteria
  1. 2.3.

    You have selected Y criterion as the LEAST important criterion.

    Please determine your preference of all criteria over the Y criterion that you selected as the LEAST important criterion by using the 1 to 9 measurement scale.

    Please write the Y criterion that you selected as the LEAST important criterion in the green cell and the other criteria in the grey cells in Table 13, and then write your preference values.

Table 13 Comparison to determine the preference of all criteria over the least important criterion in level 2 of criteria
  1. 3.

    The sub-criteria (A) of Matrix of parameter (level 3)

True positive: The number of elements correctly classified as positive by the test, i.e., cancer cells correctly identified as cancerous.

True negative: The number of elements correctly classified as negative by the test, i.e., non-cancer cells correctly identified as noncancerous.

False positive: The number of elements classified as positive by the test although they are not, i.e., non-cancer cells identified as cancerous.

False negative: The number of elements classified as negative by the test although they are not, i.e., cancer cells identified as noncancerous.
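In a multiclass setting, these four counts are typically computed per class in a one-vs-rest fashion. A hedged sketch (not from the paper; the class labels below are hypothetical):

```python
def ovr_counts(y_true, y_pred, cls):
    """Per-class TP/TN/FP/FN counts, treating `cls` as positive and all
    other classes as negative (one-vs-rest)."""
    tp = sum(t == cls and p == cls for t, p in zip(y_true, y_pred))
    fp = sum(t != cls and p == cls for t, p in zip(y_true, y_pred))
    fn = sum(t == cls and p != cls for t, p in zip(y_true, y_pred))
    tn = len(y_true) - tp - fp - fn
    return tp, tn, fp, fn

# Hypothetical true and predicted leukaemia subtype labels
y_true = ["ALL", "AML", "ALL", "MLL"]
y_pred = ["ALL", "ALL", "ALL", "MLL"]
print(ovr_counts(y_true, y_pred, "ALL"))  # (2, 1, 1, 0)
```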

Questions

  1. 3.1.

    Could you indicate which of these criteria (sub-criteria A (Level 3)) you consider the MOST important and which you find the LEAST important? In Table 14, please mark the cell in front of the MOST important criterion and the cell in front of the LEAST important criterion.

Table 14 Comparison to determine the most and least important criteria in the sub-criteria A level 3 of criteria

You have selected X criterion as the MOST important criterion and Y criterion as the LEAST important criterion

  1. 3.2.

    Please determine your preference of the criterion (X) over the other criteria by using the 1 to 9 measurement scale.

    Please write the X criterion that you selected as the most important criterion in the green cell and the other criteria in the grey cells in Table 15, and then write your preference values.

Table 15 Comparison to determine the preference of most important criterion over the other criteria in the sub-criteria A level 3 of criteria
  1. 3.3.

    You have selected Y criterion as the LEAST important criterion.

    Please determine your preference of all criteria over the Y criterion that you selected as the LEAST important criterion by using the 1 to 9 measurement scale.

    Please write the Y criterion that you selected as the LEAST important criterion in the green cell and the other criteria in the grey cells in Table 16, and then write your preference values.

Table 16 Comparison to determine the preference of all criteria over the least important criterion in the sub-criteria A level 3 of criteria
  1. 4.

    The sub-criteria (B) of Relationship of parameter in (level 3)

Average Accuracy: The average effectiveness of the classifier over all classes.

Precision (micro): Measures the positive patterns that are correctly predicted out of the total predicted patterns in the positive class, i.e., the agreement of the data class labels with those of the classifier, pooled over all classes.

Precision (macro): The average per-class agreement of the data class labels with those of the classifier.

Recall (macro): Measures the fraction of positive patterns that are correctly classified, averaged per class.
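A sketch of how the micro and macro averages differ, using hypothetical per-class TP/FP/FN counts (illustrative only, not data from the study): micro-averaging pools the counts across classes before dividing, whereas macro-averaging computes each per-class ratio first and then averages.

```python
def micro_precision(counts):
    """Pool TP and FP over all classes, then divide."""
    tp = sum(c["tp"] for c in counts.values())
    fp = sum(c["fp"] for c in counts.values())
    return tp / (tp + fp)

def macro_precision(counts):
    """Per-class precision first, then unweighted average."""
    return sum(c["tp"] / (c["tp"] + c["fp"]) for c in counts.values()) / len(counts)

def macro_recall(counts):
    """Per-class recall first, then unweighted average."""
    return sum(c["tp"] / (c["tp"] + c["fn"]) for c in counts.values()) / len(counts)

# Hypothetical per-class counts for two leukaemia subtypes
counts = {
    "ALL": {"tp": 3, "fp": 1, "fn": 0},
    "AML": {"tp": 2, "fp": 0, "fn": 2},
}
print(round(micro_precision(counts), 4))  # 5 / 6 -> 0.8333
print(macro_precision(counts))            # (0.75 + 1.0) / 2 = 0.875
print(macro_recall(counts))               # (1.0 + 0.5) / 2 = 0.75
```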

Questions

  1. 4.1.

    Could you indicate which of these criteria (sub-criteria B (Level 3)) you consider the MOST important and which you find the LEAST important? In Table 17, please mark the cell in front of the MOST important criterion and the cell in front of the LEAST important criterion.

Table 17 Comparison to determine the most and least important criteria in the sub-criteria B level 3 of criteria

You have selected X criterion as the MOST important criterion and Y criterion as the LEAST important criterion

  1. 4.2.

    Please determine your preference of the criterion (X) over the other criteria by using the 1 to 9 measurement scale.

    Please write the X criterion that you selected as the most important criterion in the green cell and the other criteria in the grey cells in Table 18, and then write your preference values.

Table 18 Comparison to determine the preference of most important criterion over the other criteria in the sub-criteria B level 3 of criteria
  1. 4.3.

    You have selected Y criterion as the LEAST important criterion.

    Please determine your preference of all criteria over the Y criterion that you selected as the LEAST important criterion by using the 1 to 9 measurement scale.

    Please write the Y criterion that you selected as the LEAST important criterion in the green cell and the other criteria in the grey cells in Table 19, and then write your preference values.

Table 19 Comparison to determine the preference of all criteria over the least important criterion in the sub-criteria B level 3 of criteria

Should you have any inquiries or wish to know the results, please contact:

Mohammed Assim Mohammed Ali

Email: Mohammed.asum@gmail.com

Mobile phone: 0060189810357

……. Thanks for Your Time …….

Section 2: List of experts

Table 20 List of experts involved in the pairwise questionnaire

Appendix 2 results of the BWM method for second and third experts

Table 21 The results of the BWM method for the weight preferences of the criteria for evaluating and benchmarking multiclass classification models (second expert)
Table 22 The results of the BWM method for the weight preferences of the criteria for evaluating and benchmarking multiclass classification models (third expert)

Appendix 3 results of VIKOR for second and third experts

Table 23 Ranking results based on the second expert's weights
Table 24 Ranking results based on the third expert's weights

About this article

Cite this article

Alsalem, M.A., Zaidan, A.A., Zaidan, B.B. et al. Multiclass Benchmarking Framework for Automated Acute Leukaemia Detection and Classification Based on BWM and Group-VIKOR. J Med Syst 43, 212 (2019). https://doi.org/10.1007/s10916-019-1338-x
