Semi-Automatic GUI Platform to Characterize Brain Development in Preterm Children Using Ultrasound Images
Figure 1. The spatio-temporal fetal brain magnetic resonance atlas (CRL fetal brain atlas) at six representative gestational ages: 22, 25, 28, 31, 34, and 37 weeks. Axial, coronal, and sagittal views of the atlas are shown for each age point [7].
Figure 2. Planes used in the study of a premature baby. For the coronal plane, following the alphabetical order from (a) to (f), the planes c1, c2, c3, c4, c5, and c6; following the same order but starting with (g) and ending with (m), the sagittal planes s1, s2l, s2r, s3l, s3r, s4l, and s4r.
Figure 3. Semiautomatic groove detection platform.
Figure 4. Main software components of the proposed tool.
Figure 5. Preprocessing of an image with the objective of scaling the values between the maximum and minimum values of the image.
Figure 6. Comparison of the original image with that obtained after applying the Sigmoid function with a cutoff value of 0.5 and a gain value of 10.
Figure 7. Local filter applied to an ultrasound image (Original) with different analysis window sizes. The resulting images obtained using mean filters of 3 × 3, 9 × 9, 27 × 27, 55 × 55, and 81 × 81 pixels are shown from left to right as an example.
Figure 8. Comparison of the original image with that obtained after applying a threshold and, finally, the closure function.
Figure 9. Histogram of structure areas used to eliminate the structures with an area below the reference value (red line), producing a new image (third column) that contains only the structures meeting the condition.
Figure 10. Mask obtained with the morphological_chan_vese function for a groove to be segmented, and its inversion because the number of pixels with value 1 exceeded 50%.
Figure 11. Steps followed once the zone has been defined: defining the mask, the number of segments, and finally the contour using the maskSLIC function.
Figure 12. Result of the manual segmentation of each groove defined in the upper cards and carried out by the platform algorithms, in this case Threshold.
Figure 13. Semiautomatic groove detection platform.
Figure 14. Movement of the vertices of the segment defined by the algorithm and the corresponding change of the groove coordinates in the table.
Figure 15. Segmentation examples for the Sylvian sulcus (manual segmentation), carried out between weeks 24 and 32 of gestation of the same baby, applying the different segmentation methods (Threshold, Sigmoid + Threshold, and Snakes).
Figure 16. Examples of segmentation of different grooves in an ultrasound scan of a baby at week 29 of gestation, c4 coronal section.
Figure 17. Segmentation of the Sylvian sulcus applying the three defined segmentation methods (Threshold, Sigmoid + Threshold, and Snakes) for different babies and weeks.
Figure 18. Example of how the segmentation results vary with different methods depending on the accuracy of the manual segmentation: the first row shows a more precise manual segmentation and the second row a less precise one.
Abstract
1. Introduction
- Develop a semi-automatic GUI platform that enables the segmentation of cerebral grooves in ultrasound images.
- Explore and implement different segmentation techniques to effectively characterize brain development in preterm children.
- Create an atlas that incorporates the segmented regions for estimating gestational age and assessing developmental progress.
- Validate the accuracy and reliability of the developed atlas in collaboration with medical experts.
- Contribute to the advancement of artificial intelligence techniques for more effective and efficient segmentation in similar applications.
- Ensure that the platform is easy to use and accessible to medical professionals, even those without extensive image-processing expertise.
2. Materials and Methods
2.1. Database and Experimental Data
- Coronal plane: orbital border (c1), sphenoidal ridge (c2), foramina of Monro and third ventricle (c3), fourth ventricle (c4), choroid plexus (c5), and visibility of the parietooccipital sulcus in the inferior tier of the image (c6).
- Sagittal plane: midsagittal (s1), lateral ventricles (s2l, s2r), lateral fissure (s3l, s3r), and lateral fissure at the bottom of the image (s4l, s4r).
2.2. Graphical User Interface and Software Requirements
2.2.1. GUI Interface Design and Functionalities
- Layout: describes the appearance of the platform; it defines the elements that compose the interface (graphics, drop-down menus, etc.) together with their position on the screen, their size, their color, and so on.
- appCallback: provides interactivity with the user, executing an action or algorithm to obtain a result when a component is operated (a minimal sketch of this structure is given after the list below).
- Buttons that allow selecting the image on which the manual segmentation will be performed. These buttons are adaptive and show the infant identification, gestational week, and plane (highlighted in red).
- Located in the top-right corner of the platform are two buttons. The left one displays the type of segmentation applied, which can be Threshold, Sigmoid + Threshold, or Snake. Clicking the blue button executes the selected segmentation algorithm (highlighted in orange).
- The Select Box or Drag and Drop feature enables the import of an exported Excel file and displays it through the upper cards. This allows viewing both the segmentation obtained by certain methods and the corresponding coordinates in the annotation table (highlighted in yellow).
- The top section of the interface contains two cards that handle the segmentation process. The card on the left displays the image selected via the top buttons and enables the user to define the groove area to be segmented. The right card exhibits the coordinates of each manual segmentation executed for each pathway in a table. A button at the bottom enables the selection of the path to be segmented, while the top button exports the table to an Excel file (highlighted in green).
- The lower section of the platform consists of two cards. The first card displays the segmented grooves obtained by the selected method, while the second card displays the numerical coordinates of the defined segmentation. The second card also features an export button that enables users to save the data in an Excel file (highlighted in blue).
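To make the Layout/appCallback split concrete, the following is a minimal Dash sketch of how a segmentation-method selector and a run button could be wired together. The component identifiers ("segmentation-method", "run-button", "status-text") and the callback body are illustrative assumptions and are not taken from the platform code.

```python
# Minimal sketch of the Layout / appCallback split in a Dash application.
# Component ids and the callback body are illustrative only.
from dash import Dash, dcc, html, Input, Output, State

app = Dash(__name__)

# Layout: static description of what appears on screen and where.
app.layout = html.Div([
    dcc.Dropdown(
        id="segmentation-method",
        options=[{"label": m, "value": m}
                 for m in ["Threshold", "Sigmoid + Threshold", "Snake"]],
        value="Threshold",
    ),
    html.Button("Run segmentation", id="run-button", n_clicks=0),
    html.Div(id="status-text"),
])

# appCallback: the interactive part, executed when the user presses the button.
@app.callback(
    Output("status-text", "children"),
    Input("run-button", "n_clicks"),
    State("segmentation-method", "value"),
    prevent_initial_call=True,
)
def run_segmentation(n_clicks, method):
    # In the real platform this is where the selected algorithm would be launched.
    return f"Running {method} segmentation (click #{n_clicks})"

if __name__ == "__main__":
    app.run(debug=True)
```

With this arrangement the layout only declares the widgets, while all processing happens inside callbacks, which keeps the GUI definition independent of the segmentation algorithms.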
2.2.2. Software Requirements
- Cloud: Google Colab platform was utilized to test and evaluate the various methods applied to the images, as their size varied depending on the operations performed, making them too large to be processed locally. As a result, the team opted to work remotely using Google servers.
- Local: Jupyter Lab has been utilized to create and evaluate the platform intended for medical practitioners once its functionalities and design were established. This enables the verification of the performance of both the platform and the algorithm on a local machine.
2.2.3. Methodology Structure and Implementation
Step 1: Contrast Enhancement
Step 2: Binarization
Step 3: Morphological filtering
Step 4: Object labeling and feature extraction
Step 5: Segmentation
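As an illustration of Steps 1–4, the sketch below chains the corresponding scikit-image operations. The parameter values (cutoff, gain, block size, minimum area) and the helper name are illustrative assumptions rather than the values used by the platform.

```python
# Sketch of Steps 1-4 using scikit-image. Parameter values are illustrative
# assumptions, not the platform's actual settings.
import numpy as np
from skimage import exposure, filters, morphology, measure

def preprocess_and_label(image, min_area=50):
    # Step 1: contrast enhancement - rescale to [0, 1] and apply a sigmoid correction.
    scaled = exposure.rescale_intensity(image.astype(float), out_range=(0.0, 1.0))
    enhanced = exposure.adjust_sigmoid(scaled, cutoff=0.5, gain=10)

    # Step 2: binarization with a local (mean) threshold to suppress speckle noise.
    local_thresh = filters.threshold_local(enhanced, block_size=27, method="mean")
    binary = enhanced > local_thresh

    # Step 3: morphological filtering - closing to fill small gaps in the structures.
    closed = morphology.closing(binary, morphology.disk(3))

    # Step 4: object labeling and feature extraction - keep structures above a minimum area.
    labels = measure.label(closed)
    kept = [r for r in measure.regionprops(labels) if r.area >= min_area]
    mask = np.isin(labels, [r.label for r in kept])
    return mask, kept
```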
2.2.4. Digital Repository and Distribution
- 1. Redistributions of source code must retain the above copyright notice, this list of conditions, and the following disclaimer [39].
- 2. Redistributions in binary form must reproduce the above copyright notice, this list of conditions, and the following disclaimer in the documentation and/or other materials provided with the distribution.
- 3. Neither the name of the copyright holder nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.
3. Results
3.1. Experimental Case of Use
- 1. The first step is to select one of the three available methods. Each technique performs different steps in processing the image and obtaining the sulcus mask:
- Threshold: the image is first cropped to the rectangle defined by the maximum and minimum values of both axes of the manual segmentation. A local threshold is then applied to remove most of the noise and keep the groove being segmented.
- Sigmoid + Threshold: the process is similar to the previous one, but before cropping the groove area the image is treated with the Sigmoid Correction function.
- Snake: the image region where the furrow has been manually identified is cropped, and the Morphological Active Contours without Edges (MorphACWE) method is applied. If the pixels with value 1 in the resulting mask (which correspond to the groove) account for less than 50% of the image, the mask is kept as it is; otherwise the mask is inverted, so that pixels with value 1 become 0 and vice versa (a sketch of this flow is given after this list).
- 2. As a second step, the structures are identified and the centroid and area of each one are computed, keeping the structures whose centroid lies within the manual segmentation and discarding the rest. Among the structures that meet this condition, the one with the largest surface area is kept, so that the result is an image with a black background in which only a single structure appears.
- 3. The image obtained in the previous step is then multiplied by a black-and-white image that encodes the manual segmentation. The purpose of this multiplication is to remove the parts of the structure that lie outside the manually segmented area, i.e., the parts the user has not marked as belonging to it.
- 4. Finally, so that the contour can be shown correctly in the original image, the coordinates of the segmented crop are translated back to the position from which the rectangle was originally cropped. This is accomplished by shifting the image coordinates using the maximum and minimum values of the x and y axes obtained in the first step when cropping.
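The following is a hedged sketch of how the Snake flow described above (cropping, MorphACWE, mask-inversion check, structure selection, restriction to the manual region, and coordinate mapping) could be implemented with scikit-image. Function and variable names, as well as the number of iterations, are illustrative assumptions and are not taken from the platform code.

```python
# Illustrative sketch of the Snake option; names and parameters are assumptions.
import numpy as np
from skimage import measure, segmentation, draw

def snake_segment(image, manual_rc):
    """image: 2D ultrasound array; manual_rc: (N, 2) array of (row, col) vertices."""
    # Crop a rectangle around the manual segmentation (bounding box of the vertices).
    rmin, cmin = manual_rc.min(axis=0).astype(int)
    rmax, cmax = manual_rc.max(axis=0).astype(int)
    crop = image[rmin:rmax + 1, cmin:cmax + 1]

    # MorphACWE; 35 iterations is an illustrative value.
    mask = segmentation.morphological_chan_vese(crop, 35) > 0

    # If more than 50% of the pixels are labeled as groove, invert the mask.
    if mask.sum() > 0.5 * mask.size:
        mask = ~mask

    # Binary image of the manually drawn polygon, in cropped coordinates.
    manual_mask = draw.polygon2mask(crop.shape, manual_rc - [rmin, cmin])

    # Keep only the largest labeled structure whose centroid lies inside the polygon.
    labels = measure.label(mask)
    candidates = [r for r in measure.regionprops(labels)
                  if manual_mask[tuple(np.round(r.centroid).astype(int))]]
    if not candidates:
        return np.zeros_like(image, dtype=bool)
    best = max(candidates, key=lambda r: r.area)

    # Multiply by the manual mask and map back to original image coordinates.
    final_crop = (labels == best.label) & manual_mask
    result = np.zeros_like(image, dtype=bool)
    result[rmin:rmax + 1, cmin:cmax + 1] = final_crop
    return result
```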
3.2. Performance Analysis and Evaluation
4. Discussion
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
References
- Cainelli, E.; Bisiacchi, P. Neurodevelopmental Disorders: Past, Present, and Future. Children 2023, 10, 31. [Google Scholar] [CrossRef] [PubMed]
- Hou, W.; Tang, P.H.; Agarwal, P. The most useful cranial ultrasound predictor of neurodevelopmental outcome at 2 years for preterm infants. Clin. Radiol. 2023, 75, 278–286. [Google Scholar] [CrossRef] [PubMed]
- Routier, L.; Querne, L.; Ghostine-Ramadan, G.; Boulesteix, J.; Graïc, S.; Mony, S.; Wallois, F.; Bourel-Ponchel, E. Predicting the Neurodevelopmental Outcome in Extremely Preterm Newborns Using a Multimodal Prognostic Model Including Brain Function Information. JAMA Netw. Open 2023, 6, e231590. [Google Scholar] [CrossRef]
- World Health Organization (WHO). Preterm-Birth. World Health Organization (WHO). 2023. Available online: https://www.who.int/news-room/fact-sheets/detail/preterm-birth (accessed on 29 June 2023).
- Perin, J.; Mulick, A.; Yeung, D.; Villavicencio, F.; Lopez, G.; Strong, K.L.; Prieto-Merino, D.; Cousens, S.; Black, R.E.; Liu, L. Global, regional, and national causes of under-5 mortality in 2000-19: An updated systematic analysis with implications for the Sustainable Development Goals. Lancet Child Adolesc. Health 2022, 6, 106–115. [Google Scholar] [CrossRef] [PubMed]
- Spreafico, R.; Tassi, L. Chapter 32—Cortical malformations. In Handbook of Clinical Neurology; Elsevier: Amsterdam, The Netherlands, 2012; Volume 108, pp. 535–557. [Google Scholar]
- Gholipour, A.; Rollins, C.; Velasco-Annis, C.; Ouaalam, A.; Akhondi-Asl, A.; Afacan, O.; Ortinau, C.; Clancy, S.; Limperopoulos, C.; Yang, E.; et al. A normative spatiotemporal MRI atlas of the fetal brain for automatic segmentation and analysis of early brain growth. Sci. Rep. 2017, 7, 476. [Google Scholar] [CrossRef] [PubMed] [Green Version]
- Graca, A.M.; Cardoso, K.R.V.; da Costa, J.M.F.P.; Cowan, F.M. Cerebral volume at term age: Comparison between preterm and term-born infants using cranial ultrasound. Early Hum. Dev. 2013, 89, 643–648. [Google Scholar] [CrossRef]
- Dubois, J.; Benders, M.; Borradori-Tolsa, C.; Cachia, A.; Lazeyras, F.; Ha-Vinh Leuchter, R.; Sizonenko, S.v.; Warfield, S.K.; Mangin, J.F.; Hüppi, P.S. Primary cortical folding in the human newborn: An early marker of later functional development. Brain 2008, 131, 2028–2041. [Google Scholar] [CrossRef] [Green Version]
- Recio, M.; Martínez, V. Resonancia magnética fetal cerebral. An. Pediatría Contin. 2010, 8, 41–44. [Google Scholar] [CrossRef]
- el Marroun, H.; Zou, R.; Leeuwenburg, M.F.; Steegers, E.A.P.; Reiss, I.K.M.; Muetzel, R.L.; Kushner, S.A.; Tiemeier, H. Association of Gestational Age at Birth With Brain Morphometry. JAMA Pediatr. 2020, 174, 1149–1158. [Google Scholar] [CrossRef]
- Poonguzhali, S.; Ravindran, G. A complete automatic region growing method for segmentation of masses on ultrasound images. In Proceedings of the International Conference on Biomedical and Pharmaceutical Engineering, Singapore, 11–14 December 2006; pp. 88–92. [Google Scholar]
- Kuo, K.-w.; Mamou, J.; Aristizábal, O.; Zhao, X.; Ketterling, J.A.; Wang, Y. Nested Graph Cut for Automatic Segmentation of High-Frequency Ultrasound Images of the Mouse Embryo. IEEE Trans. Med. Imaging 2016, 35, 427–441. [Google Scholar] [CrossRef]
- Milletari, F.; Ahmadi, S.; Kroll, C.; Plate, A.; Rozanski, V.; Maiostre, J.; Levin, J.; Dietrich, O.; Ertl-Wagner, B.; Bötzel, K.; et al. Hough-CNN: Deep learning for segmentation of deep brain regions in MRI and ultrasound. Comput. Vis. Image Underst. 2017, 164, 92–102. [Google Scholar] [CrossRef] [Green Version]
- Valanarasu, J.M.J.; Yasarla, R.; Wang, P.; Hacihaliloglu, I.; Patel, V.M. Learning to Segment Brain Anatomy From 2D Ultrasound With Less Data. IEEE J. Sel. Top. Signal Process. 2020, 14, 1221–1234. [Google Scholar] [CrossRef]
- Mortada, M.J.; Tomassini, S.; Anbar, H.; Morettini, M.; Burattini, L.; Sbrollini, A. Segmentation of anatomical structures of the left heart from echocardiographic images using Deep Learning. Diagnostics 2023, 13, 1683. [Google Scholar] [CrossRef]
- Xu, Y.; Wang, Y.; Yuan, J.; Cheng, Q.; Wang, X.; Carson, P. Medical breast ultrasound image segmentation by machine learning. Ultrasonics 2019, 91, 1–9. [Google Scholar] [CrossRef] [PubMed]
- Griffiths, P.D.; Naidich, T.P.; Fowkes, M.; Jarvis, D. Sulcation and Gyration Patterns of the Fetal Brain Mapped by Surface Models Constructed from 3D MR Image Datasets. Neurographics 2018, 8, 124–129. [Google Scholar] [CrossRef]
- Mata, C.; Munuera, J.; Lalande, A.; Ochoa-Ruiz, G.; Benitez, R. MedicalSeg: A Medical GUI Application for Image Segmentation Management. Algorithms 2022, 15, 200. [Google Scholar] [CrossRef]
- Mata, C.; Walker, P.; Oliver, A.; Martí, J.; Lalande, A. Usefulness of Collaborative Work in the Evaluation of Prostate Cancer from MRI. Clin. Pract. 2022, 12, 350–362. [Google Scholar] [CrossRef]
- Rodríguez, J.; Ochoa-Ruiz, G.; Mata, C. A Prostate MRI Segmentation Tool Based on Active Contour Models Using a Gradient Vector Flow. Appl. Sci. 2020, 10, 6163. [Google Scholar] [CrossRef]
- Lefèvre, J.; Germanaud, D.; Dubois, J.; Rousseau, F.; de MacEdo Santos, I.; Angleys, H.; Mangin, J.F.; Hüppi, P.S.; Girard, N.; de Guio, F. Are Developmental Trajectories of Cortical Folding Comparable Between Cross-sectional Datasets of Fetuses and Preterm Newborns? Cereb. Cortex 2016, 26, 3023–3035. [Google Scholar] [CrossRef] [Green Version]
- Dubois, J.; Lefèvre, J.; Angleys, H.; Leroy, F.; Fischer, C.; Lebenberg, J.; Dehaene-Lambertz, G.; Borradori-Tolsa, C.; Lazeyras, F.; Hertz-Pannier, L.; et al. The dynamics of cortical folding waves and prematurity-related deviations revealed by spatial and spectral analysis of gyrification. NeuroImage 2019, 185, 934–946. [Google Scholar] [CrossRef] [Green Version]
- Skinner, C.; Mount, C.A. Sonography Assessment of Gestational Age. In StatPearls [Internet]; StatPearls Publishing: Treasure Island, FL, USA, 2023; Available online: https://www.ncbi.nlm.nih.gov/books/NBK570610/ (accessed on 30 June 2023).
- Thomas, J.G.; Peters, R.A.; Jeanty, P. Automatic segmentation of ultrasound images using morphological operators. IEEE Trans. Med. Imaging 1991, 10, 180–186. [Google Scholar] [CrossRef]
- Menchón-Lara, R.; Sancho-Gómez, J. Fully automatic segmentation of ultrasound common carotid artery images based on machine learning. Neurocomputing 2015, 151, 161–167. [Google Scholar] [CrossRef]
- Cunningham, R.; Harding, P.; Loram, I. Real-Time Ultrasound Segmentation, Analysis and Visualisation of Deep Cervical Muscle Structure. IEEE Trans. Med. Imaging 2017, 36, 653–665. [Google Scholar] [CrossRef] [Green Version]
- Yang, X.; Yu, L.; Li, S.; Wen, H.; Luo, D.; Bian, C.; Qin, J.; Ni, D.; Heng, P.-A. Towards Automated Semantic Segmentation in Prenatal Volumetric Ultrasound. IEEE Trans. Med. Imaging 2019, 38, 180–193. [Google Scholar] [CrossRef]
- PyPI.org. Matplotlib. 2023. Available online: https://pypi.org/project/matplotlib/ (accessed on 30 June 2023).
- PyPI.org. Scikit-Image. 2023. Available online: https://pypi.org/project/scikit-image/ (accessed on 30 June 2023).
- PyPI.org. OpenCV. 2023. Available online: https://pypi.org/project/opencv-python/ (accessed on 30 June 2023).
- PyPI.org. NumPy. 2023. Available online: https://pypi.org/search/?q=numpy/ (accessed on 30 June 2023).
- PyPI.org. Python Imaging Library (PIL). 2023. Available online: https://pypi.org/project/Pillow/ (accessed on 30 June 2023).
- PyPI.org. Pandas. 2023. Available online: https://pypi.org/project/pandas/ (accessed on 30 June 2023).
- Keras. 2023. Available online: https://keras.io/api/ (accessed on 30 June 2023).
- PyPI.org. Scikit-Learn. 2023. Available online: https://pypi.org/project/scikit-learn/ (accessed on 30 June 2023).
- PyPI.org. Scipy. 2023. Available online: https://pypi.org/project/scipy/ (accessed on 30 June 2023).
- Plotly. Dash Enterprise. 2023. Available online: https://plotly.com/dash/ (accessed on 30 June 2023).
- Rabanaque, D. GUI Semi-Automatic Application. GitHub Repository, 2023. Last Update: 30 June 2023. Available online: https://github.com/Derther/GUI-semi-automatic-application (accessed on 30 June 2023).
- Moore, C.; Bell, D. Dice similarity coefficient. Radiopaedia Artif. Intell. 2020. [Google Scholar] [CrossRef]
- Ibrahim, J.; Mir, I.; Chalak, L. Brain imaging in preterm infants <32 weeks gestation: A clinical review and algorithm for the use of cranial ultrasound and qualitative brain MRI. Pediatr. Res. 2018, 84, 799–806. [Google Scholar]
- Kalbas, Y.; Jung, H.; Ricklin, J.; Jin, G.; Li, M.; Rauer, T.; Dehghani, S.; Navab, N.; Kim, L.; Pape, H.C.; et al. Remote Interactive Surgery Platform (RISP): Proof of Concept for an Augmented-Reality-Based Platform for Surgical Telementoring. J. Imaging 2023, 9, 56. [Google Scholar] [CrossRef]
- Fedorov, A.; Beichel, R.; Kalpathy-Cramer, J.; Finet, J.; Fillion-Robin, J.C.; Pujol, S.; Bauer, C.; Jennings, D.; Fennessy, F.; Sonka, M.; et al. 3D Slicer as an image computing platform for the Quantitative Imaging Network. Magn. Reson. Imaging 2012, 30, 1323–1341. [Google Scholar] [CrossRef] [Green Version]
Library | Definition and functions |
---|---|
matplotlib [29] | Library used to generate graphs from data contained in lists or arrays in the Python programming language and its mathematical extension NumPy. |
scikit-image [30] | This library is a collection of algorithms for image processing. |
OpenCV [31] | Free computer vision library originally developed by Intel. |
NumPy [32] | It is a library for the Python programming language that supports creating large multidimensional arrays and vectors, along with a large collection of high-level mathematical functions to operate on them. |
PIL [33] | Adds support for opening, manipulating, and saving many different image file formats. |
pandas [34] | It is a software library written as a NumPy extension for data manipulation and analysis for the Python programming language. In particular, it offers data structures and operations to manipulate number tables and time series. |
keras [35] | Open-source neural network library written in Python, designed to enable fast experimentation with deep learning networks. |
Scikit-learn [36] | Free software machine learning library for the Python programming language. It includes several algorithms for classification, regression, and clustering, including support vector machines, random forests, gradient boosting, k-means, and DBSCAN. |
scipy [37] | Library that contains modules for optimization, linear algebra, integration, interpolation, special functions, FFT, signal and image processing, resolution of ODEs, and other tasks for science and engineering. |
Dash [38] | A productive Python framework for building analytical web applications that let users without programming knowledge perform predefined tasks by interacting with buttons or sliders. |
Segmentation | Expert 1 | Expert 2 | DSC
---|---|---|---
Threshold | 0.96677 | 0.97550 | 0.99127
Sigmoid + Threshold | 0.97124 | 0.96998 | 0.99874
Snake | 0.99600 | 0.92772 | 0.93172
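For reference, the Dice similarity coefficient (DSC) reported in the table above can be computed between two binary segmentation masks as in the following minimal sketch, assuming the standard definition DSC = 2|A∩B| / (|A| + |B|).

```python
# Minimal sketch of the Dice similarity coefficient between two binary masks.
import numpy as np

def dice_coefficient(mask_a, mask_b):
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    # Two empty masks are considered identical (DSC = 1).
    return 1.0 if total == 0 else 2.0 * intersection / total
```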
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).