Ambiance Preservation Augmenting for Semantic Segmentation of Pediatric Burn Skin Lesions
Figure 1. Image examples from the databases used: (a,b) pediatric burns from the BAMSI database; (c) Kaggle burns; (d,e) healthy skin images (from the Abdominal Skin Dataset and a hand gesture recognition dataset, respectively).
Figure 2. The method schematic. The proposed method trains a deep model (G-Cascade) sequentially on three types of data (unlabeled, synthetic, and labeled). The trained model is then used to assist medical personnel by predicting the burned area and the degree of burn in images with burns.
Figure 3. The Pyramid Vision Transformer (PVT) [32], used as the encoder. The model contains four stages, each made of a patch projection layer and a transformer encoder.
Figure 4. The G-Cascade model [29] using a PVT encoder [32]. The embeddings are captured at the exit of the encoder and concatenated to form the feature descriptor $\mathbf{e}$.
Figure 5. The process of creating synthetic burn skin images. First, given healthy skin images with annotations, the inscribed box is computed. Then, a random burn image and a random healthy skin image are selected and combined into a synthetic burned skin image. The resulting image receives a segmentation label map generated from the location where the burn was overlaid.
Figure 6. Computation of the adaptive Dice Coefficient, used as a weighting factor in the Ambiance Preservation loss. The background label (marked with black) is not taken into account; only the burn classes are considered. The Dice Coefficient measures, per class, the overlap between the two label maps. Two examples are marked: in the upper part, a pair of synthetic burns, both considered to have burn grade IIA (class 2); in the lower part, a pair of real burn images, which may contain all classes. In this example, both images contain class 2, so the term for class 2 is non-zero, while only one contains class 3 (grade IIB), so that intersection is null.
Figure 7. The training process is split into three stages: (pre)training in a supervised manner on synthetic burns, partially supervised training applying the Ambiance Preservation technique to the encoder embeddings of all images, and supervised training on the annotated images only.
Figure 8. Examples of images with pediatric skin burns (original images, first column), ground truth (second column), and predictions with the solution from [23] (third column) and the proposed solution (fourth column). Annotations and predictions shown in this example are color-coded: pink (grade IIA), magenta (grade III), and cyan (grade IV).
Abstract
1. Introduction
- Kaggle collection of cropped burn images, where each image focuses solely on the burn area; the background has been removed and replaced with plain white.
- Healthy skin images that were collected for other purposes (for example, skin detection or hand gesture recognition).
Prior Works
2. Materials and Methods
2.1. Databases
- Grade I: Superficial burns involving partial damage to the epidermis. They appear as erythema and cause pain, but heal completely without surgical intervention. This grade corresponds to “superficial burns”.
- Grade IIA: Burns affecting the full epidermis and the superficial dermis. They present as areas of erythema with blisters containing serous, citrine fluid, and are painful. Healing occurs through non-excisional debridement and proper rebalancing, although some discoloration may remain.
- Grade IIB: Burns involving the entire epidermis, the superficial dermis, and part of the deep dermis. Clinically, they appear bright pink-red and cause pain. Healing involves both excisional and non-excisional debridement, often leaving scars and discoloration. Grades IIA and IIB together are considered “partial-thickness” burns, differing only in depth.
- Grade III: Burns penetrating the entire epidermis and dermis, reaching the hypodermis or deeper structures such as fascia, tendons, or bone. These burns appear as firm white or brown eschars, are painless, and lack capillary refill. Surgical excision and grafting are mandatory.
- Grade IV: Burns that extend beyond the dermis into underlying tissues such as muscles or bones. These areas are insensate due to destroyed nerve endings and require surgical intervention and grafting. Grades III and IV together are classified as “full-thickness” burns. The mapping of these grades to segmentation class indices is sketched after this list.
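For later reference, the burn grades map onto segmentation class indices: the text below states that grade IIA is class 2 and, via Figure 6, that grade IIB is class 3, with background as a separate class. A minimal sketch of one plausible indexing, where the remaining entries are our extrapolation rather than something stated in the paper:

```python
# Plausible label scheme for the segmentation task.
# Classes 0 (background), 2 (grade IIA), and 3 (grade IIB) are stated
# in the text; the remaining indices are assumed by extrapolation.
BURN_CLASSES = {
    0: "background",
    1: "grade I",    # assumed
    2: "grade IIA",  # stated: synthetic burns are labeled with this class
    3: "grade IIB",  # stated (Figure 6)
    4: "grade III",  # assumed
    5: "grade IV",   # assumed
}
```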
2.2. Segmentation Model
2.3. Synthesizing Burn Skin Images
- Input: a database with burned skin patches and a database with healthy skin images (including skin label maps).
- For each healthy skin image, identify the largest inscribed rectangle within the skin area.
- For each burned skin patch, create the burned skin map by applying a threshold to the patch.
- Randomly select a burned skin patch and a healthy skin image.
- Rotate the burned skin patch by a random angle and apply a slight warp; apply the same transform to the burned skin map.
- Compute the bounding box of the burned skin patch. If the patch does not fit inside the healthy skin inscribed box, scale it down.
- Overlay the transformed burned skin patch onto the healthy skin image.
- Form the burn segmentation map by placing the map associated with the burn at the same location. The burn is labeled as grade IIA, and the rest is background. (A code sketch of this pipeline is given after this list.)
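The pipeline above can be condensed into a short sketch. This is a minimal illustration under stated assumptions, using OpenCV and NumPy: the threshold value, the form of the “slight warp”, and the random placement inside the inscribed box are our choices, not the authors' published parameters, and `largest_inscribed_rectangle` is the helper sketched after the next list.

```python
import random
import cv2
import numpy as np

GRADE_IIA = 2  # all synthetic burns carry the grade IIA label (class 2)

def synthesize_burn(healthy_img, skin_mask, burn_patch):
    """Overlay a randomly transformed burn patch onto healthy skin (sketch)."""
    # Burned skin map by thresholding the patch (threshold value assumed).
    gray = cv2.cvtColor(burn_patch, cv2.COLOR_BGR2GRAY)
    burn_map = (gray > 10).astype(np.uint8)

    # Random rotation plus a slight affine warp, applied to patch and map alike.
    h, w = burn_patch.shape[:2]
    M = cv2.getRotationMatrix2D((w / 2, h / 2), random.uniform(0.0, 360.0), 1.0)
    M[:, :2] += np.random.uniform(-0.05, 0.05, size=(2, 2))  # crude warp
    patch = cv2.warpAffine(burn_patch, M, (w, h))
    burn_map = cv2.warpAffine(burn_map, M, (w, h), flags=cv2.INTER_NEAREST)

    # Crop to the bounding box of the transformed burn.
    ys, xs = np.nonzero(burn_map)
    if ys.size == 0:  # degenerate patch; return the image unchanged
        return healthy_img, np.zeros_like(skin_mask, dtype=np.uint8)
    patch = patch[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    burn_map = burn_map[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

    # Largest rectangle inscribed in the skin region (sketched below).
    x, y, rw, rh = largest_inscribed_rectangle(skin_mask)

    # Scale the patch down if it does not fit inside the inscribed box.
    s = min(1.0, rw / patch.shape[1], rh / patch.shape[0])
    if s < 1.0:
        patch = cv2.resize(patch, None, fx=s, fy=s)
        burn_map = cv2.resize(burn_map, None, fx=s, fy=s,
                              interpolation=cv2.INTER_NEAREST)

    # Overlay at a random position inside the box; build the label map.
    ph, pw = burn_map.shape
    ox = x + random.randint(0, max(rw - pw, 0))
    oy = y + random.randint(0, max(rh - ph, 0))
    out = healthy_img.copy()
    label = np.zeros_like(skin_mask, dtype=np.uint8)  # 0 = background
    out[oy:oy + ph, ox:ox + pw][burn_map > 0] = patch[burn_map > 0]
    label[oy:oy + ph, ox:ox + pw][burn_map > 0] = GRADE_IIA
    return out, label
```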
- Enumerate candidate rectangles: compute all possible axis-aligned rectangles for the given binary shape.
- Check each rectangle: for each candidate, verify whether it lies entirely within the binary shape.
- Keep the largest rectangle: track the rectangle with the maximum area that is fully inscribed.
- If a speed-up is necessary, downscale the binary map by a factor and, after determining the inscribed rectangle, scale its coordinates back up by the same factor. (A brute-force sketch of this procedure follows.)
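A minimal brute-force sketch matching the steps above; a summed-area table makes each candidate check O(1), and the downscale factor (here 4) is an arbitrary assumption. After upscaling, the returned coordinates may be off by up to `scale` pixels.

```python
import numpy as np

def largest_inscribed_rectangle(mask, scale=4):
    """Largest axis-aligned rectangle fully inside a binary mask (sketch)."""
    small = (mask[::scale, ::scale] > 0).astype(np.int64)  # speed-up step
    H, W = small.shape
    # Summed-area table: rectangle sums in O(1) per candidate.
    sat = np.zeros((H + 1, W + 1), dtype=np.int64)
    sat[1:, 1:] = small.cumsum(0).cumsum(1)

    best, best_rect = 0, (0, 0, 0, 0)  # area, (x, y, w, h)
    for y0 in range(H):
        for x0 in range(W):
            if small[y0, x0] == 0:  # top-left corner must lie in the shape
                continue
            for y1 in range(y0, H):
                for x1 in range(x0, W):
                    area = (y1 - y0 + 1) * (x1 - x0 + 1)
                    if area <= best:
                        continue
                    s = (sat[y1 + 1, x1 + 1] - sat[y0, x1 + 1]
                         - sat[y1 + 1, x0] + sat[y0, x0])
                    if s == area:  # every pixel of the candidate is inside
                        best = area
                        best_rect = (x0, y0, x1 - x0 + 1, y1 - y0 + 1)
    x, y, w, h = best_rect
    return x * scale, y * scale, w * scale, h * scale  # scale coords back up
```

As an aside, dynamic-programming solutions (largest rectangle in a histogram, row by row) solve the same problem in O(HW), which is why the brute-force enumeration is usually paired with the downscaling trick.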
2.4. Ambiance Preservation
2.5. Implementation
- Training on the synthetic burn skin images. The synthetic burn skin image set is much larger than the one with real images (BAMSI), and it contains more variability with respect to the background and to the healthy skin. We recall that all synthetic burns have been artificially labeled with burn grade IIA, which is class 2.
- Ambiance preservation learning. In this stage, only the encoder is used, and the procedure aims to ensure that the embeddings reflect the relationships between the input images: two images with the same burn degree at overlapping positions should have similar embeddings, as highlighted in the previous section. (A minimal sketch of this stage is given after the list.)
- The last stage is purely supervised learning, using only the annotated real images from BAMSI. The large number of epochs is required by the nature of the model, which is based on a vision transformer.
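The following sketch illustrates one plausible reading of the ambiance preservation stage, based on the description above and on Figure 6: the adaptive Dice coefficient of the two label maps (background excluded) serves as the target similarity for the encoder embeddings. The cosine-similarity/MSE pairing and all hyperparameters are our assumptions, not the authors' published loss.

```python
import torch
import torch.nn.functional as F

def adaptive_dice(labels_a, labels_b, num_classes=6, eps=1e-6):
    """Adaptive Dice between two label maps, ignoring the background (0).

    Classes absent from both maps are skipped; a class present in only one
    map contributes a null intersection, as in the Figure 6 example.
    """
    total, terms = 0.0, 0
    for c in range(1, num_classes):  # skip background
        a = (labels_a == c).float()
        b = (labels_b == c).float()
        if a.sum() == 0 and b.sum() == 0:
            continue
        total += float(2 * (a * b).sum() / (a.sum() + b.sum() + eps))
        terms += 1
    return total / max(terms, 1)

def ambiance_preservation_loss(encoder, img_a, img_b, labels_a, labels_b):
    """Pull embeddings of similar burn images together (assumed loss form)."""
    e_a = encoder(img_a).flatten(1)  # concatenated encoder embeddings e
    e_b = encoder(img_b).flatten(1)
    sim = F.cosine_similarity(e_a, e_b, dim=1)  # in [-1, 1]
    target = adaptive_dice(labels_a, labels_b)  # in [0, 1]
    return F.mse_loss(sim, torch.full_like(sim, target))
```

Per the description above, only the encoder parameters would be updated in this stage; how pairs are drawn across the unlabeled pool is not recoverable from this excerpt.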
3. Results
Ablation and Parameter Influence
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
Abbreviations
| Abbreviation | Expansion |
|---|---|
| BAMSI | Burn Assessment by MultiSpectral Imaging |
| ViT | Vision Transformer |
| PVT | Pyramid Vision Transformer |
| UCB | Up-Convolution Blocks |
| GCAM | Graph Convolutional Attention Module |
| SegHeads | Segmentation Heads |
| SPA | Spatial Attention |
| GCB | Graph Convolution Block |
| DWC | Depth-Wise Convolution |
| BN | Batch Normalization |
| CE | Cross-Entropy |
References
- Richardson, M. Understanding the structure and function of the skin. Nurs. Times 2003, 99, 46–48.
- Rowan, M.P.; Cancio, L.C.; Elster, E.A.; Burmeister, D.M.; Rose, L.F.; Natesan, S.; Chan, R.K.; Christy, R.J.; Chung, K.K. Burn wound healing and treatment: Review and advancements. Crit. Care 2015, 19, 243.
- Żwierełło, W.; Piorun, K.; Skórka-Majewicz, M.; Maruszewska, A.; Antoniewski, J.; Gutowska, I. Burns: Classification, pathophysiology, and treatment: A review. Int. J. Mol. Sci. 2023, 24, 3749.
- Shpichka, A.; Butnaru, D.; Bezrukov, E.A.; Sukhanov, R.B.; Atala, A.; Burdukovskii, V.; Zhang, Y.; Timashev, P. Skin tissue regeneration for burn injury. Stem Cell Res. Ther. 2019, 10, 94.
- Kolesnik, M.; Fexa, A. Multi-dimensional color histograms for segmentation of wounds in images. In Proceedings of the Image Analysis and Recognition: Second International Conference, ICIAR, Toronto, ON, Canada, 28–30 September 2005; Springer: Berlin/Heidelberg, Germany, 2005; pp. 1014–1022.
- Acha, B.; Serrano, C.; Fondón, I.; Gómez-Cía, T. Burn depth analysis using multidimensional scaling applied to psychophysical experiment data. IEEE Trans. Med. Imaging 2013, 32, 1111–1120.
- Wantanajittikul, K.; Auephanwiriyakul, S.; Theera-Umpon, N.; Koanantakool, T. Automatic segmentation and degree identification in burn color images. In Proceedings of the Biomedical Engineering International Conference, Macau, China, 28–30 May 2012; IEEE: Piscataway, NJ, USA, 2012; pp. 169–173.
- Rangel-Olvera, B.; Rosas-Romero, R. Detection and classification of skin burns on color images using multi-resolution clustering and the classification of reduced feature subsets. Multimed. Tools Appl. 2024, 83, 54925–54949.
- Kuan, P.; Chua, S.; Safawi, E.; Wang, H.; Tiong, W. A comparative study of the classification of skin burn depth in human. J. Telecommun. Electron. Comput. Eng. (JTEC) 2017, 9, 15–23.
- Rangaiah, P.K.B.; Pradeep kumar, B.P.; Augustine, R. Improving burn diagnosis in medical image retrieval from grafting burn samples using B-coefficients and the CLAHE algorithm. Biomed. Signal Process. Control 2025, 99, 106814.
- Chauhan, J.; Goyal, P. BPBSAM: Body part-specific burn severity assessment model. Burns 2020, 46, 1407–1423.
- Abubakar, A.; Ugail, H. Discrimination of human skin burns using machine learning. In Proceedings of the Intelligent Computing: Computing Conference, Chongqing, China, 6–8 December 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 641–647.
- Şevik, U.; Karakullukçu, E.; Berber, T.; Akbaş, Y.; Türkyılmaz, S. Automatic classification of skin burn colour images using texture-based feature extraction. IET Image Process. 2019, 13, 2018–2028.
- Yadav, D.; Aljrees, T.; Kumar, D.; Kumar, A.; Singh, K.U.; Singh, T. Spatial attention-based residual network for human burn identification and classification. Sci. Rep. 2023, 13, 12516.
- Khan, F.A.; Butt, A.U.R.; Asif, M.; Ahmad, W.; Nawaz, M.; Jamjoom, M.; Alabdulkreem, E. Computer-aided diagnosis for burnt skin images using deep convolutional neural network. Multimed. Tools Appl. 2020, 79, 34545–34568.
- Jiao, C.; Su, K.; Xie, W.; Ye, Z. Burn image segmentation based on Mask Regions with Convolutional Neural Network deep learning framework: More accurate and more convenient. Burn. Trauma 2019, 7, 6.
- Boissin, C.; Laflamme, L.; Fransén, J.; Lundin, M.; Huss, F.; Wallis, L.; Allorto, N.; Lundin, J. Development and evaluation of deep learning algorithms for assessment of acute burns and the need for surgery. Sci. Rep. 2023, 13, 1794.
- Karthik, J.; Nath, G.S.; Veena, A. Deep learning-based approach for skin burn detection with multi-level classification. In Advances in Computing and Network Communications: Proceedings of CoCoNet 2020; Springer: Berlin/Heidelberg, Germany, 2021; Volume 2, pp. 31–40.
- Liu, H.; Yue, K.; Cheng, S.; Li, W.; Fu, Z. A framework for automatic burn image segmentation and burn depth diagnosis using deep learning. Comput. Math. Methods Med. 2021, 2021, 5514224.
- Sun, K.; Zhao, Y.; Jiang, B.; Cheng, T.; Xiao, B.; Liu, D.; Mu, Y.; Wang, X.; Liu, W.; Wang, J. High-resolution representations for labeling pixels and regions. arXiv 2019, arXiv:1904.04514.
- Xu, X.; Bu, Q.; Xie, J.; Li, H.; Xu, F.; Li, J. On-site burn severity assessment using smartphone-captured color burn wound images. Comput. Biol. Med. 2024, 182, 109171.
- Rozo, A.; Miskovic, V.; Rose, T.; Keersebilck, E.; Iorio, C.; Varon, C. A deep learning image-to-image translation approach for a more accessible estimator of the healing time of burns. IEEE Trans. Biomed. Eng. 2023, 70, 2886–2894.
- Florea, C.; Florea, L.; Vertan, C.; Badoiu, S.C. BurnSafe: Automatic Assistive Tool for Burn Severity Assessment by Semantic Segmentation. In Proceedings of the 12th International Workshop on Assistive Computer Vision and Robotics, European Conference on Computer Vision, Milan, Italy, 29 September–4 October 2024; Springer: Berlin/Heidelberg, Germany, 2024.
- Zou, Y.; Zhang, Z.; Zhang, H.; Li, C.L.; Bian, X.; Huang, J.B.; Pfister, T. PseudoSeg: Designing Pseudo Labels for Semantic Segmentation. In Proceedings of the International Conference on Learning Representations—ICLR, Vienna, Austria, 3–7 May 2021.
- Zhang, Q.; Zhu, Y.; Cordeiro, F.R.; Chen, Q. PSSCL: A progressive sample selection framework with contrastive loss designed for noisy labels. Pattern Recognit. 2025, 161, 111284.
- Badea, M.S.; Vertan, C.; Florea, C.; Florea, L.; Bădoiu, S. Severe burns assessment by joint color-thermal imagery and ensemble methods. In Proceedings of the 2016 IEEE 18th International Conference on e-Health Networking, Applications and Services (Healthcom), Munich, Germany, 14–17 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1–5.
- Topiwala, A.; Al-Zogbi, L.; Fleiter, T.; Krieger, A. Adaptation and Evaluation of Deep Learning Techniques for Skin Segmentation on Novel Abdominal Dataset. In Proceedings of the 2019 IEEE 19th International Conference on Bioinformatics and Bioengineering (BIBE), Athens, Greece, 28–30 October 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 752–759.
- Kawulok, M.; Kawulok, J.; Nalepa, J.; Smolka, B. Self-adaptive algorithm for segmenting skin regions. EURASIP J. Adv. Signal Process. 2014, 2014, 170.
- Rahman, M.M.; Marculescu, R. G-CASCADE: Efficient cascaded graph convolutional decoding for 2D medical image segmentation. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2024; pp. 7728–7737.
- Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the International Conference on Medical Image Computing and Computer-Assisted Intervention—MICCAI, Munich, Germany, 5–9 October 2015; Springer: Berlin/Heidelberg, Germany, 2015; pp. 234–241.
- Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
- Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the International Conference on Computer Vision—ICCV, Montreal, QC, Canada, 10–17 October 2021; pp. 568–578.
- Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16x16 words: Transformers for image recognition at scale. In Proceedings of the International Conference on Learning Representations—ICLR, Vienna, Austria, 3–7 May 2021.
- Chen, L.; Zhang, H.; Xiao, J.; Nie, L.; Shao, J.; Liu, W.; Chua, T.S. SCA-CNN: Spatial and channel-wise attention in convolutional networks for image captioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5659–5667.
- Han, K.; Wang, Y.; Guo, J.; Tang, Y.; Wu, E. Vision GNN: An image is worth graph of nodes. Adv. Neural Inf. Process. Syst. 2022, 35, 8291–8303.
- Florea, C.; Badea, M.; Florea, L.; Racoviteanu, A.; Vertan, C. Margin-Mix: Semi-supervised learning for face expression recognition. In Proceedings of the Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Proceedings, Part XXIII; Springer: Berlin/Heidelberg, Germany, 2020; pp. 1–17.
- Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020; IEEE: Piscataway, NJ, USA, 2020; pp. 1055–1059.
- Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. DeepLab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected CRFs. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 40, 834–848.
| Strategy | Architecture/Loss | AvgAccuracy [%] | Dice Score [%] |
|---|---|---|---|
| Supervised | MaskRCNN − CE [16] | 36.73 | 35.90 |
| | UNet − CE [22] | 32.26 | 36.01 |
| | UNet3+ − CE | 34.14 | 37.69 |
| | UNet3+ − CE + Dice | 35.45 | 45.85 |
| | DeepLab-v3 − CE | 36.37 | 37.15 |
| | DeepLab-v3 − CE + Dice | 37.15 | 43.58 |
| | HRNetV2 − CE + Dice [20] | 39.34 | 41.20 |
| | TransUnet − CE + Dice [31] | 48.78 | 45.24 |
| | G-Cascade − CE [23] | 39.72 | 33.81 |
| | G-Cascade − Dice [23] | 43.28 | 34.94 |
| | G-Cascade − CE + Dice [23] | 66.72 | 48.14 |
| Domain Adaptation | G-Cascade + Pseudolab − 800 ep [23] | 65.24 | 45.17 |
| | G-Cascade + ColorTransfer − 800 ep | 66.25 | 48.71 |
| | G-Cascade + ColorTransfer − 200 ep [23] | 67.07 | 49.85 |
| | G-Cascade + AmbPres (proposed) | 70.11 | 51.15 |
| True \ Predicted [%] | Grade I | Grade IIA | Grade IIB | Grade III | Grade IV |
|---|---|---|---|---|---|
| Grade I | 85.12 | 5.56 | 4.17 | 2.27 | 2.88 |
| Grade IIA | 13.98 | 64.01 | 18.88 | 3.13 | 0 |
| Grade IIB | 17.88 | 10.33 | 66.98 | 4.81 | 0 |
| Grade III | 11.03 | 3.27 | 24.45 | 59.71 | 1.54 |
| Grade IV | 3.48 | 2.99 | 2.07 | 16.70 | 74.76 |
| Synthetic Burns | Amb. Preserv. | AvgAccuracy [%] | Dice Score [%] |
|---|---|---|---|
| | | 66.72 | 48.14 |
| ✓ | | 68.65 | 50.01 |
| | ✓ | 67.86 | 49.21 |
| ✓ | ✓ | 70.11 | 51.15 |
| No. Synthetic Burn Images | AvgAccuracy [%] | Dice Score [%] |
|---|---|---|
| 0 | 66.72 | 48.14 |
| 500 | 66.65 | 48.24 |
| 2000 | 67.16 | 48.84 |
| 5000 | 68.65 | 50.01 |
| 10,000 | 68.73 | 50.15 |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).