Early Fire Detection System by Using Automatic Synthetic Dataset Generation Model Based on Digital Twins
Figure 1. Structural diagram of the proposed digital-twin-based early fire detection system. Note: As depth increases, the depth data are scaled from blue to dark red.
Figure 2. (a) A real office and (b) a depth image gathered from a RealSense camera at the same viewpoint as the CCTV in an office environment. Note: As depth increases, the depth data are scaled from blue to dark red.
Figure 3. An example of a virtual fire implemented with Unity3D’s Particle System.
Figure 4. Synthesis process of a virtual fire and a real environment.
Figure 5. Results of a virtual fire simulation considering depth information.
Figure 6. Examples of the different fires created by different burning materials.
Figure 7. (a) Temporary positioning of fires using random on-screen coordinates and (b) automatic distance-based scaling of virtual fires. Note: As depth increases, the depth data are scaled from blue to dark red.
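The placement step behind Figure 7 can be sketched as follows: sample a random screen position, look up the depth there, and shrink the virtual fire in inverse proportion to that depth. This is a minimal illustration, not the paper's Unity implementation; the function and parameter names (`place_virtual_fire`, `base_scale`, `reference_depth`) are assumptions.

```python
import random

def place_virtual_fire(depth_map, base_scale=1.0, reference_depth=1.0):
    """Pick a random on-screen position and scale the virtual fire
    inversely with the depth sampled there (cf. Figure 7).
    depth_map is a 2D list of depths in metres; names are illustrative."""
    h, w = len(depth_map), len(depth_map[0])
    x, y = random.randrange(w), random.randrange(h)  # random screen coordinates
    depth = depth_map[y][x]
    # Objects appear smaller in proportion to their distance from the
    # camera, so the fire sprite is shrunk for larger depths.
    scale = base_scale * reference_depth / depth
    return (x, y), scale
```

A fire placed at 2 m with a 1 m reference depth is rendered at half size, which is the qualitative behaviour visible in Figure 7b.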
Figure 8. (a) Example of using a BoxCollider for annotation and (b) automatically generated annotation information for fires in images.
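Once the BoxCollider of Figure 8a has been projected to a screen-space rectangle, it can be serialized as one KITTI label line, the annotation format consumed by NVIDIA TAO detection models. The projection itself is not shown here; this sketch only formats an already-computed pixel box, with the unused 3D fields zeroed.

```python
def kitti_label(class_name, box):
    """Format one object as a 15-field KITTI label line.
    box is (left, top, right, bottom) in pixels; truncation, occlusion,
    alpha, 3D dimensions, location, and rotation are left at zero."""
    left, top, right, bottom = box
    return (f"{class_name} 0.0 0 0.0 "
            f"{left:.2f} {top:.2f} {right:.2f} {bottom:.2f} "
            "0.0 0.0 0.0 0.0 0.0 0.0 0.0")
```

For example, `kitti_label("fire", (10, 20, 110, 220))` yields a line whose bounding-box fields are `10.00 20.00 110.00 220.00`, ready to be written to the per-image `.txt` file TAO expects.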
Figure 9. Real-world fire training data released by FireNET [20].
Figure 10. Automatically generated virtual fire data optimized for the target environment.
Figure 11. Transfer learning and model deployment workflow in NVIDIA TAO.
Figure 12. Inference on testing data.
Figure 13. The process of using models deployed from TAO in DeepStream for inference and post-processing.
Figure 14. A message about the detection results sent by DeepStream.
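A detection message like the one in Figure 14 can be illustrated as a small JSON payload. The field names below are assumptions for illustration only, not DeepStream's actual broker schema.

```python
import json
import time

def detection_message(camera_id, label, confidence, box):
    """Build an illustrative JSON payload for one detection event,
    in the spirit of Figure 14. box is (left, top, width, height);
    the schema here is assumed, not DeepStream's."""
    return json.dumps({
        "camera": camera_id,
        "object": label,
        "confidence": round(confidence, 3),
        "bbox": {"left": box[0], "top": box[1],
                 "width": box[2], "height": box[3]},
        "timestamp": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
    })
```

A downstream IoT service would parse this payload and decide whether to raise an alarm or reposition actuators.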
Figure 15. Analysis of the overlap between two bounding boxes for tracking fire.
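The bounding-box analysis of Figure 15 amounts to computing Intersection over Union (IoU) between a detection in the current frame and one in the previous frame; a high IoU suggests the two boxes belong to the same fire. A minimal sketch:

```python
def iou(box_a, box_b):
    """Intersection over Union of two axis-aligned boxes given as
    (left, top, right, bottom); used to decide whether two detections
    across frames correspond to the same fire (cf. Figure 15)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0
```

The threshold above which two boxes are treated as the same fire is a tuning parameter and is not fixed by this sketch.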
Figure 16. IoT devices integrated into the proposed system.
Figure 17. Training dataset consisting of virtual and real images of early fires and different fire types.
Figure 18. Virtual fire data generated from multiple real-world environments.
Figure 19. The NVIDIA DGX A100 used as the training environment.
Figure 20. Inference on new footage of the fires that were used to generate the training data.
Figure 21. Detection of fire shapes that were never used in training.
Figure 22. (a) The YOLOv4 model correctly inferred small fires, while (b) the DetectNetV2 model did not.
Figure 23. (a) The YOLOv4 model does not produce false detections in dark environments without fire, while (b) the DetectNetV2 model detects dark areas as smoke.
Figure 24. Real-world fire inference depending on the backbone of the YOLOv4 model.
Figure 25. Inference for very small fires with the YOLOv4+resnet18 model.
Figure 26. Examples of false positives in YOLOv4+resnet18 model inference.
Figure 27. Inference results after post-processing.
Figure 28. Correct inference of a very small fire in its initial state, even after post-processing is added.
Abstract
1. Introduction
2. Materials and Methods
2.1. Real Environment Data
2.2. Virtual Fire Data
2.3. Automatic Dataset Generation
2.4. Fire Detection Model
2.5. AI Inference and Post-Processing
3. Experimental Results
3.1. IoT Installation
3.2. Virtual Fire Data
3.3. Results
3.4. Post-Processing
4. Conclusions
- Virtual fires were simulated with the actual field as the backdrop, but detailed effects such as reflections of the fire or background elements burning and turning to ash were not modeled. However, these aspects are unlikely to contribute significantly to fire data for early detection.
- Additionally, since the background of the training images is constructed from recorded video data, it is crucial to record a diverse range of scenarios that could occur in the actual field in order to achieve significantly improved performance.
- A moving camera is also planned as future work; such a camera would change its angle and field of view based on the detected fire. This implementation requires further research and experimentation, but once realized, it should dramatically improve real-time performance.
- The proposed method was trained and tested with NVIDIA TAO, which supports only a limited set of models. Therefore, other state-of-the-art computer vision models, such as YOLOv8, should also be considered for testing.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Fire Statistics Yearbook 2020. Sejong (Korea): National Fire Agency 119. Available online: https://www.nfds.go.kr/stat/general.do (accessed on 31 December 2020).
- Smart City Korea. Available online: https://smartcity.go.kr/en/%ec%86%8c%ea%b0%9c/ (accessed on 17 October 2023).
- Kim, H.; Lee, S.H.; Ok, S.Y. Early Fire Detection System by Synthetic Dataset Automatic Generation Model Based on Digital Twin. J. Korea Multimed. Soc. 2023, 26, 887–897. [Google Scholar] [CrossRef]
- Sepasgozar, S.M.E. Differentiating Digital Twin from Digital Shadow: Elucidating a Paradigm Shift to Expedite a Smart, Sustainable Built Environment. Buildings 2021, 11, 151. [Google Scholar] [CrossRef]
- Qiu, X.; Wei, Y.; Li, N.; Guo, A.; Zhang, E.; Li, C.; Peng, Y.; Wei, J.; Zang, Z. Development of an early warning fire detection system based on a laser spectroscopic carbon monoxide sensor using a 32-bit system-on-chip. Infrared Phys. Technol. 2019, 96, 44–51. [Google Scholar] [CrossRef]
- Li, Y.; Yu, L.; Zheng, C.; Ma, Z.; Yang, S.; Song, F.; Zheng, K.; Ye, W.; Zhang, Y.; Wang, Y.; et al. Development and field deployment of a mid-infrared CO and CO2 dual-gas sensor system for early fire detection and location. Spectrochim. Acta Part A Mol. Biomol. Spectrosc. 2022, 270, 120834. [Google Scholar] [CrossRef] [PubMed]
- Chen, S.; Ren, J.; Yan, Y.; Sun, M.; Hu, F.; Zhao, H. Multi-sourced sensing and support vector machine classification for effective detection of fire hazard in early stage. Comput. Electr. Eng. 2022, 101, 108046. [Google Scholar] [CrossRef]
- Fuller, A.; Fan, Z.; Day, C.; Barlow, C. Digital Twin: Enabling Technologies, Challenges and Open Research. IEEE Access 2020, 8, 108952–108971. [Google Scholar] [CrossRef]
- Misuk, L.; Kim, E. A Study on the Disaster Safety Management Method of Underground Lifelines based on Digital Twin Technology. Commun. Korean Inst. Inf. Sci. Eng. 2021, 39, 16–24. [Google Scholar]
- Zohdi, T. A machine-learning framework for rapid adaptive digital-twin based fire-propagation simulation in complex environments. Comput. Methods Appl. Mech. Eng. 2020, 363, 112907. [Google Scholar] [CrossRef]
- Zohdi, T. A digital twin framework for machine learning optimization of aerial fire fighting and pilot safety. Comput. Methods Appl. Mech. Eng. 2021, 373, 113446. [Google Scholar] [CrossRef]
- Pincott, J.; Tien, P.W.; Wei, S.; Calautit, J.K. Development and evaluation of a vision-based transfer learning approach for indoor fire and smoke detection. Build. Serv. Eng. Res. Technol. 2022, 43, 319–332. [Google Scholar] [CrossRef]
- Abdusalomov, A.; Baratov, N.; Kutlimuratov, A.; Whangbo, T.K. An improvement of the fire detection and classification method using YOLOv3 for surveillance systems. Sensors 2021, 21, 6519. [Google Scholar] [CrossRef] [PubMed]
- Yazdi, A.; Qin, H.; Jordan, C.B.; Yang, L.; Yan, F. Nemo: An open-source transformer-supercharged benchmark for fine-grained wildfire smoke detection. Remote Sens. 2022, 14, 3979. [Google Scholar] [CrossRef]
- Kim, J.; Lee, C.; Park, S.; Lee, J.; Hong, C. Development of Fire Detection Model for Underground Utility Facilities Using Deep Learning: Training Data Supplement and Bias Optimization. J. Korean Soc. Ind. Acad. Technol. 2020, 21, 320–330. [Google Scholar] [CrossRef]
- Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; Rabinovich, A. Going Deeper with Convolutions. arXiv 2014, arXiv:1409.4842. [Google Scholar]
- Wu, D.; Wang, Y.; Xia, S.T.; Bailey, J.; Ma, X. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. arXiv 2020, arXiv:2002.05990. [Google Scholar]
- Liau, H.; Yamini, N.; Wong, Y. Fire SSD: Wide Fire Modules based Single Shot Detector on Edge Device. arXiv 2018, arXiv:1806.05363. [Google Scholar]
- Thomson, W.; Bhowmik, N.; Breckon, T.P. Efficient and Compact Convolutional Neural Network Architectures for Non-temporal Real-time Fire Detection. arXiv 2020, arXiv:2010.08833. [Google Scholar]
- GitHub-OlafenwaMoses/FireNET: A Deep Learning Model for Detecting Fire in Video and Camera Streams—github.com. Available online: https://github.com/OlafenwaMoses/FireNET (accessed on 17 October 2023).
- Open Images Pre-trained Object Detection. 2023. Available online: https://docs.nvidia.com/tao/tao-toolkit/text/model_zoo/cv_models/open_images/open_images_pt_object_detection.html (accessed on 22 January 2024).
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. arXiv 2015, arXiv:1512.03385. [Google Scholar]
Data | Virtual Data | Real-World Data | Total
---|---|---|---
Training data | 4375 | 412 | 4787
Testing data | 625 | 90 | 715
Total | 5000 | 502 | 5502
Model | Unpruned Model Parameters | AP (Unpruned) | Pruned Model Parameters | AP (Pruned) | Retrain/Model
---|---|---|---|---|---
DetectNetV2 | 11,200,458 | 0.93515 | 9,561,530 | 0.96316 | 0.85367
FasterRCNN | 12,751,352 | 0.9528 | 10,434,616 | 0.9506 | 0.81831
YOLOv4 | 34,829,183 | 0.90909 | 3,659,191 | 0.9091 | 0.10506
EfficientDet | 3,876,308 | 0.426 | 2,130,676 | 0.426 | 0.54966
DINO | - | 0.83 | - | - | -
D-DETR | - | 0.71433 | - | - | -
Backbone | Model Parameters | mAP | Retrained Model Parameters | mAP (Retrained) | Retrain/Model
---|---|---|---|---|---
resnet18 | 11,200,458 | 0.90798 | 3,659,191 | 0.9088 | 0.10506
resnet50 | 85,346,559 | 0.90798 | 22,902,807 | 0.90792 | 0.26835
resnet101 | 122,286,335 | 0.90673 | 3,000,711 | 0.90365 | 0.02453
cspdarknet19 | 53,444,895 | 0.9062 | 38,253,879 | 0.90847 | 0.71576
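The Retrain/Model column in the tables above matches the ratio of the pruned (retrained) parameter count to the original parameter count, which can be checked directly:

```python
def prune_ratio(pruned_params, unpruned_params):
    """Fraction of parameters kept after pruning; reproduces the
    Retrain/Model column of the tables above."""
    return pruned_params / unpruned_params

# YOLOv4 row: 3,659,191 of 34,829,183 parameters remain after pruning,
# i.e. roughly 10.5% of the original model.
yolov4_ratio = prune_ratio(3_659_191, 34_829_183)
```

Smaller ratios (e.g. resnet101's 0.02453) indicate much more aggressive pruning than larger ones (e.g. cspdarknet19's 0.71576).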
© 2024 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, H.-C.; Lam, H.-K.; Lee, S.-H.; Ok, S.-Y. Early Fire Detection System by Using Automatic Synthetic Dataset Generation Model Based on Digital Twins. Appl. Sci. 2024, 14, 1801. https://doi.org/10.3390/app14051801