Bi-Resolution Hash Encoding in Neural Radiance Fields: A Method for Accelerated Pose Optimization and Enhanced Reconstruction Efficiency
Figure 1. The framework of joint pose optimization.
Figure 2. NeRF model network architecture.
Figure 3. Two-stage training flowchart.
Figure 4. Schematic representation of the trend and degree of error variation. (a) Trend of error increase. (b) Trend of error decrease. (c) Degree of error fluctuation.
Figure 5. The impact of different resolutions on pose and rendering quality.
Figure 6. The impact of resolution layers on pose and rendering quality.
Figure 7. Qualitative results on the Synthetic dataset. GT (Ground Truth) denotes reference images; BAA denotes images rendered by the method of [33].
Figure 8. Performance difference between the smooth warm-up learning rate scheduling strategy and a non-smooth scheduling strategy in the Lego scene.
Figure 9. Partial data of a low-texture, reflective ceramic scene.
Figure 10. Visualization of aligned poses.
Figure 11. Projection of aligned poses onto the XY plane.
Figure 12. Rendering and depth map synthesized from a new viewpoint.
Featured Application
Abstract
1. Introduction
- Traditional camera pose estimation relies on detecting and matching feature points. This approach encounters significant difficulties in scenes that lack distinctive features [14], such as those with low texture or reflections. Without enough reliable feature points, traditional algorithms struggle to compute the camera pose accurately, which degrades the quality of the entire 3D scene reconstruction.
- Although joint pose optimization performs better in low-texture regions, current methods are mainly built on Multilayer Perceptron (MLP) models, which converge slowly and demand considerable processing time and resources. This is a significant limitation for both research on and application of such algorithms.
- We propose BiResNeRF, a scene reconstruction method built on low- and high-resolution hash encoding modules. The proposed approach achieves rapid and accurate reconstruction even when the input poses are inaccurate.
- We investigate in depth the relationship between hash encodings of different resolutions and pose estimation. Through experimental analysis, we characterize how low- and high-resolution hash encodings behave during pose estimation, providing theoretical and empirical foundations for this paper and for subsequent work.
- To train BiResNeRF effectively, we propose a two-stage training strategy in which the transition between stages is triggered at the right time by real-time detection of an error stability signal. We additionally adopt a coarse-to-fine ray sampling strategy and a smooth warm-up learning rate scheduling strategy (see the sketch after this list), so that training proceeds smoothly and efficiently.
- The effectiveness of our algorithm is verified experimentally on scenes with missing poses, low texture, and reflections, demonstrating its practical value and its potential importance for future research.
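As a concrete illustration of the smooth warm-up scheduling mentioned in the last contribution, the sketch below shows one plausible shape for such a schedule: a cosine-smoothed ramp from near zero up to the base learning rate, followed by exponential decay. This is our own minimal sketch; the function name and every constant (base rate, warm-up length, decay target) are illustrative placeholders, not the paper's settings.

```python
import math

def smooth_warmup_lr(step: int, base_lr: float = 1e-2, warmup_steps: int = 1000,
                     total_steps: int = 20000, final_ratio: float = 0.01) -> float:
    """Hypothetical smooth warm-up schedule: a cosine-shaped ramp that is
    flat at both ends (no abrupt slope change), then exponential decay."""
    if step < warmup_steps:
        t = step / warmup_steps
        # 0.5 * (1 - cos(pi * t)) rises smoothly from 0 to 1.
        return base_lr * 0.5 * (1.0 - math.cos(math.pi * t))
    # Decay from base_lr toward final_ratio * base_lr over the remaining steps.
    t = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    return base_lr * (final_ratio ** t)
```

In a training loop, the returned value would simply be written into the optimizer's learning rate at every iteration; the smooth ramp avoids the sudden jump in gradient scale that a stepwise warm-up introduces.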
2. Related Work
2.1. Pose Optimization Related
2.1.1. Feature-Based Methods
2.1.2. Joint Pose Optimization Methods
2.2. Reconstruction Time-Related
3. Methods
3.1. Formulation
3.2. Neural Network Architecture of BiResNeRF
3.3. Two-Stage Training Strategy
3.3.1. Overview
3.3.2. Real-Time Error Stability Detection Method
Algorithm 1: Error Stability Signal Detection.
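The body of Algorithm 1 was not recoverable from the source page. As a placeholder, here is a minimal sketch of one plausible error-stability detector, assuming, as the error-variation schematic (Figure 4) suggests, that the signal should fire once the recent loss trend is flat and its fluctuation is small. The window size and both thresholds are illustrative values, not the authors':

```python
from collections import deque
import numpy as np

class ErrorStabilityDetector:
    """Hypothetical sketch: declare the error 'stable' when the slope of a
    linear fit over the last `window` losses and their relative fluctuation
    both fall below thresholds."""

    def __init__(self, window: int = 200, slope_tol: float = 1e-5,
                 fluct_tol: float = 0.02):
        self.losses = deque(maxlen=window)
        self.slope_tol = slope_tol
        self.fluct_tol = fluct_tol

    def update(self, loss: float) -> bool:
        self.losses.append(loss)
        if len(self.losses) < self.losses.maxlen:
            return False  # not enough history yet
        y = np.asarray(self.losses)
        x = np.arange(len(y))
        slope = np.polyfit(x, y, 1)[0]             # trend of error variation
        fluct = y.std() / (abs(y.mean()) + 1e-12)  # degree of fluctuation
        return abs(slope) < self.slope_tol and fluct < self.fluct_tol
```

Calling `update(loss)` once per iteration returns `True` the first time the error is judged stable, which is when a two-stage strategy of this kind would switch from the low-resolution to the high-resolution module.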
3.3.3. Coarse-to-Fine Ray Sampling Strategy
3.3.4. Smooth Warm-Up Learning Rate Scheduling Strategy
4. Experiments
- Dataset
- Evaluation Criteria
4.1. The Impact of Resolution on Pose Estimation
4.1.1. Experimental Setup
4.1.2. Implementation Details
4.1.3. Results of Different Resolution Hash Encoding
4.2. The Impact of Resolution Layers on Pose Estimation
4.2.1. Implementation Details
4.2.2. Results of Different Levels of Hash Encoding
4.3. Experimental and Performance Analysis of BiResNeRF
4.3.1. Experimental Setup
4.3.2. Implementation Details
4.3.3. Results of BiResNeRF
4.3.4. Significance Analysis
4.3.5. Stability Analysis
4.4. Application in Low Textured Scenes
- Dataset
4.4.1. Experimental Setup
4.4.2. Implementation Details
4.4.3. Results of Low-Texture Scene Experiments
5. Discussion
- Although our method achieves promising preliminary results when reconstructing scenes with inaccurate or missing poses, low texture, or reflections, it still faces several challenges that merit further research and improvement:
- When the pose perturbation is too large, the joint pose optimization method can still fail to reconstruct the scene.
- When pose data are entirely absent, the cameras cannot be spaced too far apart; otherwise, reconstruction fails.
- The proposed method improves both speed and accuracy, but it slightly increases the occurrence of floaters. Minimizing floaters while keeping training efficient is another direction for further optimization.
- For both BiResNeRF and the baseline BAA-NGP, pose convergence showed varying degrees of instability during evaluation, and an alternative learning rate had to be adopted for certain scenes. To make joint pose optimization easier to apply, the relationship between parameters such as the learning rate and scene reconstruction needs further exploration.
- Our experiments were conducted only on datasets generated from virtual scenes in Blender. Data acquisition, reconstruction methods, and the limitations of real-world scenarios require further study.
- The speed of error reduction depends on factors such as the learning rates of the neural radiance field and the pose parameters, as well as the number of sampling points per ray, making it a highly complex issue. Although the current method performs stage transitions automatically and ensures they occur only after sufficient first-stage training, the exact transition point could be located more precisely; determining it more accurately remains a question worth further investigation.
- Our experiments also produced a finding with potential. Based on Table 3, during the first training stage, when only the low-resolution pose estimation module is used, the average reconstruction time per scene does not exceed 3 min. In other words, with a three-layer low-resolution hash encoding network, a preliminary reconstruction of the scene can be completed in about 3 min or less and observed in real time, whereas BARF requires several hours to display reconstruction results. This efficiency gain gives strong momentum to related research in both theory and application.
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
1. Xu, L.; Xiangli, Y.; Peng, S.; Pan, X.; Zhao, N.; Theobalt, C.; Dai, B.; Lin, D. Grid-guided Neural Radiance Fields for Large Urban Scenes. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 8296–8306.
2. Tancik, M.; Casser, V.; Yan, X.; Pradhan, S.; Mildenhall, B.; Srinivasan, P.P.; Barron, J.T.; Kretzschmar, H. Block-NeRF: Scalable large scene neural view synthesis. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 8248–8258.
3. Yang, Z.; Chen, Y.; Wang, J.; Manivasagam, S.; Ma, W.C.; Yang, A.J.; Urtasun, R. UniSim: A Neural Closed-Loop Sensor Simulator. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada, 18–22 June 2023; pp. 1389–1399.
4. Yang, J.; Ivanovic, B.; Litany, O.; Weng, X.; Kim, S.W.; Li, B.; Che, T.; Xu, D.; Fidler, S.; Pavone, M.; et al. EmerNeRF: Emergent Spatial-Temporal Scene Decomposition via Self-Supervision. arXiv 2023, arXiv:2311.02077.
5. Guo, Y.; Chen, K.; Liang, S.; Liu, Y.J.; Bao, H.; Zhang, J. AD-NeRF: Audio driven neural radiance fields for talking head synthesis. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5784–5794.
6. Gafni, G.; Thies, J.; Zollhofer, M.; Nießner, M. Dynamic neural radiance fields for monocular 4D facial avatar reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8649–8658.
7. Peng, S.; Dong, J.; Wang, Q.; Zhang, S.; Shuai, Q.; Zhou, X.; Bao, H. Animatable neural radiance fields for modeling dynamic human bodies. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 14314–14323.
8. Deng, N.; He, Z.; Ye, J.; Duinkharjav, B.; Chakravarthula, P.; Yang, X.; Sun, Q. FoV-NeRF: Foveated neural radiance fields for virtual reality. IEEE Trans. Vis. Comput. Graph. 2022, 28, 3854–3864.
9. Rojas, S.; Zarzar, J.; Pérez, J.C.; Sanakoyeu, A.; Thabet, A.; Pumarola, A.; Ghanem, B. Re-ReND: Real-time Rendering of NeRFs across Devices. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Paris, France, 2–3 October 2023; pp. 3632–3641.
10. Ma, M.K.I.; Saha, C.; Poon, S.H.L.; Yiu, R.S.W.; Shih, K.C.; Chan, Y.K. Virtual reality and augmented reality—Emerging screening and diagnostic techniques in ophthalmology: A systematic review. Surv. Ophthalmol. 2022, 67, 1516–1530.
11. İsmailoğlu, E.G.; Orkun, N.; Eşer, İ.; Zaybak, A. Comparison of the effectiveness of the virtual simulator and video-assisted teaching on intravenous catheter insertion skills and self-confidence: A quasi-experimental study. Nurse Educ. Today 2020, 95, 104596.
12. Mickiewicz, P.; Gawęcki, W.; Gawłowska, M.B.; Talar, M.; Węgrzyniak, M.; Wierzbicka, M. The assessment of virtual reality training in antromastoidectomy simulation. Virtual Real. 2021, 25, 1113–1121.
13. Smith, M.J.; Ginger, E.J.; Wright, K.; Wright, M.A.; Taylor, J.L.; Humm, L.B.; Olsen, D.E.; Bell, M.D.; Fleming, M.F. Virtual reality job interview training in adults with autism spectrum disorder. J. Autism Dev. Disord. 2014, 44, 2450–2463.
14. Sun, J.; Shen, Z.; Wang, Y.; Bao, H.; Zhou, X. LoFTR: Detector-free local feature matching with transformers. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 8922–8931.
15. Mur-Artal, R.; Montiel, J.M.M.; Tardos, J.D. ORB-SLAM: A versatile and accurate monocular SLAM system. IEEE Trans. Robot. 2015, 31, 1147–1163.
16. Lowe, D.G. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vis. 2004, 60, 91–110.
17. Mildenhall, B.; Srinivasan, P.P.; Tancik, M.; Barron, J.T.; Ramamoorthi, R.; Ng, R. NeRF: Representing scenes as neural radiance fields for view synthesis. Commun. ACM 2021, 65, 99–106.
18. Schonberger, J.L.; Frahm, J.M. Structure-from-motion revisited. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 4104–4113.
19. Zhang, K.; Riegler, G.; Snavely, N.; Koltun, V. NeRF++: Analyzing and improving neural radiance fields. arXiv 2020, arXiv:2010.07492.
20. Martin-Brualla, R.; Radwan, N.; Sajjadi, M.S.; Barron, J.T.; Dosovitskiy, A.; Duckworth, D. NeRF in the wild: Neural radiance fields for unconstrained photo collections. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Nashville, TN, USA, 20–25 June 2021; pp. 7210–7219.
21. Barron, J.T.; Mildenhall, B.; Verbin, D.; Srinivasan, P.P.; Hedman, P. Mip-NeRF 360: Unbounded anti-aliased neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5470–5479.
22. Chen, S.; Zhang, Y.; Xu, Y.; Zou, B. Structure-Aware NeRF without Posed Camera via Epipolar Constraint. arXiv 2022, arXiv:2210.00183.
23. Wang, Z.; Wu, S.; Xie, W.; Chen, M.; Prisacariu, V.A. NeRF−: Neural radiance fields without known camera parameters. arXiv 2021, arXiv:2102.07064.
24. Lin, C.H.; Ma, W.C.; Torralba, A.; Lucey, S. BARF: Bundle-adjusting neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5741–5751.
25. Chng, S.F.; Ramasinghe, S.; Sherrah, J.; Lucey, S. Gaussian activated neural radiance fields for high fidelity reconstruction and pose estimation. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 264–280.
26. Meng, Q.; Chen, A.; Luo, H.; Wu, M.; Su, H.; Xu, L.; He, X.; Yu, J. GNeRF: GAN-based neural radiance field without posed camera. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 6351–6361.
27. Hu, T.; Liu, S.; Chen, Y.; Shen, T.; Jia, J. EfficientNeRF: Efficient neural radiance fields. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 12902–12911.
28. Yu, A.; Li, R.; Tancik, M.; Li, H.; Ng, R.; Kanazawa, A. PlenOctrees for real-time rendering of neural radiance fields. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, QC, Canada, 11–17 October 2021; pp. 5752–5761.
29. Sun, C.; Sun, M.; Chen, H.T. Direct voxel grid optimization: Super-fast convergence for radiance fields reconstruction. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, New Orleans, LA, USA, 18–24 June 2022; pp. 5459–5469.
30. Chen, A.; Xu, Z.; Geiger, A.; Yu, J.; Su, H. TensoRF: Tensorial radiance fields. In Proceedings of the European Conference on Computer Vision, Tel Aviv, Israel, 23–27 October 2022; Springer: Berlin/Heidelberg, Germany, 2022; pp. 333–350.
31. Müller, T.; Evans, A.; Schied, C.; Keller, A. Instant neural graphics primitives with a multiresolution hash encoding. ACM Trans. Graph. 2022, 41, 1–15.
32. Li, R.; Gao, H.; Tancik, M.; Kanazawa, A. NerfAcc: Efficient sampling accelerates NeRFs. arXiv 2023, arXiv:2305.04966.
33. Liu, S.; Lin, S.; Lu, J.; Saha, S.; Supikov, A.; Yip, M. BAA-NGP: Bundle-Adjusting Accelerated Neural Graphics Primitives. arXiv 2023, arXiv:2306.04166.
34. Zhang, R.; Isola, P.; Efros, A.A.; Shechtman, E.; Wang, O. The unreasonable effectiveness of deep features as a perceptual metric. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 586–595.
| Module | Parameter | Symbol | Value |
|---|---|---|---|
| Common parameters | Number of feature dimensions per entry | F | 2 |
| Common parameters | Hash table size | T | |
| Pose estimation module | Coarsest resolution | | 16 |
| Pose estimation module | Finest resolution | | 72 |
| Pose estimation module | Number of levels | | 3 |
| High-resolution reconstruction module | Coarsest resolution | | 128 |
| High-resolution reconstruction module | Finest resolution | | 4096 |
| High-resolution reconstruction module | Number of levels | | 16 |
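For reference, in the multiresolution hash encoding of Instant-NGP [31] the per-level grid resolutions grow geometrically between the coarsest and finest settings, with growth factor b = exp((ln N_max − ln N_min)/(L − 1)). The short sketch below (our own illustration, not the authors' code) shows how the values in the table above translate into level resolutions:

```python
import math

def level_resolutions(n_min: int, n_max: int, levels: int) -> list[int]:
    """Per-level grid resolutions of a multiresolution hash encoding,
    using the geometric growth factor from Instant-NGP:
    b = exp((ln n_max - ln n_min) / (L - 1))."""
    if levels == 1:
        return [n_min]
    b = math.exp((math.log(n_max) - math.log(n_min)) / (levels - 1))
    return [int(round(n_min * b**level)) for level in range(levels)]

# Values from the parameter table above (BiResNeRF):
print(level_resolutions(16, 72, 3))      # pose module
print(level_resolutions(128, 4096, 16))  # reconstruction module
```

With the table's settings this yields roughly [16, 34, 72] for the three-level pose estimation module and a 16-level ladder from 128 up to 4096 for the high-resolution reconstruction module.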
| Scene | Rotation (°)↓ BAA | Rotation (°)↓ Ours | Translation↓ BAA | Translation↓ Ours | PSNR↑ BAA | PSNR↑ Ours | SSIM↑ BAA | SSIM↑ Ours | LPIPS↓ BAA | LPIPS↓ Ours | Training Time (s) BAA | Training Time (s) Ours |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Lego | 0.038 | 0.038 | 0.121 | 0.107 | 33.704 | 34.871 | 0.977 | 0.983 | 0.023 | 0.017 | 1143.08 | 742.07 |
| Chair | 0.053 | 0.054 | 0.271 | 0.289 | 35.445 | 36.512 | 0.984 | 0.988 | 0.025 | 0.018 | 1464.40 | 804.51 |
| Drums | 0.028 | 0.031 | 0.098 | 0.169 | 25.160 | 25.502 | 0.921 | 0.925 | 0.090 | 0.089 | 1237.37 | 781.55 |
| Ficus | 0.030 | 0.029 | 0.136 | 0.102 | 31.307 | 31.789 | 0.977 | 0.979 | 0.034 | 0.028 | 1128.41 | 754.44 |
| Hotdog | 2.050 | 0.698 | 8.430 | 2.864 | 30.406 | 36.137 | 0.950 | 0.976 | 0.043 | 0.040 | 1150.72 | 766.78 |
| Materials | 0.038 | 0.149 | 0.134 | 1.414 | 28.638 | 28.944 | 0.943 | 0.952 | 0.082 | 0.065 | 1046.50 | 778.01 |
| Mic | 0.040 | 0.035 | 0.189 | 0.133 | 34.174 | 36.044 | 0.982 | 0.989 | 0.028 | 0.013 | 1312.24 | 819.45 |
| Ship | 0.065 | 0.496 | 0.282 | 0.745 | 30.052 | 31.286 | 0.888 | 0.904 | 0.114 | 0.089 | 987.59 | 768.65 |
| Mean | 0.293 | 0.191 | 1.207 | 0.728 | 31.111 | 32.636 | 0.953 | 0.962 | 0.055 | 0.045 | 1183.79 | 776.93 |
| Change (Ours vs. BAA) | | ↓34.66% | | ↓39.73% | | ↑4.90% | | ↑0.96% | | ↓18.14% | | ↓34.37% |
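The arrows in the last row read as the relative change of the column means; this interpretation matches the tabulated values, e.g., for training time:

$$
\Delta = \frac{\bar{x}_{\text{Ours}} - \bar{x}_{\text{BAA}}}{\bar{x}_{\text{BAA}}} \times 100\%,
\qquad
\Delta_{\text{time}} = \frac{776.93 - 1183.79}{1183.79} \times 100\% \approx -34.37\%.
$$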
| Scene | Rotation (°)↓ BARF | Rotation (°)↓ NeRFAcc | Translation↓ BARF | Translation↓ NeRFAcc | PSNR↑ BARF | PSNR↑ NeRFAcc | SSIM↑ BARF | SSIM↑ NeRFAcc | LPIPS↓ BARF | LPIPS↓ NeRFAcc | Training Time (s) BARF | Training Time (s) NeRFAcc |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Lego | 0.077 | 0.059 | 0.257 | 0.215 | 28.377 | 30.020 | 0.929 | 0.948 | 0.048 | 0.038 | 9844 | 3647 |
| Chair | 0.095 | 0.074 | 0.396 | 0.334 | 31.202 | 32.361 | 0.955 | 0.965 | 0.044 | 0.039 | 9839 | 3440 |
| Drums | 0.051 | 0.038 | 0.201 | 0.191 | 23.914 | 24.772 | 0.900 | 0.917 | 0.099 | 0.079 | 12568 | 3532 |
| Ficus | 0.085 | 0.066 | 0.529 | 0.332 | 26.253 | 27.819 | 0.934 | 0.953 | 0.058 | 0.041 | 29349 | 3392 |
| Hotdog | 0.249 | 1.826 | 1.278 | 8.313 | 34.608 | 32.010 | 0.970 | 0.962 | 0.032 | 0.030 | 23485 | 3709 |
| Materials | 0.911 | 1.087 | 4.496 | 5.452 | 27.013 | 28.129 | 0.930 | 0.944 | 0.063 | 0.038 | 9839 | 3511 |
| Mic | 0.073 | 0.037 | 0.247 | 0.156 | 31.220 | 32.828 | 0.969 | 0.974 | 0.048 | 0.037 | 9803 | 3804 |
| Ship | 0.070 | 0.076 | 0.293 | 0.415 | 27.560 | 28.647 | 0.850 | 0.868 | 0.128 | 0.112 | 9907 | 4957 |
| Mean | 0.202 | 0.408 | 0.962 | 1.926 | 28.768 | 29.573 | 0.930 | 0.941 | 0.065 | 0.052 | 14,329.250 | 3749.000 |
| Change (Ours vs. column method) | ↓5.11% | ↓53.13% | ↓24.37% | ↓62.21% | ↑13.44% | ↑10.35% | ↑3.48% | ↑2.17% | ↓31.06% | ↓13.06% | ↓94.58% | ↓79.28% |

The last row gives the relative change of our method's means (see the previous table) with respect to the mean of each baseline in the corresponding column.
| Method | Rotation | Translation | PSNR | SSIM | LPIPS | Training Time |
|---|---|---|---|---|---|---|
| BAA-NGP | 0.71 | 0.666 | 0.421 | 0.593 | 0.562 | 2.98 × 10 |
| BARF | 0.942 | 0.713 | 0.055 | 0.095 | 0.225 | 1.98 × 10 |
| NeRFAcc | 0.412 | 0.322 | 0.096 | 0.238 | 0.663 | 1.37 × 10 |
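The source does not state which statistical test produced these p-values. As a hedged sketch, a paired two-sided t-test over the eight per-scene results (one common choice for this kind of comparison) could be computed as follows; the data are the training times from the BAA comparison table above:

```python
from scipy.stats import ttest_rel

# Per-scene training times (s): BAA-NGP vs. ours (from the table above).
baa  = [1143.08, 1464.40, 1237.37, 1128.41, 1150.72, 1046.50, 1312.24, 987.59]
ours = [742.07, 804.51, 781.55, 754.44, 766.78, 778.01, 819.45, 768.65]

# Paired two-sided t-test across scenes; a very small p-value (as in the
# Training Time column) indicates a significant difference between methods.
t_stat, p_value = ttest_rel(baa, ours)
print(f"t = {t_stat:.2f}, p = {p_value:.2e}")
```

Whether this reproduces the table's exact values depends on the test the authors actually used.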
| Scene | Lego | Chair | Drums | Ficus | Hotdog | Materials | Mic | Ship | Mean |
|---|---|---|---|---|---|---|---|---|---|
| Position (k) | 9 | 5.6 | 9.6 | 7.6 | 5 | 13.2 | 14.6 | 5.2 | 9.3 |
| Time | 2 m 49 s | 1 m 26 s | 3 m 2 s | 2 m 18 s | 1 m 16 s | 4 m 12 s | 5 m 3 s | 1 m 21 s | 2 m 41 s |
| Scene | Rotation Error (°) | Translation Error |
|---|---|---|
| Ceramics | 25.848 | 1.416 |