Robust Velocity Dealiasing for Weather Radar Based on Convolutional Neural Networks
(This article belongs to the Section AI Remote Sensing)
Figure 1
<p>Distribution of labels for different <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> </semantics></math> on a logarithmic scale: panel (<b>a</b>) shows the distribution of labels for <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>7</mn> <mo>,</mo> <mn>9</mn> <mo>]</mo> </mrow> </mrow> </semantics></math>; panel (<b>b</b>) for <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>11</mn> <mo>,</mo> <mn>13</mn> <mo>]</mo> </mrow> </mrow> </semantics></math>; and panel (<b>c</b>) for <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>21</mn> <mo>,</mo> <mn>23</mn> <mo>]</mo> </mrow> </mrow> </semantics></math>. As <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> </semantics></math> increases (from the left panel to the right panel), the distribution becomes more skewed toward <math display="inline"><semantics> <msub> <mi>L</mi> <mn>0</mn> </msub> </semantics></math>.</p>
Figure 2
<p>Block diagram of the proposed velocity dealiasing technique using a CNN. Velocity dealiasing is performed by combining the input (aliased) velocity (<math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">i</mi> </msub> </semantics></math>) and the aliasing count (<math display="inline"><semantics> <msub> <mi>L</mi> <mi mathvariant="normal">p</mi> </msub> </semantics></math>). <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">i</mi> </msub> </semantics></math> passes through the model, which consists of multiple layers of operations, i.e., convolution, pooling, softmax, and prediction. The technique produces a map that indicates whether each velocity measurement is aliased, the sign of the aliasing, and how many times it is aliased.</p>
Figure 3
<p>Results of the velocity dealiasing process using the labels predicted by the CNN. In this example, the data were collected from the KTLX radar on 8 March 2020 23:48 UTC. (<b>a</b>) <span class="html-italic">Z</span> is the radar reflectivity. The input velocity <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">i</mi> </msub> </semantics></math> is obtained by aliasing (<b>b</b>) <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">t</mi> </msub> </semantics></math> (ground truth) using (<b>c</b>) <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>7</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> and (<b>d</b>) <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>17</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>. <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">p</mi> </msub> </semantics></math> is the dealiased velocity according to Equation (<a href="#FD9-remotesensing-15-00802" class="html-disp-formula">9</a>) with (<b>g</b>) <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>7</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> and the corresponding label (<b>e</b>); (<b>h</b>) is the dealiased velocity with <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>17</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> and label (<b>f</b>).</p>
Figure 4
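The aliasing and label-based recovery that this caption describes can be sketched in a few lines. This is a minimal NumPy illustration under the standard folding relation for a Nyquist velocity v_a and a signed aliasing count; the function names are illustrative, not the authors' code:

```python
import numpy as np

def alias(v_t, v_a):
    """Fold true velocities v_t into the Nyquist interval [-v_a, v_a)."""
    return (v_t + v_a) % (2.0 * v_a) - v_a

def dealias(v_i, n, v_a):
    """Recover the velocity from aliased v_i and a signed aliasing count n."""
    return v_i + 2.0 * n * v_a

# A velocity of 10 m/s observed with v_a = 7 m/s folds to -4 m/s;
# an aliasing count of +1 unfolds it back.
v_t = np.array([5.0, 10.0, -9.0, 20.0])
v_a = 7.0
v_i = alias(v_t, v_a)
n = np.round((v_t - v_i) / (2.0 * v_a)).astype(int)  # ground-truth counts
v_p = dealias(v_i, n, v_a)
```

In the proposed technique, the count `n` is what the CNN predicts per gate; here it is computed from the ground truth only to close the round trip.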
<p>Comparison of velocity dealiasing performance between the proposed CNN method (blue) and the conventional region-based unwrapping method (red). The comparison is performed with <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>A</mi> </msub> </semantics></math> (<b>top</b>) and <math display="inline"><semantics> <msub> <mi>σ</mi> <mi>A</mi> </msub> </semantics></math> (<b>bottom</b>). The left panels show the performance on the mostly filled precipitation, and the right panels show the performance on the sparsely filled precipitation.</p>
Figure 5
<p>An example of failed prediction with non-speckle echoes: Panel (<b>a</b>) shows the true label, which is synthesized using a 0.88°-EL scan from the KTLX on 16 January 2017 06:33 UTC; Panel (<b>b</b>) shows the predicted label; Panels (<b>c</b>–<b>h</b>) represent the probability of each label from the CNN model. One can see that the green patch near azimuths 0–45° at far ranges is incorrectly predicted. The correct label (<math display="inline"><semantics> <mrow> <mi>L</mi> <mo>=</mo> <mn>0</mn> </mrow> </semantics></math>), however, has a significant probability value, which would result in a correct prediction if selected.</p>
Figure 6
<p>A scan similar to <a href="#remotesensing-15-00802-f005" class="html-fig">Figure 5</a>, but one for which the CNN model succeeded in predicting the aliasing labels (the green patch in panel (b) of <a href="#remotesensing-15-00802-f005" class="html-fig">Figure 5</a>). Panel (<b>a</b>) shows the true label, which is synthesized using a 1.32°-EL scan from the KTLX on 16 January 2017 06:33 UTC. In panel (<b>b</b>), the green patch near azimuths 0–45° at range gates 180–256 from panel (<b>b</b>) of <a href="#remotesensing-15-00802-f005" class="html-fig">Figure 5</a> is now correctly identified. Panels (<b>c</b>–<b>h</b>) represent the probability of each label from the CNN model.</p>
Figure 7
<p>Result of replacing failed pixels with the second most probable prediction: Panel (<b>a</b>) is the true label; panel (<b>b</b>) is the raw predicted label; and panel (<b>c</b>) is the same as panel (<b>b</b>) except that incorrect labels are replaced by those with the second highest probability. The value of <span class="html-italic">A</span> increases from 88.1% in (<b>b</b>) to 99.6% in (<b>c</b>).</p>
Figure 8
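The second-choice replacement described in this caption can be expressed compactly. The sketch below assumes a softmax output of shape (rays, gates, classes); because it uses the true labels to locate the failed pixels, it is a diagnostic upper bound (as in the figure), not an operational fix, and the names are illustrative rather than taken from the paper's code:

```python
import numpy as np

def second_choice(probs):
    # Index of the second-highest class probability at each pixel.
    return np.argsort(probs, axis=-1)[..., -2]

def replace_failed_pixels(pred, truth, probs):
    # Diagnostic only: uses the true labels to find the failed pixels,
    # then substitutes the runner-up class there.
    out = pred.copy()
    wrong = pred != truth
    out[wrong] = second_choice(probs)[wrong]
    return out

probs = np.array([[[0.60, 0.30, 0.10],     # correctly predicted pixel
                   [0.50, 0.45, 0.05]]])   # failed pixel; runner-up is correct
pred = probs.argmax(axis=-1)               # argmax predictions
truth = np.array([[0, 1]])
fixed = replace_failed_pixels(pred, truth, probs)
```

This mirrors the caption's observation: when the top prediction is wrong, the correct label often still carries a significant probability, so selecting the runner-up recovers it.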
<p>Comparisons of the trained CNN with different <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> </semantics></math>, i.e., <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>7</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> (blue), <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>12</mn> <mspace width="3.33333pt"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math> (green), <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>7</mn> <mo>,</mo> <mn>12</mn> <mo>]</mo> </mrow> </mrow> </semantics></math> (red), <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mn>7</mn> <mo>,</mo> <mi>ν</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> (purple), and <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>∈</mo> <mrow> <mo>[</mo> <mi>ν</mi> <mo>]</mo> </mrow> </mrow> </semantics></math> (orange), on the mostly filled precipitation (<b>left</b>) and the sparsely filled precipitation (<b>right</b>).</p>
Figure 9
<p>Comparisons of the performance of the trained CNN with different template sizes (<span class="html-italic">T</span>), i.e., 32 (blue), 64 (green), 128 (red), and 256 (purple) range gates, on mostly filled precipitation (left) and sparsely filled precipitation (right). It is tested with three different <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> </semantics></math> groups, i.e., <math display="inline"><semantics> <msub> <mi>G</mi> <mn>1</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>G</mi> <mn>2</mn> </msub> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>G</mi> <mn>3</mn> </msub> </semantics></math>.</p>
Figure 10
<p>Performance of the CNN algorithm as a function of range with different <span class="html-italic">T</span>, i.e., 32 (orange), 64 (magenta), 128 (green), and 256 (red) range gates. It is also compared to the conventional region-based dealiasing method (blue dashed line). The first row is <math display="inline"><semantics> <msub> <mi>μ</mi> <mi>A</mi> </msub> </semantics></math> in percentage, averaged over the number of scans, for the mostly filled precipitation. The second row shows the same metric for the sparsely filled precipitation scans. It is analyzed with groups <math display="inline"><semantics> <msub> <mi>G</mi> <mn>1</mn> </msub> </semantics></math> (<b>left</b>), <math display="inline"><semantics> <msub> <mi>G</mi> <mn>2</mn> </msub> </semantics></math> (<b>center</b>), and <math display="inline"><semantics> <msub> <mi>G</mi> <mn>3</mn> </msub> </semantics></math> (<b>right</b>).</p>
Figure 11
<p>An example PPI scan with mostly filled precipitation. <span class="html-italic">Z</span> is the reflectivity, <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">i</mi> </msub> </semantics></math> is the input velocity, <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">t</mi> </msub> </semantics></math> is the ground truth, <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">p</mi> </msub> </semantics></math> is the dealiased velocity using the <span class="html-italic">predicted</span> aliasing label from the CNN, and <math display="inline"><semantics> <msub> <mi>v</mi> <mi mathvariant="normal">c</mi> </msub> </semantics></math> is the dealiased velocity using the conventional region-based dealiasing method. The data are synthesized using a 1.32°-EL scan from the KTLX on 4 July 2017 05:38 UTC. This example shows the result of processing a velocity field observed at <math display="inline"><semantics> <mrow> <mn>7</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>. For simple cases such as this, both methods are able to produce an accurate dealiased velocity field; both the CNN and region-based methods exceed 99% accuracy here.</p>
Figure 12
<p>Similar to <a href="#remotesensing-15-00802-f011" class="html-fig">Figure 11</a>, this figure shows an example PPI scan with isolated storms observed at <math display="inline"><semantics> <mrow> <msub> <mi>v</mi> <mi mathvariant="normal">a</mi> </msub> <mo>=</mo> <mn>7</mn> <mspace width="0.166667em"/> <mi mathvariant="normal">m</mi> <mspace width="0.166667em"/> <msup> <mi mathvariant="normal">s</mi> <mrow> <mo>−</mo> <mn>1</mn> </mrow> </msup> </mrow> </semantics></math>. The data are synthesized using a 1.32°-EL scan from the KTLX on 30 April 2017 19:14 UTC. The CNN method successfully dealiased the scan, as it processes the entire scan at once. The region-based method, however, failed at a number of isolated storms, indicated by the yellow circle. In this example, the CNN method achieves 99.5% on <math display="inline"><semantics> <msub> <mi>L</mi> <mn>0</mn> </msub> </semantics></math>, 99.4% on <math display="inline"><semantics> <msub> <mi>L</mi> <mn>1</mn> </msub> </semantics></math>, and 100% on <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>, while the region-based method achieves 77.9%, 67.8%, and 84.4% on <math display="inline"><semantics> <msub> <mi>L</mi> <mn>0</mn> </msub> </semantics></math>, <math display="inline"><semantics> <msub> <mi>L</mi> <mn>1</mn> </msub> </semantics></math>, and <math display="inline"><semantics> <msub> <mi>L</mi> <mn>2</mn> </msub> </semantics></math>, respectively.</p>
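The per-label accuracies quoted in the captions above (e.g., 99.5% on the zero-aliasing label) can be computed along the following lines. This is a sketch of the metric as described, not the paper's evaluation code:

```python
import numpy as np

def label_accuracy(pred, truth, label):
    """Percentage of gates whose true aliasing label equals `label`
    and which the method labeled correctly."""
    mask = truth == label
    if not mask.any():
        return float("nan")
    return 100.0 * float((pred[mask] == truth[mask]).mean())

# Tiny example: five gates, one of the two label-0 gates is mislabeled.
truth = np.array([0, 0, 1, 1, 2])
pred = np.array([0, 1, 1, 1, 2])
acc_L0 = label_accuracy(pred, truth, 0)
```

Evaluating the metric per label, rather than over all gates, prevents the dominant zero-aliasing class from masking errors on the rarer aliased labels.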
Abstract
1. Introduction
2. Materials and Methods
2.1. Brief Review of Existing Techniques
2.2. Data Generation
2.3. Proposed Algorithm
2.3.1. Pre-Processing
2.3.2. Training
2.3.3. Variables of Optimization
2.3.4. Algorithm Description
3. Results
3.1. Evaluation Method and Metrics
3.2. Statistical Results
3.3. Sensitivity Test
3.4. Case Study
4. Discussion
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
| | Training | Validation | Test |
|---|---|---|---|
| Mostly filled precipitation | 240 | 75 | 102 |
| Sparsely filled precipitation | 1632 | 240 | 393 |
| Total | 1872 | 315 | 495 |
109 | 47.8 | 1 | |
6.89 | 1 | 0 | |
119 | 1 | 0 |
| | Method | (%) | (%) | (%) | (%) |
|---|---|---|---|---|---|
| Mostly Filled | CNN | 99.58 | 99.62 | 97.89 | 93.44 |
| | Region-Based | 99.90 | 99.87 | 98.54 | 94.30 |
| | Difference | −0.32 | −0.25 | −0.65 | −0.86 |
| Sparsely Filled | CNN | 99.23 | 98.39 | 96.85 | 84.96 |
| | Region-Based | 96.39 | 96.88 | 91.45 | 74.39 |
| | Difference | 2.84 | 1.51 | 5.39 | 10.57 |
| | Group | | |
|---|---|---|---|
| Mostly Filled | 0.25 | 0.61 | 1.00 |
| | 0.33 | 0.35 | - |
| | 0.42 | 0.04 | - |
| Sparsely Filled | 0.28 | 0.74 | 1.00 |
| | 0.35 | 0.25 | - |
| | 0.38 | 0.01 | - |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Kim, H.; Cheong, B. Robust Velocity Dealiasing for Weather Radar Based on Convolutional Neural Networks. Remote Sens. 2023, 15, 802. https://doi.org/10.3390/rs15030802