CloudFCN: Accurate and Robust Cloud Detection for Satellite Imagery with Deep Learning
"> Figure 1
<p>Flowchart of CNN used in cloud segmentation. Bold arrows represent residual connections, connecting different stages of the encoder and decoder arms of the model which run in a U shape. ‘Inception module’ denotes a set of parallel convolutions of stride 1, followed by a single dimensionality-reduction convolution on their concatenated outputs. Widths and heights of inputs and outputs are the <span class="html-italic">minimum</span> and could be any integer multiple of 24 above 86, with internal tensor dimensions changing accordingly. The first input layer is given C channels, as a placeholder for whatever kind of input being used (e.g., 3 for RGB). The output has channel depth N, to denote the number of output classes used.</p> "> Figure 2
<p>Two examples of data from the Carbonite-2 satellite. The left-hand pane shows a scene from Benin, and the right-hand pane is over an area in Brazil. Data provided by Surrey Satellite Technology Ltd. and Earth-i Ltd.</p> "> Figure 3
<p>Four examples of cloud detection on the Biome dataset from the multispectral model. Very few mistakes are seen in the clouds’ interiors, but the edges are more error-prone. The first example shows what may be a cirrus cloud over a vegetated region, the boundaries of which show noticeably more errors of omission than neighbouring cumulus clouds. The cloud over snow in the second pane is successfully detected, although large errors are seen at it’s edge. The final two panes show highly successful detections of different kinds of cloud over varied terrain, despite some disagreement at the very edges of clouds, which are often not well defined and rather ambiguous. All scenes are shown as RGB composites using bands 4, 3 and 2.</p> "> Figure 4
<p>Examples of the effect of white noise and quantization throughout the range of levels applied to the Biome dataset for the validation in <a href="#sec5dot5-remotesensing-11-02312" class="html-sec">Section 5.5</a>. For white noise, an SNR of 7 dB represents a significant distortion of the data, leading to most small features being over-powered by the white noise. Quantization also leads to a loss of textural information, acting to smooth large areas with similar intensities.</p> "> Figure 5
<p>Omission and Commission rates of both RGB and multispectral models against the SNR (<b>a</b>) and the bit depth of the quantization (<b>b</b>). For both white noise and quantization errors of omission increase with the level of noise. Meanwhile, the effect on commission is less strong, suggesting the added noise has more of an impact on cloudy pixels than non-cloudy ones.</p> ">
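The robustness experiments summarised in Figures 4 and 5 perturb the input imagery with additive white Gaussian noise at a chosen SNR and with uniform quantization to a chosen bit depth. A minimal sketch of such perturbations, assuming images are floating-point arrays scaled to [0, 1] (the authors' exact scaling and SNR convention are not stated here):

```python
import numpy as np

def add_white_noise(image, snr_db):
    """Add zero-mean Gaussian noise so that the signal-to-noise ratio equals snr_db (dB)."""
    signal_power = np.mean(image ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    return image + np.random.normal(0.0, np.sqrt(noise_power), size=image.shape)

def quantize(image, bit_depth):
    """Uniformly quantize an image in [0, 1] to 2**bit_depth levels."""
    levels = 2 ** bit_depth - 1
    return np.round(np.clip(image, 0.0, 1.0) * levels) / levels

# noisy = add_white_noise(img, snr_db=7)   # 7 dB is the strongest white-noise level shown in Figure 4
# coarse = quantize(img, bit_depth=4)      # illustrative bit depth
```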
Abstract
1. Introduction
2. Related Work
2.1. Cloud Detection
2.2. Thresholding Techniques
2.3. Machine Learning Techniques
3. Materials and Methods
3.1. Overview
3.2. Model Features
3.3. Training
- Rotation: Rotated by a random integer multiple of 90°
- Flipping: Flipped left-right with 50% probability
- Intensity shift: All input values scaled by the same random factor in the range 0.9–1.1
- Chromatic shift: Each input channel scaled by a different random factor in the range 0.95–1.05
- Salt and pepper noise: Pixels set to hot (+3) or cold (−3) values, with a per-pixel probability of 0.005
- White noise: Gaussian noise with sigma = 0.05
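A minimal sketch of this augmentation pipeline, assuming each training patch is a floating-point numpy array of shape (H, W, C) with a matching label mask of shape (H, W, N); the ±3 hot/cold values and the probabilities follow the list above, while the array layout, function name, and the even split of the salt-and-pepper probability are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def augment(image, mask, rng=np.random):
    """Apply the random augmentations listed above to one (image, mask) pair."""
    # Rotation by a random integer multiple of 90 degrees (mask rotated identically).
    k = rng.randint(4)
    image, mask = np.rot90(image, k), np.rot90(mask, k)

    # Left-right flip with 50% probability.
    if rng.rand() < 0.5:
        image, mask = np.fliplr(image), np.fliplr(mask)

    # Intensity shift: all channels scaled by the same factor in [0.9, 1.1].
    image = image * rng.uniform(0.9, 1.1)

    # Chromatic shift: each channel scaled by its own factor in [0.95, 1.05].
    image = image * rng.uniform(0.95, 1.05, size=(1, 1, image.shape[-1]))

    # Salt-and-pepper noise: pixels set to hot (+3) or cold (-3),
    # per-pixel probability 0.005 (assumed split evenly between hot and cold).
    sp = rng.rand(*image.shape[:2])
    image[sp < 0.0025] = -3.0
    image[sp > 1.0 - 0.0025] = 3.0

    # White noise: additive Gaussian noise with sigma = 0.05.
    image = image + rng.normal(0.0, 0.05, size=image.shape)

    return image, mask
```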
4. Theoretical Considerations
4.1. Receptive Field
4.2. Class Weighting
5. Experimental Results
5.1. Datasets
5.1.1. Carbonite-2
5.1.2. Biome
5.2. Cloud Coverage Estimation
5.3. Pixel-Wise Cloud Detection
5.4. Multispectral Performance
5.5. Noise Tolerance
5.5.1. White Noise
5.5.2. Quantization
6. Discussion
7. Conclusions
Supplementary Materials
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
Layer | Layer Type | Stride | Filter Size | Jump | Receptive Field |
---|---|---|---|---|---|
1 | Inception | 1 | 5 | 1 | 1 |
2 | Convolution | 2 | 5 | 1 | 5 |
3 | Convolution | 3 | 7 | 2 | 9 |
4 | Inception | 1 | 5 | 6 | 21 |
5 | Convolution | 2 | 5 | 6 | 45 |
6 | Inception | 1 | 5 | 12 | 69 |
7 | Convolution | 2 | 5 | 12 | 117 |
8 | Transpose | 2 | 5 | 24 | 165 |
9 | Convolution | 1 | 1 | 12 | 261 |
10 | Transpose | 2 | 5 | 12 | 261 |
11 | Transpose | 3 | 7 | 6 | 309 |
12 | Inception | 1 | 5 | 2 | 345 |
13 | Transpose | 2 | 5 | 2 | 353 |
14 | Inception | 1 | 5 | 1 | 361 |
15 | Convolution | 1 | 1 | 1 | 365 |
16 | Output | - | - | 1 | 365 |
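The jump and receptive field values in the table follow the standard receptive-field recurrence: each layer widens the receptive field by (filter size − 1) × jump, downsampling layers multiply the jump by their stride, and transposed (upsampling) layers divide it. A short sketch of that recurrence, treating inception modules as single convolutions with the listed effective filter size, as the table does; each printed row gives the jump and receptive field at the input of that layer, matching the columns above:

```python
# (layer_type, stride, filter_size) for layers 1-15; layer 16 is the output.
layers = [
    ("Inception", 1, 5), ("Convolution", 2, 5), ("Convolution", 3, 7),
    ("Inception", 1, 5), ("Convolution", 2, 5), ("Inception", 1, 5),
    ("Convolution", 2, 5), ("Transpose", 2, 5), ("Convolution", 1, 1),
    ("Transpose", 2, 5), ("Transpose", 3, 7), ("Inception", 1, 5),
    ("Transpose", 2, 5), ("Inception", 1, 5), ("Convolution", 1, 1),
]

jump, rf = 1, 1  # jump and receptive field at the network input
for i, (kind, stride, k) in enumerate(layers, start=1):
    print(f"{i:2d} {kind:12s} jump={jump:3d} rf={rf:3d}")
    rf += (k - 1) * jump                                       # each tap widens the receptive field
    jump = jump // stride if kind == "Transpose" else jump * stride
print(f"16 Output       jump={jump:3d} rf={rf:3d}")
```

Running this reproduces the Jump and Receptive Field columns above, ending with the final receptive field of 365 pixels.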
Scene category | Total | %Mean | %Median | >2% | >5% | >10% | >20% | >50% |
---|---|---|---|---|---|---|---|---|
Clear | 606 | 1.85 | 0.28 | 10.40 | 4.46 | 3.47 | 2.48 | 0.66 |
Clear + Airports | 129 | 1.17 | 0.34 | 8.53 | 4.65 | 1.55 | 0.78 | 0.00 |
Clear + Roads | 322 | 0.87 | 0.29 | 8.70 | 4.04 | 0.93 | 0.31 | 0.00 |
Clear + Waves | 42 | 3.71 | 0.69 | 30.95 | 7.14 | 7.14 | 7.14 | 0.00 |
Clear + Urban | 541 | 1.01 | 0.29 | 8.50 | 2.77 | 1.66 | 0.74 | 0.00 |
Clear + Crops | 379 | 0.98 | 0.28 | 7.39 | 1.85 | 1.32 | 1.06 | 0.00 |
Clear + Water | 196 | 1.59 | 0.40 | 12.76 | 4.59 | 3.06 | 2.04 | 0.00 |
Clear + Snow-Ice | 62 | 9.34 | 0.29 | 24.19 | 19.35 | 19.35 | 17.74 | 6.45 |
Clear + Docks | 79 | 1.94 | 0.40 | 10.13 | 3.80 | 3.80 | 3.80 | 0.00 |
Cloudy | 72 | - | - | - | - | - | 12.50 | 4.16 |
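For the scene-level coverage assessment (Section 5.2), a natural reading of the table above is that %Mean and %Median summarise the absolute difference between predicted and reference cloud coverage per scene, and the remaining columns give the share of scenes whose error exceeds each threshold; that reading is an assumption here. A minimal sketch of computing such summaries from per-scene coverage values (all in percent):

```python
import numpy as np

def coverage_error_summary(pred_cover, true_cover, thresholds=(2, 5, 10, 20, 50)):
    """Summarise per-scene absolute cloud-coverage errors (values in percent)."""
    errors = np.abs(np.asarray(pred_cover) - np.asarray(true_cover))
    summary = {"Total": errors.size,
               "%Mean": errors.mean(),
               "%Median": np.median(errors)}
    for t in thresholds:
        summary[f">{t}%"] = 100.0 * np.mean(errors > t)  # share of scenes exceeding threshold
    return summary
```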
Method | Metric | Barren | Forest | Grass-Crops | Shrubland | Snow-Ice | Urban | Water | Wetlands | Mean |
---|---|---|---|---|---|---|---|---|---|---|
CloudFCN (RGB) | %Correct | 78.92 | 82.69 | 94.69 | 93.41 | 49.65 | 93.41 | 92.34 | 77.36 | 82.81 |
 | %Omission | 12.24 | 23.20 | 4.08 | 8.82 | 7.25 | 2.88 | 6.34 | 24.32 | 11.14 |
 | %Commission | 29.89 | 11.00 | 7.36 | 4.47 | 76.96 | 8.64 | 8.18 | 19.90 | 20.80 |
 | %Quality | 36.79 | 48.48 | 83.25 | 80.11 | −34.56 | 81.89 | 77.82 | 33.14 | 50.87 |
CloudFCN (Multispectral) | %Correct | 92.95 | 95.12 | 96.12 | 88.68 | 72.93 | 95.56 | 95.43 | 91.24 | 91.00 |
 | %Omission | 4.70 | 7.21 | 6.27 | 19.66 | 17.60 | 2.21 | 4.62 | 13.16 | 9.43 |
 | %Commission | 8.95 | 1.92 | 1.79 | 3.23 | 27.53 | 5.75 | 4.50 | 3.89 | 7.19 |
 | %Quality | 79.30 | 85.99 | 88.06 | 65.79 | 27.80 | 87.61 | 86.31 | 74.19 | 74.38 |
ACCA | %Quality | 63.02 | 68.69 | 62 | 60.47 | 36.25 | 68.33 | 71.43 | 62.48 | 61.56 |
AT-ACCA | %Quality | 66.67 | 73.83 | 74.09 | 70.65 | 35.86 | 74.06 | 70.51 | 76.25 | 67.72 |
cfmask | %Quality | 77.1 | 67.27 | 85.74 | 75.53 | 26.37 | 74.72 | 50.98 | 65.97 | 65.69 |
cfmask-conf | %Quality | 66.78 | 66.72 | 83.59 | 72.3 | 20.75 | 76.54 | 51.11 | 67.45 | 63.63 |
cfmask-nt-cirrus | %Quality | 54.23 | 57.2 | 70.71 | 71.58 | −15.87 | 74.37 | 50.23 | 47.16 | 51.62 |
cfmask-nt-cirrus-conf | %Quality | 54.44 | 38.79 | 60.01 | 66.38 | −43.68 | 73.2 | 49.04 | 35.14 | 41.66 |
cfmask-t-cirrus | %Quality | 69.82 | 64.78 | 77.98 | 72.75 | −24.1 | 72.42 | 57.21 | 53.27 | 49.01 |
cfmask-t-cirrus-conf | %Quality | 69.37 | 43.99 | 77.76 | 72.34 | −52.76 | 74.72 | 57.24 | 52.14 | 49.63 |
See5 | %Quality | 54.19 | 51.88 | 42.15 | 42.46 | 35.48 | 57.4 | 39.35 | 68.17 | 49.17 |
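The %Quality figures above are consistent with quality = correct − omission − commission (e.g., for CloudFCN (RGB) on Barren, 78.92 − 12.24 − 29.89 = 36.79). A minimal sketch of computing such rates from binary cloud masks; the exact denominators used for the omission and commission rates are an assumption here (missed cloud as a fraction of true cloud, and false cloud as a fraction of predicted cloud, respectively), not something stated in the table:

```python
import numpy as np

def cloud_metrics(pred, truth):
    """Percentage rates from binary cloud masks (1 = cloud).
    Denominator conventions are assumptions, not taken from the paper."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    correct = np.mean(pred == truth) * 100                              # overall pixel accuracy
    omission = np.mean(~pred[truth]) * 100 if truth.any() else 0.0      # missed cloud / true cloud
    commission = np.mean(~truth[pred]) * 100 if pred.any() else 0.0     # false cloud / predicted cloud
    quality = correct - omission - commission
    return correct, omission, commission, quality
```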
© 2019 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Francis, A.; Sidiropoulos, P.; Muller, J.-P. CloudFCN: Accurate and Robust Cloud Detection for Satellite Imagery with Deep Learning. Remote Sens. 2019, 11, 2312. https://doi.org/10.3390/rs11192312