Flood Inundation Mapping from Optical Satellite Images Using Spatiotemporal Context Learning and Modest AdaBoost
Figure 1. (a) Location of the first study site near the border of Russia and China; (b) extent of the HJ-1A CCD data used in this study.
Figure 2. (a) Location of the second study site in the north of Hunan Province, China; (b) extent of the GF-4 PMS data used in this study.
Figure 3. Flowchart of the permanent pixel extraction procedure.
Figure 4. Flowchart of the inundation mapping procedure.
Figure 5. False colour composites (R4G3B2) of HJ-1A CCD images acquired on (a) 12 July 2013 (before the flood) and (b) 27 August 2013 (after the flood) for the first case study. (c) Corresponding MOD44W water mask product, with water in blue and land in black.
Figure 6. (a–d) Flood inundation mapping results for the first case study using the K-MEANs, SP-MADB, STP-MADB and STCLP-MADB methods (gray: flood pixels in both the detection and reference maps; blue: flood pixels only in the detection map; yellow: flood pixels only in the reference map; black: background). (e) Locations of the four sub-regions (red rectangles) on the reference inundation map derived from the GF-1 WFV data, with water in yellow and the background in black.
Figure 7. From left to right: regions of interest A, B, C and D for the first case study. From the second to the fifth row: the corresponding detection and reference maps, with flood in blue and the background in black.
Figure 8. First case study: commission, omission and overall accuracy as a function of n, the percentage of permanent pixels.
Figure 9. False colour composites (R: 5, G: 4, B: 3) of GF-4 PMS images acquired on (a) 17 June 2016 (before the flood) and (b) 23 July 2016 (after the flood) for the second test case. (c) Corresponding MOD44W water mask product, with water in blue and land in black.
Figure 10. (a–d) Flood inundation mapping results for the second test area using the K-MEANs, SP-MADB, STP-MADB and STCLP-MADB methods (gray: flood pixels in both the detection and reference maps; blue: flood pixels only in the detection map; yellow: flood pixels only in the reference map; black: background). (e) Locations of the four sub-regions (red rectangles) on the reference inundation map derived from the HJ-1B CCD data, with water in yellow and the background in black.
Figure 11. From left to right: regions of interest A, B, C and D for the second case study. From the second to the fifth row: the corresponding detection and reference maps, with flood in blue and the background in black.
Figure 12. Second test: commission, omission and overall accuracy as a function of n, the percentage of permanent pixels.
Abstract
1. Introduction
2. Experimental Set Up
2.1. Datasets
2.2. Study Area
2.3. Validation
3. Methods
3.1. Permanent Pixel Extraction Using Spatiotemporal Context Learning
3.2. Inundation Mapping Based on Modest AdaBoost
3.2.1. Permanent Pixels Labelling
3.2.2. Inundation Mapping
4. Results
4.1. Inundation Mapping Using HJ-1A CCD Data
- (1) The proposed inundation mapping method, based on STCL permanent pixel extraction and MADB, successfully extracts most of the flood regions in the first case study. In each column of Table 4, the STCLP-MADB method yields the number of inundated pixels closest to the reference, except in sub-region C, where it is the second best and its count is almost identical to that of the best method. This demonstrates the effectiveness of the HJ-1A CCD data and the proposed procedure for mapping wide inundated areas in a river flood event.
- (2) On the whole, the main flood regions are well delineated by each inundation mapping algorithm, except for small tributaries, for example those near sub-regions A and D, which are omitted by the SP-MADB method and shown in yellow. Visually, STCLP-MADB performs better than the other three methods: in these regions, the K-MEANs and STP-MADB results present more false alarms, while the SP-MADB method produces more omissions. The result derived from STCLP-MADB is the most consistent with the reference map (a sketch of how these colour-coded comparison maps are composed follows this list). Its ability to delineate the flood precisely is valuable for inferring the future evolution of the flood.
- (3) The detailed mapping results show that the inundated regions are delineated differently by the different methods. In the map derived using K-MEANs, many false positives appear in the unflooded regions, whereas the SP-MADB method produces more false negatives in some small flood regions and half-submerged regions. Better results are obtained by the STP-MADB and STCLP-MADB methods; a closer comparison of the details shows that the STCLP-MADB results provide finer outlines and are slightly better.
- (4) Although the results from STCLP-MADB are quite promising, some false positive errors remain, mainly in small unflooded areas surrounded by large flooded areas. For example, in Figure 6d, some pixels shown in blue inside the main flood region are unflooded areas that the STCLP-MADB method labelled as flood. This is because these areas mostly comprise mixed pixels: different proportions and locations of water within a mixed pixel influence which class the pixel is assigned to.
- (1) STCLP-MADB achieves the highest overall accuracy and kappa coefficient among the four methods, showing that it outperforms the others in terms of quantitative evaluation (see the sketch of the metric computation after this list). Extending the SP strategy to the STP strategy improves the mapping accuracy, and using the STCL confidence calculation instead of a simple counting strategy in STP further enhances the results.
- (2) With incomplete flood information, different flood detectors produce different commission and omission errors. The best omission and commission rates are achieved by the K-MEANs and SP-MADB methods, respectively. However, there is always a trade-off between omission and commission: a decrease in omission errors usually brings an increase in commission errors, and vice versa. As can be seen from Table 5, the high commission and omission rates limit the ability of the K-MEANs and SP-MADB methods for inundation mapping, as illustrated in Figures 6 and 7, while the STCLP-MADB method strikes a balance between the two rates and provides a more acceptable result.
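For completeness, the quantitative measures reported here (overall accuracy, kappa coefficient, omission and commission rates) can be derived from the same pair of binary maps. The sketch below is a plausible implementation, assuming omission is computed relative to the reference flood area and commission relative to the detected flood area; the authors' exact definitions may differ in detail.

```python
import numpy as np

def accuracy_metrics(detection, reference):
    """Confusion-matrix metrics for a binary flood map versus a reference map.

    detection, reference: 2-D boolean arrays (True = flood).
    """
    tp = np.sum(detection & reference)     # flood in both maps
    fp = np.sum(detection & ~reference)    # false alarms (commission errors)
    fn = np.sum(~detection & reference)    # missed flood pixels (omission errors)
    tn = np.sum(~detection & ~reference)   # background in both maps
    total = tp + fp + fn + tn

    overall_accuracy = (tp + tn) / total
    # Expected chance agreement, for Cohen's kappa.
    p_chance = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / total**2
    kappa = (overall_accuracy - p_chance) / (1 - p_chance)

    omission = fn / (tp + fn)      # fraction of reference flood pixels missed
    commission = fp / (tp + fp)    # fraction of detected flood pixels that are false
    return overall_accuracy, kappa, omission, commission
```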
4.2. Inundation Mapping Using GF-4 PMS Data
- (1) In terms of classification performance, the results in the second test are similar to those in the first. The proposed STCLP-MADB method again achieves the best overall accuracy and kappa coefficient. The K-MEANs and STP-MADB methods achieve the best omission and commission rates, respectively, while STCLP-MADB balances the two. As the two test datasets come from different sensors, locations and inundation events, this experiment further demonstrates the robustness of the proposed method.
- (2) K-MEANs makes use of the statistical properties of the whole image, which causes a high commission rate because inundated pixels can appear differently in different contextual situations. SP-MADB and STP-MADB pay more attention to local characteristics, but they determine permanent pixels by counting, which lacks a theoretical foundation and is easily disturbed. This can be seen by comparing Figure 6c and Figure 10c: in the first case, using HJ-1A CCD data, the STP-MADB method produces more commission, whereas in the second case study, using GF-4 PMS data, the STP-MADB result contains more omission than commission. In the proposed method, a spatiotemporal context confidence model is adopted to overcome the limitation of counting (a simplified, hypothetical sketch of such a confidence computation follows this list). With this formalised combination of local spatiotemporal and spectral information, we obtain a more accurate and robust inundation map than with the other methods.
- (3) The accuracy curves as a function of n are more unstable than those in the first test, and the influence of n on the result precision does not change monotonically; it is difficult to discern any clear pattern in the curves. This could be because the outline of the inundation is more complicated in the second case study than in the first. Moreover, the spatial resolution of the GF-4 PMS data is coarser than that of the HJ-1A CCD data, which introduces more mixed pixels. With these additional uncertainties, the variation in accuracy becomes less predictable. Nevertheless, the fluctuation remains within a limited range, and the effectiveness of the proposed method is rather stable.
- (4) As the GF-4 satellite was only recently put into service (June 2016), research on GF-4 PMS data is still scarce. Our work explores the applied value of this new dataset and demonstrates its effectiveness for inundation mapping; more research on GF-4 PMS data could be carried out in the future.
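Item (2) above contrasts simple neighbourhood counting with a spatiotemporal context confidence calculation for selecting permanent pixels (pixels that are confidently water, or confidently land, across the acquisition dates). The authors' STCL formulation is not reproduced in this section, so the sketch below is a simplified, hypothetical stand-in: each pixel's confidence is the temporal agreement of its water/land labels, weighted spatially by a Gaussian kernel, and the top n% most confident pixels on each side are kept as training samples. Function and parameter names (water_masks, n_percent, sigma) are assumptions for illustration, and the subsequent Modest AdaBoost classification of the remaining pixels using their spectral features is not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def permanent_water_confidence(water_masks, sigma=2.0):
    """Confidence that each pixel is permanent water.

    water_masks: 3-D boolean array (time, rows, cols), one water/land
    mask per acquisition date.
    Instead of counting how often a pixel is flagged as water, the
    temporal agreement is smoothed with a Gaussian kernel so that a
    pixel surrounded by consistently wet neighbours receives a higher
    confidence than an isolated, noisy detection.
    """
    temporal_agreement = water_masks.mean(axis=0)            # fraction of dates flagged as water
    confidence = gaussian_filter(temporal_agreement, sigma)  # spatial context weighting
    return confidence

def extract_permanent_pixels(water_masks, n_percent=10, sigma=2.0):
    """Select the top n% most confident water and land pixels as
    training samples (the 'permanent pixels' fed to the classifier).
    Returns flat (1-D) indices into the image."""
    conf = permanent_water_confidence(water_masks, sigma).ravel()
    k = int(conf.size * n_percent / 100)
    water_idx = np.argpartition(conf, -k)[-k:]   # highest confidence: permanent water
    land_idx = np.argpartition(conf, k)[:k]      # lowest confidence: permanent land
    return water_idx, land_idx
```

Under this reading, increasing n admits more, but progressively less reliable, training pixels, which is consistent with the accuracy fluctuations with n discussed in item (3).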
5. Discussion
6. Conclusions
Supplementary Materials
Supplementary File 1
Acknowledgments
Author Contributions
Conflicts of Interest
References
Table 1. Characteristics of the HJ-1A/B CCD sensor.

| Satellite Sensor | Band No. | Spectral Range (µm) | Spatial Resolution (m) | Revisiting Time |
|---|---|---|---|---|
| HJ-1A/B CCD | 1 | 0.43–0.52 | 30 | 4 days |
| | 2 | 0.52–0.60 | | |
| | 3 | 0.63–0.69 | | |
| | 4 | 0.76–0.90 | | |
Table 2. Characteristics of the GF-4 PMS sensor.

| Satellite Sensor | Band No. | Spectral Range (µm) | Spatial Resolution (m) | Revisiting Time |
|---|---|---|---|---|
| GF-4 PMS | 1 | 0.45–0.90 | 50 | 20 s |
| | 2 | 0.45–0.52 | | |
| | 3 | 0.52–0.60 | | |
| | 4 | 0.63–0.69 | | |
| | 5 | 0.76–0.90 | | |
Table 3. Characteristics of the GF-1 WFV sensor.

| Satellite Sensor | Band No. | Spectral Range (µm) | Spatial Resolution (m) | Width |
|---|---|---|---|---|
| GF-1 WFV | 1 | 0.45–0.52 | 16 | 800 km |
| | 2 | 0.52–0.59 | | |
| | 3 | 0.63–0.69 | | |
| | 4 | 0.77–0.89 | | |
Table 4. Number of inundated pixels detected by each method in the first case study.

| Method | Full Region | Sub-Region A | Sub-Region B | Sub-Region C | Sub-Region D |
|---|---|---|---|---|---|
| K-MEANs | 528,238 | 1833 | 16,236 | 7778 | 1442 |
| SP-MADB | 710,587 | 96 | 8470 | 5207 | 0 |
| STP-MADB | 843,570 | 1750 | 16,060 | 7009 | 1345 |
| STCLP-MADB | 812,610 | 1308 | 13,884 | 7011 | 1047 |
| Reference Map | 764,470 | 1176 | 11,861 | 6267 | 873 |
Table 5. Accuracy assessment of the inundation maps for the first case study.

| Method | Overall Accuracy (%) | Kappa | Omission (%) | Commission (%) |
|---|---|---|---|---|
| K-MEANs | 87.48 | 0.7450 | 2.77 | 17.51 |
| SP-MADB | 90.73 | 0.8146 | 12.18 | 5.53 |
| STP-MADB | 91.22 | 0.8220 | 3.03 | 12.13 |
| STCLP-MADB | 92.25 | 0.8435 | 4.10 | 9.78 |
Table 6. Number of inundated pixels detected by each method in the second case study.

| Method | Full Region | Sub-Region A | Sub-Region B | Sub-Region C | Sub-Region D |
|---|---|---|---|---|---|
| K-MEANs | 2,417,535 | 74,610 | 21,785 | 10,363 | 13,001 |
| SP-MADB | 1,103,408 | 23,946 | 1488 | 5884 | 6163 |
| STP-MADB | 873,738 | 16,908 | 1468 | 5233 | 3848 |
| STCLP-MADB | 1,215,464 | 30,502 | 2116 | 6668 | 6961 |
| Reference Map | 1,357,670 | 37,529 | 3654 | 7963 | 7510 |
Table 7. Accuracy assessment of the inundation maps for the second case study.

| Method | Overall Accuracy (%) | Kappa | Omission (%) | Commission (%) |
|---|---|---|---|---|
| K-MEANs | 79.73 | 0.5613 | 3.24 | 45.66 |
| SP-MADB | 92.88 | 0.7877 | 26.43 | 4.29 |
| STP-MADB | 91.01 | 0.7190 | 36.58 | 1.46 |
| STCLP-MADB | 93.66 | 0.8195 | 18.47 | 8.93 |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).