Deep Learning Network for Flood Extent Mapping Based on the Integration of Sentinel-2 and MODIS Satellite Imagery

Document Type: Research Paper

Authors

1 Department of Photogrammetry, Faculty of Surveying, Khajeh Nasir Toosi University of Technology, Tehran, Iran

2 Department of Surveying Engineering and Architectural Engineering, Faculty of Civil Engineering, Noshirvani University of Technology, Babol, Iran

Abstract

Introduction:

Flooding is a natural hazard that causes many deaths each year, and its frequency is increasing worldwide as a result of climate change. Assessing the damage caused by natural disasters such as floods therefore provides essential information for decision-making and policy development in natural hazard management and climate-change planning. In recent years, various methods for classifying remote sensing images have been developed, but they continue to face challenges in distinguishing diverse land uses. A further challenge in flood crisis management is the lack of satellite imagery with high temporal resolution that also preserves spatial resolution, a problem aggravated by the cloud cover that typically accompanies floods. The purpose of this study is to identify flooded areas in Khuzestan province following the 2019 flood (1398 in the Iranian calendar), based on the fusion of Sentinel-2 and MODIS optical images to produce a time series with relatively good spatial and temporal resolution. For classification and map production, a patch-based hierarchical convolutional neural network was designed to address the difficulty of extracting deep features from the relatively weak structure of images with resolutions coarser than 10 meters. In addition, the effect of different neighborhood (patch) sizes on deep feature extraction was investigated for all images. Finally, the flood damage to urban land cover and various agricultural land uses was estimated at successive dates during the flood period.

Material and methods:

The data used in this research are two sets of satellite images: Sentinel-2 MSI Level-1C images with a spatial resolution of 10 meters and the MODIS daily surface reflectance product (MOD09GA) with a spatial resolution of 500 meters. The workflow comprises seven phases:

1. The data are pre-processed.
2. The image fusion algorithm is applied to predict the daily surface reflectance of the images; if the error and accuracy of the predicted images are acceptable, a time series covering the flood period is produced.
3. Ground truth maps are prepared by the researcher through image interpretation.
4. Training samples are drawn from these data for the various classifiers (deep learning and classical machine learning approaches), and the proposed network is run with different input patch sizes. Notably, the training and validation samples for the deep networks were kept very limited, at less than half a percent of the image pixels, to increase automation and reduce user dependence.
5. Damage assessment maps for the agricultural and vegetated regions are produced with the best-performing approach from the previous phase.
6. Accuracy is assessed with the confusion matrix and its derived criteria.
7. The area of each flood-affected land use is estimated.
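As an illustration of the sample-preparation phase, extracting fixed-size neighborhoods around labeled pixels for a patch-based classifier can be sketched as follows. This is a hypothetical NumPy helper, not the authors' code; the array layout (height × width × bands, with a label raster where 0 means unlabelled) is an assumption.

```python
import numpy as np

def extract_patches(image, labels, patch_size):
    """Extract square patches centred on each labelled pixel.

    image  : (H, W, bands) array of reflectance values
    labels : (H, W) integer array, 0 = unlabelled, >0 = class id
    Returns (patches, classes) ready for a patch-based classifier.
    """
    half = patch_size // 2
    # Reflect-pad so border pixels also get full-size patches.
    padded = np.pad(image, ((half, half), (half, half), (0, 0)), mode="reflect")
    rows, cols = np.nonzero(labels)
    patches = np.stack([
        padded[r:r + patch_size, c:c + patch_size, :]
        for r, c in zip(rows, cols)
    ])
    return patches, labels[rows, cols]
```

Trying several `patch_size` values (the study tests 3 through 11) only changes the neighborhood each sample carries, not the number of samples.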

Discussion:

The present study addresses one of the most important issues in national crisis management: assessing the damage caused by sudden flood events. Accordingly, its objectives were to present a method that is fast relative to existing approaches and to increase the accuracy of the final maps, a recognized challenge. First, the ESTARFM fusion algorithm was used to build an optical time series with appropriate spatial and temporal resolution. The evaluations performed on the two fused images show that this algorithm is efficient and accurate in areas with heterogeneous cover. Because environmental conditions changed between the image dates, the largest errors occurred in the water-sensitive bands, but the small magnitude of the errors in every band confirms the suitability of the algorithm. Moreover, since two images at different positions in the time series were predicted, the generalizability of the algorithm was examined and confirmed. Regarding the classification algorithms used to prepare the damage map, the proposed neural network achieved markedly higher accuracy than the other approaches. In the per-class analysis, the proposed approach identified built-up areas far better than the other algorithms while maintaining good accuracy for the remaining classes, especially water. According to the results, flooding in the study area peaked in the third week of April and declined thereafter, so damage was estimated for April 14 and April 21. Between these dates, flooding decreased in built-up areas and in rainfed and fallow lands, and increased in wetland and aquatic cultivation areas.
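The confusion-matrix criteria behind the accuracy comparisons above can be illustrated with a short sketch computing overall accuracy and Cohen's kappa. This is illustrative code, not the authors' implementation; class ids are assumed to run from 0 to n_classes - 1.

```python
import numpy as np

def confusion_metrics(y_true, y_pred, n_classes):
    """Overall accuracy and Cohen's kappa from a confusion matrix."""
    cm = np.zeros((n_classes, n_classes), dtype=np.int64)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    n = cm.sum()
    po = np.trace(cm) / n                       # observed agreement (overall accuracy)
    pe = (cm.sum(0) * cm.sum(1)).sum() / n**2   # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    return po, kappa
```

Per-class producer's and user's accuracies follow the same pattern, dividing the diagonal by the corresponding column and row sums of `cm`.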

Conclusion:

In this research, the ESTARFM image fusion algorithm, known to be suitable for combining images over heterogeneous regions, was applied to the April 8 and April 14 images, and the results were evaluated with scatter plots and least-squares error. The results demonstrate the efficiency of the method for integrating relatively high-resolution Sentinel-2 images with low-resolution MODIS images in flood management. For identifying flooded areas, the weak structure of images with resolutions coarser than 10 meters makes it difficult to extract optimal deep features. The present study therefore designed a patch-based convolutional neural network with a minimum of layers and hyper-parameters, which can be trained from scratch with very few training samples and without overfitting, for images acquired under different environmental conditions. To find the optimal configuration, different input patch sizes were tested on all images to compare the effect of different neighborhoods: of the patch sizes from 3 to 11, the 5 × 5 and 7 × 7 patches performed best on the pre-flood image, and the 9 × 9 and 11 × 11 patches on the post-flood images. The results were compared with object-based and pixel-based SVM and with LCNN and DCNN neural networks using 3 × 3 and 5 × 5 inputs, following the reference research, and showed a significant improvement in accuracy. Runtime comparisons across all approaches showed that the proposed approach with 3 × 3 and 5 × 5 patches was the fastest and the DCNN network with 5 × 5 inputs the slowest. Given the importance of time in crisis management and the need for rapid map production, the proposed approach offers an appropriate response.
If time and accuracy are weighed together, the designed network with 9 × 9 inputs is recommended, because it satisfies both the accuracy and the runtime requirements.
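A patch-based CNN of the kind described, with few layers and hyper-parameters so it can be trained from scratch on very small sample sets without overfitting, could be sketched in Keras as follows. The layer choices here are assumptions for illustration, not the paper's exact architecture.

```python
from tensorflow import keras
from tensorflow.keras import layers

def build_patch_cnn(patch_size=9, n_bands=4, n_classes=6):
    """Minimal patch classifier: two small convolutions, global
    pooling, and dropout keep the parameter count low so training
    from scratch on <0.5% of the pixels remains feasible."""
    model = keras.Sequential([
        layers.Input(shape=(patch_size, patch_size, n_bands)),
        layers.Conv2D(16, 3, activation="relu", padding="same"),
        layers.Conv2D(32, 3, activation="relu", padding="same"),
        layers.GlobalAveragePooling2D(),
        layers.Dropout(0.3),
        layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```

Because the convolutions use `padding="same"` and pooling is global, the same builder accepts any of the tested patch sizes (3 × 3 up to 11 × 11) without structural changes.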
