Multispectral Drone Imagery Accelerates Flood Risk Mapping

2022-08-19 20:33:09 By: Ms. Sandy Li


Monitoring areas susceptible to natural disasters has become a global priority. Effective methods and pertinent land cover data are required to decrease disaster risk. In a study published in the journal Sensors, researchers created a high-resolution land cover map of flood-prone rural settlements using multispectral drone data.

Study: Land Cover Classification from Very High-Resolution UAS Data for Flood Risk Mapping. Image Credit: Vadven/Shutterstock.com

Extreme floods threaten the safety of a region and can cause substantial damage. Planning against climate change hazards is essential to protecting people, communities, and nations, as well as their livelihoods and health, economic assets, cultural heritage, and ecosystems.


The Sendai Framework is an international agreement that aims to reduce disaster risk and losses to lives, livelihoods, health, and economic, cultural, physical, social, and environmental assets of businesses, individuals, communities, and countries.

Insufficient resistance to natural disasters can weaken or stop the progress of sustainable development goals (SDGs).

Understanding the relationship between land cover (LC) and geophysical hazards is essential. LC maps offer critical information for evaluating and developing flood risk management strategies. Land cover also influences flood behavior itself, for instance through differences in infiltration and surface runoff.

Land cover mapping provides data on flood-prone assets, including agricultural land, infrastructure, and human settlements.

Urban areas, soil, agricultural regions, and vegetation have varying permeability. The dominance of one over the others, or an imbalance in their distribution, significantly affects flood behavior.

UAS data are rarely used for LC mapping because their significant radiometric heterogeneity and limited spectral resolution complicate automated classification.

Object-oriented classification approaches effectively identify the high-resolution land cover in images, such as those obtained by unmanned aerial systems (UAS).

Several studies employed different methods and models to classify land cover from very high resolution (VHR) optical data. They differed primarily in training sample size, approach, and input dataset.

Belcore et al. defined high-thematic resolution land cover classes related to flood risk mitigation planning.

Two photogrammetric unmanned aerial systems (UAS) flights were performed with fixed-wing NIR and RGB optical sensors. The LC input dataset was created using the standard structure from motion (SfM) method, yielding a digital surface model (DSM) and two orthomosaics.

The LC scheme comprised nine classes designed to quantify flood-related potential losses, covering assets such as residences and production sectors. A random forest (RF) classifier produced the LC map through object-oriented supervised classification.
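The idea of object-oriented RF classification is that each image segment, rather than each pixel, becomes one sample described by summary features. The following is a minimal sketch of that workflow using scikit-learn; the feature columns, class count, and random data are illustrative assumptions, not values from the paper.

```python
# Hypothetical sketch of per-segment (object-oriented) random forest
# classification. Features and labels are randomly generated stand-ins;
# the paper's actual nine LC classes and feature set are not reproduced here.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Toy per-segment feature table: e.g. [mean_red, mean_nir, mean_height, texture]
X_train = rng.random((200, 4))
# Toy labels standing in for LC classes (3 here for brevity, 9 in the study)
y_train = rng.integers(0, 3, size=200)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Classify new segments: one LC label per segment
X_new = rng.random((5, 4))
pred = clf.predict(X_new)
print(pred.shape)  # (5,)
```

In practice the feature table would be built by averaging spectral bands, elevation, and texture measures over each polygon produced by the segmentation step.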

Textural and elevation parameters were calculated to address mapping challenges caused by the high spectral uniformity of cover types.

Special consideration was given to the classes' definition to meet flood risk management standards and appropriately identify flood-exposed structures from a geometrical standpoint. The buildings were subjected to geometric validation as part of the segmentation process.

The training-test dataset was manually created.

The segmentation was completed in 11 hours and consisted of 34,439 items, whereas the classification took around four hours.

The segmentation yielded an F1 score of 0.70 and a median Jaccard index of 0.88. The RF model achieved an overall accuracy of 0.94, with lower performance over areas of concentrated rocks and grasslands.
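The Jaccard index used to score segment quality is the intersection-over-union of a predicted segment against its reference polygon. A minimal sketch with toy boolean masks (not data from the study):

```python
# Illustrative Jaccard index (intersection over union) between a reference
# polygon mask and a predicted segment mask; arrays here are toy examples.
import numpy as np

def jaccard(ref, seg):
    """IoU between two boolean masks; 1.0 if both are empty."""
    ref, seg = ref.astype(bool), seg.astype(bool)
    inter = np.logical_and(ref, seg).sum()
    union = np.logical_or(ref, seg).sum()
    return inter / union if union else 1.0

ref = np.zeros((6, 6), dtype=bool); ref[1:5, 1:5] = True  # 16-px reference
seg = np.zeros((6, 6), dtype=bool); seg[2:6, 2:6] = True  # 16-px segment
print(round(jaccard(ref, seg), 3))  # 9-px overlap, 23-px union -> 0.391
```

A median Jaccard of 0.88, as reported, indicates that most segments overlap their reference polygons closely.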

The area-based assessments did not confirm the segmentation method's tendency to over-segment: the median over-segmentation index was 0.32, whereas the median under-segmentation index was 0.63.

The demand for additional input features grew with spatial and thematic resolution, and textural information proved critical to both classification and segmentation. Measurements derived from the grey level co-occurrence matrix (GLCM) significantly influenced both procedures.
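A GLCM counts how often pairs of grey levels co-occur at a fixed pixel offset; texture measures such as contrast are then derived from the normalized matrix. The following is a minimal pure-NumPy sketch of the idea (production code would typically use a library routine and the study's actual band data):

```python
# Minimal grey level co-occurrence matrix (GLCM) and a derived contrast
# measure, assuming imagery quantized to a small number of grey levels.
import numpy as np

def glcm(image, levels, dx=1, dy=0):
    """Normalized co-occurrence counts for pixel pairs offset by (dy, dx)."""
    m = np.zeros((levels, levels), dtype=float)
    h, w = image.shape
    for y in range(h - dy):
        for x in range(w - dx):
            m[image[y, x], image[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_contrast(p):
    """Contrast = sum over (i, j) of (i - j)^2 * p(i, j)."""
    i, j = np.indices(p.shape)
    return float(((i - j) ** 2 * p).sum())

img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 2, 2, 2],
                [2, 2, 3, 3]])
p = glcm(img, levels=4)
print(round(glcm_contrast(p), 3))  # -> 0.583
```

High contrast flags texture-rich covers (e.g. rocky areas), which helps separate classes that look nearly identical spectrally.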

A single pixel of a given scene element exhibits substantial spectral variance owing to the high spectral diversity of very high-resolution (VHR) imagery. This is the primary reason object-based image analysis (OBIA) approaches are widely used for classification. However, segmentation relies heavily on the analyst's expertise.

Although the final classification accuracy is satisfactory for the risk reduction strategy, further study is required to identify shareable and efficient segmentation methodologies.

The final classifications met the demands of the planners and overcame the criticalities associated with the study area's high cover variability and spatial resolution.

The final precision is sufficient for disaster risk planning, allowing for the exact identification of exposed facilities and calculation of probable flood-induced losses in the region.

Belcore, E., Piras, M., & Pezzoli, A. (2022). Land Cover Classification from Very High-Resolution UAS Data for Flood Risk Mapping. Sensors, 22(15), 5622. https://www.mdpi.com/1424-8220/22/15/5622


NEBOSH certified Mechanical Engineer with 3 years of experience as a technical writer and editor. Owais is interested in occupational health and safety, computer hardware, industrial and mobile robotics. During his academic career, Owais worked on several research projects regarding mobile robots, notably the Autonomous Fire Fighting Mobile Robot. The designed mobile robot could navigate, detect and extinguish fire autonomously. Arduino Uno was used as the microcontroller to control the flame sensors' input and output of the flame extinguisher. Apart from his professional life, Owais is an avid book reader and a huge computer technology enthusiast and likes to keep himself updated regarding developments in the computer industry.

Please use one of the following formats to cite this article in your essay, paper or report:

Ali, Owais. (2022, July 29). Multispectral Drone Imagery Accelerates Flood Risk Mapping. AZoOptics. Retrieved on August 19, 2022 from https://www.azooptics.com/News.aspx?newsID=27757.



