
Eye of Horus: A Vision-based Framework for Real-time Water Level Measurement
  • Seyed Mohammad Hassan Erfani, University of South Carolina
  • Corinne Smith, University of South Carolina
  • Zhenyao Wu, University of South Carolina
  • Elyas Asadi Shamsabadi, University of Sydney
  • Farboud Khatami, University of South Carolina
  • Austin R.J. Downey, University of South Carolina
  • Jasim Imran, University of South Carolina
  • Erfan Goharian, University of South Carolina (Corresponding Author: [email protected])

Abstract

Heavy rains and tropical storms often result in floods, which are expected to increase in frequency and intensity. Flood prediction models and inundation mapping tools provide decision-makers and emergency responders with crucial information to better prepare for these events. However, the performance of these models relies on the accuracy and timeliness of data received from in-situ gaging stations and remote sensing; each of these data sources has its limitations, especially for real-time flood monitoring. This study presents a vision-based framework for measuring water levels and detecting floods using computer vision and deep learning (DL) techniques. The DL models use time-lapse images captured by surveillance cameras during storm events for semantic segmentation of the water extent in images. Three DL-based approaches, namely PSPNet, TransUNet, and SegFormer, were applied and evaluated for semantic segmentation. The predicted masks are transformed into water level values by intersecting the extracted water edges with the 2D representation of a point cloud generated by an Apple iPhone 13 Pro LiDAR sensor. The estimated water levels were compared to reference data collected by an ultrasonic sensor. The results showed that SegFormer outperformed the other DL-based approaches, achieving 99.55% Intersection over Union (IoU) and 99.81% accuracy. Moreover, the highest agreement between the reference data and the vision-based estimates exceeded 0.98 for both the coefficient of determination (R²) and the Nash-Sutcliffe Efficiency. This study demonstrates the potential of using surveillance cameras and artificial intelligence for hydrologic monitoring and their integration with existing surveillance infrastructure.
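The implementation details are not given in this abstract, but the final stages of the pipeline can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: it assumes the segmentation model outputs a binary water mask, that the LiDAR point cloud has already been projected into a camera-aligned 2D elevation raster (called `elevation_map` here), and the helper names (`water_level_from_mask`, `nse`, etc.) are hypothetical. Only the evaluation metrics (IoU, accuracy, R², NSE) follow their standard definitions.

```python
import numpy as np

def water_level_from_mask(water_mask: np.ndarray, elevation_map: np.ndarray) -> float:
    """Hypothetical sketch: read elevations along the upper water edge.

    water_mask: HxW boolean array (True = water pixel) from the segmentation model.
    elevation_map: HxW array of elevations, assumed to come from projecting the
    LiDAR point cloud into the camera frame and co-registering it with the image.
    """
    levels = []
    for col in range(water_mask.shape[1]):
        rows = np.flatnonzero(water_mask[:, col])
        if rows.size:                                   # topmost water pixel in this column
            levels.append(elevation_map[rows.min(), col])
    return float(np.median(levels))                     # robust summary of the edge elevations

def iou_and_accuracy(pred_mask: np.ndarray, true_mask: np.ndarray):
    """Pixel-wise IoU and accuracy for binary water masks."""
    pred, true = pred_mask.astype(bool), true_mask.astype(bool)
    union = np.logical_or(pred, true).sum()
    iou = np.logical_and(pred, true).sum() / union if union else 1.0
    return iou, (pred == true).mean()

def r_squared(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Squared Pearson correlation between estimated and reference water levels."""
    return np.corrcoef(estimated, reference)[0, 1] ** 2

def nse(estimated: np.ndarray, reference: np.ndarray) -> float:
    """Nash-Sutcliffe Efficiency of estimated levels against the ultrasonic reference."""
    return 1.0 - np.sum((reference - estimated) ** 2) / np.sum((reference - reference.mean()) ** 2)
```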
07 Mar 2023: Submitted to ESS Open Archive
09 Mar 2023: Published in ESS Open Archive