Natural disasters ravage the world's cities, valleys, and shores on a monthly basis. Precise and efficient mechanisms for assessing infrastructure damage are essential to channel resources and minimize the loss of life. Using the xBD dataset, which includes labeled pre- and post-disaster satellite imagery, we train multiple convolutional neural networks to assess building damage on a per-building basis. To investigate how best to classify building damage, we present a highly interpretable deep-learning methodology that seeks to explicitly convey the most useful information required to train an accurate classification model. We also investigate which loss functions best optimize these models. We find that ordinal cross-entropy loss is the most effective loss function and that combining the type of disaster that caused the damage with pre- and post-disaster images best predicts the level of damage. We also make progress on qualitative representations of which parts of the images the model uses to predict damage levels, through gradient-weighted class activation maps. Our research seeks to contribute computationally to addressing this ongoing and growing humanitarian crisis, heightened by climate change. Specifically, it advances the study of more interpretable machine learning models, which previous literature has lacked and which are important for the understanding not only of research scientists but also of operators of such technologies in underserved regions.
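Since the abstract singles out ordinal cross-entropy as the best-performing loss, a minimal sketch may help readers unfamiliar with it. This is one common cumulative-link formulation, assuming a PyTorch model head that emits K-1 threshold logits for K ordered damage classes; the authors' exact formulation is not specified in the abstract, so all names here are illustrative.

```python
# Hypothetical sketch of an ordinal cross-entropy loss for K ordered
# damage classes (e.g., no-damage < minor < major < destroyed).
# Encoding and head layout are assumptions, not the paper's exact setup.
import torch
import torch.nn as nn

class OrdinalCrossEntropyLoss(nn.Module):
    def __init__(self, num_classes: int):
        super().__init__()
        self.num_classes = num_classes
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        # logits: (batch, K-1) threshold scores; labels: (batch,) in [0, K-1].
        # Encode label k as cumulative targets [1]*k + [0]*(K-1-k),
        # i.e. target_j = 1 iff the true class exceeds threshold j.
        thresholds = torch.arange(self.num_classes - 1, device=labels.device)
        targets = (labels.unsqueeze(1) > thresholds).float()
        return self.bce(logits, targets)

# Usage: a batch of 8 buildings with 4 ordered damage levels.
loss_fn = OrdinalCrossEntropyLoss(num_classes=4)
logits = torch.randn(8, 3)          # K-1 = 3 threshold logits per building
labels = torch.randint(0, 4, (8,))  # ground-truth damage level
loss = loss_fn(logits, labels)
```

Unlike plain cross-entropy, this encoding penalizes predictions more heavily the further they land from the true class, which matches the ordered nature of damage levels.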
Atmospheric drag and radiation pressure are the dominant non-gravitational forces acting on LEO satellites. Many different approaches have been followed to model these forces, based on the physics and the satellite characteristics, but in many cases large inconsistencies remain between the models and the accelerometer measurements. Atmospheric drag is considered the most difficult force to model, and radiation pressure models show large deviations from the measurements depending on the b′ angle and on the position of the satellite near the entrance to and exit from the Earth's shadow. Numerous models have been presented for the GRACE satellites but none for GRACE-FO. The innovation of this study is the development of data-driven atmospheric drag and radiation pressure models based only on the accelerometer measurements of the GRACE-C satellite, using least squares principles. The atmospheric drag is modelled using accelerometer measurements from the shadow segment of the orbit. An additional weighted constraint requires that, near the middle of the sunlit segment of the orbit, the modelled drag in the x-direction equal the actual measurements, since radiation pressure is nearly zero there. Subsequently, we subtract the modelled drag from the real measurements to estimate the radiation pressure, which, in turn, is modelled using a least squares frequency-domain analysis. The residual series obtained by subtracting these two models from the actual GRACE-C accelerometer measurements are analyzed in terms of local time, spatial information, and variations of the b′ angle, as well as their connection with electromagnetic changes in the upper atmosphere. The proposed models have been tested over different time periods in the last three years of GRACE-C; the rms of the residual series along the x and z axes of the accelerometer is ~2.5 nm/s², while the y-axis exhibits an rms of ~1 nm/s².
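To make the "least squares frequency-domain analysis" concrete, the sketch below fits sine/cosine terms at harmonics of the orbital frequency to an acceleration series via ordinary least squares. The orbital period, sampling rate, and synthetic data are assumptions for illustration; the study's actual parameterization and constraints are richer than this.

```python
# A minimal illustration of a least-squares frequency-domain fit of the
# kind used to model the radiation-pressure signal: a design matrix of
# sine/cosine terms at harmonics of the orbital frequency is fit to the
# accelerometer series. All values below are illustrative assumptions.
import numpy as np

def harmonic_lsq(t, acc, base_freq, n_harmonics):
    """Fit acc(t) ~ a0 + sum_k [a_k cos(2*pi*k*f*t) + b_k sin(2*pi*k*f*t)]."""
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        w = 2.0 * np.pi * k * base_freq * t
        cols.extend([np.cos(w), np.sin(w)])
    A = np.column_stack(cols)
    coeffs, *_ = np.linalg.lstsq(A, acc, rcond=None)
    return coeffs, A @ coeffs  # estimated coefficients and modelled series

# Example: one day of 1 Hz data; ~5670 s is an assumed orbital period.
t = np.arange(0.0, 86400.0, 1.0)
f_orb = 1.0 / 5670.0
acc = 1e-8 * np.cos(2 * np.pi * f_orb * t) + 2e-9 * np.random.randn(t.size)
coeffs, model = harmonic_lsq(t, acc, f_orb, n_harmonics=3)
residual_rms = np.sqrt(np.mean((acc - model) ** 2))  # residual rms in m/s^2
```

The residual rms computed this way is the quantity the abstract reports (~2.5 nm/s² along the x and z axes).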
Internal climate variability plays an important role in the abundance and distribution of phytoplankton in the global ocean. Previous studies using large ensembles of Earth system models (ESMs) have demonstrated their utility in the study of marine phytoplankton variability. These ESM large ensembles simulate the evolution of multiple alternate realities, each with a different phasing of internal climate variability. However, ESMs may not accurately represent real-world variability as recorded by satellite and in situ observations of ocean chlorophyll over the past few decades. The observational record of surface ocean chlorophyll equates to a single ensemble member in the large ensemble framework, and this can cloud the interpretation of long-term trends: are they externally forced, caused by the phasing of internal variability, or both? Here, we use a novel statistical emulation technique to place the observational record of surface ocean chlorophyll into the large ensemble framework. Much like a large initial-condition ensemble generated with an ESM, the resulting synthetic ensemble represents multiple possible evolutions of ocean chlorophyll concentration, each with a different phasing of internal climate variability. We further demonstrate the validity of our statistical approach by recreating an ESM ensemble of chlorophyll using only a single ESM ensemble member. We use the synthetic ensemble to explore the interpretation of long-term trends in the presence of internal variability. Our results suggest the potential to extend this approach to other ocean biogeochemical variables.
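The abstract does not specify the emulation technique, but one way to build a synthetic ensemble from a single record is to separate an estimated forced component from residual internal variability and then generate new realizations of the residuals by Fourier phase randomization, which preserves their power spectrum. The sketch below illustrates that idea under those stated assumptions; the linear trend, record length, and mock data are placeholders.

```python
# A schematic of one possible synthetic-ensemble construction from a single
# chlorophyll record: detrend, then rephase the residual spectrum. This is
# an illustrative stand-in for the paper's emulation method, not its method.
import numpy as np

def synthetic_members(series, n_members, rng):
    t = np.arange(series.size)
    # Estimate the forced component with a simple linear trend (a placeholder
    # for whatever forced-response estimate the actual method uses).
    slope, intercept = np.polyfit(t, series, 1)
    forced = intercept + slope * t
    resid = series - forced
    spec = np.fft.rfft(resid)
    members = []
    for _ in range(n_members):
        phases = rng.uniform(0, 2 * np.pi, spec.size)
        phases[0] = 0.0  # keep the mean component untouched
        surrogate = np.fft.irfft(np.abs(spec) * np.exp(1j * phases), n=resid.size)
        members.append(forced + surrogate)  # new phasing of internal variability
    return np.array(members)

rng = np.random.default_rng(0)
# Mock 40-year monthly chlorophyll record (mg m^-3), invented for illustration.
obs = 0.2 + 0.001 * np.arange(480) + 0.05 * rng.standard_normal(480)
ensemble = synthetic_members(obs, n_members=100, rng=rng)  # shape (100, 480)
```

Each synthetic member shares the forced trend and the residual spectrum of the observations but differs in the phasing of variability, which is exactly the property that lets one ask how often internal variability alone could mask or mimic a forced trend.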
Balancing socio-ecological systems among competing water demands is a difficult and complex task. Traditional approaches based on limited, linear growth optimization strategies overseen by command-and-control have partially failed to account for the inherent unpredictability and irreducible uncertainty affecting most water systems due to climate change. Governments and managers increasingly need to understand the driving factors of major change processes affecting multifunctional systems. In recent decades, the shift in integrated water resources management from a technocratic “top-down” to a more integrated, participatory “bottom-up” approach was motivated by the awareness that water challenges require integrated solutions and a socially legitimate planning process. Treating water flows as physical, social, political, and symbolic matters requires entwining these domains in specific configurations in which key stakeholders and decision-makers can interact directly through social learning. The literature on integrated water resources management highlights two important factors for achieving this goal: deepening stakeholders’ perceptions and ensuring their participation as a mechanism for the co-production of knowledge. Stakeholder Analysis and Governance Modelling approaches are providing useful knowledge about how to integrate social learning into water management, making the invisible visible. The first aims to identify and categorize stakeholders according to competing water demands, while the second determines interactions, synergies, overlapping discourses, expectations, and influences between stakeholders, including power relationships. The HydroSocial Cycle (HSC) analysis combines both approaches as a framework to reinforce integrated water management by focusing on stakeholder analysis and collaborative governance. This method considers that water and society are (re)making each other, so the nature and competing objectives of the stakeholders involved in complex water systems may affect their sustainability and management. Using data collected from a qualitative questionnaire and applying descriptive statistics and matrices, the HSC examines the interests, expectations, and power-influence relationships between stakeholders by addressing six main issues affecting decision-making processes: relevance, representativeness, recognition, performance, knowledge, and collaboration. The aim of this contribution is to outline this method from both theoretical and practical perspectives, highlighting the benefits of including social sciences approaches in transdisciplinary research collaborations when testing water management strategies for competing and dynamic water systems.
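To give a flavor of the descriptive statistics and matrices the HSC relies on, the sketch below averages Likert-style questionnaire scores into power and interest indices and places each actor in a quadrant. Stakeholder names, scores, and thresholds are invented for illustration; the actual HSC questionnaire spans six issues and richer relational matrices.

```python
# Illustrative stakeholder power-interest aggregation from mock questionnaire
# data (1-5 Likert scale). All names and numbers are hypothetical.
from statistics import mean

# stakeholder -> (power ratings, interest ratings) from several respondents.
responses = {
    "irrigation_community": ([5, 4, 5], [5, 5, 4]),
    "environmental_ngo":    ([2, 3, 2], [5, 4, 5]),
    "water_utility":        ([4, 4, 5], [3, 4, 3]),
    "tourism_board":        ([2, 2, 3], [2, 3, 2]),
}

for name, (power, interest) in responses.items():
    p, i = mean(power), mean(interest)
    quadrant = (
        "manage closely" if p >= 3.5 and i >= 3.5 else
        "keep satisfied" if p >= 3.5 else
        "keep informed" if i >= 3.5 else
        "monitor"
    )
    print(f"{name:22s} power={p:.1f} interest={i:.1f} -> {quadrant}")
```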
This paper focuses on the temporal evolution of seismicity and the role of fluids during major earthquake sequences that occurred in the central Apennines and the Eastern California Shear Zone-Walker Lane belt over the last two decades: the 1997 Colfiorito sequence, the 2009 L’Aquila sequence, the 2016 Amatrice-Norcia sequence, and the 2019 Ridgecrest sequence. The availability of different high-quality seismic catalogs offers the opportunity to evaluate in detail the temporal evolution of the earthquake size distribution (the b-value) and to propose a physical explanation based on the effect of fluid flow in triggering seismicity. For all seismic sequences, the b-value time series show a gradual decrease beginning a few months to one year before the mainshocks. This gradual decrease is interpreted in terms of coupled fluid-stress effects: a gradual increase in earthquake activity driven essentially by short- to intermediate-term pore-fluid fluctuations. For the 2016 Amatrice-Norcia and 2019 Ridgecrest sequences, the temporal variation of the b-value during the foreshock sequence is characterized by a double b-value minimum separated by a short-lived b-value increase, as observed in laboratory experiments on water-saturated rocks. Based on these laboratory results, we interpret the observed short-term b-value fluctuation as accelerating crack growth driven essentially by fluid-flow instability. Although seismic precursors may be detectable in areas covered by dense seismic networks, the different b-value time series make it difficult to establish a correspondence between the duration of foreshock activity and the magnitude of the next largest expected earthquake. This may suggest that the spatial and temporal evolution of fluid migration controls the size of the ruptures.
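For readers unfamiliar with how b-value time series are typically obtained, the sketch below applies Aki's maximum-likelihood estimator (with Utsu's binning correction) to sliding windows of events above the completeness magnitude. The window size, completeness magnitude, and synthetic catalog are assumptions for illustration; the study's catalog processing and windowing choices are more elaborate.

```python
# Minimal sketch of a sliding-window b-value time series using Aki's
# maximum-likelihood estimator. Parameters and data are illustrative.
import numpy as np

def b_value_mle(mags, mc, dm=0.1):
    """Aki (1965) estimator with Utsu's correction for magnitude binning dm."""
    m = mags[mags >= mc]
    return np.log10(np.e) / (m.mean() - (mc - dm / 2.0))

def b_value_series(mags, mc, window=200, step=50):
    """b-value over overlapping windows of consecutive events (event order = time)."""
    idx = range(0, mags.size - window + 1, step)
    return np.array([b_value_mle(mags[i:i + window], mc) for i in idx])

# Synthetic Gutenberg-Richter catalog with b = 1 above Mc = 1.5.
rng = np.random.default_rng(1)
mags = 1.5 + rng.exponential(scale=np.log10(np.e) / 1.0, size=5000)
series = b_value_series(np.round(mags, 1), mc=1.5, window=200, step=50)
```

A sustained drop in such a series, of the kind the abstract reports in the months before the mainshocks, indicates a relative enrichment of larger events, which is why it is read as a proxy for increasing differential stress or pore-fluid effects.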