Episodes of slow uplift and subsidence of the ground, called bradyseism, characterize the recent dynamics of the Campi Flegrei caldera (Italy). In recent decades two major bradyseismic crises occurred, in 1969-1972 and in 1982-1984, with ground uplifts of 1.70 m and 1.85 m, respectively. Thousands of earthquakes, with a maximum magnitude of 4.2, caused the partial evacuation of the town of Pozzuoli in October 1983. This was followed by about 20 years of overall subsidence, about 1 m in total, until 2005. Since 2005 the Campi Flegrei caldera has been rising again, at a slower rate, with a total maximum vertical displacement of ca. 70 cm in the central area. The two signals of ground deformation and background seismicity have been found to share similar accelerating trends. The failure forecast method can provide a first assessment of failure time from present-day unrest signals at Campi Flegrei caldera, based on the monitoring data collected in 2011-2020 and under the assumption that this trend can be extrapolated into the future. In this study, we apply a probabilistic approach that enhances the well-established method by incorporating stochastic perturbations in the linearized equations. The stochastic formulation enables the processing of decade-long time windows of data, including the effects of the variable dynamics that characterize the unrest. We provide temporal forecasts with uncertainty quantification, potentially indicative of eruption dates. The basis of the failure forecast method is a fundamental law for failing materials: ẇ^(−α) ẅ = A, where ẇ is the rate of the precursor signal and α, A are model parameters that we fit to the data. The solution when α > 1 is a power law of exponent 1/(1 − α) diverging at time Tf, called the failure time. In our case study, Tf is the time when the accelerating signals collected at Campi Flegrei would diverge if we extrapolated their trend. The interpretation of Tf as the onset of a volcanic eruption is speculative.
It is important to note that future variations of the monitoring data could either slow down the increase observed so far, or suddenly accelerate it, leading to shorter failure times than those reported here. Data from observations at all locations in the region were also aggregated to strengthen the computation of Tf and reduce the impact of observation errors.
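As an illustration of the deterministic core of the failure forecast method, the sketch below implements the classic inverse-rate linearization: when α = 2, the inverse rate 1/ẇ decreases linearly in time and reaches zero at Tf. This is a minimal sketch of that standard graphical technique under the α = 2 assumption, not the stochastic formulation developed in the study; the function name and synthetic data are illustrative.

```python
import numpy as np

def forecast_failure_time(t, rate):
    """Estimate the failure time Tf by the inverse-rate method: for
    alpha = 2 in the FFM law, 1/rate decreases linearly with time and
    crosses zero at Tf."""
    inv = 1.0 / np.asarray(rate, dtype=float)
    slope, intercept = np.polyfit(t, inv, 1)  # linear fit to 1/rate
    return -intercept / slope                 # zero crossing gives Tf

# synthetic accelerating precursor rate diverging at Tf = 10.0
t = np.linspace(0.0, 8.0, 50)
rate = 1.0 / (10.0 - t)
print(round(float(forecast_failure_time(t, rate)), 3))  # → 10.0
```

In practice the fitted Tf carries substantial uncertainty, which is why the study wraps this deterministic law in a stochastic formulation.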
Shallow glacial aquifer systems are the primary source of drinking water for millions of residents in the upper Midwest and Great Lakes regions of the United States. Studies show that a significant number of municipal and private groundwater wells in these regions are impacted by high nitrate concentrations, which can have negative health impacts for humans. Reducing nitrate contamination through good land management practices will reduce the need for costly nitrate treatment systems and help mitigate other ecological concerns related to nutrient pollution of groundwater. This study presents a Python-based modeling tool that uses a local groundwater flow model and historical land use data (USDA CropScape) to estimate nitrate concentrations at a high-capacity pumping well. Nitrate concentrations predicted by this model are within 5% of median annual values observed at a study site in Waupaca, WI. The model is user-friendly and can easily be adapted to other locations, where it has the potential to help local and state agencies, landowners, and growers make cost-effective decisions about land use and agricultural practices.
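To illustrate the kind of estimate such a tool produces, here is a hypothetical sketch of flux-weighted mixing, in which the concentration at the well is the recharge-area-weighted mean of source concentrations from the land-use classes in the capture zone. This is not the actual method of the tool described above, which couples a groundwater flow model with CropScape land use histories; the function, classes, and values are purely illustrative.

```python
def well_nitrate(areas, concs):
    """Hypothetical flux-weighted mixing: the nitrate concentration at
    the well is the recharge-area-weighted mean of source concentrations
    from each land-use class in the capture zone."""
    total = sum(areas)
    return sum(a * c for a, c in zip(areas, concs)) / total

# two illustrative classes: 10 ha forest at 2 mg/L, 30 ha cropland at 10 mg/L
print(well_nitrate([10.0, 30.0], [2.0, 10.0]))  # → 8.0
```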
The in-situ magnetospheric exploration of the four giant planets of our solar system began with Pioneer 10's flyby of Jupiter in Dec. 1973. The second collection of field, particle and radio data at the gas giant was carried out by Pioneer 11 in Dec. 1974, before this spacecraft made its closest approach to Saturn in Sep. 1979. Around the same period, Voyager 1 flew by Jupiter in Mar. 1979 and Saturn in Nov. 1980, while Voyager 2 flew by Jupiter in Jul. 1979 and Saturn in Aug. 1981. To date, only Voyager 2 has visited the magnetospheres of Uranus (Jan. 1986) and Neptune (Aug. 1989). Galileo remained the only spacecraft to orbit an outer planet for several years (1995-2003) until the arrival of Juno at Jupiter in 2016. Between 2004 and 2017, the Cassini mission provided a wealth of in-situ data pertinent to the study of magnetospheric particles at Saturn. In this paper, we present our current understanding of the processes that shape the spatial distributions of energetic electrons trapped in the magnetospheres of Jupiter (L < 6), Saturn (L < 15) and Uranus (L < 15), obtained by combining multi-instrument analyses of data from past missions (Pioneer, Voyager, Galileo, Cassini) with computational models of charged particle fluxes. To determine what controls the energy and spatial distributions throughout the different magnetospheres, we compute the time evolution of particle distributions with the help of a diffusion theory particle transport code that solves the governing 3-D Fokker-Planck equation. Particle, field and wave datasets are used to provide model constraints, assist in modeling physical processes, or validate our simulation results. We first emphasize our latest results regarding the relative (or coupled) roles of mechanisms at Saturn, including the radial transport and interactions of electrons with Saturn's dust/neutral/plasma environments and waves, as well as particle sources from high latitudes, interchange injections, and the outer magnetospheric region.
The lessons learned from our modeling of electron distributions at Saturn are used to identify the processes that may be missing from our modeling of Jupiter's energetic electron environment, or those that need to be implemented using new modeling concepts. Our first physics-based modeling of electron populations at Uranus is also assessed with our data-model comparison approach.
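For readers unfamiliar with diffusion-theory transport codes, the sketch below advances a 1-D radial diffusion equation of the form ∂f/∂t = L² ∂/∂L (D_LL/L² ∂f/∂L) − f/τ with a simple explicit finite-difference step. It is a minimal sketch of the radial transport and loss terms only, not the 3-D Fokker-Planck solver used in the study; the power-law D_LL scaling and the loss time τ are illustrative assumptions.

```python
import numpy as np

def radial_diffusion_step(f, L, D_LL, tau, dt):
    """One explicit step of df/dt = L^2 d/dL(D_LL/L^2 df/dL) - f/tau,
    with boundary values of f held fixed (Dirichlet conditions)."""
    dL = L[1] - L[0]
    g = D_LL / L**2                 # D_LL / L^2 on the grid
    gh = 0.5 * (g[1:] + g[:-1])     # averaged to cell interfaces
    flux = gh * np.diff(f) / dL     # (D_LL/L^2) df/dL at interfaces
    fnew = f.copy()
    fnew[1:-1] += dt * (L[1:-1]**2 * np.diff(flux) / dL - f[1:-1] / tau)
    return fnew

# illustrative run: a Gaussian phase space density spreads and decays
L = np.linspace(2.0, 10.0, 41)
D_LL = 1e-8 * L**6                  # assumed power-law D_LL(L)
f = np.exp(-(L - 6.0)**2)
for _ in range(100):
    f = radial_diffusion_step(f, L, D_LL, 1e4, 1.0)
```

The small explicit time step keeps the scheme stable on this grid; a production code would use an implicit solver and add the energy and pitch-angle dimensions.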
Policies to regulate severe surface ozone pollution in cities in India are challenging to develop, due to the complex dependence on precursor emissions of volatile organic compounds (VOCs) and nitrogen oxides (NOx), non-linear chemistry leading to ozone formation, and very limited spatial and temporal surface air quality monitoring. Ratios of space-based observations of formaldehyde (HCHO), an intermediate oxidation product of VOCs, and nitrogen dioxide (NO2) have been used to characterize the sensitivity of surface ozone production to precursor emissions of VOCs and NOx, but interpretation of these ratios depends on the local oxidation regime. Here we develop an improved approach in which we decompose the data into background HCHO due to methane and other long-lived VOCs (the regression intercept) and the local relationship between HCHO associated with reactive VOCs and NO2 (the regression slope). We apply this to TROPOMI HCHO and NO2 tropospheric columns oversampled to higher spatial resolution than the native pixel resolution of the instrument over the ten most populous cities in India. We use GEOS-Chem to characterize the ozone production regimes and then apply this updated interpretation of the relationship between HCHO and NO2 to the oversampled TROPOMI columns to identify the most effective strategies for regulating ozone and whether these should vary seasonally and spatially.
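The intercept/slope decomposition described above can be sketched as a simple linear regression of HCHO on NO2 columns: the intercept approximates background HCHO from methane and long-lived VOCs, and the slope captures the local reactive-VOC relationship. A minimal conceptual sketch on synthetic columns (the units, values, and fitting choice are illustrative, not TROPOMI retrievals):

```python
import numpy as np

def hcho_no2_regression(hcho, no2):
    """Regress HCHO on NO2: the intercept approximates background HCHO
    from methane and long-lived VOCs; the slope captures the local
    relationship between reactive-VOC HCHO and NO2."""
    slope, intercept = np.polyfit(no2, hcho, 1)
    return slope, intercept

# synthetic columns: background of 4.0 plus a slope of 0.5 (arbitrary units)
no2 = np.linspace(1.0, 10.0, 20)
hcho = 4.0 + 0.5 * no2
s, b = hcho_no2_regression(hcho, no2)
print(round(float(s), 3), round(float(b), 3))  # → 0.5 4.0
```

Real column data are noisy in both variables, so a study would typically use an errors-in-variables or robust fit rather than ordinary least squares.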
In studying problems like plant-soil-microbe interactions in environmental biogeochemistry and ecology, one usually has to quantify and model how substrates control the growth of, and interaction among, biological organisms. To address these substrate-consumer relationships, many substrate kinetics and growth rules have been developed, including the famous Monod kinetics for single-substrate-based growth, Liebig's law of the minimum for multiple-nutrient co-limited growth, etc. However, the mechanistic basis that leads to these various concepts and mathematical formulations, and the implications of their parameters, are often quite uncertain. Here we show that an analogy based on Ohm's law in electric circuit theory is able to unify many of these different concepts and mathematical formulations. In this Ohm's law analogy, a resistor is defined by a combination of the consumers' and substrates' kinetic traits. In particular, the resistance is equal to the mean first passage time that has been used in renewal theory to derive the Michaelis-Menten kinetics under substrate-replete conditions for a single substrate, as well as the predation rate of individual organisms. We further show that this analogy leads to important insights into various biogeochemical problems, such as (1) multiple-nutrient co-limited biological growth, (2) denitrification, (3) fermentation under aerobic conditions, (4) metabolic temperature sensitivity, and (5) the accuracy of Monod kinetics for describing bacterial growth. We expect our approach will help both modelers and non-modelers to better understand and formulate hypotheses when studying certain aspects of environmental biogeochemistry and ecology.
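For reference, the two named formulations can be written compactly: Monod kinetics gives the growth rate μ = μ_max · s / (K_s + s) for a single substrate s, and Liebig's law of the minimum takes the smallest of the single-substrate rates. A minimal sketch (parameter values are illustrative):

```python
def monod(s, mu_max, ks):
    """Monod kinetics: growth rate mu_max * s / (ks + s) on substrate s."""
    return mu_max * s / (ks + s)

def liebig(substrates, ks_values, mu_max):
    """Liebig's law of the minimum: growth is set by the most limiting
    substrate among several, each evaluated with Monod kinetics."""
    return min(monod(s, mu_max, ks) for s, ks in zip(substrates, ks_values))

# at s = ks, Monod gives half the maximum growth rate
print(monod(2.0, 1.0, 2.0))                      # → 0.5
# nutrient 1 (rate 0.5) limits growth over nutrient 2 (rate 10/11)
print(liebig([2.0, 10.0], [2.0, 1.0], 1.0))      # → 0.5
```

In the paper's Ohm's law analogy, the half-saturation behavior of Monod kinetics emerges from a resistance defined by consumer and substrate kinetic traits rather than being assumed up front.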
This study investigates the occurrence of mixed-phase clouds (MPC) over the Southern Ocean (SO) using space- and surface-based lidar and radar observations. The occurrence of supercooled clouds is dominated by geometrically thin (< 1 km) layers that are rarely MPC. Layers geometrically thicker than 1 km are diagnosed as MPC approximately 65% of the time by surface-based remote sensors viewing from below, and approximately 4% of the time by orbiting remote sensors viewing from above. We examine this discrepancy between the below and above diagnoses. From above, we find that MPC occurrence has a gradient associated with the Antarctic Polar Front near 55°S, with satellite-derived MPC rarely occurring south of that latitude. In contrast, surface sensors find MPC in 33% of supercooled layers. We infer that space-based lidar cannot identify the occurrence of MPC except when secondary ice-forming processes operate in convection that is sufficiently strong to loft ice crystals to cloud tops. We conclude that the CALIPSO phase statistics have a severe low bias in MPC occurrence. Based on surface-based statistics, we present a parameterization of the frequency of MPC as a function of cloud top temperature that differs substantially from those used in recent climate model simulations.
The excess of radiogenic lead (Pb) isotopes in the silicate Earth, referred to as "the first terrestrial Pb paradox", has remained a puzzle for a long time. A large-scale U/Pb fractionation, with an increase in the μ value (238U/204Pb) compared with CI chondrite, is proposed to be the main culprit. Volatile elements such as Pb diffuse into space during planetesimal-scale collisional melting, which plays a critical role in Pb loss from the accreting proto-Earth. An N-body simulation describes the collisional history of the terrestrial planets in the first 200 million years of the Solar System. The collisional history constrains the degree of silicate melting and, in turn, the volatile loss fraction. Within the first 20% of proto-Earth's accretion, the cumulative fraction of Pb loss can reach 80%-90%. Meanwhile, the μ value could rise to 1.5-4, assuming an initial value of 0.2-0.6. Moreover, silicate melting at higher temperature and lower oxygen fugacity (relatively reduced conditions) brings about more Pb loss. A further increase of μ to 9.26, possibly caused by a late large-scale U/Pb fractionation, can effectively explain the excess of radiogenic Pb isotopes in the bulk silicate Earth. The two-stage model with planetesimal-scale evaporation predicts a young age of 240 million years for the last large-scale fractionation event. This last fractionation is more consistent with the "Hadean matte" event than with a late Moon-forming giant impact.
Natural and non-natural factors have combined effects on the trajectory of the COVID-19 pandemic, but these effects are difficult to separate. To address this problem, a two-step methodology is proposed. First, a compound natural factor (CNF) model is developed by assigning a weight to each of seven investigated natural factors, i.e., temperature, humidity, visibility, wind speed, barometric pressure, aerosol and vegetation, to show their coupled relationship with the COVID-19 trajectory. Next, an empirical distribution based framework (EDBF) is employed to iteratively optimize the coupling relationship between the trajectory and the CNF so as to express the real interaction. In addition, the collected data are backdated by about 23 days, comprising the 14-day incubation period and a 9-day invalid human response time, to account for the lack of prior information about the natural spread of the virus without any human intervention, and for the lag effects of weather change and social interventions on the observed trajectory due to the COVID-19 incubation period. Second, the optimized CNF-plus-polynomial model is used to predict the future trajectory of COVID-19. Results revealed that aerosol and visibility show the highest contribution to transmission, wind speed to deaths, and humidity followed by barometric pressure dominates the recovery rates. Consequently, the average effect of environmental change on the COVID-19 trajectory in China is minor for all variables, i.e., about -0.3%, +0.3% and +0.1%, respectively. This response analysis of the COVID-19 trajectory to compound natural interactions presents a new perspective on the response of the global pandemic trajectory to environmental changes.
Many respiratory pathogens show seasonal patterns and associations with environmental factors. In this article, we conducted a cross-sectional analysis of the influence of environmental factors, including climate change, along with development indicators on the differential global spread and fatality of COVID-19 during its early phase. We used the COVID-19 data published by the WHO for April. The global climate data used are monthly averaged gridded datasets of temperature, humidity and temperature anomaly. We used the Human Development Index (HDI) to account for all other socioeconomic factors that can affect disease spread and mortality, and built a negative binomial regression model. Temperature has a negative association with COVID-19 mortality. However, HDI is shown to confound the effect of temperature on the reporting of the disease. Temperature anomaly, which is regarded as a global warming indicator, is positively associated with the pandemic's spread and mortality. Viewing newer infectious diseases like SARS-CoV-2 from the perspective of climate change has many public health implications and necessitates further research.
Ongoing trade wars, combined with increasing consumption and the depletion of known resources, will necessitate the search for new deposits in poorly explored or unexplored areas, such as the polar regions. Antarctica is unique among the world's continents in having no native population and no state sovereignty; the continent has also been identified as potentially harboring extensive hydrocarbon and mineral resources. To protect the fragile Antarctic environment, the Protocol on Environmental Protection to the Antarctic Treaty (1991) banned any mineral activity, except for scientific purposes, for a 50-year period. The Protocol comes up for review in 2048, and discussions of possible future mining in the region have already begun. With improvements in drilling and mining technology, the risk of future mining activity on the continent is increasing. Moreover, extensive mining operations in the Arctic demonstrate the technical and economic feasibility of mining in harsh polar environments. The protection of the fragile Antarctic environment must be prioritized; however, maintaining the balance between environmental protection and commercial and national interests in resource development is problematic.
This study aimed to identify the relationship between the severity of coronary artery disease, and the associated percutaneous coronary interventions, and changes in local Earth magnetic field (LEMF) activity. A total of 1,240 patients diagnosed with acute coronary syndrome who underwent percutaneous coronary intervention in 2015-2016 were retrospectively included in this single-centre study. The majority of acute coronary syndromes that occurred in females were associated with an increase in LEMF intensity in the 3.5-32 Hz frequency range and with a higher number of diseased coronary arteries. Increased intensity in the same range was associated with a lower number of stented coronary arteries in males in 2015. Positive correlations were found between increased LEMF intensity in the 0-15 Hz range and the number of revascularized coronary arteries in females during the winter season of 2016. Stronger LEMF in low-to-medium frequency ranges is associated with acute coronary syndromes in males caused by less diffuse coronary artery disease, resulting in a lower number of coronary artery segments requiring revascularisation, especially during winter. Stronger LEMF in the high frequency range is associated with an increased occurrence of ischaemic cardiovascular events, while stronger LEMF in low to moderate frequency ranges is associated with a positive effect.
Limited research has evaluated the mental health effects of compounding disasters (e.g., hurricanes followed by a heat wave), and few studies have relied on crisis lines for post-disaster mental health surveillance. This study examined changes in crisis help-seeking among individuals in Louisiana, USA, before and after Hurricane Ida (2021), a storm that co-occurred with the COVID-19 pandemic, a subsequent hurricane, and a corresponding heatwave. Interrupted time series analyses for single- and multiple-group comparisons were used to examine pre- and post-event changes in crisis text volume (any crisis text, substance use, thoughts of suicide, stress/anxiety, and bereavement) among help-seeking individuals in communities that received individual and public assistance disaster declarations. Results showed a significant increase in crisis texts for any reason, thoughts of suicide, stress/anxiety, and bereavement in the short-term impact period. In the continued impact period, there was an increase in crisis texts for any crisis event, substance use, thoughts of suicide, stress/anxiety, and bereavement. Findings highlight the need for more mental health support for residents directly impacted by concurrent disasters.
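A single-group interrupted time series of the kind used above can be sketched as a segmented regression with level-change and slope-change terms at the event time. This is a generic sketch on synthetic counts, not the study's multiple-group specification; the function name and values are illustrative.

```python
import numpy as np

def its_fit(t, y, t_event):
    """Fit y = b0 + b1*t + b2*post + b3*(t - t_event)*post by least
    squares, where post = 1 after the event; b2 is the level change and
    b3 the slope change at t_event."""
    post = (t >= t_event).astype(float)
    X = np.column_stack([np.ones_like(t), t, post, (t - t_event) * post])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

# synthetic series: level jump of 3 and slope change of 0.2 at t = 10
t = np.arange(20, dtype=float)
y = 2.0 + 0.5 * t + np.where(t >= 10, 3.0 + 0.2 * (t - 10), 0.0)
b = its_fit(t, y, 10.0)
print(round(float(b[2]), 3), round(float(b[3]), 3))  # → 3.0 0.2
```

A real analysis of crisis-line counts would typically use a count model (e.g. Poisson or negative binomial) with autocorrelation adjustments rather than plain least squares.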
Freshwater ecosystems are globally significant sources of greenhouse gases (GHG) to the atmosphere. Generally, we assume that in-situ production of GHG in streams is limited by turbulent reaeration and high dissolved oxygen concentrations, so that stream GHG flux is highest in headwater streams that are connected to their watersheds and serve as conduits for the release of terrestrially derived GHG. Low-gradient streams contain pool structures with longer residence times conducive to the in-situ production of GHG, but these streams, and the longitudinal heterogeneity within them, are seldom studied. We measured continuous ecosystem metabolism alongside concentrations and fluxes of carbon dioxide (CO2), methane (CH4) and nitrous oxide (N2O) from autumn to the following spring along an eight-kilometer segment of a low-gradient, third-order stream in the North Carolina Piedmont. We characterized spatial and temporal patterns of GHG in the context of channel geomorphology, hydrology, and ecosystem metabolic rates using linear mixed effects models. We found that stream metabolic cycling was responsible for most of the CO2 flux over this period, and that in-channel aerobic metabolism was a primary driver of both the CH4 and N2O fluxes as well. Long water residence times, limited reaeration, and substantial organic matter from terrestrial inputs foster conditions conducive to the in-stream accumulation of CO2 and CH4 from microbial respiration. Streams like this one are common in landscapes with low topographic relief, making it likely that the high contribution of in-stream metabolism to GHG fluxes that we observed is a widespread yet understudied behavior of many small streams.
• Phytoplankton distributions and primary productivity were assessed off the northern coast of Norway in spring. Biomass and productivity were greatest off the continental shelf during the period of observations. • A satellite climatology showed that blooms usually form on the continental shelf first and spread to deeper waters 2-4 weeks after the shelf bloom. • The Calanus finmarchicus population had the potential to remove substantial amounts of chlorophyll each day, but phytoplankton vertical distributions were controlled by passive sinking.
Spatial gradients in rock uplift control the relief and slope distribution in uplifted terrains. Relief and slopes, in turn, promote channelization and fluvial incision. Consequently, the geometry of drainage basins is linked to the spatial pattern of uplift. When the uplift pattern changes, basin geometry is expected to change via migrating water divides. However, the relations between drainage pattern and changing uplift patterns remain elusive. The current study investigates the plan-view evolution of drainage basins and the reorganization of drainage networks in response to changes in the spatial pattern of uplift, focusing on basin interactions that produce globally observed geometrical scaling relations. We combine a landscape evolution experiment and simulations to explore a double-stage scenario: the emergence of a fluvial network under block uplift conditions, followed by tilting that forces drainage reorganization. We find that the globally observed basin spacing ratio and Hack's parameters emerge early in basin formation and are maintained by differential basin growth. In response to tilting, main divide migration induces changes in basin size. However, the basins' scaling relations are mostly preserved within a narrow range of values, assisted by the incorporation and disconnection of basins to and from the migrating main divide. Lastly, owing to similarities in landscape dynamics and response rates to uplift pattern changes between the experiment and the simulations, we conclude that the stream power incision model can represent the fluvial erosion processes operating in experimental settings.
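The two scaling relations referenced above have standard compact forms: the stream power incision model is typically written E = K·A^m·S^n (erosion rate E, erodibility K, drainage area A, channel slope S), and Hack's law relates main stream length to basin area as L = C·A^h. A minimal sketch; the exponent and coefficient values below are common literature choices, not the ones calibrated in this study:

```python
def stream_power_erosion(K, A, S, m=0.5, n=1.0):
    """Stream power incision model: erosion rate E = K * A**m * S**n."""
    return K * A**m * S**n

def hacks_law_length(A, C=1.4, h=0.6):
    """Hack's law: main stream length L = C * A**h (typical literature
    coefficients, used here only for illustration)."""
    return C * A**h

# e.g. K = 1e-5, A = 1e6, S = 0.02 gives E = 1e-5 * 1000 * 0.02
print(round(stream_power_erosion(1e-5, 1e6, 0.02), 6))  # → 0.0002
```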
Urbanization tends to increase runoff volumes, which can cause flooding and reduce groundwater recharge. Since the design of impermeable urban elements is based on the water flow volume before their construction, once they are erected the induced change to the local drainage pattern might generate flooding of both the newly developed and previously developed areas. As such, precise modeling is essential for municipal watershed-sensitive hydrological design, which may prevent the negative impacts of impervious urban surface expansion. The digital elevation model, which represents the watershed relief at any given location, is the base layer of hydrological modeling and is necessary for describing urban landscapes and watersheds. The common notion is that the finer the elevation model resolution, the more precise the hydrological model. Nevertheless, it is suggested that over-accuracy might be redundant. In the same manner, the land use classification resolution should be aligned with the modeling requirements. Such careful evaluation of the modeling resolution will reduce the computing resources needed for the modeling procedure and may be utilized as a sensitivity filter for insignificant tributaries of the hydrological network. This paper demonstrates a nominal procedure for urban watershed sub-basin analysis, which is the initial stage of detailed urban runoff modeling. The scale-optimized model performed very well and was found suitable for predicting runoff volume and discharge from a mainly urban, mountainous karstic watershed.
The maximum extent of the last North American ice sheet is well constrained empirically, but it has proven challenging to simulate with coupled climate-ice sheet models. Such models are often too computationally expensive to sufficiently explore uncertainty in input parameters, and it is unlikely that values calibrated to reproduce modern ice sheets will reproduce the known extent of the ice at the Last Glacial Maximum. To address this, we run a series of ensembles with a coupled climate-ice sheet model (FAMOUS-ice), simulating the final stages of growth of the last North American Ice Sheets to their maximum extent. Using this large-ensemble approach, we explore the influence of uncertain ice sheet, albedo, atmospheric, and oceanic parameters on the ice sheet extent. We find that albedo parameters account for the majority of the uncertainty when simulating the Last Glacial Maximum North American Ice Sheets. Importantly, different albedo parameters are needed to produce a good match to the Last Glacial Maximum North American Ice Sheets than have previously been used to model the contemporary Greenland Ice Sheet, due to differences in cloud cover over ablation zones. Thus, calibrating coupled climate-ice sheet models solely for the present day strongly biases simulations of past and future climates that differ from today's.
This chapter discusses efforts to measure surface observations of air pollution at the country scale. The countries with the most comprehensive regulatory systems to monitor air pollution are the older industrial nations, such as the United Kingdom and the United States. The recent proliferation of low-cost air quality monitors (LCAQM) is making near-real-time air pollution monitoring more prevalent across the globe. While regulatory and LCAQM data present distinct challenges in access and usability, there are common challenges in using these data for decision support and research applications. This chapter discusses common statistical methods for estimating air pollution, including spatial interpolation methods, statistical regression methods, machine learning, and chemical transport modeling.
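As an example of the spatial interpolation methods mentioned, inverse distance weighting (IDW) estimates the concentration at an unmonitored point as a distance-weighted mean of monitor readings. A minimal sketch; the coordinates, readings, and power parameter are synthetic illustrations, not values from the chapter:

```python
import numpy as np

def idw(x0, y0, xs, ys, vals, power=2.0):
    """Inverse distance weighting: estimate the concentration at (x0, y0)
    as a distance-weighted mean of monitor readings at (xs, ys)."""
    d = np.hypot(np.asarray(xs) - x0, np.asarray(ys) - y0)
    if np.any(d == 0):
        return float(np.asarray(vals)[np.argmin(d)])  # exactly at a monitor
    w = 1.0 / d**power
    return float(np.sum(w * np.asarray(vals)) / np.sum(w))

# the midpoint between two equally distant monitors gets their mean
print(idw(0.5, 0.0, [0.0, 1.0], [0.0, 0.0], [10.0, 20.0]))  # → 15.0
```

Larger `power` values weight nearby monitors more heavily; regression and machine-learning methods replace this purely geometric weighting with covariates such as land use and emissions.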