The Government of India announced its commitment to reach net-zero greenhouse gas emissions by 2070 at the recent COP26 summit. Modeling projections suggest that meeting this target would likely require substantial amounts of CO2 capture and storage (CCS) from large point sources (LPS). Our analysis first reveals the key co-benefits for India in adopting CCS, viz. energy security, lower aggregate costs of carbon mitigation, higher resilience, and fewer stranded assets. For instance, we estimate that the stranding of >100 GW of coal-fired and >70 GW of gas-fired power capacity could be avoided with the presence of CCS in the power sector mix. This analysis is further supplemented by our recent estimates of CO2 storage potential in Indian geologic formations. Our results indicate that the storage capacity via enhanced oil recovery (EOR) is 1.2 GtCO2 after incorporating engineering and geologic constraints. Similarly, the storage capacity in unmineable coal fields is estimated to be 3.5-6.3 GtCO2. Even though the combined storage potential of these formations is constrained, they should be actively considered in policy-making, as they predominantly lie within areas of dense LPS, creating possibilities for CCS hubs and clusters. In addition, 291 GtCO2 could be sequestered in saline aquifers and 97-316 GtCO2 in basalts, though these values are subject to higher uncertainties. A number of saline aquifers may be characterized as having storage potential equivalent to several years of LPS emissions (>10 GtCO2) along with high storage feasibility. Our ongoing analysis attempts a more refined approach to source-sink mapping in India by combining the storage potential estimates with geospatial layers of LPS. Large power plants, which emit >20 MtCO2 annually, and high-purity CO2 sources such as refineries are of particular interest.
Preliminary source-sink mapping results show substantial clustering opportunities in eastern India, which has active coalbed methane extraction undertaken by five companies, and western India, with large industrial sources interspersed with EOR sites. The results of this analysis will also inform decision-makers on future LPS siting opportunities if a policy thrust on CCS is undertaken for meeting net-zero targets over the next two decades.
Onsite wastewater treatment systems (OWTSs), or septic tank systems, are commonly used throughout the United States and are generally effective at remediating wastewater. However, malfunctioning OWTSs can introduce excess nutrients (i.e., nitrogen and phosphorus) and pathogens (e.g., E. coli) into the environment. There is increasing evidence that OWTSs can be a significant, and potentially underestimated, nonpoint source (NPS) of pollution. Thus, the objectives of this research were to (1) develop a model to assess the pollution potential from OWTSs using GIS-based multi-criteria decision analysis (MCDA) and (2) evaluate the relationship between the pollution potential from OWTSs and water pollutants. This study was completed in the Choccolocco Creek watershed, Alabama. The main tributary in this watershed, Choccolocco Creek, is an impaired waterbody due to elevated E. coli concentrations. An MCDA was developed to model the pollution potential from OWTSs using environmental and OWTS variables. Similarly, an OWTS site unsuitability analysis, which included only environmental variables, was used to predict where OWTSs may perform poorly in areas where OWTS data are not accessible. Water samples were taken along Choccolocco Creek to measure nitrogen, phosphorus, and E. coli concentrations. Pollutant concentrations were correlated to both the modeled pollution potential from OWTSs and the OWTS site unsuitability, to compare how the exclusion of OWTS data changes the results. Additionally, land cover distribution was correlated to pollutant concentrations to account for other potential NPSs of water pollution. All water pollutants were significantly, positively correlated to OWTS count. Additionally, E. coli and nitrogen concentrations were significantly, positively correlated to pollution potential from OWTSs. This suggests that OWTSs may contribute to water pollution within the watershed.
Furthermore, the location of the areas most likely to produce OWTS pollution varied between models, highlighting the importance of accounting for OWTSs as an NPS of water pollution. The methods presented could be adapted to other watersheds and used to guide best watershed management practices.
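At its core, the GIS-based MCDA described above is a weighted overlay: each criterion raster is normalized to a common scale and combined with weights that sum to one. A minimal sketch of that idea follows, using small NumPy arrays in place of GIS rasters; the criteria and weights here are hypothetical stand-ins, not those of the study.

```python
import numpy as np

def normalize(raster):
    """Rescale a criterion raster to [0, 1]."""
    r = raster.astype(float)
    return (r - r.min()) / (r.max() - r.min())

def mcda_overlay(criteria, weights):
    """Weighted linear combination of normalized criterion rasters."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    return sum(w * normalize(c) for c, w in zip(criteria, weights))

# Hypothetical 3x3 rasters: soil drainage limitation, depth to water
# table class, and OWTS density (illustrative values only)
soil = np.array([[1, 2, 3], [2, 3, 4], [1, 1, 2]])
water = np.array([[4, 3, 2], [3, 2, 1], [4, 4, 3]])
density = np.array([[0, 5, 9], [2, 7, 8], [0, 1, 3]])

# Higher values indicate higher modeled pollution potential
pollution_potential = mcda_overlay([soil, water, density], [0.3, 0.3, 0.4])
```

Dropping the OWTS density layer and renormalizing the remaining weights gives the environment-only "site unsuitability" variant described in the abstract.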
The current contribution presents a wintertime climatology, from 2012 to 2020, of mixed-phase clouds and their radiative effects when coupled to sea ice states. Measurements from the U.S. Department of Energy (DOE) Atmospheric Radiation Measurement (ARM) North Slope of Alaska (NSA) site in Utqiagvik, Alaska are analyzed. Classification of cloud hydrometeors into liquid, ice, or mixed-phase states was primarily determined by the Cloudnet algorithm, developed by the Finnish Meteorological Institute and applied to a set of ground-based remote sensing instruments at NSA. To evaluate the influence of sea ice, which plays an important role in Arctic surface-atmosphere interaction, the statistics are separated into cases in which clouds are coupled or decoupled to specific sea ice conditions, such as the presence of leads or polynyas in the vicinity of NSA. We found that clouds coupled to sea ice with leads present show distinct features, such as increased total liquid content, lower cloud base heights, and less ice content, compared to decoupled cases. Nevertheless, these results rely on Cloudnet accurately detecting cloud droplets within mixed-phase clouds. Arctic cloud radiative effects (CRE) have already been studied from short expeditions such as the SHEBA campaign (Shupe et al., 2004) and from medium-term ground observations in Barrow (Shupe et al., 2015) and Ny-Ålesund, Svalbard (Ebell et al., 2020). We extend similar CRE studies over 8 years of wintertime data, using longwave up- and down-welling flux measurements from NSA to estimate surface net fluxes and other cloud radiative features for cases in which clouds are coupled or decoupled to sea ice conditions, and their sensitivity to different air-surface temperature gradients when leads or polynyas are present.
The United States Federal Emergency Management Agency provides model-output localized flood grids that are useful for characterizing flood hazards for properties located in the Special Flood Hazard Area (SFHA: areas expected to experience a 1% or greater annual chance of flooding). However, these flood grids are often unavailable or fail to include the return periods needed for particular applications, such as understanding the flood risk of properties during a 70-year useful building life cycle. Furthermore, due to the unavailability of higher-return-period flood grids, the flood risk of properties located outside the SFHA cannot be quantified. Here, we present a method to estimate the flood hazard for U.S. properties located both inside and outside the SFHA. The flood hazard is characterized by the Gumbel extreme value distribution, which is used to project flood elevations out to extreme flood events for which an entire area is assumed to be submerged. Spatial interpolation techniques impute elevation values in the extreme flood elevation surfaces and can therefore estimate the flood hazard for areas outside the SFHA. The proposed method can improve the assessment of flood risk for properties located both inside and outside the SFHA and, therefore, the decision-making process regarding flood insurance purchases, mitigation strategies, and long-term planning for enhanced resilience to one of the world's most ubiquitous natural hazards.
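The Gumbel projection step can be illustrated in a few lines: fit the distribution to a series of annual-maximum flood elevations at a location, then invert the fitted distribution for any return period of interest. The sketch below uses SciPy's `gumbel_r` with hypothetical elevation data; it shows the mechanics of the projection, not the study's actual workflow or data.

```python
import numpy as np
from scipy.stats import gumbel_r

# Hypothetical annual-maximum flood elevations (m) at one location
annual_max_elev = np.array([2.1, 2.5, 1.9, 3.0, 2.7, 2.2, 3.4, 2.9, 2.0, 2.6])

# Fit Gumbel (extreme value type I) location and scale by maximum likelihood
loc, scale = gumbel_r.fit(annual_max_elev)

def flood_elevation(return_period_years):
    """Elevation with annual exceedance probability 1/T (T=100 -> 1% flood)."""
    p_exceed = 1.0 / return_period_years
    return gumbel_r.ppf(1.0 - p_exceed, loc=loc, scale=scale)

elev_100yr = flood_elevation(100)   # 1%-annual-chance (SFHA-defining) flood
elev_500yr = flood_elevation(500)   # rarer event, higher projected elevation
```

Evaluating `flood_elevation` at progressively longer return periods yields the extreme flood elevation surfaces that the spatial interpolation step then extends beyond the SFHA.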
Models of fault slip development generally consider interfacial strength to be frictional and deformation of the bounding medium to be elastic. The frictional strength is usually considered to be sliding rate- and state-dependent. Their combination, elastic deformation due to differential slip and rate-state frictional strength, leads to nonlinear partial differential equations (PDEs) that govern the spatio-temporal evolution of slip. Here, we investigate how data on fault slip rate and stress can directly discover the complex system of PDEs that governs aseismic slip development. We first prepare (synthetic) data sets by numerically solving the forward problem of slip rate and fault stress evolution with models such as a thin laterally deformable layer over a thick substrate. We then identify the variables, for example, slip rate or the friction state variable, and use nonlinearity identification algorithms to discover the governing PDE of the chosen variable. In particular, we use the sparse identification of nonlinear dynamics algorithm (SINDy; Brunton et al., 2016), in which we solve a regression problem, Ax = y. Here, y is the time derivative of the variable of interest, for example, slip rate. A is a large matrix (library) containing all candidate functions that may appear in the slip rate evolution PDE. The entries in x, to be solved for, are the coefficients corresponding to each library function in matrix A. We update A according to the solutions x so that A's column space can span the dynamics we seek to find. To find the suitable column space for A, we encourage sparse solutions for x, implying that only a few columns in matrix A are dominant and leading to a parsimonious representation of the governing PDE. We show that the algorithm successfully recovers the terms of the PDE governing fault slip and can also find frictional parameters, for example, a/b, where a and b, respectively, are the magnitudes that control the direct and evolution effects.
Moreover, the algorithm can also determine whether the associated state variable evolves according to the aging law, the slip law, or a combination of the two. Further, with data sets prepared from distinct initial conditions, we show that SINDy can also determine the spatial distribution (heterogeneities) of the problem parameters from fault slip rate and stress data.
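The regression Ax = y described above is typically solved with sequentially thresholded least squares, the sparsity-promoting scheme used in SINDy (Brunton et al., 2016). A minimal sketch on a toy scalar problem follows; the signal and library here are illustrative stand-ins for the slip-rate data and candidate terms, not those of the study.

```python
import numpy as np

def stlsq(A, y, threshold=0.1, n_iter=10):
    """Sequentially thresholded least squares: sparse solution of A x = y."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(x) < threshold   # zero out small coefficients...
        x[small] = 0.0
        big = ~small
        if big.any():                   # ...and refit on the surviving columns
            x[big] = np.linalg.lstsq(A[:, big], y, rcond=None)[0]
    return x

# Toy scalar stand-in for the slip-rate evolution problem:
# the signal obeys dv/dt = -1.5 v exactly
t = np.linspace(0.0, 1.0, 200)
v = 0.8 * np.exp(-1.5 * t)
dvdt = -1.5 * v                         # analytic derivative (the y vector)

# Library A of candidate terms [1, v, v^2, v^3]
A = np.column_stack([np.ones_like(v), v, v**2, v**3])
coeffs = stlsq(A, dvdt, threshold=0.05)
# coeffs ~ [0, -1.5, 0, 0]: only the true term survives the thresholding
```

The thresholding step is what selects the "suitable column space" for A: columns whose coefficients repeatedly fall below the threshold drop out, leaving the parsimonious PDE representation.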
The United Nations Agenda 2030 and Sendai Framework, as well as the African Union's Agenda 2063, are targeted at human peace and prosperity amidst environmental and economic sustainability. These frameworks contain goals for the earth's protection and for the reduction of human poverty and disaster risk. The foremost priority of the Sendai Framework for Disaster Risk Reduction is an increased understanding of disaster risk and the strengthening of its governance and management. Three overarching questions warrant this study: What are the important predictors of disaster risk in the vulnerable continent of Africa? How does disaster risk relate to climate change literacy and people's beliefs in Africa? Do national action plans respond appropriately to the key factors reflecting Africa's disaster risk? This study uses climate change literacy and belief data from the Afrobarometer and disaster risk data from the Index for Risk Management (INFORM) of the European Commission. Using the disaster risk index as the dependent variable and 30 independent variables, the important predictors contributing to disaster risk in all African countries were identified using random forest machine learning models. Essential variables in the model include projected conflict risk, current highly violent conflict intensity, uprooted people, other vulnerable groups, governance, and physical infrastructure and access to health care, among others. Also, the higher the percentage of an African country's population that is climate literate, the lower the disaster risk. Conversely, the higher the climate change literacy of the population, the higher the percentage of people who believe that people can do little about climate change. Furthermore, 25 policies of countries with very high disaster risk were analysed. Within these selected policies, concepts related to violent conflicts were the least included, while those about vulnerability factors were the most included.
The policies explored thus included far more vulnerability concepts than hazard (violent conflict) concepts, indicating the least responsiveness to hazards. The study provides a deeper understanding of disaster risk in Africa by identifying essential factors and offers insight into disaster risk governance in line with the Sendai Framework.
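The predictor-importance step described above can be sketched with scikit-learn's random forest: fit the forest with the risk index as the target and read off the impurity-based feature importances. The data, variable names, and effect sizes below are synthetic stand-ins, not the INFORM/Afrobarometer variables of the study.

```python
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
n = 54  # roughly one row per African country (synthetic data)

# Hypothetical predictors; the study used 30 INFORM/Afrobarometer variables
X = pd.DataFrame({
    "projected_conflict_risk": rng.uniform(0, 10, n),
    "uprooted_people": rng.uniform(0, 10, n),
    "governance": rng.uniform(0, 10, n),
    "climate_literacy_pct": rng.uniform(0, 100, n),
})
# Synthetic risk index: driven by conflict, reduced by climate literacy
y = (0.6 * X["projected_conflict_risk"]
     - 0.02 * X["climate_literacy_pct"]
     + rng.normal(0, 0.3, n))

model = RandomForestRegressor(n_estimators=300, random_state=0).fit(X, y)
importances = dict(zip(X.columns, model.feature_importances_))
top_predictor = max(importances, key=importances.get)
```

Ranking `importances` reproduces, in miniature, the "essential variables" list reported in the abstract; partial-dependence plots on the fitted forest would similarly expose the negative literacy-risk relationship.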
Recently, a tailored gravity field model was developed to fit local terrestrial gravity data by integrating Global Gravitational Models (GGMs), terrestrial gravity data, and Digital Elevation Models (DEMs). Numerical analysis of the newly developed tailored gravity model showed a substantial improvement in terms of its possible application to geophysical exploration, as it exhibited known geological features over the Southern Benue Trough of Nigeria. In this study, we apply a similar technique to develop a tailored gravity field model for the Limpopo Province in South Africa using a total of 8,603 terrestrial gravity measurements. Validation of the results indicates that our tailored gravity model could reproduce the observed gravity data with an accuracy specified by a standard deviation of 8.9 mGal and a systematic bias of less than 0.1 mGal within the study area. We then investigated the possibility of using our tailored gravity field model to improve the accuracy of existing geoid/quasi-geoid models in the study area. For this purpose, we compute a new (quasi)geoid model by applying the remove-compute-restore numerical technique, which treats separately the detailed gravity pattern that is closely correlated spatially with the topographic relief, the higher-to-medium-frequency gravity signal that is mostly captured by local/regional gravity data, and the long-wavelength gravity signal that is modelled using GGMs. The accuracy of the new (quasi)geoid model was assessed against the most recent South African gravimetric quasi-geoid model, CDSM09A, and the latest hybrid quasi-geoid model of South Africa, SAGEOID10. The comparison of our quasi-geoid model with the CDSM09A and SAGEOID10 quasi-geoid models was done at 7,225 quasi-geoid grid points. The comparison revealed that our new quasi-geoid model closely agrees with the CDSM09A and SAGEOID10 models.
The differences between our quasi-geoid model and CDSM09A vary between -0.31 and 0.70 m, with a mean of 0.05 m and a standard deviation of 0.12 m. The corresponding differences between our model and SAGEOID10 are between -0.35 and 0.70 m, with a mean of 0.06 m and a standard deviation of 0.12 m. The numerical analysis revealed that the new tailored gravity model could efficiently be used in various geophysical and geodetic applications.
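The remove-compute-restore technique referred to above can be summarized schematically as a remove step on the input gravity anomalies followed by a restore step on the computed height anomalies; the symbols below are generic placeholders, not the notation of a specific implementation.

```latex
% Remove step: residual gravity anomalies after subtracting the
% GGM long-wavelength signal and the topography-correlated signal
\Delta g_{\mathrm{res}} = \Delta g_{\mathrm{obs}}
  - \Delta g_{\mathrm{GGM}} - \Delta g_{\mathrm{top}}

% Restore step: the quasi-geoid height reassembles the three parts,
% with \zeta_{\mathrm{res}} computed from \Delta g_{\mathrm{res}}
\zeta = \zeta_{\mathrm{GGM}} + \zeta_{\mathrm{res}} + \zeta_{\mathrm{top}}
```

Treating the three wavelength bands separately in this way is what allows the tailored model's local gravity content to refine the medium wavelengths while the GGM anchors the long wavelengths.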
Session Number and Title: GC41B: Connecting Cause and Effect in Analyses of Coupled Human and Geophysical Systems: The Early to Modern Anthropocene II (Online Poster Discussion). Abstract ID and Title: 1190904: Sustainability Analysis of the Indian Fashion Design Industry. Final Paper Number: GC41B-02. Session Date and Time: Thursday, 15 December 2022; 08:00 - 09:00.
* The researcher, Mohamed Akl, is funded by a full scholarship from the Ministry of Higher Education of the Arab Republic of Egypt. Abstract: The Gravity Recovery and Climate Experiment (GRACE) satellite mission has proven to be an excellent tool for monitoring changes in total water storage (TWS), which vertically integrates water storage changes from the land surface to the deepest aquifers. The objective of many GRACE studies is to isolate groundwater storage changes from changes in TWS using independent in-situ, remotely sensed, simulated, or assimilated data to remove the other water budget components. Using auxiliary datasets to account for water budget components has revealed large biases and uncertainties, especially over high-latitude regions, leading to accumulating errors in GRACE-GW estimates. Comparisons with in-situ groundwater observations permit assessments of how accurately groundwater storage signals can be isolated from TWS anomalies (TWSA). Goodness-of-fit (GOF) indices, e.g., Spearman correlation, mean square error (MSE), Nash-Sutcliffe Efficiency (NSE), and Kling-Gupta Efficiency (KGE), are commonly applied hydrologic fit metrics that express the similarity of time series. Such metrics are used here to compare GRACE-GW estimates with in-situ groundwater observations. The use of GOF indices is constrained by their substantial sampling uncertainty and controversial interpretation, which may lead to incorrect judgements about GRACE-GW estimates. Bias, nonlinearity, and non-normality introduce challenges in our use and interpretation of GOF metrics applied to GRACE-GW time series. The goal of this work is to improve the interpretation and use of GOF metrics to validate GRACE-GW estimates, highlighting the importance of assessing multiple GOF criteria beyond the simple correlation often applied in GRACE studies. Our results document that poor performance on GOF metrics does not simply translate to inaccurate extraction of GRACE-GW time series but may be attributed to the GOF metric applied.
We show that a rigorous assessment of GOF enhances our ability to interpret GRACE-GW change.
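The point that a single metric can mislead is easy to demonstrate with the standard definitions of NSE and KGE; a constant bias leaves correlation perfect while driving NSE negative. The implementations below follow the textbook formulas, applied to short illustrative series rather than GRACE data.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe Efficiency: 1 is perfect; <0 is worse than the mean."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def kge(obs, sim):
    """Kling-Gupta Efficiency: combines correlation, bias, and variability."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    r = np.corrcoef(obs, sim)[0, 1]
    beta = sim.mean() / obs.mean()      # bias ratio
    alpha = sim.std() / obs.std()       # variability ratio
    return 1.0 - np.sqrt((r - 1) ** 2 + (beta - 1) ** 2 + (alpha - 1) ** 2)

# A biased but perfectly correlated series: r = 1, yet NSE = -1,
# illustrating why poor GOF scores need not mean a poor signal extraction
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
sim = obs + 2.0                          # constant offset (bias only)
```

Here `nse(obs, sim)` is -1 while the correlation is exactly 1, so judging the simulated series by NSE alone would condemn a time series whose dynamics are captured perfectly; this is the kind of metric-dependent verdict the abstract cautions against.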
Biomass burning (BB) is one of the largest sources of absorbing aerosols globally and accounts for about 40% of black carbon in the atmosphere. The Southern African region contributes approximately 35% of Earth's BB aerosol emissions. During the Southern Hemisphere winter, smoke is transported over the southeast Atlantic Ocean, overlying and mixing with a semi-permanent stratocumulus cloud deck. Aerosol-cloud interactions contribute the largest uncertainty to anthropogenic forcing, and the southeast Atlantic region exhibits a large model-to-model divergence in climate forcing. This makes the region particularly valuable for understanding these interactions and was one of the factors motivating the three-year NASA ORACLES (ObseRvations of Aerosols above CLouds and their intEractionS) mission. Previous studies using ORACLES datasets have explored the distribution of aerosol and cloud particles; however, changes in some aerosol properties during transport are not well documented. This study investigates the evolution of biomass burning aerosol properties from emission within Southern Africa, through transport over land, and then over the Atlantic. Measurements from a collection of airborne in situ and remote-sensing instruments, including 4STAR (Spectrometer for Sky-Scanning, Sun-Tracking Atmospheric Research), along with the ground-based AERONET (Aerosol Robotic Network), are combined with results from two regional models, WRF-AAM and WRF-CAM5, to explore the changes in the optical properties of these smoke plumes as they age. The aerosol age is determined using tracers from WRF-AAM configured at 12 km resolution over the region's spatial domain (41ºS - 14ºN, 34ºW - 51ºE). Changes in extinction, single-scattering albedo (SSA), and Ångström exponent (AE) with age were quantified, and a comparative analysis between observations and model results was carried out using datasets from the airborne PSAP (Particle Soot Absorption Photometer) and nephelometers, 4STAR, AERONET, and WRF-CAM5.
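The two derived optical properties tracked above follow directly from their standard definitions: the AE from the spectral slope of extinction (or AOD) between two wavelengths, and the SSA as the scattering fraction of total extinction. A short sketch with hypothetical smoke-plume values:

```python
import numpy as np

def angstrom_exponent(ext1, ext2, wl1, wl2):
    """Angstrom exponent from extinction (or AOD) at two wavelengths."""
    return -np.log(ext1 / ext2) / np.log(wl1 / wl2)

def single_scattering_albedo(scattering, absorption):
    """SSA = scattering / (scattering + absorption)."""
    return scattering / (scattering + absorption)

# Hypothetical smoke-plume coefficients (Mm^-1) at 470 and 660 nm,
# illustrative of PSAP/nephelometer-style retrievals
ae = angstrom_exponent(120.0, 80.0, 470.0, 660.0)
ssa = single_scattering_albedo(scattering=100.0, absorption=15.0)
```

Tracking `ae` and `ssa` as functions of the WRF-AAM tracer age is what reveals, for example, whether the plume darkens (SSA decreases) or its size distribution coarsens (AE decreases) during transport.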
Animal behaviours such as dispersal and migration ensure species' survival in the landscape. It has been established over the past few decades that wildlife conservation and the study of animal movement in the wilderness are vital for a sustainable ecosystem. Thus, identifying regions with high movement permeability for the planning and maintenance of functional wildlife corridors has become a fundamental requirement for habitat management. This study focuses on the movement of big cats, the Bengal tiger (Panthera tigris tigris) and the leopard (Panthera pardus fusca), in the protected area of Rajaji National Park, situated in the Uttarakhand State of India. The national park is a designated tiger reserve supporting substantial populations of tigers and leopards. Here, Circuitscape was used to generate a connectivity map of the study area. The habitat suitability and resistance of the landscape were estimated based on a literature review and an expert opinion survey. Since both species have comparable ecological niches, similar habitat parameters were used to generate the resistance maps for the two species. Occurrence points for the species were downloaded from GBIF: 60% of the points were used as nodes or focal points where species presence is recorded, whereas 40% were used to validate the connectivity paths. The results depict the current density map of the study area, highlighting areas with high connectivity for the species.
Imperfect models are often used for forecasting and state estimation of complex dynamical systems, typically by mapping a reference initial state into model phase space, making a forecast, and then mapping back to the reference space. In many cases these mappings are implicit, and forecast errors thus reflect a combination of model forecast errors and mapping errors. Techniques to infer parameterizations and parameters that reduce model bias have been the subject of intense scrutiny; however, we lack a general framework for discovering optimal mappings between system and model attractors. Here we propose a novel machine learning paradigm for inferring cross-attractor transformations (CATs) that minimize forecast error. CATs are pairs of transformations from the phase space of a reference system to the phase space of a model and vice versa, serving as a bridge between the attractors of the true system and an imperfect model. A computationally efficient analog approximation to tangent linear and adjoint models is developed to enable efficient stochastic gradient descent algorithms to train CAT parameters. Neural networks constructed with a custom analog-adjoint layer permit the specification of affine transformations as well as more general nonlinear transformations.
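In the simplest (affine) case, one direction of a CAT reduces to learning a matrix and offset between paired reference-system and model states by stochastic gradient descent on the mapping error. The sketch below fits such a map on synthetic data with plain NumPy; the data, dimensions, and training loop are illustrative assumptions, not the paper's analog-adjoint machinery.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic paired states: reference-system states and corresponding model
# states related by an unknown affine map (a stand-in for a forward CAT)
X_ref = rng.normal(size=(500, 3))
W_true = np.array([[0.9, 0.1, 0.0],
                   [0.0, 1.1, -0.2],
                   [0.3, 0.0, 0.8]])
b_true = np.array([0.5, -0.3, 0.1])
X_model = X_ref @ W_true.T + b_true

# Fit the forward map (reference -> model) by mini-batch gradient descent
W = np.zeros((3, 3))
b = np.zeros(3)
lr = 0.05
for epoch in range(200):
    idx = rng.permutation(len(X_ref))
    for start in range(0, len(X_ref), 50):          # mini-batches of 50
        batch = idx[start:start + 50]
        err = X_ref[batch] @ W.T + b - X_model[batch]   # mapping residual
        W -= lr * err.T @ X_ref[batch] / len(batch)
        b -= lr * err.mean(axis=0)

mse = np.mean((X_ref @ W.T + b - X_model) ** 2)     # final mapping error
```

The reverse transformation (model back to reference space) would be trained the same way with the roles of the two data sets swapped; in the full framework, both directions sit inside the forecast loop so that the training loss is forecast error rather than this direct state-matching error.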
Recently, Zhou (2022) reported a temporal change of seismic velocity in the Earth's outer core based on relative travel time differences of the SKS phase between several "doublets". The study further suggested the existence of a possible 2-3% density deficit in the outer core and a localized transient flow with a speed of ~40 km/year. We examine the seismic data of the best-quality "doublet" (event pair 19970503-20180910) reported in the study. We relocate the "doublet" using a master-event relocation method (Wen, 2006) applied to the seismic data of compressional waves that travel outside the outer core, including P or Pdiff, pP or pPdiff, pPn, PP or PdiffPdiff, and PcP waves recorded at the global seismographic network. The later event (20180910) is found to be located 14.20 km away, at 204.33°NW of the earlier event (19970503), with a source depth 1.45 km deeper. After correction for the effects of relative source location and origin time, the SKS signals exhibit no discernible relative travel time differences between the two events in the frequency band ≥0.2 Hz at all four of the most anomalous stations (COLA, INK, ULN, YAK) reported in Zhou (2022). However, the SPdKS-SKPdS phases, which start bifurcating from the SKS phases at the distance range of those four reported anomalous stations, exhibit evident changes of waveform and travel time between the events. The "SKS signals" used in Zhou (2022), which had a 50-s time window and were filtered between 0.01 and 0.05 Hz, contain signals of both the SKS and SPdKS-SKPdS phases. It is the changes in the SPdKS-SKPdS phases, not those in the SKS phases, that generate the apparent time shift in the low-frequency filtered "SKS signals" reported in Zhou (2022). The SPdKS-SKPdS phases of those reported anomalous stations sample a lowermost mantle region populated with ultra-low velocity zones (ULVZs).
The separation of the two events is large, and the SPdKS-SKPdS phases would sample the ULVZs along slightly different paths for the two events, yielding different waveforms and travel times (Wen & Helmberger, 1998). We conclude that there is no observable temporal change of seismic properties in the Earth's outer core in the seismic data used in Zhou (2022), and that the reported relative travel time difference in the "SKS signals" in Zhou (2022) is caused by waveform and relative travel time changes in the SPdKS-SKPdS phases due to slightly different sampling paths to the ULVZs at the bottom of the mantle between the events.