A high-resolution model of exospheric temperatures has been developed, with the objective of predicting global exospheric temperatures with greater accuracy. From these temperatures, neutral densities in the thermosphere can be calculated. The model is derived from measurements of neutral densities on the CHAMP, GRACE, and Swarm satellites. These data were sorted into 1620 triangular cells on a spherical, polyhedral grid, using coordinates of geographic latitude and local solar time (longitude). A least-error fit of the data is used to obtain a separate set of regression coefficients for each grid cell. Several versions of model functions have been tested, using parameters such as day-of-year, Universal Time, solar indices, and emissions from nitric oxide in the thermosphere, as measured with the SABER instrument on the TIMED satellite. Accuracy is improved by adding parameters that use the total Poynting flux flowing into the Northern and Southern hemispheres. This energy flux is obtained from the solar wind velocity and interplanetary magnetic field, using an empirical model. Given a specific date, time, and other inputs, a global map of the exospheric temperature is obtained. These maps show significant variability in the polar regions that is strongly modulated by the time of day, due to the rotation of the magnetic poles around the geographic poles. Values at specific locations are obtained using a triangular interpolation of these results. Comparisons of the modeled exospheric temperatures with neutral density measurements show very good agreement.
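The final interpolation step lends itself to a short sketch: a value inside one triangular grid cell is a barycentric weighting of the three vertex temperatures. The helper names, vertex coordinates, and temperature values below are illustrative, not taken from the model itself:

```python
def barycentric_weights(p, a, b, c):
    """Barycentric coordinates of point p in triangle (a, b, c), via Cramer's rule."""
    det = (b[1] - c[1]) * (a[0] - c[0]) + (c[0] - b[0]) * (a[1] - c[1])
    w0 = ((b[1] - c[1]) * (p[0] - c[0]) + (c[0] - b[0]) * (p[1] - c[1])) / det
    w1 = ((c[1] - a[1]) * (p[0] - c[0]) + (a[0] - c[0]) * (p[1] - c[1])) / det
    return w0, w1, 1.0 - w0 - w1

def interpolate_triangle(p, verts, temps):
    """Linearly interpolate the three vertex temperatures at point p."""
    w = barycentric_weights(p, *verts)
    return sum(wi * ti for wi, ti in zip(w, temps))

# Illustrative cell: vertex coordinates (latitude-like, LST-like) and temperatures (K)
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
temps = [900.0, 1000.0, 1100.0]
t = interpolate_triangle((0.25, 0.25), verts, temps)  # 975.0
```

The weights sum to one by construction, so the interpolated value always stays within the range of the three cell-vertex temperatures.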
Understanding the effects of water, energy, and food (WEF) consumption and production on socio-economic and environmental indicators has become a strategic issue for all countries, especially the developing ones, which depend on natural resources to promote economic growth. Our main objective with this study is to quantify and understand the interconnections (Nexus) of WEF production and consumption with regional economic growth and social development in Brazil. We use a multi-input, multi-output approach based on a Data Envelopment Analysis (DEA) model to calculate efficiency indicators over time for various municipalities in Brazil. We assume that a high input-output efficiency level indicates that a certain municipality can reach larger output benefits with less WEF consumption. The time-based approach using the Malmquist Index model enables us to determine whether cities’ WEF input-output efficiencies have been rising or declining over time. The spatio-temporal analysis is appropriate for indicating the level of interdependence between the WEF Nexus and the demographic, economic, and environmental systems in Brazil. We expect that our results can help policymakers establish regional and city-level policies that promote a more efficient use of WEF resources.
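The efficiency-change ("catch-up") component of such an analysis can be sketched with a one-input, one-output toy that scales each municipality's output/input ratio by the best ratio in the sample. A real DEA model solves a linear program per decision unit over multiple inputs and outputs; the municipality data below are hypothetical:

```python
def efficiency(units):
    """Output/input ratios scaled by the best ratio in the sample
    (a one-input, one-output stand-in for a full DEA linear program)."""
    ratios = {name: out / inp for name, (inp, out) in units.items()}
    best = max(ratios.values())
    return {name: r / best for name, r in ratios.items()}

# Hypothetical municipalities: (aggregate WEF input, socio-economic output)
year1 = {"A": (10.0, 5.0), "B": (8.0, 6.0)}
year2 = {"A": (10.0, 6.0), "B": (8.0, 6.0)}

e1, e2 = efficiency(year1), efficiency(year2)
# Efficiency-change component of a Malmquist-style index (>1 means catching up)
change = {m: e2[m] / e1[m] for m in e1}  # A improves; B holds steady
```

The full Malmquist Index additionally multiplies this catch-up term by a frontier-shift term measuring movement of the best-practice frontier itself between the two years.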
Water scarcity, population growth, and climate change dilemmas urgently require adaptation strategies for a more efficient and sustainable use of water resources. Agricultural systems are part of a wider network, where all social, economic, and ecological parameters must be taken into consideration to assess the performance and resilience of that network. The importance of accounting for the complexity of human decisions and their impact on the water cycle has been increasingly studied; nevertheless, the integration and analysis of different decision-making theories in hydrological models remains a major challenge and source of uncertainty. The ongoing project therefore aims to improve the understanding of social dynamics in agrohydrological networks by assessing different irrigation practices, including rainfed agriculture and deficit irrigation, within a hydro-economic network. We developed an agent-based model (ABM) of farmer decision making on crop water productivity and groundwater levels using two existing optimization models: (i) the Assessment, Prognosis, Planning and Management Tool (APPM) (Schmitz et al. 2010), which integrates the complex interactions of the strongly nonlinear meteorological, hydrological, and agricultural phenomena while considering socio-economic aspects, and (ii) the Deficit Irrigation Toolbox (DIT) (Schütze and Mialyk 2019), which maximizes crop-water productivity by analyzing the crop yield response to climate change, soil variability, and water management practices. The developed ABM was assessed against different theories of human decision making based on the Modelling Human Behavior (MoHuB) framework (Schlüter et al. 2017). This study yields a sensitivity analysis of how different behavioral theories affect the dynamics of social-ecological systems, which enables evaluating the robustness of policy implementation under different assumptions of human behavior, with cooperation as a mechanism to improve resilience.
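A minimal farmer agent of the kind used in such ABMs can be sketched as a profit-maximizing strategy choice. The strategy set, yields, and prices below are hypothetical and stand in for the much richer APPM/DIT formulations:

```python
class Farmer:
    """Illustrative agent choosing an irrigation strategy by expected profit."""
    def __init__(self, water_right):
        self.water_right = water_right  # seasonal water allocation (mm)

    def decide(self, strategies, crop_price, water_price):
        # strategies: name -> (expected yield t/ha, fraction of allocation used)
        profits = {
            name: y * crop_price - frac * self.water_right * water_price
            for name, (y, frac) in strategies.items()
        }
        return max(profits, key=profits.get)

# Hypothetical options: rainfed, deficit irrigation, full irrigation
strategies = {"rainfed": (3.0, 0.0), "deficit": (5.0, 0.5), "full": (6.0, 1.0)}
choice = Farmer(water_right=400.0).decide(strategies, crop_price=100.0,
                                          water_price=0.6)  # "deficit"
```

Swapping in other decision rules from the MoHuB family (e.g., satisficing or imitation of neighbors instead of profit maximization) changes only the `decide` method, which is what makes the behavioral sensitivity analysis tractable.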
This research was funded by the Technische Universität Dresden, by means of the Excellence Initiative by the German Federal and State Governments.
Formal international standards, as well as promotion of community or recommended practices, have their place in ensuring “FAIRness” of data. Data management in NASA’s Earth Observing System Data and Information System (EOSDIS) has benefited from both of these avenues to a significant extent. The purpose of this paper is to present one example of each, both of which promote (re)usability. The first is an ISO standard for specifying preservation content from Earth observation missions. Work on this started in 2011, informally within the Earth Science Information Partners (ESIP) in the US, while the European Space Agency (ESA) was leading an effort on Long-Term Data Preservation (LTDP). Resulting from the ESIP discussions was NASA’s Preservation Content Specification (PCS), which was applied in 2012 as a requirement for NASA’s new missions. ESA’s Preserved Data Set Content (PDSC) specification was codified into a document adopted by the Committee on Earth Observation Satellites (CEOS). It was recognized that it would be useful to combine the PCS and PDSC into an ISO standard to ensure consistency in data preservation on a broader international scale. This standard, numbered ISO 19165-2, has been under development since mid-2017. The second is an example of developing recommendations for “best practices” within more limited (though still fairly broad) communities. A Data Product Developers’ Guide (DPDG) is currently being developed by one of NASA’s Earth Science Data System Working Groups (ESDSWGs). It is intended for developers of products derived from Earth observation data, to improve product (re)usability. One of the challenges in developing the guide is that there are already many applicable standards and guides. The relevant information needs to be selected and expressed succinctly, with appropriate pointers to references.
The DPDG aims to compile the most applicable parts of earlier guides into a single document outlining the typical development process for Earth Science data products. Standards and best practices formally endorsed by the Earth Science Data and Information System (ESDIS) Standards Office (ESO), outputs from ESDSWGs (e.g., Dataset Interoperability Working Group, and Data Quality Working Group), and recommendations from Distributed Active Archive Centers and data producers are emphasized.
Arctic sea ice coverage has changed considerably over the last few decades. Record minimums in sea ice extent have been observed in recent years, the distribution of sea ice age now heavily favors younger ice, and sea ice is thinning. To investigate the response of the ice pack to climate forcing during summertime melt, we have developed a technique to track individual Arctic sea ice parcels, along with associated properties, as these parcels advect through the Arctic Ocean. This sea ice parcel tracking method utilizes our sea ice motion dataset, archived at NASA’s National Snow and Ice Data Center. Tracked sea ice parcel locations are matched with other environmental products that influence sea ice growth and decay. We have recently tracked variables such as ice surface temperature, albedo, ice concentration, and ice thickness for hundreds of sea ice parcels, each defined as occupying one EASE-Grid cell location. These parcels can be tracked through a melt season to determine the relative influence of these and other properties on sea ice melt, along with what fraction of the parcels survive the summer. Here, we focus on tracking recent sea ice freeboard, derived from NASA’s ICESat-2 observations. We examine the evolution of ice thickness for individual parcels through the recent ICESat-2 record to determine the rate of ice growth over part or all of the winter, depending on product availability (as of this writing, freeboard is available through Dec 27, 2018). We determine how sea ice growth varies for hundreds of parcels, and how growth is affected by location and time.
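The tracking step itself reduces to advecting a parcel through a drift field and sampling co-located products along the way. The uniform drift below is illustrative, standing in for the gridded NSIDC ice-motion vectors:

```python
def advect_parcel(pos, drift, days, dt=1.0):
    """Advect one parcel through a (u, v) drift field (km/day), returning
    its daily track. `drift` stands in for gridded ice-motion vectors."""
    x, y = pos
    track = [(x, y)]
    for _ in range(days):
        u, v = drift(x, y)              # sample drift at the current position
        x, y = x + u * dt, y + v * dt   # forward Euler step
        track.append((x, y))
    return track

# Hypothetical uniform drift: 5 km/day east, 2 km/day north
track = advect_parcel((0.0, 0.0), lambda x, y: (5.0, 2.0), days=10)
# track[-1] == (50.0, 20.0)
```

Each daily position in `track` is where co-located variables (surface temperature, albedo, concentration, freeboard) would be sampled from their respective gridded products.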
Students in grades K-16 in the United States and Canada joined their GLOBE Program peers from across the world in collecting water quality measurements during a week-long data-collection period in September, led by the GLOBE Africa Regional Coordination Office. The project builds on other GLOBE collaborations around spring phenology measurements (Europe) and expeditions to Mt. Kilimanjaro and Lake Victoria (Africa). The efforts and resulting analysis of Collaboration: Water were supported by an international team of scientists, faculty, and education professionals. The GLOBE Program Country Coordinators from the U.S. and Canada will share the project goals, discuss the results of the September data challenge, and show how these lead into the community-based collaboration projects being developed between schools. Some of the projects will be presented during the International Virtual Science Symposium and Student Research Symposia in spring 2020. This project works on several levels: it builds resiliency locally through community-based inquiry, supports the development of 21st-century critical thinking, collaboration, and communication skills, and places the community investigations into the global context of United Nations Sustainable Development Goal 6 (Clean Water and Sanitation). Along with tools, templates, and the benefits of participation, the presenters will share how other communities can be involved in the March data collection event.
It is now widely understood that seasonal snow cover in the Western United States is melting earlier than in past decades. This could have significant consequences for human populations and ecosystems dependent on the regular timing and magnitude of downstream flows that originate as snow. However, while earlier melt is well established, less is known about intra-annual changes in the spatial and temporal distribution of accumulation and ablation (melt) cycles in the core winter and spring months, i.e., the ‘persistence’ of seasonal snow cover. This is significant because changes to the persistence of seasonal snow in winter and spring could have important implications for other snow cover characteristics, such as albedo, as well as for ancillary hydrologic factors such as soil moisture and runoff. To understand these changes in persistence, this project focuses on study basins in different climatic zones of the Columbia River Basin, capturing the shift from maritime snowpack in the west to alpine snowpack in the east. The research relies on a combination of time series analysis of NRCS SNOTEL stations and snow courses, and an optical remote sensing product based on the MODIS MOD10A1 dataset. To compensate for significant winter and spring cloud cover, particularly in the Pacific Northwest, a temporal and spatial gap filling approach utilizing higher-spatial-resolution products (e.g., Landsat and Sentinel-2) is implemented, primarily in Google Earth Engine. The seasonal snow persistence from the MODIS-based product is evaluated using additional Landsat, Sentinel-2, and Planet Labs data, as well as data from the in situ monitoring stations. Finally, changes in intra-annual seasonal snow cover persistence are characterized for the core winter, spring, and early summer months along an elevational gradient and across study sub-basins.
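A purely temporal version of such gap filling can be sketched in a few lines; the actual approach also fills spatially with higher-resolution imagery, and the observation series here is made up for illustration:

```python
def gap_fill(series, max_gap=3):
    """Replace cloud-obscured days (None) with the last clear-sky value,
    for gaps up to max_gap days; longer gaps stay unfilled."""
    filled, last, age = [], None, 0
    for v in series:
        if v is not None:
            last, age = v, 0
            filled.append(v)
        else:
            age += 1
            filled.append(last if last is not None and age <= max_gap else None)
    return filled

# Daily snow flags: 1 = snow, 0 = snow-free, None = cloud-obscured
obs = [1, None, None, 1, 0, None, None, None, None, 0]
out = gap_fill(obs)  # the 4-day cloudy run is only partially filled
```

Capping the fill length (`max_gap`) guards against carrying a stale snow/no-snow state through long cloudy periods during which the surface may actually have changed.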
The Space Physics Data Facility (SPDF, https://spdf.gsfc.nasa.gov) and Solar Data Analysis Center (SDAC, https://umbra.nascom.nasa.gov/), as the NASA Heliophysics active final archives, will be preserving and distributing the data from Parker Solar Probe. Working in cooperation with current operating missions and the heliophysics community, SPDF ingests, preserves, and serves a wide range of past and current public science-quality data, from the ionosphere to the farthest reaches of deep-space exploration. SPDF has been working with the Parker Solar Probe mission in preparation for archiving and serving its in-situ data starting 2019 Nov 12, and also has arrangements to serve in-situ data from Solar Orbiter when those data become public. SPDF will facilitate scientific analysis of multi-instrument and multi-mission datasets to enhance the science return of the Parker Solar Probe mission. SPDF develops and maintains the Common Data Format (CDF) and the associated ISTP/SPDF metadata guidelines. SPDF services include CDAWeb, which supports both survey and burst mode data with graphics, listings, and data superset/subset functions. All public data held by SPDF are also available for direct file download via HTTPS or FTPS links from the SPDF home page (https://spdf.gsfc.nasa.gov). SPDF is currently receiving and serving data from missions including Helios, MMS, Van Allen Probes, THEMIS/ARTEMIS, GOLD, ACE, Cluster, Geotail, Polar, Wind, and many others, as well as more than 120 ground-based investigations. SPDF recently added support for ARASE/ERG and MAVEN as supplementary access at the request of those missions. SPDF also operates the multi-mission orbit displays and query services of SSCWeb and the Java-based 4D Orbit Viewer, as well as the Heliophysics Data Portal (HDP) discipline-wide data inventory and access service, and the OMNIWeb near-Earth solar wind plasma and magnetic field database.
Over the past third of a century, the Incorporated Research Institutions for Seismology (IRIS) has facilitated observational seismology in many ways. At its founding in 1984, with the support of the National Science Foundation and in partnership with the US Geological Survey, IRIS embarked on deploying the Global Seismographic Network (GSN). Key characteristics of the GSN are its use of high-performance digitizers, very-broad-band seismometers, strong-motion accelerometers, and high-frequency sensors to provide multi-decadal observations across a wide frequency band and dynamic range. The IRIS Portable Array Seismic Studies of the Continental Lithosphere (PASSCAL) program has also operated since 1984. PASSCAL’s extensive inventory of seismic equipment has been used by scientists to make observations in every part of the globe. The number and breadth of observations made with this equipment have fueled thousands of research papers and contributed to the education of hundreds, if not thousands, of students. More recently, the IRIS-operated EarthScope Transportable Array (TA) provided a breakthrough in the systematic collection of data using an array of unprecedented size. The success of the TA has ushered in a new era of “Large N” seismology, focused on dense spatial coverage of sensors to reduce aliasing and provide more complete recording of the full wavefield. The TA highlighted the power of survey-mode data collection, where systematic, spatially dense, and high-quality data fuel data-driven discovery, as opposed to deployments made to test a specific hypothesis. Key future directions in observational seismology include an increasing emphasis on wavefield measurements.
Deploying instruments in large numbers requires reductions in the size, weight, and power of units, as well as a focus on dirt-to-desktop data management strategies that merge data and metadata while minimizing human intervention with the data flow from the sensor in the dirt to the scientist’s desktop. Other critical frontiers include pervasive seafloor observations to enable studies of key structures like subduction zones, more accessible satellite telemetry to enable ubiquitous sensing of the environment, and new sensing technologies such as MEMS and Distributed Acoustic Sensing.
Seismic site characterization of rock and soil properties has a large impact on earthquake ground motions and engineering seismology, especially for the evaluation of local site amplification, calibration of strong-motion records and realistic shaking estimates at seismic stations, site-specific hazard assessment, estimation of ground motion models, and soil classification for building code applications. However, there is not yet a common way to exchange site characterization information, and establishing standard practices and quality assessment is becoming very important for producing high-quality metadata. Within the framework of the SERA (“Seismology and Earthquake Engineering Research Infrastructure Alliance for Europe”) Horizon 2020 project, a networking activity is leading to the definition of a European strategy and standards for site characterization of seismic stations in Europe. Based on the results of an online questionnaire, we first defined a list of indicators considered mandatory for a reliable site characterization: fundamental resonance frequency, shear-wave velocity profile (Vs), time-averaged Vs over the first 30 m (Vs30), depth of seismological and engineering bedrock, surface geology, and soil class. We then proposed a summary report for each indicator, containing the most significant background information on data acquisition and processing details, and a quality metrics scheme. This requires evaluating both (i) the reliability of the site characterization indicators provided by different methods, and (ii) the consistency among the indicators according to the current knowledge and experience of the scientific community. Application of the quality metrics to some Italian accelerometric sites, characterized within the Italian Civil Protection Department-INGV agreement (2016 to 2021), highlights their ability to capture characterization quality.
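One of these indicators, the time-averaged shear-wave velocity over the first 30 m (Vs30), has a standard closed form: Vs30 = 30 / Σ(h_i / Vs_i), i.e., 30 m divided by the total vertical travel time through the layers. The layered profile below is illustrative:

```python
def vs30(layers):
    """Time-averaged shear-wave velocity over the top 30 m:
    Vs30 = 30 / sum(h_i / Vs_i), truncating the profile at 30 m depth."""
    depth, travel_time = 0.0, 0.0
    for thickness, vs in layers:
        h = min(thickness, 30.0 - depth)   # clip the layer at 30 m depth
        if h <= 0.0:
            break
        travel_time += h / vs
        depth += h
    return 30.0 / travel_time

# Illustrative profile: (layer thickness m, Vs m/s);
# only the top 10 m of the deepest layer counts toward Vs30
layers = [(5.0, 200.0), (15.0, 400.0), (20.0, 800.0)]
v = vs30(layers)  # ~400 m/s
```

Because Vs30 is a travel-time (harmonic-style) average, slow near-surface layers dominate the result, which is why it is widely used for soil classification in building codes.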
These guidelines have been shared with the European and worldwide scientific communities and validated through focus groups during a dedicated workshop (https://sites.google.com/view/site-characterization-workshop/). They represent a first attempt to achieve high-quality metadata for site characterization, with the awareness that they can be improved and modified after a few years of experience and feedback from users.
Observed surface temperature trends over recent decades are characterized by (i) intensified warming in the Indo-Pacific Warm Pool and slight cooling in the eastern equatorial Pacific, consistent with strengthening of the Walker circulation, and (ii) cooling in the Southern Ocean. In contrast, state-of-the-art coupled climate models generally project Walker circulation weakening, enhanced warming in the eastern equatorial Pacific, and warming in the Southern Ocean. Here we investigate the ability of 16 climate model large ensembles to reproduce observed sea-surface temperature and sea-level pressure trends over 1979-2020 through a combination of externally forced climate change and internal variability. We find large-scale differences between observed and modeled trends that are very unlikely (<5% probability) to occur due to internal variability as represented in models. Disparate trends are found even in regions with weak multi-decadal variability, suggesting that model biases in the transient response to anthropogenic forcing constitute part of the discrepancy.
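The "very unlikely (<5% probability)" statement corresponds to a simple rank test of the observed trend against an ensemble's spread of internally generated trends. The trend values below are made up for illustration:

```python
def tail_probability(observed, ensemble):
    """Fraction of ensemble members with a trend at least as low as the
    observed one: a rank-based check of whether internal variability,
    as sampled by the ensemble, can produce the observed trend."""
    return sum(1 for t in ensemble if t <= observed) / len(ensemble)

# Hypothetical eastern-Pacific SST trends (K/decade): a 50-member
# ensemble projects warming, while the observed trend is slightly negative
ensemble = [0.05 + 0.01 * i for i in range(50)]
p = tail_probability(-0.02, ensemble)  # 0.0 -> internal variability cannot explain it
```

When `p` falls below the chosen threshold (here 5%), the observed trend lies outside what the modeled combination of forced response plus internal variability can produce, pointing to a model bias in the forced response.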
Soil organic carbon (SOC) stocks represent a large component of the global carbon cycle that is sensitive to warming. Modeling and empirical studies often assume that temperature responses of microbial physiological functions and extracellular enzymatic reactions are predictive of ecosystem-scale SOC decomposition responses to warming. However, temperature-dependent soil trophic interactions such as predation of microbial decomposers by other organisms have not yet been incorporated into quantitative SOC models. Here, we incorporated a microbial predator into a tri-trophic population ecology model and a global-scale predictive SOC model to determine how predation would affect soil community population dynamics and temperature sensitivity of SOC stocks. Predators increased SOC stocks and their dependence on substrate input rates. Top-down controls of predators on microbial biomass caused SOC warming responses to diverge from microbial temperature responses, with warming-induced SOC losses reduced or reversed when predators were more temperature-sensitive. Our results suggest that higher trophic levels can reduce the sensitivity of SOC to warming, and that differences in temperature sensitivity across trophic levels may be a key determinant of SOC warming responses.
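The population-dynamics core of such a model can be sketched as a substrate-microbe-predator system integrated with forward Euler. The structure below is a generic Lotka-Volterra-style tri-trophic chain, and all parameter values are hypothetical, not the paper's calibrated model:

```python
def step(S, M, P, dt=0.01, inputs=1.0, u=0.02, e=0.5, g=0.05,
         d_m=0.1, e_p=0.3, d_p=0.05):
    """One Euler step: substrate S, microbial biomass M, predators P."""
    dS = inputs - u * S * M                    # litter inputs minus decomposition
    dM = e * u * S * M - g * M * P - d_m * M   # growth, predation loss, mortality
    dP = e_p * g * M * P - d_p * P             # predator growth and mortality
    return S + dS * dt, M + dM * dt, P + dP * dt

S, M, P = 50.0, 5.0, 1.0
for _ in range(100):        # integrate to t = 1 with dt = 0.01
    S, M, P = step(S, M, P)
```

Warming responses are then explored by making the rate constants (e.g., `u` for decomposition, `g` for attack rate) temperature-dependent; if the predator terms respond more strongly to temperature than the microbial ones, the top-down control on M grows with warming, which is the mechanism the study identifies for damped SOC losses.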
Current approaches to estimating NOx emissions fail to account for new and small sources, biomass burning, and sources that change rapidly in time; generally do not account for measurement error; and are either based on models or do not consider wind, chemistry, and dynamical effects. This work introduces a new, model-free analytical environment that assimilates daily TROPOMI NO2 measurements in a mass-conserving manner to invert daily NOx emissions. It is applied over a rapidly developing and energy-consuming region of Northwest China, chosen for its substantial economic and population changes, new environmental policies, large use of coal, and access to independent emissions measurements for validation, making it representative of many rapidly developing regions across the Global South. The technique computes a net NOx emissions gain of 70%, distributed in a seesaw manner: a more than doubling of emissions in cleaner regions, at chemical plants, and in regions thought to be emissions-free, combined with a more than halving of emissions in city centers and at well-regulated steel plants and power plants. The results allow attribution of sources, with the major contributing factors computed to be increased combustion temperature, atmospheric transport, and in-situ chemical processing. It is hoped that these findings will drive a new look at emissions estimation and its relationship to remotely sensed measurements and associated uncertainties, especially in rapidly developing regions. This is especially important for understanding the loadings and impacts of short-lived climate forcers, and it provides a bridge between remotely sensed data, measurement error, and models, while allowing for further improvement in the identification of new, small, and rapidly changing sources.
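At the level of a single grid cell, the mass-conserving idea reduces to a box-model balance: emissions must equal chemical loss plus net transport out of the cell. The steady-state sketch below is illustrative only, with hypothetical units, and omits the daily-assimilation and time-tendency machinery:

```python
def invert_emissions(column, lifetime, inflow, outflow):
    """Steady-state box inversion for one cell: emissions balance
    chemical loss (column / lifetime) plus net transport out.
    All quantities must be in consistent units (e.g., mol and mol/day)."""
    return column / lifetime + (outflow - inflow)

# Hypothetical cell: TROPOMI-style NO2 column of 5 units, 0.25-day
# chemical lifetime, with wind carrying 2 units/day in and 6 units/day out
e = invert_emissions(column=5.0, lifetime=0.25, inflow=2.0, outflow=6.0)
# e == 24.0 units/day
```

Ignoring the transport term (as column-only methods effectively do) would give 20 units/day here, illustrating why wind and chemistry must both enter a per-cell inversion.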
Sediment trapping behind dams is currently a major source of bias in large-scale hydro-geomorphic models, hindering robust analyses of anthropogenic influences on sediment fluxes in freshwater and coastal systems. This study focuses on developing a new reservoir trapping efficiency (Te) parameter to account for the impacts of dams in hydrological models. This goal was achieved by harnessing a novel remote sensing data product which offers high-resolution and spatially continuous maps of suspended sediment concentration across the Contiguous United States (CONUS). Validation of remote sensing-derived surface sediment fluxes against USGS depth-averaged sediment fluxes showed that this remote sensing dataset can be used to calculate Te with high accuracy (R2 = 0.98). Te calculated for 116 dams across the CONUS, using upstream and downstream sediment fluxes from their reservoirs, range from 0.3% to 98% with a mean of 43%. Contrary to the previous understanding that large reservoirs have larger Te and vice versa, these data reveal that large reservoirs can have a wide range of Te values. A suite of 21 explanatory variables were used to develop an empirical Te model using multiple regression. The strongest model predicts Te using five variables: dam height, incoming sediment flux, outgoing water discharge, reservoir length, and Aridity Index. A global model was also developed using explanatory variables obtained from a global dam database to conduct a global-scale analysis of Te. These CONUS- and global-scale Te models can be integrated into hydro-geomorphic models to more accurately predict river sediment transport by representing sediment trapping in reservoirs.
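The core Te calculation is a simple flux balance per reservoir: the fraction of the incoming sediment flux that does not leave downstream. The flux values below are illustrative:

```python
def trapping_efficiency(flux_in, flux_out):
    """Reservoir trapping efficiency from upstream (flux_in) and
    downstream (flux_out) suspended-sediment fluxes:
    Te = (Qs_in - Qs_out) / Qs_in."""
    return (flux_in - flux_out) / flux_in

# Illustrative annual sediment fluxes (t/yr) derived from
# remote-sensing surface concentrations and water discharge
te = trapping_efficiency(flux_in=120_000.0, flux_out=68_400.0)
print(f"Te = {te:.0%}")  # Te = 43%
```

The regression model in the study then predicts this quantity from reservoir and catchment attributes (dam height, incoming sediment flux, outgoing discharge, reservoir length, and aridity) where paired upstream/downstream flux estimates are unavailable.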
We propose an integrated dynamics-physics coupling framework for weather- and climate-scale models. Each physical parameterization would be advanced on its natural time scale, revised to include a moist thermodynamic relationship, and finally integrated into the relevant components of the dynamical core. We show results using a cloud microphysics scheme integrated within the dynamical core of the GFDL SHiELD weather model to demonstrate the promise of this concept; we call it the in-line microphysics because it is in-lined within the dynamical core. Statistics gathered from one year of weather forecasts show significantly better prediction skill when the model is upgraded to use the in-line microphysics, although some biases worsen. The in-line microphysics also shows larger-amplitude and higher-frequency variations in cloud structures within a tropical cyclone than the traditionally coupled microphysics. Finally, we discuss prospects for further development of this integrated dynamics-physics coupling.
This paper presents a new coupled urban change and hazard consequence model that considers population growth, a changing built environment, natural hazard mitigation planning, and future acute hazards. Urban change is simulated as an agent-based land market with six agent types and six land use types. Agents compete for parcels, with successful bids leading to changes in both urban land use – affecting where agents are located – and the structural properties of buildings – affecting a building’s ability to resist damage from natural hazards. IN-CORE, an open-source community resilience model, is used to compute damage to the built environment. The coupled model operates under constraints imposed by planning policies defined at the start of a simulation. The model is applied to Seaside, Oregon, a coastal community in the North American Pacific Northwest subject to seismic-tsunami hazards emanating from the Cascadia Subduction Zone. Ten planning scenarios are considered, including caps on the number of vacation homes, relocating community assets, limiting new development, and mandatory seismic retrofits. By applying this coupled model to the testbed community, we show: (1) placing a cap on the number of vacation homes results in more visitors in damaged buildings; (2) mandatory seismic retrofits do not reduce the number of people in damaged buildings when population growth is considered; (3) policies diverge beyond year 10 in the model, indicating that many policies take time to realize their implications; and (4) the most effective policies are those that incorporate elements of both urban planning and enforced building codes.
Temperature extremes have been related to anomalies in the large-scale circulation, but how these anomalies alter the surface energy balance is less clear. Here, we attribute extremes in daytime and nighttime temperatures over the eastern Tibetan Plateau to anomalies in the surface energy balance. We find that daytime temperature extremes are mainly caused by altered solar radiation, while nighttime extremes are controlled by changes in downwelling longwave radiation. These radiation changes are largely controlled by cloud variations, which are in turn associated with certain large-scale circulations through their modulation of vertical air motion and horizontal cloud convergence. Anomalies in heat advection, soil moisture, and snow albedo play secondary roles in triggering the initial change and contribute mostly to maintaining its duration. These mechanisms are consistent across winter and summer and also hold for cold extremes. Our work implies more frequent and severe warm nights and compound warm events over the Tibetan Plateau in the future.
One of the central challenges for global food security is the growing pressure from increasingly frequent extreme weather events that result in sharp drops in crop yield and disruptions in the food supply. Such pressure can potentially be alleviated by international crop trade, which plays a crucial role in reallocating food commodities from surplus to deficit regions. However, few studies have examined the influence of extreme weather events and the synchrony of crop yield anomalies on trade linkages among nations. To investigate this influence, we used the international trade network of wheat as an example, developed relevant covariates, and tested specialized statistical and machine learning methods. The results show that countries with larger differences in extreme weather stress tend to have higher import volumes and more trade partners. Trade partnerships are more likely to be established between countries with synchronous yield variations. These findings indicate that increases in heat stress and co-occurring yield losses could lead to higher future dependence on imports, especially for vulnerable import-dependent nations, and affect the stability of the wheat supply. Hence, the current international trade network needs to be improved by accounting for the patterns of extreme weather and yield synchrony among countries.