Aerosols interact with radiation and clouds. Substantial progress over the past 40 years in observing, understanding, and modeling these processes has helped quantify the imbalance in the Earth’s radiation budget caused by anthropogenic aerosols, called aerosol radiative forcing, but uncertainties remain large. This poster presents the outcome of an international workshop and subsequent review paper, which quantify the likely range of aerosol radiative forcing over the industrial era based on multiple lines of evidence, including modelling approaches, theoretical considerations, and observations. Improved understanding of aerosol absorption and of the causes of trends in surface radiative fluxes narrows the range of the forcing from aerosol-radiation interactions compared to the latest assessment by the Intergovernmental Panel on Climate Change (IPCC). A robust theoretical foundation and convincing evidence constrain the forcing caused by aerosol-driven increases in liquid cloud droplet number concentration. However, the influence of anthropogenic aerosols on cloud liquid water content and cloud fraction and on mixed-phase and ice clouds remains poorly constrained. Observed changes in surface temperature and radiative fluxes provide additional constraints. These multiple lines of evidence lead to total aerosol radiative forcing ranges that are of similar width to the last IPCC assessment but more clearly based on physical arguments.
We develop a Hamiltonian Monte Carlo (HMC) sampler which solves a multi-parameter elastic full-waveform inversion (FWI) in a probabilistic setting for the first time. This gives novel access to the full posterior distribution for this type of highly non-linear inverse problem. Typically, FWI has focused on using gradient descent methods with proper regularization to iteratively update models to a minimum misfit value. Non-uniqueness and uncertainties are largely left unquantified in this approach. Bayesian inversions offer an alternative by assigning a probability to each model in model space given some data and prior constraints. The drawback is the need to evaluate a very large number of models. Random walks from Markov chains counter this effect by only exploring regions of model space where probability is significant. The HMC method additionally incorporates gradient information, i.e. local structure, typically available for numerical waveform tomography experiments. So far, HMC has only been implemented for acoustic FWI. We implement HMC for multiple 2D elastic FWI set-ups. Using parallelized wave propagation code, wavefields and kernels are computed on a regular numerical grid and projected onto basis functions. These gradients are subsequently used to explore the posterior space of different target models using HMC. The free parameters in these experiments are P and S velocity, and density. Although simulating Hamiltonian dynamics in the resulting phase space is approximated numerically, the results of the Markov chain are nevertheless very insightful. No prior tuning of kernels, data or model space is required, provided that the sampler itself is properly tuned. After a burn-in phase during which the mass matrix is iteratively optimized, the Markov chain is run on multiple nodes. After approximately 100,000 samples (combined from all nodes) the Markov chain mixes well.
The resulting samples give access to the full posterior distribution, including the mean and maximum-likelihood models, conditional probabilities, inter-parameter correlations and marginal distributions.
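The sampling loop described above can be illustrated with a minimal HMC sketch. The leapfrog integrator, step size, and two-dimensional Gaussian toy target below are illustrative assumptions, not the elastic FWI setup itself; in the actual application, `log_prob` and `grad_log_prob` would come from the waveform misfit and adjoint kernels, and the mass matrix would be tuned during burn-in rather than fixed.

```python
import numpy as np

def hmc_sample(log_prob, grad_log_prob, x0, n_samples, step=0.1, n_leap=20, mass=1.0, seed=0):
    """Minimal Hamiltonian Monte Carlo sampler with a leapfrog integrator."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    samples = []
    for _ in range(n_samples):
        p = rng.normal(scale=np.sqrt(mass), size=x.shape)  # draw auxiliary momentum
        x_new, p_new = x.copy(), p.copy()
        # Leapfrog integration of Hamiltonian dynamics: half momentum step,
        # alternating full position/momentum steps, final half momentum step
        p_new += 0.5 * step * grad_log_prob(x_new)
        for _ in range(n_leap - 1):
            x_new += step * p_new / mass
            p_new += step * grad_log_prob(x_new)
        x_new += step * p_new / mass
        p_new += 0.5 * step * grad_log_prob(x_new)
        # Metropolis accept/reject on the total energy (Hamiltonian)
        h_old = -log_prob(x) + 0.5 * np.sum(p**2) / mass
        h_new = -log_prob(x_new) + 0.5 * np.sum(p_new**2) / mass
        if rng.random() < np.exp(min(0.0, h_old - h_new)):
            x = x_new
        samples.append(x.copy())
    return np.array(samples)

# Toy target: standard 2-D Gaussian standing in for a (vp, vs) posterior
log_prob = lambda x: -0.5 * np.sum(x**2)
grad_log_prob = lambda x: -x
chain = hmc_sample(log_prob, grad_log_prob, x0=[3.0, -3.0], n_samples=2000)
```

After discarding a burn-in portion, the chain's sample mean and covariance approximate the posterior moments, which is how the mean model and inter-parameter correlations mentioned above would be extracted.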
We explore seasonal mass oscillations caused by continental water storage in Southeast Asia and the Himalayan arc region using continuous Global Positioning System (cGPS) measurements and satellite data from the Gravity Recovery and Climate Experiment (GRACE). While the interaction between seasonally induced non-tectonic and tectonic deformation along the Himalayan plate boundary is still debated, we propose that tectonic deformation along this plate boundary can be significantly influenced by the deformation induced by non-tectonic hydrological loading cycles. We suggest that the substantially higher transient displacements above the base of the seismogenic zone indicate a role of changes in aseismic slip rate on the deep megathrust that may be controlled by seasonal hydrological loading. We invoke modulation of aseismic slip on the megathrust down-dip of the seismogenic zone due to a fault resonance process induced by the seasonal stress changes. This process modulates mid-crustal ramp associated micro-seismicity and influences the timing of Central Himalayan earthquakes.
Globally, thermodynamics explains an increase in atmospheric water vapor with warming of around 7%/°C near the surface. In contrast, global precipitation and evaporation are constrained by the Earth’s energy balance to increase at ∼2–3%/°C. However, this rate of increase is suppressed by rapid atmospheric adjustments in response to greenhouse gases and absorbing aerosols that directly alter the atmospheric energy budget. Rapid adjustments to forcings, cooling effects from scattering aerosol, and observational uncertainty can explain why observed global precipitation responses are currently difficult to detect but are expected to emerge and accelerate as warming increases and aerosol forcing diminishes. Precipitation increases with warming are expected to be smaller over land than ocean due to limitations on moisture convergence, exacerbated by feedbacks and affected by rapid adjustments. However, these temperature-dependent changes offset rapid atmospheric adjustments to radiative forcings which tend to increase precipitation over land relative to the oceans. These factors therefore drive complex changes in the regional water cycle in time and space, some examples of which will be discussed. Thermodynamic increases in atmospheric moisture fluxes amplify wet and dry events, driving an intensification of precipitation extremes. The rate of intensification can deviate from a simple thermodynamic response due to in‐storm and larger‐scale feedback processes, while changes in large‐scale dynamics and catchment characteristics further modulate the frequency of flooding in response to precipitation increases. Changes in atmospheric circulation in response to radiative forcing and evolving surface temperature patterns are capable of dominating water cycle changes in some regions.
Moreover, the direct impact of human activities on the water cycle through water abstraction, irrigation, and land use change is already a significant component of regional water cycle change and is expected to further increase in importance as water demand grows with global population. This talk will summarize recent advances in understanding past and future large-scale responses in the water cycle.
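The ~7%/°C thermodynamic scaling quoted above follows from the Clausius–Clapeyron relation for saturation vapor pressure. A quick numerical check, using standard textbook constants (assumed here, not values from the talk):

```python
# Clausius-Clapeyron fractional increase of saturation vapor pressure:
# d(ln e_s)/dT = L / (R_v * T^2)
L = 2.5e6     # latent heat of vaporization of water, J/kg
R_v = 461.5   # specific gas constant for water vapor, J/(kg K)
T = 288.0     # typical near-surface temperature, K

fractional_increase = L / (R_v * T**2)       # per kelvin
print(round(100 * fractional_increase, 1))   # ~6.5 %/K, i.e. roughly 7%/degC
```

The ~2–3%/°C precipitation constraint is energetic rather than thermodynamic, which is why the two rates differ.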
Development is well-advanced for the next version of the Integrated Multi-satellitE Retrievals for the Global Precipitation Measurement (GPM) mission (IMERG), labeled Version 07. IMERG is a key output of the U.S. GPM Science Team, and V07 will be the second generation in which data from both the Tropical Rainfall Measuring Mission (TRMM) and GPM projects are combined into a single, uniformly processed record, currently starting in June 2000. This presentation will show several examples of successes and challenges in V06, and use these to illuminate the upgrades that have been pursued for V07. For example, the V06 IMERG near-real-time products (Early and Late Runs) show regional biases because they do not have climatological calibration (despite the documentation), and this will be done in V07. As well, the time series of precipitation-rate histograms shows a seam in the transition from TRMM calibration to GPM Core Observatory calibration at the start of June 2014. V07 will benefit from better continuity in the input calibration datasets across that boundary. A third issue is that the Kalman filter used in IMERG a) introduces a variable amount of smoothing, and b) depends on relatively simple measures of input data quality. Both of these are revisited in V07. We will report the status of IMERG Version 07 processing as of the conference time, and introduce some topics that are being considered for the future, including improved uncertainty estimates, addition of sub-monthly gauge information, and strategies for incorporating precipitation estimates from multiple, relatively short-lived small satellites.
Extension and rifting of the lithosphere is fundamental to the evolution of the continents, but the mechanism by which the lithosphere thins remains enigmatic. Using new dense magnetotelluric array data collected within the rifted margin and adjacent areas of Southeast China, we resolve the three-dimensional electrical structure of the lithosphere to constrain the process of rifting and thinning. Our measurements reveal a brittle-ductile transition zone featuring low electrical resistivity and low seismic velocity in the Cathaysia Block. A southeast-directed dip is resolved for the Jiangshan-Shaoxing Fault, which documents the Neoproterozoic suturing of the Yangtze and Cathaysia Blocks and was reactivated by the Early Paleozoic and Early Mesozoic intracontinental orogenies. It acted as a low-angle detachment fault during the Mesozoic-Cenozoic extension and rifting. Given the asymmetries of topography, electrical resistivity, Bouguer gravity anomaly and Mesozoic volcanism across the Gan-Hang Rift, an asymmetric simple shear extension model is proposed for the South China Mesozoic-Cenozoic rift system. Water content of up to 0.1 wt% and melt fraction of up to 1% are estimated at 70 km depth beneath the central Wuyi Mountains, suggesting hydration of the mantle lithosphere. The hydration weakening of the mantle lithosphere promoted both the gravitational instability and convective removal of the lowermost lithosphere in South China.
Flood depth grids from the U.S. Federal Emergency Management Agency (FEMA) provide model-output estimates of the depth of water that can, on average, be expected to occur at various return periods for localized areas. However, use of these depth grids can be limited by spurious data and an insufficient number of return periods for certain planning applications. This research proposes a new method for estimating flood depth grids to address these shortcomings. The Gumbel distribution is used to characterize the flood depth-return period relationship for grid cells for which the data are plausible. Then the Gumbel parameters of slope (α) and intercept (u) are used to project flood elevations for extreme return periods for which an entire area can be assumed to be submerged. Spatial interpolation methods are then used to impute the flood elevations for spurious or missing grid cells. Flood depths are then recomputed from the flood elevations once the elevations are recalculated at the shorter return periods. Validation of this technique for a Metairie, Louisiana, U.S.A. study area suggests that the cokriging spatial interpolation technique provides the most suitable estimates of flood depth, provided that the FEMA-generated model output is assumed to provide the “correct” results. These methods may assist engineers, developers, planners, and others in mitigating the world’s most widespread and expensive natural hazard.
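The Gumbel projection step described above can be sketched for a single grid cell: flood elevation is modeled as linear in the Gumbel reduced variate, the slope (α) and intercept (u) are fit by least squares, and the fit is extrapolated to an extreme return period. The return periods and elevation values below are hypothetical stand-ins, not FEMA data.

```python
import numpy as np

def gumbel_reduced_variate(T):
    """Gumbel reduced variate y_T for return period T (years)."""
    return -np.log(-np.log(1.0 - 1.0 / T))

# Hypothetical flood elevations (m) for one grid cell at typical FEMA return periods
T_known = np.array([10.0, 50.0, 100.0, 500.0])
elev_known = np.array([1.8, 2.6, 2.9, 3.7])

# Fit elevation = u + alpha * y_T by least squares (alpha = slope, u = intercept)
y = gumbel_reduced_variate(T_known)
alpha, u = np.polyfit(y, elev_known, 1)

# Project to an extreme return period at which the whole area can be assumed submerged
elev_10000 = u + alpha * gumbel_reduced_variate(10000.0)
```

The interpolation and depth-recomputation stages would then operate on grids of such projected elevations, filling spurious or missing cells before subtracting ground elevation to recover depth.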
Measurements of dust size usually obtain the optical or the projected area-equivalent diameters, whereas model calculations of dust impacts use the geometric and the aerodynamic diameters. As such, accurate conversions between the four types of diameters are critical. However, most current conversions assume dust is spherical, which is problematic as numerous studies show that dust is highly aspherical. Here, we obtain conversions between different diameter types that account for dust asphericity. Our conversions indicate that optical particle counters using optical diameter to determine dust size underestimate dust geometric diameter at coarse sizes. We further use the diameter conversions to obtain a consistent observational constraint of size distributions of emitted dust in terms of geometric and aerodynamic diameters. The resulting size distributions are coarser than accounted for by parameterizations used in climate models, which underestimate the mass of emitted dust within 10≤D_geo≤20 μm by a factor of ~2 and do not account for dust emission with D_geo≥20 μm. This finding suggests that current models substantially underestimate coarse dust emission.
A key to better constraining estimates of the ocean sink for fossil fuel emissions of carbon dioxide is reducing uncertainties in coastal carbon fluxes. A contributing factor in uncertainties in coastal carbon fluxes stems from the undersampling of seasonality and spatial heterogeneity. Our objectives were to i) assess satellite-based approaches that would expand the spatial and temporal coverage of the surface ocean pCO2 and sea-air CO2 flux for the northern Gulf of Mexico, and ii) investigate the seasonal and interannual variations in CO2 dynamics and possible environmental drivers. Regression tree analysis was effective in directly relating surface ocean pCO2 to satellite-retrieved (MODIS Aqua) products including chlorophyll, sea surface temperature, and dissolved/detrital absorption. Satellite-based assessments of sea surface pCO2 were made spanning the period from 2006-2010 and were used in conjunction with estimates of wind fields and atmospheric pCO2 to produce regional-scale estimates of air-sea fluxes. Seasonality was evident in air-sea fluxes of CO2, with an estimated annual average CO2 flux for the study region of -4.3 ± 1.1 Tg C y-1, confirming prior findings that the Gulf of Mexico is a net CO2 sink. Interannual variability in fluxes was related to Mississippi River dissolved inorganic nitrogen inputs, an indication that human- and climate-related changes in river exports will impact coastal carbon budgets. This is the first multi-year assessment of pCO2 and air-sea flux of CO2 using satellite-derived environmental data for the northern Gulf of Mexico.
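The sea-air flux calculation described above can be sketched with a standard bulk formulation, F = k · K0 · (pCO2_sea − pCO2_air). The Wanninkhof-type wind-speed coefficient, solubility, and ΔpCO2 below are illustrative assumptions for a single point, not the study's actual inputs.

```python
# Bulk air-sea CO2 flux: F = k * K0 * (pCO2_sea - pCO2_air)
# All numbers are illustrative assumptions, not values from the study.
u10 = 7.0                     # 10-m wind speed, m/s
k_cm_hr = 0.251 * u10**2      # Wanninkhof-type gas transfer velocity, cm/hr (Sc = 660)
k = k_cm_hr * 0.01 / 3600.0   # convert to m/s
K0 = 32e-6                    # CO2 solubility, mol m^-3 uatm^-1 (~20 C seawater)
dpco2 = -40.0                 # pCO2_sea - pCO2_air, uatm (undersaturated ocean -> sink)

# Flux in mmol C m^-2 day^-1; negative sign means ocean uptake of CO2
flux_mmol_m2_day = k * K0 * dpco2 * 86400.0 * 1e3
print(round(flux_mmol_m2_day, 2))
```

In the study, such point fluxes would be computed per satellite pixel from the regression-tree pCO2 fields and wind products, then integrated over the region and year to give totals in Tg C y-1.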
Geoscientists often spend significant research time identifying, downloading, and refining geospatial data before they can use it for analysis. Exploring interdisciplinary data is even more challenging because it may be difficult to evaluate data quality outside of one’s expertise. QGreenland, a newly funded EarthCube project, is designed to remove these barriers for interdisciplinary Greenland-focused research and analysis via an open data, open platform Greenland GIS tool. QGreenland will combine interdisciplinary data (e.g., glaciology, human health, geopolitics, hydrology, biology, etc.) curated by an international Editorial Board into a unified, all-in-one GIS environment for offline and online use. The package is designed for the open source GIS platform QGIS. QGreenland will include multiple levels of data use: 1) a fully downloadable base package ready for offline use, 2) additional disciplinary and/or high-resolution data extension packages for select download, and 3) online-access-only data to facilitate especially large datasets or updating time series. Software development has begun and we look forward to discussing techniques to create the best open access, reproducible methods for package creation and future sustainability. We also now have a beta version available for experimentation and feedback from interested users and the Editorial Board. The version 1 public release is slated for fall 2020, with two subsequent annual updates. As an interdisciplinary data package, QGreenland is designed to aid collaboration and discovery across fields. Along with discussing QGreenland development, we will also provide an example use case to demonstrate the potential utility of QGreenland for researchers, educators, planners, and communities.
We introduce a computer program for time series tuning and analysis. The program AnalySeries of Paillard et al., well known in the paleoclimate community, is restricted to 32-bit Mac OS and, given Apple’s plans to move entirely to a 64-bit system, will not be supported in future upgrades of Macintosh OS. QAnalySeries is an attempt to re-implement the major functionality of AnalySeries, thus providing the community with a useful tool. QAnalySeries is written using the Qt SDK as free software and can be run on Macintosh, Windows and Linux systems. Paillard, D., L. Labeyrie and P. Yiou (1996), Macintosh program performs time-series analysis, Eos Trans. AGU, 77: 379.
There is no consensus on the physical mechanisms controlling the scale at which convective activity organizes near the Equator, where the Coriolis parameter is small. High resolution cloud-permitting simulations of non-rotating convection show the emergence of a dominant length scale, which has been referred to as convective self-aggregation. Furthermore, simulations in an elongated domain of size 12,228 km x 192 km with a 3-km horizontal resolution equilibrate to a wave-like pattern in the elongated direction, where the cluster size becomes independent of the domain size. These recent findings suggest that the size of convective aggregation may be regulated by physical mechanisms, rather than artifacts of the model configuration, and thus within the reach of physical understanding. We introduce a diagnostic framework relating the evolution of the length scale of convective aggregation to the net radiative heating, the surface enthalpy flux, and horizontal energy transport. We evaluate these length scale tendencies of convective aggregation in twenty high-resolution cloud-permitting simulations of radiative-convective equilibrium. While both radiative fluxes contribute to convective aggregation, the net longwave radiative flux operates at large scales (1000-5000 km) and stretches the size of moist and dry regions, while the net shortwave flux operates at smaller scales (500-2000 km) and shrinks it. The surface flux length scale tendency is dominated by convective gustiness, which acts to aggregate convective activity at smaller scales (500-3000 km). We further investigate the scale-by-scale radiative tendencies in a suite of nine mechanism denial experiments, in which different aspects of cloud radiation are homogenized or removed across the horizontal domain, and find that liquid and ice cloud radiation can individually aggregate convection.
However, only ice cloud radiation can drive the convective cluster to scales exceeding 5000 km, because of the high optical thickness of ice, and the increase in coherence between water vapor and deep convection with horizontal scale. The framework presented here focuses on the length scale tendencies rather than a static aggregated state, which is a step towards diagnosing clustering feedbacks in the real world. Overall, our work underscores the need to observe and simulate surface fluxes, radiative and advective fluxes across the 1km-1000km range of scales to better understand the characteristics of turbulent moist convection.
Due to its inherent ability to estimate the background error covariances, an ensemble Kalman filter (EnKF) is thought to be a practical approach to the strongly coupled data assimilation problems, where an entire coupled model state is estimated as if it was a single integrated system. However, increased complexity and the multiple time scales of the coupled system aggravate the rank-deficiency and spurious correlation problems caused by the limited ensemble size available for the analysis. To alleviate these problems, a distance-independent localization method to systematically select the observations to be assimilated into each model variable has been developed and successfully tested with a nine-variable coupled model with slow and fast modes. This method, called the correlation-cutoff method, utilizes the mean squared ensemble error correlation between each observable and model variable to identify where the cross-update should be used, and we cut off the assimilation of observations when the squared error correlation becomes small. To implement the method on a more realistic model, we thoroughly investigate inter-fluid background covariances in an atmosphere-ocean coupled general circulation model where the spatiotemporal scales of coupled dynamics significantly vary by latitudes and driving processes.
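The correlation-cutoff idea can be sketched as follows: compute the squared ensemble error correlation between each observation and each model variable, and allow a cross-update only where that correlation exceeds a threshold. This is a single-cycle toy sketch with synthetic ensembles and an assumed threshold; in practice the squared correlation is averaged over many analysis cycles before fixing the selection.

```python
import numpy as np

def correlation_cutoff_mask(ens_state, ens_obs, threshold=0.1):
    """Distance-independent localization mask (correlation-cutoff sketch).

    ens_state: (n_ens, n_state) ensemble of model variables
    ens_obs:   (n_ens, n_obs) ensemble of observation-space values
    Returns a boolean (n_state, n_obs) mask that is True where the squared
    ensemble error correlation exceeds the cutoff, i.e. where the observation
    is allowed to update that model variable.
    """
    n_ens = ens_state.shape[0]
    xs = (ens_state - ens_state.mean(0)) / ens_state.std(0)   # standardized anomalies
    ys = (ens_obs - ens_obs.mean(0)) / ens_obs.std(0)
    corr = xs.T @ ys / n_ens                                  # (n_state, n_obs) correlations
    return corr**2 > threshold

# Demo: variable 0 is dynamically coupled to the observation, variable 1 is not
rng = np.random.default_rng(1)
x1 = rng.normal(size=500)
x2 = rng.normal(size=500)
obs = (x1 + 0.1 * rng.normal(size=500)).reshape(-1, 1)
mask = correlation_cutoff_mask(np.column_stack([x1, x2]), obs)
```

Applied to a coupled model, such a mask would, for example, permit an ocean observation to update slowly varying atmospheric boundary-layer variables while cutting off spurious updates to fast, weakly correlated modes.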
Reliability of future global warming projections depends on how well climate models reproduce the observed climate change over the twentieth century. In this regard, deviations of the model simulated climate change from observations, such as a recent “pause” in global warming, have received considerable attention. Such decadal mismatches between model simulated and observed climate trends are common throughout the twentieth century, and their causes are still poorly understood. Here we show that the discrepancies between the observed and simulated climate variability on decadal and longer time scales have a coherent structure suggestive of a pronounced global multidecadal oscillation. Surface temperature anomalies associated with this variability originate in the North Atlantic and spread out to the Pacific and Southern oceans and Antarctica, with the Arctic following suit in about 25–35 years. While climate models exhibit various levels of decadal climate variability and some regional similarities to observations, none of the model simulations considered match the observed signal in terms of its magnitude, spatial patterns and their sequential time development. These results highlight a substantial degree of uncertainty in our interpretation of the observed climate change using the current generation of climate models.
We are at a unique time in the study of our place in space. On one hand, we operate in the same paradigm that has guided the study of space science for the past couple of decades, and on the other a rising dependence of our economic and social well-being on space demands a shift. Everywhere in our society ‘big data’ (defined by four V’s: volume, variety, veracity, and velocity) and the advent of sophisticated and efficient methods to explore these data (i.e., data science) present new opportunities for discovery, and the time is ripe for these methods to shift how we study the physics of space. We will first discuss the meaning of data science in the context of space science, and then demonstrate the potential for new discovery through a powerful use case: leveraging Global Navigation Satellite Systems (GNSS) signals for space weather prediction. In this use case, we take advantage of a large volume of data from GNSS signals, data science-driven technologies, and a machine learning algorithm known as the Support Vector Machine (SVM) to develop a novel predictive model for high-latitude ionospheric phase scintillation. This talk will conclude with a perspective on opportunities in space science through ‘big data’ and creating new scientific discovery at the intersection of traditional approaches and data science-driven innovation.
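The SVM-based prediction described above can be sketched as a binary classifier (scintillation vs. no scintillation). The features, labeling rule, and data here are synthetic stand-ins; the real model is trained on actual GNSS-derived quantities (e.g., TEC and phase measurements) rather than random draws.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# Synthetic stand-ins for three GNSS-derived predictors (hypothetical)
rng = np.random.default_rng(42)
n = 1000
X = rng.normal(size=(n, 3))
# Hypothetical rule: scintillation occurs when a weighted combination of
# drivers plus noise exceeds a threshold
y = (1.5 * X[:, 0] + X[:, 1] - 0.5 * X[:, 2]
     + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Standardize features, then fit an RBF-kernel SVM classifier
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
model.fit(X_tr, y_tr)
score = model.score(X_te, y_te)   # classification accuracy on held-out data
```

In an operational setting, the held-out evaluation would use temporally separated data to avoid leakage between training and prediction periods.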
Post-seismic deformation following large earthquakes offers insights into the rheology of the lithosphere and upper asthenosphere. The Mojave, in southern California, is one of the best studied regions on Earth, yet key questions, about fault slip rates and rheological heterogeneity, remain unanswered. Unprecedented geodetic coverage of the 2019 Ridgecrest earthquakes provides an opportunity to test whether rheological models developed for the Mojave, from the Landers, Hector Mine and El Mayor-Cucapah earthquakes, are applicable north of the Garlock fault, and to place bounds on the effects of local rheological heterogeneities associated with the Coso volcanic field. This volcanic field, which is located to the NW of the Mw7.1 rupture trace, is a region of high heat flow and geothermal activity. The locally high temperatures in the Coso volcanic field are likely to be associated with low viscosities compared to the surrounding regions, and high pore pressures due to the hydrothermal activity. The aftershock sequence associated with the Ridgecrest earthquakes shows a notable absence of large magnitude earthquakes in this region. We use variational Bayesian independent component analysis to isolate postseismic deformation in GPS time series around the earthquakes. We present models of the possible poroelastic, afterslip and viscoelastic response driven by coseismic stress changes in the July 2019 Ridgecrest earthquakes and investigate the possible effect of the Coso volcanic field. By modelling a series of different afterslip geometries and viscoelastic rheologies, we identify features of the GPS- and InSAR-derived surface deformation which are diagnostic of different post-seismic mechanisms and rheological heterogeneities.
Coral reefs are one of the most diverse ecosystems on Earth and provide significant ecological, economic, and societal benefits valued at approximately $9.8 trillion U.S. dollars per year. Since 1997, NOAA’s Coral Reef Watch (CRW) has used near real-time satellite monitoring to provide ecological nowcasting of the ocean heat stress that can cause mass coral bleaching. While this benefitted coral reef managers, scientists, and other stakeholders, our users desired longer-range forecasts. In 2012, CRW launched its probabilistic, global Four-Month Coral Bleaching Outlook system based on NOAA’s operational Climate Forecast System (now CFSv2). The Outlook proved accurate in local bleaching events over the following two years. Subsequently, June 2014-May 2017 brought the longest, most widespread, and probably most damaging coral bleaching event on record. As this global event greatly threatened all tropical coral reefs, the Outlook system proved critical in helping users worldwide prepare for and respond to bleaching – including actions to reduce damage from these intense marine heatwaves. This presentation will introduce CRW’s ecoforecasting tools and focus on four “use cases” of CRW’s Outlook system during the 2014-17 global coral bleaching event. In 2015, concern over bleaching forecasted by CRW’s Outlooks prompted two actions by the State of Hawaii. First, the “Eyes of the Reef” volunteer network organized numerous training sessions and its first state-wide Bleach Watch “Bleachapalooza” event to monitor bleaching across the state. Second, State scientists collected specimens of rare corals to preserve them in onshore nurseries. One of these species is now locally extinct on Hawaii’s reefs, and these rescued specimens are being prepared for re-introduction. Next, as CRW predicted bleaching would persist for several months in the Northern Line Islands, NOAA mounted a special cruise to monitor these remote coral reefs. 
The record heat stress killed over 98% of the corals at Jarvis Island. Finally, in 2016, prior to peak bleaching, Thailand used CRW’s prediction of severe heat stress to close ten heavily used coral reefs to tourism as a way to reduce further stress to the reefs. These actions show the value of ecoforecasts to prepare resource managers for further climate change impacts.
Low-cost air quality monitors (LCAQMs) are promising supplements to regulatory monitors for PM2.5 exposure assessment. However, the application of LCAQM in spatially extensive exposure modeling is hindered by the difficulty in performing calibration at large spatial scales and the adverse influence of LCAQM residual uncertainty after calibration. We aimed to develop an efficient spatially scalable calibration method for LCAQM and design a residual uncertainty-derived down-weighting strategy to optimize the use of LCAQM data with regulatory monitoring data in PM2.5 modeling. In California, for each monitor from PurpleAir, a global LCAQM network, we identified a station within a 500-m radius from the Air Quality System (AQS), a U.S. regulatory monitoring network. Regional calibration of PurpleAir to AQS was performed at the hourly level with Geographically Weighted Regression (GWR). The calibrated PurpleAir measurements were down-weighted according to their residual uncertainty and then incorporated into a Random Forest (RF) prediction model as a dependent variable to generate 1-km daily PM2.5 exposure estimates. The state-level PurpleAir calibration reduced the systematic bias to ~0 μg/m³ and decreased the random error by 38%. The large sample size also enabled quantitative analyses regarding potential factors related to the PurpleAir bias. The RF-based model with both AQS and down-weighted PurpleAir data outperformed the RF model based solely on AQS with an improved CV R2 of 0.86, an improved spatial CV R2 of 0.81, and a lower prediction error of 5.40 μg/m³. The down-weighting allowed the prediction model to show more spatial details of PM2.5 and to better detect pollution hot-spots. Our spatially scalable calibration and down-weighting strategies, for the first time, allowed an effective application of a state-level LCAQM network in high-resolution PM2.5 exposure modeling.
The proposed framework can be generalized to regions worldwide for advancing the evaluation of heavy PM2.5 episodes and health-related applications.
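The residual-uncertainty down-weighting step can be sketched with inverse-variance sample weights passed to a Random Forest fit. The predictor columns, error magnitudes, and monitor counts below are synthetic illustrations; the study's actual weighting scheme may differ in detail.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical setup: AQS observations are trusted fully (weight 1), while
# calibrated PurpleAir observations are down-weighted by their larger
# residual uncertainty relative to the AQS reference error.
rng = np.random.default_rng(0)
n_aqs, n_pa = 200, 800
X = rng.normal(size=(n_aqs + n_pa, 5))          # stand-in predictors (AOD, met, land use, ...)
true_pm = 10.0 + 3.0 * X[:, 0] - 2.0 * X[:, 1]  # synthetic "true" PM2.5 field

sigma_aqs = 1.0   # reference (AQS) measurement error, ug/m3 (assumed)
sigma_pa = 2.5    # residual PurpleAir error after GWR calibration (assumed)
noise = np.r_[rng.normal(0, sigma_aqs, n_aqs), rng.normal(0, sigma_pa, n_pa)]
y = true_pm + noise

# Inverse-variance down-weighting, normalized so AQS weight = 1
weights = np.r_[np.ones(n_aqs), np.full(n_pa, (sigma_aqs / sigma_pa) ** 2)]

rf = RandomForestRegressor(n_estimators=100, random_state=0)
rf.fit(X, y, sample_weight=weights)
pred = rf.predict(X[:10])
```

With this weighting, the noisier PurpleAir observations still contribute spatial density to the model while individually influencing the fit less than the regulatory observations.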