The Southern Annular Mode (SAM) is the leading mode of atmospheric variability in the extratropical Southern Hemisphere, and its variations affect westerly winds, regional storm tracks, midlatitude wildfire activity, Antarctic and Southern Ocean dynamics, and surface mass balance. The SAM is therefore of high importance to both ecosystems and societies across the Southern Hemisphere. The behavior of the SAM has been extensively studied during the instrumental era, but its decadal- to centennial-scale variability over the Common Era is far less certain and remains a subject of considerable disagreement. Studying these longer time scales requires millennial-length reconstructions, but the sparsity of multi-century proxy records in the Southern Hemisphere has hindered the production of such reconstructions. Consequently, variability and trends in the SAM remain uncertain through most of the Common Era. Here, we use paleoclimate data assimilation to reconstruct the austral summer (DJF) SAM index (SAMI) over the entire Common Era. Our method integrates the South American Drought Atlas, the Australia-New Zealand Drought Atlas, and the PAGES2k temperature-sensitive proxy network with a multi-model ensemble of last-millennium GCM simulations, using an offline ensemble Kalman filter with a stationary prior. We use a novel nested variance adjustment to correct for the effect of changing proxy availability through time. Our reconstruction is not calibrated to the observed SAMI, yet it exhibits a correlation coefficient greater than 0.6 with observations over the instrumental era. Using superposed-epoch and wavelet analyses, we find that the reconstruction exhibits minimal response to volcanic and solar forcings and is instead dominated by internal climate variability until the late 20th century. Our data assimilation framework also facilitates optimal-sensor analysis, which we use to identify key proxy sites at different time periods in the reconstruction.
Prior to 1400 CE, the reconstruction is strongly influenced by two tree-ring records (Mt. Read, Tasmania and Oroko, New Zealand) and two ice cores (WDC05A and Plateau Remote). Finally, we examine the coherence of our results with existing reconstructions and compare reconstructed 20th-century trends with the instrumental record.
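The core of the offline ensemble Kalman filter update used in this kind of data assimilation can be sketched in a few lines. This is a generic, simplified version (a deterministic member-wise update with illustrative array shapes), not the authors' actual configuration; operational reconstructions typically use a square-root variant to preserve posterior spread.

```python
import numpy as np

def enkf_update(Xb, y, Ye, R):
    """One offline ensemble Kalman filter update with a stationary prior.

    Xb : (n_state, n_ens) prior ensemble (e.g. SAM index plus climate fields)
    y  : (n_obs,)         proxy observations for one time step
    Ye : (n_obs, n_ens)   proxy estimates from the prior (proxy system model
                          applied to each ensemble member)
    R  : (n_obs, n_obs)   proxy error covariance
    """
    n_ens = Xb.shape[1]
    Xb_anom = Xb - Xb.mean(axis=1, keepdims=True)
    Ye_anom = Ye - Ye.mean(axis=1, keepdims=True)
    # Sample covariance between state and proxy estimates (B H^T),
    # and among proxy estimates themselves (H B H^T)
    B_HT = Xb_anom @ Ye_anom.T / (n_ens - 1)
    HB_HT = Ye_anom @ Ye_anom.T / (n_ens - 1)
    K = B_HT @ np.linalg.inv(HB_HT + R)  # Kalman gain
    # Update every ensemble member with its own innovation
    return Xb + K @ (y[:, None] - Ye)
```

With a stationary prior, the same `Xb` is reused at every reconstruction year; only `y`, `Ye`, and `R` change as proxy availability varies through time.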
Lake Tanganyika, located in central East Africa, is the longest and second deepest freshwater lake on Earth. Lake Tanganyika’s diverse ecosystem and watershed are threatened today by human activities, including extensive deforestation and human-induced fires, as well as by climate change. Documenting the fire and deforestation history of Lake Tanganyika’s surrounding watersheds is therefore crucial for improving future watershed management around the lake. Analyzing charcoal records from sediment cores provides high-resolution paleolimnological evidence of the timing and impacts of fire and landscape conversion. Macrocharcoal, the incompletely combusted residue left when plant material burns, can be transported away from fire sites and deposited in the lake. We sampled and calculated macrocharcoal (>61 μm) sediment flux from three sediment cores, LT-98-20MR, LT-98-15M, and TANG14-1MC-1A, from the lake’s east-central coast. 20MR and 15M are 2.4 km apart, whereas 1A and 15M are 6.97 km apart. We also compared our results with several previously studied cores from the central part of the lake. Core 15M, which is closest to the shore and has the highest sedimentation rates, showed peaks of charcoal flux in 1830–1850, 1896, 1910–1914 and 1996 AD, dated by correlation with a nearby core. Core 20MR, which is farther offshore than 15M, has multiple sharp charcoal-flux peaks at 1674, 1770, 1848 and 1881 AD, again dated by correlation with a nearby core. Core 1A, located at Kalilani Bay, where the watershed has been intensively managed in recent decades (McGlue et al., 2021), shows two significant peaks at 1668 and 1808 AD. The differences in the timing of charcoal-flux peaks among our cores indicate that these charcoal histories record localized wildfires.
Some of these fires may correlate with the late Little Ice Age dry period of the late 18th to mid 19th centuries, whereas other, more recent events may be linked to human activities such as land clearance for cassava cultivation. Low fire frequencies at most sites during the late 19th to mid 20th centuries may correspond to reduced human populations and disease outbreaks during that period.
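The charcoal flux quantity used throughout the abstract above is a standard unit conversion: a particle count per sediment volume scaled by the sedimentation rate from the core's age model. A minimal sketch (the numbers in the example are illustrative, not measured values from these cores):

```python
def charcoal_flux(count, sample_volume_cm3, sed_rate_cm_per_yr):
    """Macrocharcoal flux in particles cm^-2 yr^-1.

    count            : macrocharcoal particles tallied in the sample
    sample_volume_cm3: volume of sediment the count came from
    sed_rate_cm_per_yr: sedimentation rate from the core's age model
                        (here derived by correlation with a dated nearby core)
    """
    concentration = count / sample_volume_cm3   # particles cm^-3
    return concentration * sed_rate_cm_per_yr   # particles cm^-2 yr^-1
```

Because flux scales directly with sedimentation rate, uncertainty in the correlation-based age model propagates one-to-one into the flux estimates.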
Wildfires affect 40% of the earth’s terrestrial biome, but much of our knowledge of wildfire activity is limited to the satellite era. Improved understanding of past fires is necessary to better anticipate how wildfires might change with future climate change, to understand ecosystem resilience, and to improve data-model comparisons. Environmental proxy archives can extend our knowledge of past fire activity. Speleothems, naturally occurring cave formations, are widely used in palaeoenvironmental research because they are absolutely dateable, occur on every ice-free continent, and host multiple proxies. Recently, speleothems have been shown to record past fire events (McDonough et al., 2022). Here we present a review of this emerging application in speleothem palaeoenvironmental science. We give a concise overview of fire regimes and traditional palaeofire proxies, describe past attempts to use stalagmites to investigate palaeofire, and describe the physical basis through which speleothems can record past fires. We then describe the ideal speleothem sample for palaeofire research and offer a summary of applicable laboratory and statistical methods. Finally, we present four case studies that detail (1) the geochemistry of ash leachates, (2) how sulphur may be a proxy for post-fire ecological recovery, (3) how a catastrophic palaeofire was linked to changes in climate and land management, and (4) how deep caves can record past fire events. We conclude by suggesting that speleothem δ18O research may need to consider the impact of fire on δ18O values, and outline future research directions.
Researchers and end users of climate data face a growing challenge: data volumes are increasing so rapidly that downloading all the data needed for an analysis is often no longer feasible. Most climate analysis tools for research and applications must therefore operate on very large datasets, often distributed across several data centres and split into a large number of files. This is especially true when the data are stored in a federated architecture such as the ESGF. One such tool is icclim (https://github.com/cerfacs-globc/icclim), a flexible Python software package for calculating climate indices and indicators. The tool adheres as closely as possible to metadata conventions such as CF, records provenance information, and aims to provide increasing support for all FAIR aspects. It is designed with performance and optimisation in mind, since the goal is to provide on-demand calculations for users. It implements most of the international standard climate indices (ECA&D, ETCCDI, ET-SCI), including the correct methodology for calculating percentile-based indices using bootstrapping, and it has been validated against the R climdex.pcic package (https://cran.r-project.org/web/packages/climdex.pcic/index.html). The new 5.x version of icclim is based on functions from the xclim Python library, which was itself inspired by earlier versions of icclim but uses xarray and dask for data access and processing. icclim is also a candidate software for calculating climate indices in the C3S toolbox (https://cds.climate.copernicus.eu/cdsapp#!/toolbox). An example of a complex analysis tool used in climate research and adaptation studies is a tool for following storm tracks. In the context of climate change, it is important to know whether storm tracks will change in the future, in both frequency and intensity; storms can cause significant societal impacts, so assessing future patterns matters.
These tools are integrated in the IS-ENES C4I 2.0 platform (https://climate4impact.eu/), using a Jupyter notebook collection in a SWIRRL environment (Software for Interactive Reproducible Research Labs, https://gitlab.com/KNMI-OSS/swirrl). Access to this type of complex analysis tool is very useful, and integrating such tools with front-ends like C4I enables their use by a larger number of researchers and end users. This project (IS-ENES3) has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement N°824084.
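As a concrete illustration of the kind of standard index icclim computes, the ETCCDI "summer days" index (SU: annual count of days with daily maximum temperature above 25 °C) reduces to a simple per-year count. The sketch below uses plain Python lists with synthetic data purely for illustration; real use would go through icclim's NetCDF/xarray interface rather than code like this.

```python
def summer_days(records, threshold_c=25.0):
    """ETCCDI SU index: annual count of days with tasmax > threshold.

    records: iterable of (year, tasmax_celsius) pairs, one per day.
    Returns a dict mapping year -> count of summer days.
    """
    counts = {}
    for year, tasmax in records:
        counts.setdefault(year, 0)
        if tasmax > threshold_c:
            counts[year] += 1
    return counts
```

Percentile-based indices (e.g. TX90p) are harder than this fixed-threshold case: the bootstrapping procedure mentioned above is needed to avoid biases when the percentile baseline overlaps the analysis period, which is exactly what icclim implements and validates against climdex.pcic.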
The Atlantic Meridional Overturning Circulation (AMOC) is weak when northern summer insolation is strong, and vice versa, in the 23-kyr precessional frequency band. The mechanism behind this response is unknown and does not conform in any obvious way to Milankovitch theory. The link between the AMOC and precession is attributed here to the trade winds that blow out from Africa across the Atlantic. The trade winds, like the AMOC itself, are weak when northern summer insolation is strong. The trade winds are shown here to have a direct impact on the AMOC’s return flow. A return flow altered this way should warm and cool the Earth in a way that has not been previously considered.
Water system operations require subannual streamflow data—e.g., monthly or weekly—that are not readily achievable with conventional streamflow reconstructions from annual tree rings. This mismatch is particularly relevant to highly seasonal rivers such as Thailand’s Chao Phraya. Here, we combine tree-ring width and oxygen isotope (δ18O) records from Southeast Asia to produce 254-year, monthly-resolved reconstructions for all four major tributaries of the Chao Phraya. From the reconstructions, we derive subannual streamflow indices to examine past hydrological droughts and pluvials, and find both coherence and heterogeneity in their histories. The monthly resolution reveals the spatiotemporal variability in wet season timing, caused by interactions between early summer typhoons, monsoon rains, catchment location, and topography. Monthly-resolved reconstructions, like the ones presented here, not only broaden our understanding of past hydroclimatic variability, but also provide data that are functional to water management and climate-risk analyses, a significant improvement over annual reconstructions.
Despite having offered important hydroclimatic insights, streamflow reconstructions still see limited use in water resources operations, because annual reconstructions are not suitable for decisions at finer time scales. Attempts towards sub-annual reconstructions have relied on statistical disaggregation, which uses little or no proxy information. Here, we develop a novel framework that optimizes proxy combinations to simultaneously produce seasonal and annual reconstructions. Importantly, the framework ensures that total seasonal flow closely matches annual flow. This mass-balance criterion is necessary to avoid misguiding water management decisions, such as water allocation. Using the framework, and leveraging a multi-species network of ring width and cellulose δ18O in Southeast Asia, we reconstruct seasonal and annual inflow to Thailand’s largest reservoir. The reconstructions are statistically skillful. This work brings streamflow reconstruction one step closer to operational use in water resources management.
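The mass-balance criterion described above—seasonal totals matching the annual reconstruction—can be illustrated with a simple proportional adjustment. This is a generic sketch of one way to enforce the constraint after the fact, not the authors' optimization framework, which builds the constraint into the reconstruction itself.

```python
def enforce_mass_balance(seasonal_flows, annual_flow):
    """Rescale independently reconstructed seasonal flows so they sum
    exactly to the reconstructed annual flow (proportional scheme).

    seasonal_flows: list of non-negative seasonal flow estimates
    annual_flow   : reconstructed annual total for the same year
    """
    total = sum(seasonal_flows)
    if total == 0:
        raise ValueError("seasonal flows sum to zero; cannot rescale")
    scale = annual_flow / total
    return [q * scale for q in seasonal_flows]
```

Proportional rescaling preserves the relative seasonality of the sub-annual estimates while guaranteeing the annual budget, which is what matters for allocation decisions.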
Reconstructing water availability in terrestrial ecosystems is key to understanding past climate and landscapes, but few aridity proxies are available for use at terrestrial sites across the Cenozoic. The isotopic composition of tooth enamel is widely used as a paleoenvironmental indicator, and recent work suggests the potential for using the triple oxygen isotopic composition of mammalian tooth enamel (∆’17Oenamel) as an indicator of aridity. However, the extent to which ∆’17Oenamel values vary across environments is unknown, and there is no framework for evaluating past aridity using ∆’17Oenamel data. Here we present ∆’17Oenamel and δ18Oenamel values from 50 extant mammalian herbivores that vary in physiology, behavior, diet, and water-use strategy. Teeth are from sites in Africa, Europe, and North America and represent a range of environments (humid to arid) and latitudes (34°S to 69°N), where mean annual δ18O values of meteoric water range from -26.0‰ to 2.2‰ (VSMOW). ∆’17Oenamel values from these sites span 146 per meg (-283 to -137 per meg, where 1 per meg = 0.001‰). The observed variation in ∆’17Oenamel values increases with aridity, forming a wedge-shaped pattern in a plot of aridity index vs. ∆’17Oenamel that persists regardless of region. In contrast, the plot of aridity index vs. δ18Oenamel for these same samples does not yield a distinct pattern. We use these new ∆’17Oenamel data from extant teeth to provide guidelines for using ∆’17Oenamel data from fossil teeth to assess and classify the aridity of past environments. ∆’17Oenamel values from the fossil record have the potential to be a widely used proxy for aridity without the limitations inherent to approaches that use δ18Oenamel values alone. In addition, the data presented here have implications for how ∆’17Oenamel values of large mammalian herbivores can be used in evaluations of diagenesis and past pCO2.
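For readers unfamiliar with the notation, ∆’17O is conventionally defined from linearized ("prime") delta values with a reference slope of 0.528, and is reported in per meg as in the abstract above. The sketch below shows only this standard arithmetic; the input values are illustrative, not data from this study.

```python
import math

def cap_delta_17O_per_meg(d17O, d18O, slope=0.528):
    """Triple oxygen isotope anomaly Delta'17O, in per meg.

    d17O, d18O: delta values in permil (VSMOW).
    Uses the linearized definition delta' = 1000 * ln(delta/1000 + 1)
    and the conventional reference slope of 0.528.
    1 per meg = 0.001 permil.
    """
    dp17 = 1000.0 * math.log(d17O / 1000.0 + 1.0)
    dp18 = 1000.0 * math.log(d18O / 1000.0 + 1.0)
    return (dp17 - slope * dp18) * 1000.0  # permil -> per meg
```

The logarithmic (delta-prime) form makes mass-dependent fractionation arrays linear, which is why deviations from the 0.528 reference line can carry an aridity signal.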
The author presents an iterative approach to describing a data set, such as a paleoclimatic proxy or a model output, in terms of automata of successively higher order. The automata reflect dynamics acting on successively longer timescales and larger spatial scales. The method computes probability density functions from reconstructed state-space portraits, over successive overlapping windows in time, of the record of magnetic susceptibility of loess and paleosols at Luochuan, central China. Areas of consistently high probability across several time windows represent areas of quasistability, which are used as the predictive and successor states of a succession of Markov chains that characterize the variability of the strength of the East Asian paleomonsoon at different time scales. Seven metastable states are thus identified, forming four Markov chains, which show a marked increase in the complexity of behavior of the paleomonsoon system throughout the Quaternary. A higher-order automaton is suggested by the sequence of Markov chains, indicating differing cycles of dynamic behavior in the Early and Late Quaternary.
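The step from identified metastable states to a Markov chain can be sketched generically: label each observation with its state and count transitions between consecutive labels. This is a textbook construction shown for illustration; the state labels and sequence below are hypothetical, not the Luochuan states.

```python
from collections import defaultdict

def transition_matrix(state_sequence):
    """Empirical Markov-chain transition probabilities from a sequence
    of discrete (e.g. metastable) state labels.

    Returns {state: {next_state: probability}} estimated by counting
    consecutive pairs in the sequence.
    """
    counts = defaultdict(lambda: defaultdict(int))
    for a, b in zip(state_sequence, state_sequence[1:]):
        counts[a][b] += 1
    probs = {}
    for a, row in counts.items():
        total = sum(row.values())
        probs[a] = {b: n / total for b, n in row.items()}
    return probs
```

Repeating this over successive time windows, as in the paper's approach, yields a sequence of chains whose changing structure can itself be summarized by a higher-order automaton.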
The Miocene epoch, spanning 23.03–5.33 Ma, was a dynamic interval of sustained, polar-amplified warmth. Miocene atmospheric CO2 concentrations are typically reconstructed at 300–600 ppm and were potentially higher during the Miocene Climatic Optimum (16.75–14.5 Ma). With surface temperature reconstructions pointing to substantial midlatitude and polar warmth, it is unclear what processes maintained the much weaker-than-modern equator-to-pole temperature difference. Here we synthesize several Miocene climate modeling efforts together with available terrestrial and ocean surface temperature reconstructions. We evaluate the range of model-data agreement, highlight robust mechanisms that operate across Miocene modelling efforts, and identify regions where differences across experiments result in a large spread in warming responses. Prescribed CO2 is the primary factor controlling global warming across the ensemble. On average, factors other than CO2, such as Miocene paleogeography and ice sheets, raise global mean temperature by ~2 °C, with the spread in warming under a given CO2 concentration (due to a combination of the spread in imposed boundary conditions and in climate feedback strengths) equivalent to ~1.2 times the warming from a CO2 doubling. This study uses an ensemble of opportunity: the models, boundary conditions, and reference datasets represent the state of the art for the Miocene, but they are inhomogeneous and not ideal for a formal intermodel comparison effort. Acknowledging this caveat, this study is nevertheless the first Miocene multi-model, multi-proxy comparison. It serves to take stock of current progress towards simulating Miocene warmth while isolating remaining challenges that would be well served by community-led efforts to coordinate modelling and data activities within a common analysis framework.
Global climate change affects not only the mean state but also the variability of climate, and the latter often has the larger impact on society. For better projections of future climate change it is crucial to improve our understanding of changes in the mean, in the variability, and in their relationship. Model-data comparison between climate simulations and speleothem paleoclimate archives can test and validate the capability of different general circulation models (GCMs) to simulate changes in variability. However, the δ18O values measured in climate archives do not directly represent temperature or precipitation; they result from multivariate, non-linear processes superimposed on the dominant atmospheric controls on precipitation δ18O. We aim to assess a model’s capability to simulate climate variability on timescales longer than those covered by observations. Our strategy combines a proxy system model (PSM) for the relevant processes with isotope-enabled GCMs. We focus on speleothems, as they are precisely dateable and provide well-preserved, (semi-)continuous climate signals in the low and mid-latitudes. We evaluate trends, correlations between different records, and power spectral densities across a speleothem database, focusing on the past millennium. We compare proxy results with those obtained from forward models based on isotope-enabled HadCM3 simulations and PSM approaches of increasing complexity. We evaluate the sensitivity of the results to parameter choices, and test options to constrain them. We find that some parameters, e.g. the transit time of water from the surface to the speleothem’s cave, strongly influence the slope of the spectra in the PSM. Based on the ample proxy and model evidence for the past 1,000 years, we test for realistic parameter ranges and for the complexity a speleothem PSM needs for global application. Given a successful application to this more recent period, we envisage applications on longer, millennial to orbital timescales, to provide estimates of low-latitude changes in climate variability.
While the impact of dust on climate and the Atlantic Meridional Overturning Circulation (AMOC) during interglacial periods such as the mid-Holocene (MH) has been studied extensively, its impact during glacial periods is unclear. Here we investigate how climate and the AMOC would change if there had been no dust during the Last Glacial Maximum (LGM). Model simulations show that dust removal leads to a global cooling of over 2.4 °C and a weakening of the AMOC by ~30%. This temperature change is opposite in sign to that for the MH. The cooling is attributed to the increase in snow and ice albedo and the weakening of the AMOC when dust is removed, and is amplified through a positive feedback between sea ice and the AMOC. Our results indicate that climate and the AMOC are more sensitive to dust changes during glacial than interglacial periods.
Past temperature reconstructions from polar ice sheets are commonly based on stable water isotope records in ice cores. However, despite major efforts in understanding ice-core signal formation, temperature reconstructions of the last millennium in Antarctica remain highly uncertain. Here, using a representative surface water isotope dataset at the 100 km scale, we show that the spatial variability of local surface topography and accumulation-rate anomalies influences the isotopic composition of the upper meter of the snowpack. The magnitude of this non-temperature effect on water isotopes is comparable to the isotopic changes of the last millennium. We demonstrate that these spatial anomalies are advected into the deeper firn and ice column and are translated into an artificial centennial- to millennial-scale variability in the isotope record. Additionally, we provide an estimate of the areas where this effect is relevant for last-millennium temperature reconstructions.
The oxygen and hydrogen isotopic composition of snow and ice have long been utilized to reconstruct past temperatures of polar regions, under the assumption that post-depositional processes such as sublimation do not fractionate snow. In low-accumulation (<0.01 m yr-1) areas near the McMurdo Dry Valleys in Antarctica, surface snow and ice samples have negative deuterium-excess values (d = δD − 8·δ18O). This unique phenomenon, observed only near the Dry Valleys, is not fully understood. Here we use both an isotope-enabled general circulation model and an ice physics model to establish that negative deuterium-excess values can only arise from precipitation if the majority of the moisture is sourced from the Southern Ocean. However, the model results show that moisture sourced from oceans north of 55°S contributes significantly (>50%) to precipitation in Antarctica today. We thus propose that sublimation must have occurred to yield the negative deuterium-excess values observed in snow in and near the Dry Valleys, and that solid-phase diffusion in ice grains is sufficiently fast to allow Rayleigh-like isotopic fractionation in similar environments. We calculate that under present-day conditions at the Allan Hills outside the Dry Valleys, 3 to 24% of the surface snow is lost to sublimation. Because a higher fraction of snow is expected to be sublimed when accumulation rates are lower, the magnitude of δ18O and δD enrichment due to sublimation will be higher during past cold periods than at present, altering the relationship between snow isotopic composition and polar temperatures.
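The deuterium-excess diagnostic used above is a one-line calculation; the sample values in the example are illustrative, not measured data from this study.

```python
def deuterium_excess(dD, d18O):
    """d-excess = dD - 8 * d18O, all in permil.

    Global precipitation typically clusters near +10 permil; negative
    values, as observed near the Dry Valleys, require unusual moisture
    sourcing and/or post-depositional fractionation such as sublimation.
    """
    return dD - 8.0 * d18O

# Illustrative: dD = -250 permil, d18O = -30 permil gives d-excess = -10
```

Because the 8:1 slope is the global meteoric water line slope, departures from it are what make d-excess a tracer of source conditions and post-depositional processes.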
According to a long-standing theory, a huge glacial lake, Lake Agassiz, existed near the border between Canada and the United States during the last glacial period. This lake is thought to have drained catastrophically, sometime between 13,000 and 8,200 years ago, causing global environmental changes such as the Younger Dryas cooling and sea-level rise. However, the details of Lake Agassiz and its collapse remain unclear. To test the reality of the collapse, we developed simulation software for geomorphological analysis and water-volume calculation. The present analysis indicates that the volume of water in Lake Agassiz was much smaller than presumed in the previous theory. Considering the surrounding topography, we deduce that Lake Agassiz was not the type of lake whose collapse would have caused a large-scale flood. Additionally, from a slope map created for the North American continent, we identified topography that appears to be an erosional trace of a large-scale flood near Lake Agassiz. These findings suggest that the flooding attributed to Lake Agassiz was likely caused by the collapse of an even larger ice-sheet lake. This study considers the scale and mechanism of floods from a giant ice-sheet lake that existed within the Laurentide Ice Sheet.
Located at the transition between monsoon- and westerly-dominated climate systems, major rivers draining the western Qilian Shan incise deep, narrow canyons into latest Quaternary foreland basin sediments of the Hexi Corridor. Field surveys show that the Beida River incised 135 m at the mountain front over the Late Pleistocene and Holocene, at an average rate of 0.006 m/yr. A steep knickzone, with 3% slope, initiated at the mountain front and has since retreated 10 km upstream. Terrace dating results suggest that this knickzone formed around the mid-Holocene, over a duration of less than 1.5 kyr, during which incision accelerated to at least 0.035 m/yr. These incision rates are much larger than the uplift rate across the North Qilian fault, which suggests that a climate-related increase in discharge drove rapid incision over the Holocene and formation of the knickzone. Using the relationship between incision rates and the amount of base-level drop, we build a bedrock and foreland incision model for the Beida River system. We find that narrowing of channel width plays a key role, as important as increased channel slope, in enhancing the rate of river incision. Our model places the maximum duration of knickzone formation at about 600 yr, and the minimum river discharge needed to trigger knickzone formation at 1.7 times the present discharge. This period of increased river discharge corresponds to a pluvial lake-filling event at the terminus of the Beida River and correlates with a wet period driven by strengthening of the Southeast Asian Monsoon.
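The incision figures quoted above imply time spans that can be checked with simple bookkeeping (the numbers come from the abstract; the arithmetic itself is trivial and shown only to make the rates concrete).

```python
def mean_incision_rate(depth_m, duration_yr):
    """Average vertical incision rate in m/yr."""
    return depth_m / duration_yr

# 135 m at 0.006 m/yr implies roughly 22,500 yr of sustained incision,
# consistent with a Late Pleistocene-Holocene time span; at the
# accelerated rate of 0.035 m/yr the same relief would form in under 4 kyr.
```

The two-orders-of-magnitude gap between these incision rates and fault uplift rates is what motivates the climatic (discharge-driven) interpretation in the abstract.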
The “greenhouse” climates of the Paleocene and Eocene have formed the focus of many proxy and modelling studies in recent decades, as they are the closest geological analogues for our future, warmer anthropogenic world. Yet the long-term evolution of ocean temperature and carbonate chemistry at orbital resolution, especially at low latitudes, is still poorly constrained. Here we present new orbital-resolution foraminiferal trace metal (Mg/Ca & B/Ca) records spanning the late Paleocene to early Eocene (~58–53 Ma) from a new splice between ODP Site 758 and IODP Site U1443, Ninetyeast Ridge, northern Indian Ocean. We generated coupled Mg/Ca and B/Ca records from well-preserved mixed-layer and thermocline-dwelling planktic foraminifera, and from benthic foraminifera deposited at a shallow palaeo-water depth (~1500 m), to reconstruct temperature change and carbonate chemistry (related to pH and DIC concentration) across a water-column depth transect above Ninetyeast Ridge. Our new trace metal records are the first long-term orbital-resolution records of their kind from the poorly studied Indian Ocean. We estimate both the magnitude of long-term warming and associated carbonate chemistry change from the late Paleocene to the early Eocene, as well as the magnitude of change on orbital (405- & 100-kyr) timescales. In addition, a portion of the Paleocene-Eocene Thermal Maximum is resolved in our records, providing a critical minimum constraint on the magnitude of temperature and carbonate chemistry change within the low-latitude Indian Ocean during this hyperthermal event.
We describe new cosmogenic Be-10 and C-14 exposure-age dating of previously glaciated bedrock samples from Lyell Canyon, used as constraints to model the glacier’s rate and timing of thinning and retreat after the Last Glacial Maximum (LGM). Close analysis of deglaciation following the LGM (22-12 ka) can offer insight into how glacier retreat proceeds in a warming climate. The extent and age of the LGM glaciation in Yosemite National Park, California are relatively well constrained. Our new exposure ages from Yosemite will quantify how the glaciation changed after the LGM. This is important because the rate and timing of glacier retreat after the LGM allow us to learn about the LGM-Holocene climate transition. We collected 16 granodiorite bedrock samples from the Lyell Canyon walls in three vertical transects: at the end, in the middle, and near the head of Lyell Canyon. Sample elevations range from 2781 m to 3388 m. The samples are being processed for cosmogenic Be-10 and C-14 concentrations (for the lower and higher elevations in the transects, respectively). Together with previously acquired Be-10 exposure ages from glacially polished bedrock and boulders on the canyon floor, our vertical transects will help to define the relationship between glacier retreat and thinning along the valley. The combination of different nuclide measurements has the potential to reveal whether the glacier melted rapidly or went through multiple thinning and thickening cycles. We created several simple forward models of cosmogenic Be-10 and C-14 exposure ages on the valley wall for different glacier thinning patterns: (i) rapid thinning, (ii) thinning and thickening cycles during the melting, (iii) thickening first, followed by thinning, and (iv) separation of a small upper cirque glacier from the main glacier during thinning.
After we obtain all our data, we will compare the exposure ages to our modeled scenarios, as well as to local paleoclimate records, to quantify the glacier’s geometry and mass balance during this period of climate warming. Understanding the timing, rates, and patterns of post-LGM retreat and thinning constitutes a useful test case that can aid predictions of mountain-glacier melting and water-budget planning under contemporary climate change in analogous environments.
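A minimal forward model of the kind described above—the apparent exposure age of a valley-wall sample given a prescribed ice-surface lowering history—can be sketched as follows. The thinning history in the test is hypothetical, and the model ignores erosion, nuclide inheritance, and partial shielding, which real forward models must treat.

```python
def exposure_age(sample_elev_m, thinning_history):
    """Apparent exposure age (yr BP) of a bedrock sample on a valley wall.

    thinning_history: list of (time_yr_bp, ice_surface_elev_m) pairs,
    sorted from oldest to youngest. The sample begins accumulating
    cosmogenic nuclides once the ice surface first drops below its
    elevation (no erosion, inheritance, or re-burial considered).
    """
    for t_bp, ice_elev in thinning_history:
        if ice_elev < sample_elev_m:
            return t_bp  # first exposed at this time
    return 0  # still ice-covered at the end of the history
```

Running a model like this for each thinning scenario (rapid thinning, oscillating, thickening-then-thinning, cirque separation) predicts a distinct age-elevation pattern along each transect, which is what the measured Be-10 and C-14 ages can then discriminate between.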