Spectral-based vegetation indices (VIs) have been shown to be good proxies of grapevine stem water potential (Ψstem), potentially assisting in irrigation decision-making in commercial vineyards. However, VI-Ψstem correlations are mostly reported at the leaf or canopy scale, using sensors attached to leaves or very-high-spatial-resolution images derived from sensors mounted on small airplanes or drones. Here, for the first time, we take advantage of the high-spatial-resolution (3 m), near-daily images acquired by Planet's nanosatellite constellation to derive VI-Ψstem correlations at the vineyard scale. Weekly Ψstem measurements were collected along the 2017 growing season in six vines in 81 commercial vineyards and in 60 pairs of vines in a 2.4-ha experimental vineyard in Israel. The clip application programming interface (API) provided by Planet and the Google Earth Engine platform were used to derive spatially continuous time series of four VIs (GNDVI, NDVI, EVI, and SAVI) in the 82 vineyards. Results show that per-week multivariable linear models using variables extracted from the VI time series successfully tracked spatial variations in Ψstem across the experimental vineyard (Pearson's r = 0.45-0.84; N = 60). A simple linear regression model enabled monitoring of seasonal changes in Ψstem along the growing season in the same vineyard (r = 0.80-0.82). Planet VIs and seasonal Ψstem data from the 82 vineyards were used to derive a 'global' model for in-season monitoring of Ψstem at the vineyard level (r = 0.81; RMSE = 17.5%; N = 970). The 'global' model, which requires only a few VI variables extracted from Planet images, may be used for real-time weekly assessment of Ψstem in Mediterranean vineyards, substantially reducing the expense of conventional monitoring efforts.
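The per-vineyard regression step can be sketched as a simple ordinary-least-squares fit of Ψstem against a single VI variable. The snippet below is an illustrative numpy-only sketch with made-up numbers; it does not reproduce the study's models, coefficients, or data.

```python
import numpy as np

def fit_vi_psi_model(vi, psi):
    """Ordinary least-squares fit of stem water potential (psi) against a
    single vegetation-index variable (vi); returns slope, intercept, Pearson r."""
    A = np.column_stack([vi, np.ones_like(vi)])
    (slope, intercept), *_ = np.linalg.lstsq(A, psi, rcond=None)
    r = np.corrcoef(vi, psi)[0, 1]
    return slope, intercept, r

# Illustrative synthetic data (not the study's measurements)
vi = np.array([0.55, 0.60, 0.65, 0.70, 0.75])   # e.g. weekly GNDVI values
psi = np.array([-1.4, -1.2, -1.0, -0.8, -0.6])  # MPa
slope, intercept, r = fit_vi_psi_model(vi, psi)
```

The 'global' model in the abstract extends this idea to multiple VI variables pooled across vineyards, but the single-predictor case illustrates the mechanics.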
Recent research in real-time tsunami early warning can be broadly classified into two approaches. The first uses seismic and regional geodetic data to calculate the tsunami wavefield indirectly through the estimation of earthquake source parameters. The second directly reconstructs the tsunami wavefield through data assimilation of ocean-bottom pressure sensor data, such as those from DONET and S-net (Maeda et al. 2015, Gusman et al. 2016). Data assimilation interpolates between the numerical solution and the observations to make the forecast more consistent with real data. Currently, the most popular method for forecasting the waveform is optimal interpolation, which uses a Kalman filter (KF)-like approach but holds the Kalman gain matrix fixed to reduce runtime. This approach, coupled with tsunami Green's functions, is very efficient and generates useful predictions. Here, we demonstrate that more accurate and stable forecasts can be obtained using the ensemble KF (enKF), a computationally efficient variant of the KF in which the gain matrix is updated according to the physical model and the evolution of the error covariance matrix. The ensemble representation is a form of dimensionality reduction: only a small ensemble is propagated, instead of the joint distribution including the full covariance matrix. This method also provides a means to obtain the probability distribution of the forecast at each grid point. We use a scenario tsunami in the Cascadia subduction zone, generated from a 2D fully coupled dynamic rupture simulation (Lotto et al., submitted 2018). Randomly perturbed tsunami wave height data are used in the assimilation process, as we propagate the wave using a 1D linear shallow water code on a staggered grid. Better waveform agreement is achieved even in the early stages of assimilation, with much less fluctuation compared to optimal interpolation.
We also explore spatial and temporal aliasing effects, in terms of the relation between observation station spacing and wavelength, as well as between assimilation and forecast time intervals. Although enKF is computationally more expensive, we are working on a fast, parallelized GPU implementation, which will significantly reduce the runtime, taking us a step closer to reliable real-time tsunami early warning.
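The enKF analysis step described above (sample covariances from a small ensemble, a gain matrix recomputed each cycle) can be sketched as follows. This is a generic perturbed-observations EnKF update in numpy, not the authors' tsunami code; the 1D shallow-water propagation step between analyses is omitted.

```python
import numpy as np

def enkf_update(ensemble, obs, H, R):
    """One EnKF analysis step (perturbed-observations form).
    ensemble: (n_state, n_ens); obs: (n_obs,); H: (n_obs, n_state); R: (n_obs, n_obs)."""
    n_ens = ensemble.shape[1]
    A = ensemble - ensemble.mean(axis=1, keepdims=True)   # state anomalies
    HX = H @ ensemble
    HA = HX - HX.mean(axis=1, keepdims=True)              # obs-space anomalies
    Pxy = A @ HA.T / (n_ens - 1)                          # cross covariance
    Pyy = HA @ HA.T / (n_ens - 1) + R                     # innovation covariance
    K = Pxy @ np.linalg.inv(Pyy)                          # ensemble Kalman gain
    # Perturbed observations keep the analysis spread consistent with R
    rng = np.random.default_rng(0)
    obs_pert = obs[:, None] + rng.multivariate_normal(
        np.zeros(len(obs)), R, n_ens).T
    return ensemble + K @ (obs_pert - HX)
```

In contrast, optimal interpolation would hold K fixed across assimilation cycles rather than recomputing it from the evolving ensemble.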
The Tropical Rainfall Measuring Mission (TRMM) Microwave Imager (TMI) and the Global Precipitation Measurement (GPM) Microwave Imager (GMI) have served, one after another, as the radiometric transfer standard for the GPM constellation radiometers over nearly the past two decades. Because GMI and TMI share only a 13-month common operational period, WindSat can serve as a calibration bridge, providing additional intercalibration for the period when the two do not overlap and thereby enabling a consistent multi-decadal oceanic brightness temperature (Tb) product. We therefore conducted intercalibration of TMI/GMI over the 13-month common period, TMI/WindSat over their >9-year overlap, and WindSat/GMI XCAL over one year, to assess the Tb bias of each sensor relative to the others. A multi-decadal oceanic Tb dataset was thereby produced to ensure a consistent long-term precipitation record covering the TRMM and GPM eras. Moreover, a generic uncertainty quantification model (UQM) was developed that accounts for the various sources of uncertainty in a rigorous and orderly fashion. The UQM was then applied to quantify the uncertainty estimates associated with these Tb biases. The result is a unified, high-sampling-frequency, globally covered Tb product with well-bounded uncertainties, a substantial improvement for scientific use over existing Tb products that carry only ad hoc uncertainty estimates. Based on the results of the uncertainty quantification process, we also recognize room for improvement in the intercalibration of the water-vapor-sensitive channels. Further analysis indicates that the issue may be associated with the atmospheric water vapor profile input to the radiative transfer model. We therefore suggest using water vapor profiles retrieved from millimeter-wave radiometer sounder measurements (rather than numerical weather predictions) to determine the impact on the Tb biases of these problematic channels.
To prevent serious problems in abandoned mines, such as ground subsidence, cavities are commonly filled with backfill materials. During or after the cavity-filling process, the distribution of the filling materials needs to be monitored. Various geophysical methods, including microgravity, electrical resistivity, ground penetrating radar, and seismic methods, have been used to characterize abandoned mines or to monitor the distribution of filling materials. Microgravity, electrical resistivity, ground penetrating radar, and microseismic methods can detect cavities, but they have limitations in monitoring material distributions. In this study, we apply the seismic reflection method to image the distribution of filling materials in near-surface abandoned mines. As imaging methods, we use full waveform inversion and reverse time migration, and we additionally apply seismic interferometry to obtain better results. The full waveform inversion and reverse time migration methods are applied to four models representing conditions that commonly appear in abandoned mines. Through numerical examples, we investigate the feasibility of the seismic reflection method for describing filling-material distribution in abandoned mines. Acknowledgements: This work was supported by the National Research Foundation of Korea (NRF) grant funded by the Korea government (MSIT) (No. 2017R1A2B4002031) and by the project funded by the Ministry of Oceans and Fisheries, Korea (D11603317H480000112).
The subduction zone of the Cocos Plate beneath southern Mexico has major variations in megathrust geometry and behavior. The subduction segment beneath the Oaxaca state of Mexico has relatively frequent large earthquakes on the shallow part of the megathrust and within the subducting slab, and it also hosts large aseismic slow-slip events. The slab geometry under Oaxaca includes part of the subhorizontal "flat-slab" zone extending far from the trench beneath southern Mexico and the beginning of its transition to more regular subduction geometry to the southeast. We study the rupture of the 16 February 2018 Mw 7.2 Pinotepa earthquake near Pinotepa Nacional in Oaxaca, a thrust event on the subduction interface. The Pinotepa earthquake occurred about 350 km from the 8 September 2017 Mw 8.2 Tehuantepec earthquake in the subducting slab offshore Oaxaca and Chiapas; it was in an area of Coulomb stress decrease from the M8.2 event, so it is unlikely to be a regular aftershock and was not triggered by the static stress change. Geodetic measurements from interferometric synthetic aperture radar (InSAR) analysis and time-series analysis of GPS station data constrain finite-fault slip models for the M7.2 Pinotepa earthquake. We analyzed InSAR data from the Copernicus Sentinel-1A and -1B satellites and the JAXA ALOS-2 satellite. Our Bayesian (AlTar) static slip model for the Pinotepa earthquake shows all of the slip confined to a very small (10-20 km diameter) rupture, similar to some early seismic waveform fits. The Pinotepa earthquake ruptured a portion of the Cocos megathrust previously mapped as partially coupled, showing that at least small asperities in that zone of the subduction interface are fully coupled and fail in high-stress-drop earthquakes. The 2012 Mw 7.4 Ometepec earthquake is another example of an asperity in the partially coupled zone, but it was not imaged by InSAR, so its rupture extent is less well constrained.
The preliminary NEIC epicenter for the Pinotepa earthquake was about 40 km away (to the NE) from the rupture imaged by InSAR, but the updated NEIC epicenter and the Mexican SSN location are closer. Preliminary analysis of GPS data after the Pinotepa earthquake indicates rapid afterslip on the megathrust in the region of coseismic slip. Atmospheric noise masks the postseismic signal in early InSAR data.
Melt ponds play an important role in the seasonal evolution of Arctic sea ice. During the melt season, snow atop the sea ice begins to metamorphose and melt, forming ponds on the ice. These ponds reduce the albedo of the surface, allowing for increased solar energy absorption and thus further melting of snow and ice. Analyzing the spatial distribution and temporal evolution of melt ponds helps us understand the sea ice processes that occur during the summer melt season. It has been shown that the inclusion of melt pond parameters in sea ice models increases the skill of predicting the summer sea ice minimum extent. Previous studies have used remote sensing imagery to characterize surface features and calculate melt pond statistics. Here we use new observations of melt ponds obtained by the Digital Mapping System (DMS) flown onboard NASA Operation IceBridge (OIB) during two Arctic summer melt campaigns which surveyed thousands of kilometers of sea ice and resulted in more than 45,000 images. One campaign was conducted in the Beaufort Sea (July 2016), and one in the Lincoln Sea and the Arctic Ocean north of Greenland (July 2017). Using these data we expect to advance our understanding of the differences and similarities between melt pond features on young, thin sea ice seen in the Beaufort Sea versus those on multi-year ice. We have developed a pixel-based classification scheme by considering the different RGB spectral values associated with each surface type. We identify four sea ice surface types (level ice, rubbled ice, open water, and melt ponds). The classification scheme enables the calculation of parameters including melt pond fraction, ice concentration, melt pond area, and melt pond dimensions. We compare results with data from the Airborne Topographic Mapper (ATM), a laser altimeter also operated during these OIB missions. Given the extent over which the OIB data are available, regional information may be derived. 
Leveraging existing satellite data products, we examine whether the high-resolution airborne statistics are representative of the region and can be scaled up for comparison against satellite-derived parameters such as ice concentration and extent.
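A pixel-based RGB classification of the kind described can be sketched with simple threshold rules over spectral values. The thresholds and decision rules below are hypothetical placeholders for illustration, not the values used for the DMS imagery.

```python
import numpy as np

def classify_pixels(rgb):
    """Classify sea-ice surface pixels from RGB values in [0, 1].
    rgb: (..., 3) float array. Returns integer labels:
    0 = open water, 1 = melt pond, 2 = level ice, 3 = rubbled ice.
    Thresholds are illustrative, not the DMS classification values."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    brightness = rgb.mean(axis=-1)
    labels = np.full(brightness.shape, 3, dtype=int)  # default: rubbled ice
    labels[brightness > 0.8] = 2                      # very bright: level ice
    labels[(b > r + 0.15) & (brightness > 0.3)] = 1   # blue-ish, mid-bright: pond
    labels[brightness < 0.2] = 0                      # dark: open water
    return labels
```

Melt pond fraction and ice concentration then follow directly from label counts per image.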
Current estimates of the impact of an increase in greenhouse gas concentrations on global warming, including those by the IPCC and in general circulation models, are based on radiative forcing. Two recently published formulations of the theoretical foundation for radiative forcing are reviewed. Radiative forcing at the tropopause is calculated by assuming that the absorption of terrestrial radiation by greenhouse gases is determined by their spectral properties, using a radiative transmittance function based on the line strength and line shape of the absorption lines and the vertical optical mass, while, under conditions of local thermodynamic equilibrium, the emission of radiation at each layer of the atmosphere is given by the Planck blackbody function at the local atmospheric temperature. Radiative forcing is given by the net change in radiative flux at the tropopause due to an increase in greenhouse gases. Climate change is seen to take place when the system responds to restore radiative equilibrium. Without any theoretical foundation, a linear relationship between the change in surface temperature in °C and radiative forcing is assumed. Here, the IPCC 2013 estimate of radiative forcing of 2.83 W/m2 due to the increase in greenhouse gases from 1750 to 2011 is used to calculate the resulting change in radiative flux at the Earth's surface under reasonable assumptions, and the Stefan-Boltzmann law is applied to calculate a change in surface temperature of between 0.8 and 1.0 °C. This represents a climate sensitivity of around 0.32 °C/(W/m2), about one third of the climate sensitivity of 1.0 °C/(W/m2) used by IPCC 2013, which was obtained from the mean regression-based values of 30 climate models.
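The Stefan-Boltzmann step can be illustrated by linearizing F = σT⁴ about the mean surface temperature, giving ΔT = ΔF / (4σT³). Applying the 2.83 W/m2 tropopause forcing directly yields roughly 0.5 °C; the abstract's 0.8-1.0 °C range rests on its separately computed (larger) surface flux change, which is not reproduced in this sketch.

```python
SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def delta_t_surface(delta_flux, t_surface=288.0):
    """Linearized Stefan-Boltzmann temperature response: dT = dF / (4*sigma*T^3).
    t_surface is the global-mean surface temperature in kelvin (assumed 288 K)."""
    return delta_flux / (4.0 * SIGMA * t_surface ** 3)

# Tropopause forcing applied directly (illustrative only; the abstract uses
# a larger assumed change in flux at the surface)
dt = delta_t_surface(2.83)
```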
To date, actively flowing lava has only been observed on Earth and on Jupiter's moon Io. This lack of observation means that for the vast majority of volcanic systems in the Solar System, solidified lava-flow morphologies are used to infer important information about eruption and emplacement parameters, including lava supply rate, lava composition, lava rheology, and whether emplacement was laminar or turbulent. Commonly used models that relate simple lava flow morphologic properties (e.g., width, thickness, length) to emplacement characteristics are based on assumptions that are readily misinterpreted. For example, the simplifying assumption of fully turbulent lava flow allows for a thermally mixed flow interior, but it ignores the lava properties that naturally work to suppress full turbulence (such as thermal boundary layers encasing active lava flows and a temperature-dependent lava rheology). Full turbulence in silicate lava flows erupted into environments colder than the lava solidification temperature in fact requires a rare combination of characteristics. We model Bingham plastic, Newtonian, and Herschel-Bulkley fluids in rectangular channels, tubes, and sheets with computational fluid dynamics (COMSOL) software to obtain flow solutions and general flow-rate equations, and we compare them to field measurements of lava flow velocities and flow rates. We present these as more realistic alternatives to older, simpler rate-from-morphology models. We find that several lava rheological properties work together to delay the onset of turbulence as compared to isothermal Newtonian materials, and that while turbulent lava flows certainly exist, they are not as prevalent as the published literature might indicate. Results obtained from models that assume full turbulence in silicate flows on the terrestrial planets should therefore be interpreted cautiously.
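The three rheologies compared can all be written as special cases of the Herschel-Bulkley constitutive law, τ = τ₀ + K·γ̇ⁿ. The sketch below uses illustrative parameter values, not values fitted to any lava.

```python
def herschel_bulkley_stress(gamma_dot, tau0, K, n):
    """Shear stress for a Herschel-Bulkley fluid: tau = tau0 + K * gamma_dot**n.
    Newtonian fluid: tau0 = 0, n = 1 (K is then the viscosity).
    Bingham plastic: n = 1 with a finite yield stress tau0. SI units assumed."""
    return tau0 + K * gamma_dot ** n

# Newtonian limit: stress proportional to strain rate
newtonian = herschel_bulkley_stress(2.0, tau0=0.0, K=100.0, n=1.0)
# Bingham plastic: yield stress plus a linear viscous term
bingham = herschel_bulkley_stress(2.0, tau0=50.0, K=100.0, n=1.0)
```

Below the yield stress τ₀, a Bingham or Herschel-Bulkley material does not flow at all, which is one of the properties that suppresses turbulence relative to an isothermal Newtonian fluid.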
The formation of crystal clusters may influence the mechanical behaviour of magmas. However, whether clusters form largely from physical contact in a mobile state during sedimentation and stirring, or require residence in a crystal mush, is not well understood. We use discrete-element fluid dynamics numerical experiments to illuminate the potential for clustering from both sedimentation and open-system mixing in a model olivine basalt reservoir for three different initial solid volume fractions. Crystal clustering is quantified using bulk measures of clustering, such as the R index and Ripley's L(r) and g(r) functions, as well as a variable-scale technique based on Voronoi tessellation, which also provides orientation data. Probability density functions for the likelihood of crystal clustering under freely circulating conditions indicate that clustered and non-clustered textures are nearly equally likely in natural examples. A crystal cargo in igneous rock suites exhibiting a dominance of crystal clusters may therefore largely sample magmatic material formed in a crystal mush.
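Of the bulk clustering measures mentioned, the R index (the Clark-Evans nearest-neighbour statistic) is the simplest to sketch. The illustration below omits edge corrections, which a production analysis would include.

```python
import numpy as np

def clark_evans_r(points, area):
    """Clark-Evans R index: ratio of the observed mean nearest-neighbour
    distance to that expected under complete spatial randomness.
    R < 1 indicates clustering, R ~ 1 randomness, R > 1 ordering.
    No edge correction is applied (illustration only)."""
    pts = np.asarray(points, dtype=float)
    d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)               # exclude self-distances
    mean_nn = d.min(axis=1).mean()            # observed mean NN distance
    expected = 0.5 / np.sqrt(len(pts) / area) # CSR expectation
    return mean_nn / expected

# Four crystals at the corners of a unit square: a strongly ordered pattern
r_index = clark_evans_r([(0, 0), (0, 1), (1, 0), (1, 1)], area=1.0)
```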
The climatology of upwelling in the tropical tropopause layer (TTL) in current climate simulations and in future climate projections is examined using models participating in the Coupled Model Intercomparison Project Phase 5 (CMIP5). Large intermodel differences in TTL upwelling appear in the current climate simulations. Model composite analysis and upwelling diagnosis based on the zonal momentum budget indicate that the intermodel differences in upwelling are controlled by meridional eddy momentum fluxes associated with tropical planetary waves and midlatitude synoptic waves. Future climate simulations indicate that upwelling changes in the TTL are significantly correlated with the upwelling in the current climate simulations: models with strong (weak) TTL upwelling in the current climate tend to project strong (weak) upwelling enhancement in the future climate. The intermodel differences in the upwelling change arise from the same dynamical factors as in the current climate. The contribution of sea surface temperature (SST) to the intermodel upwelling differences is examined with SST-prescribed simulations in CMIP5; the contribution of intermodel SST differences to the upwelling is smaller than that of intrinsic atmospheric intermodel differences. The significant correlation of tropical upwelling between the current climate simulations and the future changes appears to be independent of the target latitude range.
The Western US accounts for a significant share of the forested biomass and carbon uptake within the conterminous United States. Warming and drying climate trends, combined with a legacy of fire suppression, have left Western forests particularly vulnerable to disturbance from insects, fire, and drought mortality. These challenging conditions may significantly weaken the region's ability to take up carbon from the atmosphere and warrant continued monitoring. Traditional methods of carbon monitoring are limited by the complex terrain of the Rocky Mountains, which leads to complex atmospheric flows coupled with heterogeneous climate and soil conditions. Recently, solar-induced fluorescence (SIF) has been found to be a strong indicator of gross primary productivity (GPP), and the increased availability of remotely sensed SIF provides an opportunity to estimate GPP and ecosystem function across the Western US. Although the SIF-GPP empirical linkage is strong, the mechanistic understanding of the relationship between SIF and GPP is lacking; it ultimately depends upon changes in leaf chemistry that partition absorbed radiation into photochemistry, heat (via non-photochemical quenching, NPQ), leaf damage, or SIF. This mechanistic detail is necessary to fully leverage observed SIF to constrain model estimates of GPP and improve the representation of ecosystem processes. Here, we include an improved fluorescence model within CLM 4.5 to simulate seasonal changes in SIF at a sub-alpine forest in Colorado. We find that when the model includes a representation of sustained NPQ, the simulated fluorescence is much closer to the seasonal pattern of SIF observed from the GOME-2 satellite platform and a custom tower-mounted spectrometer system. We also find that average air temperature may be used as a predictor of sustained NPQ when observations are not available.
This relationship to air temperature is promising because it may allow for efficient spatial upscaling of SIF simulations, given widespread availability of temperature data, but not NPQ observations. Further improvements to the fluorescence model should focus upon distinguishing between the impacts of NPQ versus the de-activation of photosystems brought on by high-stress environmental conditions.
The origin of Kiruna-type magnetite-apatite deposits, which are thought to form by magmatic and/or hydrothermal processes, has recently come under renewed scrutiny. Geological and geochemical studies of volcanic-hosted magnetite deposits, including magnetite lava flows and ash layers at El Laco, a volcano in the Central Volcanic Zone of northern Chile, suggest formation by eruptive emplacement of an iron oxide-rich melt. The generation of such exotic high-density, low-viscosity melts by dissociation from an andesitic host magma contaminated by shallow crustal sediments has only recently been demonstrated experimentally. The dynamics of volcanic emplacement have remained enigmatic because the high density of iron-rich melts seems to negate their eruption potential. Yet observations of ubiquitous vesiculation, degassing structures, and steam-heated alteration provide important clues that volatiles played a pivotal role in the volcanic emplacement. Here, we posit a scenario in which an iron-rich immiscible liquid gravitationally separates from its andesitic parent magma in a shallow magma reservoir and subsequently rises as a bubbly suspension along volcano-tectonic faults extending to the flanks of the edifice. We test this hypothesis through numerical models that capture both the deformation of the volcanic edifice and the melt transport within it. Preliminary results indicate that separation of a low-viscosity, iron- and volatile-rich melt from a silicic magma within a reasonable time is possible only if an interconnected melt drainage network forms at the granular scale. Results further suggest that magma reservoir deflation and/or minor local extension, combined with the topographic load of the edifice, may explain normal faults connecting the magma reservoir with magnetite flow locations on the volcano flanks.
Finally, our models show that hydrostatically driven flow of iron-rich melts into these faults at depth may trigger volatile exsolution and bubble expansion to provide sufficient driving force for an eruptive emplacement. Although the case for such magmatic ore formation is perhaps strongest at El Laco, evidence from other localities suggests that similar processes have been at work. The new insights derived from our models may, therefore, apply more generally to Kiruna-type deposits elsewhere.
The release of freshwater into the North Atlantic by glacial Lake Agassiz towards the end of the last glacial period is hypothesized to have triggered the Younger Dryas (Y.D.) cold event of 12.9-11.7 ka ago. The influx of freshwater into the Atlantic is thought to have weakened meridional overturning circulation, impeding heat transport to the northern latitudes. A subject of current debate is how the freshwater released from Lake Agassiz was routed to the ocean. One suggestion is that the retreat of the Laurentide Ice Sheet (LIS) from the Lake Superior Basin allowed water from Lake Agassiz, which had been flowing south to the Gulf of Mexico, to be redirected eastward via the St. Lawrence River to reach the Atlantic. Reported surface exposure ages indicate that the St. Lawrence River route became available between 13.0 and 12.7 ka ago, timing coincident with the onset of the Y.D. event. An alternative is that drainage from Lake Agassiz flowed northwestward to the Arctic Ocean via the Mackenzie River in northwest Canada. This suggestion is supported by modeling studies that found meridional overturning in the North Atlantic would have weakened more significantly if freshwater were introduced via the Arctic. A flow path and deposits identified on the Canadian Arctic Coastal Plain have yielded luminescence ages indicating that a major flood event occurred sometime between 13.0 and 11.7 ka ago. From that age range, however, it is not possible to ascertain whether the flood triggered the Y.D. Thus, in this study, to determine a more precise timeline for the northwestward drainage of Lake Agassiz, we collected postglacial eolian dune sands from northeast Alberta, Canada, an area through which water from Lake Agassiz would have had to pass to reach the Arctic Ocean.
The dune sands were sourced by wind from sediments left behind following the drainage of glacial Lake McConnell which had also been dammed in the region by the LIS. Preliminary luminescence ages obtained from the eolian sands suggest that northeast Alberta was free of both ice and glacial lakes by 13.5-12.5 ka ago. This indicates that flow from Lake Agassiz via the Mackenzie River cannot be excluded as a trigger for the Y.D. since the northwestward drainage path appears to have also been available at the start of the event.
We have developed SedEdu, a suite of computer-based interactive educational activities for introductory sedimentology and stratigraphy courses. SedEdu is a free and open-source Python framework through which any contributor can easily and seamlessly integrate their own "module" into the suite for distribution. In this way, SedEdu is a community-built tool written by sedimentology and stratigraphy educators for sedimentology and stratigraphy educators. The modules are coupled with "activities" that guide students through a concept, incrementally introducing components of the subject and testing for understanding and retention throughout the activity. For example, one module ("rivers2stratigraphy") illustrates the construction of fluvial stratigraphy by a laterally migrating river that leaves behind a channel-sand body, which subsides into the stratigraphic profile. The module allows students to modulate system properties such as water discharge, subsidence rate, and avulsion timescale, and to observe changes in the developed stratigraphic record. At one point during an activity accompanying this module, students are guided to decrease the basin subsidence rate and then to measure (using in-activity tools) the change in sand-body stacking patterns before and after the subsidence change. A small-scale test was conducted in which the SedEdu rivers2stratigraphy module was used as a curriculum component: one section of an undergraduate sedimentology and stratigraphy class was taught using traditional lecture materials, and the other used the module on student computers. The efficacy of this style of technology-enabled active learning was assessed through a multivariate evaluation of students' understanding of fluvial stratigraphy construction.
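A contributor-facing module registry of the kind described might look like the following sketch. All names and interfaces here are hypothetical illustrations of the plugin pattern and do not reproduce the actual SedEdu API.

```python
# Hypothetical plugin registry illustrating how modules might self-register
# with a suite; the real SedEdu interfaces are not reproduced here.
MODULE_REGISTRY = {}

def register_module(name):
    """Decorator: add a module class to the suite under the given name."""
    def wrapper(cls):
        MODULE_REGISTRY[name] = cls
        return cls
    return wrapper

@register_module("rivers2stratigraphy")
class Rivers2Stratigraphy:
    """Interactive model of fluvial stratigraphy construction."""
    sliders = ("water_discharge", "subsidence_rate", "avulsion_timescale")

    def run(self, **params):
        # A real module would launch an interactive GUI; sketched as a
        # no-op that echoes the slider settings.
        return {s: params.get(s) for s in self.sliders}
```

This pattern lets each contributed module live in its own file while the suite discovers it through the shared registry.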
Due to its inherent ability to estimate the background error covariances, the ensemble Kalman filter (EnKF) is thought to be a practical approach to strongly coupled data assimilation, in which the entire coupled model state is estimated as if it were a single integrated system. However, the increased complexity and multiple time scales of the coupled system aggravate the rank-deficiency and spurious-correlation problems caused by the limited ensemble size available for the analysis. To alleviate these problems, a distance-independent localization method that systematically selects the observations to be assimilated into each model variable has been developed and successfully tested with a nine-variable coupled model with slow and fast modes. This method, called the correlation-cutoff method, uses the mean squared ensemble error correlation between each observable and model variable to identify where the cross-update should be applied, and the assimilation of an observation is cut off when its squared error correlation becomes small. To implement the method in a more realistic model, we thoroughly investigate inter-fluid background covariances in an atmosphere-ocean coupled general circulation model, where the spatiotemporal scales of the coupled dynamics vary significantly with latitude and driving processes.
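The correlation-cutoff selection can be sketched as computing the squared ensemble correlation between each observable and each model variable, then masking pairs that fall below a threshold. This is a generic illustration, not the authors' implementation; the cutoff value is arbitrary.

```python
import numpy as np

def correlation_cutoff_mask(state_ens, obs_ens, cutoff=0.5):
    """Squared ensemble correlation between each model variable (rows of
    state_ens, shape (n_state, n_ens)) and each observable (rows of
    obs_ens, shape (n_obs, n_ens)). Pairs with squared correlation below
    the cutoff are excluded from the cross-update."""
    n_state, n_obs = state_ens.shape[0], obs_ens.shape[0]
    r2 = np.empty((n_state, n_obs))
    for i in range(n_state):
        for j in range(n_obs):
            r2[i, j] = np.corrcoef(state_ens[i], obs_ens[j])[0, 1] ** 2
    return r2, r2 >= cutoff

# Toy 4-member ensemble: variable 0 is strongly tied to the observable,
# variable 1 only weakly, so only variable 0 receives the cross-update.
state = np.array([[1.0, 2.0, 3.0, 4.0],
                  [1.0, -1.0, 1.0, -1.0]])
obs = np.array([[2.0, 4.0, 6.0, 8.0]])
r2, mask = correlation_cutoff_mask(state, obs, cutoff=0.5)
```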
Atmospheric temperature and relative humidity profiles are fundamental to atmospheric research such as numerical weather prediction and climate change assessment. Hyperspectral satellite data contain a wealth of relevant information and have been used in many algorithms (e.g., regression-based methods) to retrieve these profiles. Deep learning with deep neural networks (DNNs) is capable of finding complex relationships (functions) between pairs of input and output variables by assembling many simple non-linear modules and learning their parameters from large amounts of observations, and it has been successfully applied in many fields (such as image classification, object detection, and language translation). In this study, we explored the potential of retrieving atmospheric profiles from hyperspectral satellite radiance data using a DNN. The data requirement of the DNN technique is satisfied by the large amount of hyperspectral radiance data provided by the Suomi National Polar-orbiting Partnership (NPP) Cross-track Infrared Sounder (CrIS) and the reanalysis atmospheric profile data provided by the European Centre for Medium-Range Weather Forecasts (ECMWF). The proposed DNN consists of two consecutive parts. In the first part, the first 1245 bands of the NPP CrIS hyperspectral radiance data (648.75 to 2555 cm-1) are compressed into a 300-element vector representing their key features by stacked autoencoders. In the second part, a multi-layer self-normalizing neural network (SNN) maps the compressed 300-element vector into 55-layer temperature and relative humidity profiles. The DNN trainable variables are optimized by minimizing the difference between its predictions and the matched ECMWF temperature and humidity profiles (53,230 samples).
Finally, the DNN retrieved atmospheric temperature and relative humidity profiles and those provided by the NOAA Unique Combined Atmospheric Processing System (NUCAPS, the official retrieval products for CrIS) are compared with the matched radiosonde observations at one location.
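The two-part architecture above (1245 radiances compressed to 300 features, then SELU-activated layers mapping to 55-level temperature and humidity profiles) can be sketched at the shape level as follows. The weights are random and untrained, and the hidden-layer size other than 1245, 300, and 110 is an illustrative assumption.

```python
import numpy as np

def selu(x, alpha=1.6732632423543772, scale=1.0507009873554805):
    """Self-normalizing activation used by SNNs (Klambauer et al. 2017)."""
    return scale * np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

def forward(radiance, seed=0):
    """Untrained forward pass illustrating only the layer shapes:
    1245 radiances -> 300 encoded features -> 110 outputs
    (55 temperature levels + 55 relative-humidity levels).
    The 256-unit hidden layer is an assumed size, not from the paper."""
    rng = np.random.default_rng(seed)
    sizes = [1245, 300, 256, 110]
    x = radiance
    for n_in, n_out in zip(sizes[:-1], sizes[1:]):
        W = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_in, n_out))
        x = selu(x @ W)
    return x[:55], x[55:]  # temperature profile, relative-humidity profile
```

In the actual system, the first stage would be pretrained as stacked autoencoders and the whole network then optimized against the matched ECMWF profiles.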
To improve our understanding of the Canoe Reach Geothermal Field in the Rocky Mountain Trench of western Canada, we examine the distribution of local earthquakes using a network of 10 broadband seismometers deployed over a 40 by 60 km area across the trench. The Canoe Reach area exhibits strong cultural noise from communities, roads, and trains, which makes detecting earthquake signals challenging. We propose detecting earthquakes in the area of the trench by measuring the kurtosis of the seismic signal, a statistical moment that characterizes the distribution tail and is insensitive to emergent signals but sensitive to impulsive earthquake onsets. Examining the kurtosis of three-component seismograms for four months of data, we identified eight local earthquakes. An earthquake catalog produced by STA/LTA detection contains 11 events for the same four-month period, four of which were also detected by our kurtosis approach. By further exploring the kurtosis detections, we are refining our catalog to identify the source of discrepancies between it and the STA/LTA catalog. We then estimated locations of our detected events, and the uncertainties of those locations, through nonlinear Bayesian sampling. This method treats the origin times, half-space velocities, and the picking noise for P and S arrivals as unknowns; we employed this parameterization to test whether Bayesian sampling could account for the challenging noise environment. Five of the located events occurred outside the seismic network and three occurred inside it. The average horizontal and vertical uncertainties are 28 and 19 km, respectively, for the outside events, and lower, at 7 and 9 km, for the inside events. While the inside events exhibit lower spatial uncertainties than the outside events, their uncertainties remain large. We then examined whether the uncertainties could be further improved by jointly locating multiple events.
Jointly inverting two of the events from within the array decreased their average horizontal uncertainty from 6.5 to 2.5 km and the vertical from 14 to 7 km. Reducing location uncertainties in this manner will clarify the distribution of events and allow for an improved understanding of the seismicity and structure of the Rocky Mountain Trench.
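The kurtosis-based detection idea can be sketched with a sliding-window excess-kurtosis measure: an impulsive onset inflates the fourth moment of any window that contains it, while emergent noise does not. The window length and synthetic trace below are illustrative only.

```python
import numpy as np

def sliding_kurtosis(signal, window):
    """Excess kurtosis in a sliding window; impulsive onsets produce
    heavy-tailed windows and hence large kurtosis values."""
    out = np.full(len(signal), np.nan)
    for i in range(window, len(signal) + 1):
        w = signal[i - window:i]
        m, s = w.mean(), w.std()
        if s > 0:
            out[i - 1] = ((w - m) ** 4).mean() / s ** 4 - 3.0
    return out

# Synthetic trace: Gaussian noise with one impulsive arrival
rng = np.random.default_rng(42)
trace = rng.normal(0.0, 1.0, 600)
trace[400] += 60.0                 # impulsive onset
k = sliding_kurtosis(trace, window=100)
onset = int(np.nanargmax(k))       # index of the most impulsive window end
```

A practical detector would also apply a detection threshold and process all three components, as described above.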
Global food security is negatively affected by drought. Climate projections show that drought frequency and intensity may increase in different parts of the globe. Early-season forecasts of drought occurrence and severity could help to better mitigate its negative consequences. The objective of this study was to assess whether interannual variability in agricultural productivity in Chile can be accurately predicted from freely available, near-real-time data sources. As the response variable, we used the standard score of seasonal cumulative NDVI (zcNDVI), based on 2000-2017 data from the Moderate Resolution Imaging Spectroradiometer (MODIS), as a proxy for anomalies of seasonal primary productivity. The predictions were performed with forecast lead times of one to six months before the end of the growing season, which varied between census units in Chile. Predictor variables included the zcNDVI obtained by cumulating NDVI from season start up to prediction time; standardised precipitation indices, derived from satellite rainfall estimates, for time scales of 1, 3, 6, 12, and 24 months; the Pacific Decadal Oscillation and Multivariate ENSO indices; the length of the growing season; and latitude and longitude. We used two prediction approaches: (i) optimal linear regression (OLR), whereby for each census unit the single predictor was selected that best explained the interannual zcNDVI variability, and (ii) a multi-layer feedforward neural network architecture, often called deep learning (DL), in which all predictors for all units were combined in a single spatio-temporal model. Both approaches were evaluated with a leave-one-year-out cross-validation procedure. Both methods showed good prediction accuracies for short lead times and performed similarly across all lead times.
The mean R2cv values for OLR were 0.95, 0.83, 0.68, 0.56, 0.46 and 0.37, against 0.96, 0.84, 0.65, 0.54, 0.46 and 0.38 for DL, for one, two, three, four, five, and six months lead time, respectively. Given the wide range of climates and vegetation types covered within the study area, we expect that the presented models can contribute to an improved early warning system for agricultural drought in different geographical settings around the globe.
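The OLR approach with leave-one-year-out cross-validation can be sketched as follows: for each census unit, every candidate predictor is scored by cross-validated R², and the best single predictor is retained. This is an illustrative numpy sketch with synthetic data, not the study's code.

```python
import numpy as np

def loyo_r2(x, y):
    """Leave-one-year-out cross-validated R^2 for a single-predictor
    linear regression; x and y are indexed by year."""
    n = len(y)
    preds = np.empty(n)
    for k in range(n):                       # hold out one year at a time
        idx = np.arange(n) != k
        slope, intercept = np.polyfit(x[idx], y[idx], 1)
        preds[k] = slope * x[k] + intercept
    ss_res = ((y - preds) ** 2).sum()
    ss_tot = ((y - y.mean()) ** 2).sum()
    return 1.0 - ss_res / ss_tot

def optimal_linear_regression(predictors, y):
    """Select the single predictor with the highest cross-validated R^2."""
    scores = [loyo_r2(x, y) for x in predictors]
    return int(np.argmax(scores)), max(scores)

# Synthetic unit: zcNDVI perfectly explained by predictor 0, not predictor 1
x1 = np.arange(6, dtype=float)
x2 = np.array([5.0, 1.0, 4.0, 2.0, 0.0, 3.0])
y = 2.0 * x1 + 1.0
best_idx, best_score = optimal_linear_regression([x1, x2], y)
```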