Peidong Shi

and 11 more

The application of machine learning (ML) techniques in seismology has greatly advanced seismological analysis, especially earthquake detection and seismic phase picking. However, ML approaches still struggle to generalize to datasets that differ from their original training setting. Previous studies have focused on retraining or transfer-training models for these scenarios, but such efforts are restricted by the availability of high-quality labeled datasets. This paper demonstrates a new approach for augmenting already-trained models without the need for additional training data. We propose four strategies (rescaling, model aggregation, shifting, and filtering) to enhance the performance of pre-trained models on out-of-distribution datasets. We further devise several methodologies to ensemble the individual predictions from these strategies into a final unified prediction that combines prediction robustness with detection sensitivity. We develop an open-source Python module, quakephase, that implements these methods and can flexibly process continuous seismic input data at any sampling rate. With quakephase and pre-trained ML models from SeisBench, we perform systematic benchmark tests on data recorded by different types of instruments, ranging from acoustic emission sensors to distributed acoustic sensing, and collected at different scales, spanning from laboratory acoustic emission events to major tectonic earthquakes. Our tests highlight that rescaling is essential for dealing with small-magnitude seismic events recorded at high sampling rates, as well as larger-magnitude events with long codas and remote events with long wave trains. Our results demonstrate that the proposed methods effectively augment pre-trained models for out-of-distribution datasets, especially in scenarios with limited labeled data for transfer learning.
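To make the rescaling and ensembling ideas concrete, here is a minimal, hypothetical sketch (not the quakephase implementation): a high-sample-rate trace is interpolated to the ~100 Hz rate common for pre-trained pickers, and per-strategy probability curves are combined by a simple mean. All function names and parameter values are illustrative assumptions.

```python
# Hypothetical sketch of the "rescaling" strategy: resample a waveform
# recorded at an arbitrary rate to the 100 Hz rate many pre-trained
# pickers expect, and combine per-strategy predictions by averaging.
import numpy as np

TARGET_RATE = 100.0  # Hz; assumed training rate of the pre-trained model


def rescale(waveform: np.ndarray, fs: float, target_fs: float = TARGET_RATE) -> np.ndarray:
    """Linearly interpolate a 1-D trace from fs to target_fs samples/s."""
    n_out = int(round(len(waveform) * target_fs / fs))
    t_in = np.arange(len(waveform)) / fs
    t_out = np.arange(n_out) / target_fs
    return np.interp(t_out, t_in, waveform)


def ensemble(predictions: list) -> np.ndarray:
    """Combine per-strategy probability curves; the mean is one simple choice."""
    return np.mean(np.stack(predictions), axis=0)


# 1 s of 1 MHz acoustic-emission data, rescaled to 100 Hz:
ae_trace = np.random.default_rng(0).normal(size=1_000_000)
rescaled = rescale(ae_trace, fs=1e6)
print(rescaled.shape)  # (100,)
```

In practice one would hand the rescaled trace to a pre-trained SeisBench model and map the resulting picks back to the original time axis by the inverse of the same sampling-rate ratio.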
We develop a rate- and state-dependent friction (RSF) model to investigate a compendium of recent laboratory experiments. In the documented experiments, a fault was sheared until macroscopic stick-slip frictional failure. Before macro-failure, small precursory seismicity nucleated from regions that also experienced aseismic slow slip. This behavior requires heterogeneity, which we define in our model as local variation in frictional parameters inferred from surface roughness. During sliding, wear introduced a smooth, polished surface onto a previously rough surface and was quantified using a bimodal Gaussian distribution of surface heights. We used the spatial distribution of the smooth and rough sections to impose a binary partitioning of the critical slip distance $D_{c}$ on a planar frictional model. Simulations revealed that local seismicity nucleated on the “smooth” sections, while the larger “rough” section hosted aseismic slip. As the level of heterogeneity between smooth and rough sections increased, the model transitioned from predominantly stick-slip to creeping behavior. The simulations produced a dominant asperity, which appeared to control aspects of rupture nucleation: ($i$) weak heterogeneity caused the dominant asperity to generate foreshocks but also “ignite” a cascade-up, fault-wide event, while ($ii$) strong heterogeneity led to constrained repeaters. Seismic source properties: average slip $\delta$, seismic moment $M_{0}$, stress drop $\Delta \tau$, and fracture energy $G^{\prime}$, were determined for each event and agreed with kinematic estimates made independently from seismic measurements. Our numerical calculations provide insight into rate-dependent cascade-up nucleation theory, where frictional heterogeneity is associated with wear of solid frictional contacts in the laboratory.
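For context, a minimal sketch of a standard Dieterich-type (aging-law) RSF formulation with the binary $D_{c}$ partition described above; the notation and exact parameterization here are our assumptions, not necessarily those used in the paper:

\[
\mu(V,\theta) = \mu_{0} + a\,\ln\frac{V}{V_{0}} + b\,\ln\frac{V_{0}\,\theta}{D_{c}}, \qquad \frac{d\theta}{dt} = 1 - \frac{V\theta}{D_{c}},
\]

\[
D_{c}(x) =
\begin{cases}
D_{c}^{\mathrm{smooth}}, & x \in \text{polished sections},\\
D_{c}^{\mathrm{rough}}, & x \in \text{rough sections},
\end{cases}
\qquad D_{c}^{\mathrm{smooth}} < D_{c}^{\mathrm{rough}}.
\]

Here $V$ is slip rate, $\theta$ the state variable, and $a$, $b$ the direct-effect and evolution-effect coefficients; smaller $D_{c}$ on the polished sections shortens the nucleation length there, consistent with seismicity nucleating on the smooth patches.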
We investigate experimental results from a direct-shear friction apparatus, in which a fault was formed by pressing the mature, worn surfaces of two polymethyl methacrylate (PMMA) samples on top of each other in a dry environment. The fault was sheared until macroscopic stick-slip frictional failure occurred. Before the macro-failure, small precursory seismicity nucleated from regions that also experienced aseismic slow slip. These precursory events did not cascade up into gross fault rupture but arrested locally. The reasons why these ruptures arrested are investigated using a 1-D rate and state friction (RSF) model. Surface profilometry of the fault surface taken \textit{a posteriori} revealed wear in the form of a bimodal Gaussian distribution of surface heights. In our model, this unique distribution of surface roughness serves as a proxy for the heterogeneous spatial description of the critical slip distance $D_{c}$. We assume that smooth (polished) sections of the fault exhibited lower $D_{c}$ than rougher sections of the bimodal Gaussian roughness profile. Using a quasi-dynamic RSF model, we determined that localized seismicity initiated at the smooth sections. Source properties: average slip $\delta$, seismic moment $M_{0}$, stress drop $\Delta \tau$, and fracture energy $G^{\prime}$, were determined for each event. We compare the numerically modeled source properties to experimental source characteristics inferred from seismological estimates using an array of acoustic emission sensors in a concerted study. We discuss the similarities, discrepancies, and assumptions between these two independent models (kinematic and dynamic), used for the first time to study earthquakes in the laboratory.
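As a reference for the source properties listed above, the standard definitions one would use are sketched below (these conventions are our assumptions; $G$ is the shear modulus, $A$ the rupture area, $\bar{\delta}$ the average slip, and $\tau(\delta)$ the strength-versus-slip curve with residual strength $\tau_{\mathrm{f}}$):

\[
M_{0} = G\,A\,\bar{\delta}, \qquad
\Delta\tau = \bar{\tau}_{\mathrm{initial}} - \bar{\tau}_{\mathrm{final}}, \qquad
G^{\prime} = \int_{0}^{\delta_{c}} \left[\tau(\delta) - \tau_{\mathrm{f}}\right]\,d\delta,
\]

i.e., the fracture energy $G^{\prime}$ is the breakdown work per unit area accumulated as strength drops from its peak to its residual value over the slip-weakening distance $\delta_{c}$.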

Leila Mizrahi

and 2 more

We propose two new methods to calibrate the parameters of the epidemic-type aftershock sequence (ETAS) model based on expectation maximization (EM) while accounting for temporal variation of catalog completeness. The first method allows for model calibration on earthquake catalogs with a long history, featuring temporal variation of the magnitude of completeness, $m_c$. This extended calibration technique is beneficial for long-term probabilistic seismic hazard assessment (PSHA), which is often based on a mixture of instrumental and historical catalogs. The second method jointly estimates ETAS parameters and high-frequency detection incompleteness to address potential biases in parameter calibration due to short-term aftershock incompleteness. For this, we generalize the concept of completeness magnitude and consider a rate- and magnitude-dependent detection probability, embracing incompleteness instead of avoiding it. Using synthetic tests, we show that both methods can accurately invert the parameters of simulated catalogs. We then use them to estimate ETAS parameters for California from the earthquake catalog available since 1932. To explore how the newly gained information from the second method affects earthquake predictability, we conduct pseudo-prospective forecasting experiments for California. Our proposed model significantly outperforms the base ETAS model, and we find that the ability to include small earthquakes in simulations of future scenarios is the main driver of the improvement. Our results point towards a tendency of earthquakes to trigger similarly sized aftershocks, which has potentially major implications for our understanding of earthquake interaction mechanisms and for the future of seismicity forecasting.
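To illustrate what an ETAS model computes, here is a minimal sketch of a generic ETAS conditional intensity with an exponential productivity law and Omori-Utsu temporal decay. The parameter values and reference magnitude are illustrative assumptions; the paper's EM calibration and completeness terms are not reproduced here.

```python
# Generic ETAS conditional intensity (hypothetical parameters):
# lambda(t) = mu + sum_{t_i < t} k * exp(alpha*(m_i - m_ref)) * (t - t_i + c)**(-p)
import numpy as np


def etas_intensity(t, event_times, event_mags, mu=0.1, k=0.05,
                   alpha=1.5, c=0.01, p=1.2, m_ref=3.0):
    """Seismicity rate at time t (events per day): background rate mu plus
    the Omori-decaying contributions of all past events t_i < t, scaled by
    each event's magnitude-dependent productivity."""
    times = np.asarray(event_times, dtype=float)
    mags = np.asarray(event_mags, dtype=float)
    past = times < t
    triggering = (k * np.exp(alpha * (mags[past] - m_ref))
                  * (t - times[past] + c) ** (-p))
    return mu + triggering.sum()


# Rate one day after an M5 mainshock, well above the background rate:
rate = etas_intensity(1.0, event_times=[0.0], event_mags=[5.0])
print(rate > 0.1)  # True: aftershock triggering elevates the rate
```

In an EM calibration, the expectation step assigns each event a probability of being background versus triggered by each earlier event, and the maximization step updates $(\mu, k, \alpha, c, p)$ accordingly; the paper's methods additionally fold time-varying completeness into this loop.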