A major challenge in community code development and management is the testing and validation of public contributions. The community-developed GFDL Finite-Volume Cubed-Sphere Dynamical Core (FV3) is no exception: automated testing of contributions made to the FV3 public repository is paramount for ensuring the integrity of the many earth-system models and forecasting applications that use FV3 as a dynamical core. A build and test system for the FV3 dynamical core was developed for internal testing on NOAA Research and Development High Performance Computing Systems (RDHPCS). We have designed a continuous integration (CI) approach for the FV3 dynamical core GitHub repository that uses a cloud-based platform to perform automated compilation and reproducibility testing to validate community code contributions. A combination of NOAA RDHPCS Parallel Works virtual machines and containers developed at GFDL is used to compile and test code efficiently in the cloud. We will also discuss how we adapted the FV3 tests for automated CI.
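To illustrate the kind of reproducibility check such a CI pipeline performs, here is a minimal sketch in Python: it compares checksums of model output files against stored baseline digests to flag answer changes. The function names and file layout are hypothetical, not the actual FV3 test harness.

```python
import hashlib
import pathlib

def sha256_of(path):
    """Hex digest of one model output file."""
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def check_reproducibility(run_dir, baseline):
    """Compare run outputs against stored baseline checksums.

    `baseline` maps output filename -> expected sha256 hex digest.
    Returns the list of files that fail bit-for-bit reproducibility.
    """
    run_dir = pathlib.Path(run_dir)
    return [name for name, expected in baseline.items()
            if sha256_of(run_dir / name) != expected]
```

A real FV3 test also checks compilation across compilers and MPI layouts; the checksum comparison above is only the final bit-for-bit answer-change gate.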
This research aims to address the limitations of current building codes and regulations for coastal flood hazards in the United States. The current codes consider only the 100-year flood hazard, which is not adequate for areas prone to more severe floods. The research presents a new methodology that considers longer return period floods by using data from flood insurance studies (FIS), statistical models, and best practices to provide guidance for architects and engineers designing buildings in coastal high hazard areas. National Flood Insurance Program (NFIP) insurance policies and building codes require structures in coastal high hazard areas to be elevated above the design flood elevation (DFE) without using fill. However, the current elevation procedures take into account only the 100-year base flood elevation, with minimal guidance for longer return period floods, which is of particular concern for critical facilities and buildings with a longer design life, such as institutional buildings. The methodology proposed in this research uses stillwater elevations (SWEL) from FIS and statistical models to extrapolate flood elevations associated with longer return periods. The FIS data are fitted using the Gumbel extreme value distribution, which yields an equation for extrapolating flood elevations beyond code requirements and current best practices. The results are evaluated using R² values, differences between projected elevations and known elevations for the same return period, and data normalized to the 100-year SWELs. It is important to note that the results of this research are not intended to be integrated into current codes or policy regulations in the United States. Instead, they are intended to provide generalized guidance to aid practitioners in decision making by consolidating current codes, best practices, and characteristics of the changing coastal environment.
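The Gumbel extrapolation step can be sketched in a few lines: fit the reduced variate of the Gumbel distribution to (return period, SWEL) pairs, then evaluate the fitted line at longer return periods. The values below are hypothetical placeholders, not data from an actual flood insurance study.

```python
import numpy as np

# Hypothetical (return period, SWEL) pairs as might be read from an FIS
# summary table -- illustrative values only, not from an actual study.
return_periods = np.array([10.0, 50.0, 100.0, 500.0])   # years
swel = np.array([2.1, 2.8, 3.1, 3.8])                   # stillwater elevation, m

# Gumbel reduced variate for annual exceedance: y = -ln(-ln(1 - 1/T))
y = -np.log(-np.log(1.0 - 1.0 / return_periods))

# A linear fit SWEL = a*y + b is the extrapolation equation.
a, b = np.polyfit(y, swel, 1)

def swel_for_return_period(T):
    """Extrapolated stillwater elevation (m) for return period T (years)."""
    yT = -np.log(-np.log(1.0 - 1.0 / T))
    return a * yT + b

# Goodness of fit (R^2), as used in the study's evaluation.
pred = a * y + b
r2 = 1.0 - np.sum((swel - pred) ** 2) / np.sum((swel - swel.mean()) ** 2)
```

Because the Gumbel reduced variate is linear in the fitted parameters, the same fitted line can be read off at, say, T = 1000 years to give elevation guidance beyond the code-mandated 100-year level.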
This research aims to provide architects and engineers with a better understanding of potential flood hazards in coastal areas and to help them design buildings that are more resilient to these hazards.

Keywords: stillwater elevation (SWEL), design flood elevation (DFE), National Flood Insurance Program (NFIP), base flood elevation (BFE), Gumbel extreme value distribution, flood insurance studies (FIS)
Though precise, most LiDARs are vulnerable to position and pointing errors, as deviations from the expected principal axis lead to projection errors on target. While the fidelity of location and pointing solutions can be high, determination of their uncertainty remains relatively limited. As a result, NASA's 2021 Surface Topography and Vegetation (STV) Incubation Study Report lists vertical (horizontal, geolocation) accuracy as an associated parameter for all (most) identified Science and Application Knowledge Gaps, and identifies maturation of Uncertainty Quantification (UQ) methodologies on the STV Roadmap for this decade. The presented generalized Polynomial Chaos Expansion (gPCE) based method has wide-ranging applicability for improving positioning and geolocation uncertainty estimates across all STV disciplines, and is extended here from the bare-earth to the bathymetric lidar use case, adding complexity introduced by entry angle, wave structure, and sub-surface roughness. This research addresses knowledge gaps in bathy-LiDAR measurement uncertainty through a more complete description of total aggregated uncertainties, from the system level to geolocation, by applying a gPCE-UQ approach. The current standard approach is the calculation of Total Propagated Uncertainty, which is often plagued by simplifying approximations (e.g., strictly Gaussian uncertainty sources) and ignored covariances. gPCE intrinsically accounts for covariance between variables when determining measurement uncertainty, without manually constructing a covariance matrix, through a surrogate model of the system response. Additionally, gPCE allows arbitrarily high order uncertainty estimates (limited only by the one-time computational cost of computing the gPCE coefficients), accurate representation of non-Gaussian sources of error (e.g., wave height energy distributions), and direct integration of measurement requirements into the design of LiDAR systems, by making global sensitivity analysis computationally trivial.
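To make the gPCE idea concrete, here is a minimal one-dimensional sketch for a Gaussian input using probabilists' Hermite polynomials; the quadratic "response" function is purely illustrative, not an actual LiDAR geolocation model. Once the expansion coefficients are computed, the mean and variance of the response follow directly from them, with no covariance matrix constructed by hand.

```python
import math
import numpy as np
from numpy.polynomial import hermite_e as He

def gpce_coeffs(model, order, nquad=20):
    """Project model(x), x ~ N(0,1), onto probabilists' Hermite polynomials."""
    x, w = He.hermegauss(nquad)   # Gauss-Hermite nodes/weights, weight exp(-x^2/2)
    w = w / w.sum()               # normalize to the standard normal measure
    fx = model(x)
    # c_k = E[f(X) He_k(X)] / E[He_k(X)^2], with E[He_k^2] = k!
    return np.array([np.sum(w * fx * He.hermeval(x, [0] * k + [1]))
                     / math.factorial(k) for k in range(order + 1)])

# Illustrative nonlinear "system response" (a hypothetical ranging error model).
model = lambda x: 0.5 * x**2 + 2.0 * x + 1.0

c = gpce_coeffs(model, order=4)
mean = c[0]                                   # E[f] is the zeroth coefficient
var = sum(c[k] ** 2 * math.factorial(k) for k in range(1, len(c)))
```

In the multivariate case the same projection runs over a tensor basis matched to each input's distribution (e.g. Legendre for uniform, or a custom basis for a wave-height energy distribution), which is how non-Gaussian sources enter without Gaussian approximations; Sobol sensitivity indices then fall out of the coefficients almost for free.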
Key Points:
- The Lagrangian-based HYSPLIT modelling system was used to estimate volcanic ash particle trajectories.
- HYSPLIT simulations were run for the periods before and after the massive eruption of 15 January 2022 (termed pre-caldera and post-caldera, respectively, in Section 5).
- Volcanic ash particle deposition and ash particle positions were simulated using HYSPLIT for the massive eruption event of the HTHH submarine volcano.

Abstract
Volcano-seismic signals such as long-period (LP) events and tremors are important indicators of volcanic activity and unrest. Explosive volcanic eruptions are stunning phenomena that influence the Earth's natural systems and climate in a variety of ways. This paper discusses the mid-January 2022 eruptions of the HTHH submarine volcano, especially the 15 January event, which had many impacts in the region (dynamical, chemical, and climatic). Given the potential for a volcanic eruption to affect climate, the oceanic system, or climate variability, consistent and understandable modelling of these exceptional events is critical. The main objective was to determine the volcanic effects in the atmospheric boundary layer (ABL) during the multiple eruptive events that occurred at HTHH in January 2022. Our discussion also contributes to understanding the underlying Earth system dynamics triggered by cataclysmic volcanic eruptions. The Hybrid Single-Particle Lagrangian Integrated Trajectory (HYSPLIT) modelling system, developed by the National Oceanic and Atmospheric Administration's (NOAA) Air Resources Laboratory, was used to investigate the effects caused by the multiple mid-January 2022 eruptions of HTHH. Our modelling results include model trajectories at different levels, volcanic ash deposition, and ash particle positions from the series of multiple eruption events of the submarine volcano HTHH in mid-January 2022.
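HYSPLIT itself is a compiled modelling system driven by gridded meteorological reanalysis fields, but the underlying Lagrangian idea can be sketched in a few lines: advect a parcel forward through a wind field by numerical integration. The wind field, time step, and units below are entirely hypothetical placeholders for illustration, not HYSPLIT's actual numerics.

```python
import numpy as np

def wind(lon, lat, t):
    """Hypothetical steady wind field in degrees/hour (illustrative only;
    HYSPLIT itself is driven by gridded meteorological reanalysis data)."""
    u = 0.5 + 0.1 * np.sin(np.radians(lat))   # zonal component
    v = 0.05 * np.cos(np.radians(lon))        # meridional component
    return u, v

def trajectory(lon0, lat0, hours, dt=1.0):
    """Forward Lagrangian trajectory of one parcel by Euler integration."""
    lons, lats = [lon0], [lat0]
    t = 0.0
    while t < hours:
        u, v = wind(lons[-1], lats[-1], t)
        lons.append(lons[-1] + u * dt)
        lats.append(lats[-1] + v * dt)
        t += dt
    return np.array(lons), np.array(lats)

# Start near HTHH (~20.5 S, 175.4 W) and integrate 48 hours forward.
lons, lats = trajectory(-175.4, -20.5, hours=48)
```

In the full model, thousands of such particles are released at multiple heights, with turbulent dispersion and gravitational settling added, which is what yields the ash deposition and particle position fields reported here.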
Geomagnetic storms are primarily driven by stream interaction regions (SIRs) and coronal mass ejections (CMEs). Since SIR and CME storms have different solar wind and magnetic field characteristics, the magnetospheric response may vary accordingly. Using FAST/TEAMS data, we investigate the variation of ionospheric O+ and H+ outflow as a function of geomagnetic storm phase during SIR and CME magnetic storms. The effects of storm size and solar EUV flux, including solar cycle and seasonal effects, on storm-time ionospheric outflow are also investigated. The results show that for both CME and SIR storms, the O+ and H+ fluence peaks during the main phase and then declines in the recovery phase. However, for CME storms, there is also a significant increase during the initial phase. Because the outflow starts during the initial phase in CME storms, there is time for the O+ to reach the plasma sheet before the start of the main phase. Since plasma is convected into the ring current from the plasma sheet during the main phase, this may explain why more O+ is observed in the ring current during CME storms than during SIR storms. We also find that outflow fluence is higher for large storms than for moderate storms and is higher during solar maximum than during solar minimum.
The identification of prospective groundwater recharge zones is crucial for supplementing groundwater resources. This is especially critical in hard rock regions, where groundwater is the principal source of potable water and is rapidly depleting due to uncontrolled mining. The present study used a combination of modern methodologies and technologies to analyze the occurrence of groundwater potential zones, including geographic information systems (GIS), remote sensing (RS), electrical resistivity surveys (vertical electrical sounding, VES), and multi-criteria decision analysis (MCDA). Several thematic layers were prepared, including geomorphology, lineament density, drainage density, soil type, geology, rainfall, soil texture, elevation, and land use land cover (LULC), which were weighted according to their impact on the groundwater prospect zone. The analytical hierarchy process (AHP) was used to assign and normalize the relative weights of these layers. Vertical electrical sounding was used to locate water-bearing formations and fracture zones at various points throughout the selected region. The groundwater prospect zones delineated using these methods were classified into five classes: very low, low, moderate, high, and very high. The delineated high groundwater potential zones were found in the northeastern part and just below the central region, while low to moderate zones were distributed almost evenly over the rest of the region. The result was validated using well yield data, which showed 72 percent agreement with the delineated groundwater potential zones. Hence, the AHP model in the current work performed well in terms of prediction accuracy.
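The AHP weighting step can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to normalized layer weights via its principal eigenvector, and a consistency ratio checks the judgments. The matrix values and layer names below are hypothetical, not those used in the study.

```python
import numpy as np

# Hypothetical pairwise comparison matrix (Saaty 1-9 scale) for four of the
# thematic layers, e.g. geomorphology, lineament density, drainage density, LULC.
A = np.array([
    [1.0,   3.0,   5.0,   7.0],
    [1/3.0, 1.0,   3.0,   5.0],
    [1/5.0, 1/3.0, 1.0,   3.0],
    [1/7.0, 1/5.0, 1/3.0, 1.0],
])

# Priority weights: principal eigenvector of A, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()

# Consistency check: CR < 0.1 is conventionally acceptable.
n = A.shape[0]
CI = (eigvals.real[k] - n) / (n - 1)   # consistency index
RI = 1.12                              # Saaty's random index for n = 4
CR = CI / RI                           # consistency ratio
```

The resulting weights are then applied to the reclassified raster layers in a weighted overlay to produce the groundwater potential map.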
V. S. Gokul1,2,3, K. M. Sreejith4, G. Srinivasa Rao5, M. Radhakrishna1*, P. G. Betts2

1 Department of Earth Sciences, Indian Institute of Technology Bombay, Mumbai 400 076, India
2 School of Earth, Atmosphere and Environment, Monash University, Clayton, Victoria 3800, Australia
3 IITB-Monash Research Academy, Indian Institute of Technology Bombay, Mumbai 400 076, India
4 Geosciences Division, Space Applications Centre, Ahmedabad 380 015, India
5 Department of Applied Geophysics, Indian Institute of Technology (Indian School of Mines) Dhanbad, Dhanbad 826 004, India

(*Corresponding author: firstname.lastname@example.org)

Abstract
The Indian Ocean hosts the largest geoid anomaly, known as the Indian Ocean Geoid Low (IOGL). This long-wavelength geoid depression has a magnitude of -106 m and is centered south of India. The nature and depth of the sources causing this characteristic low are poorly constrained and have been the subject of debate. In this abstract, we focus on understanding the density contributions to the geoid low from the crust and upper mantle using a joint analysis of geoid and gravity data along with published tomographic models of the region. Decomposition of the geoid anomaly in the spectral domain indicates that mass anomalies below the upper mantle (> 700 km) contribute 90% of the total geoid anomaly. To compute the upper mantle contribution to the IOGL, we used the Moho geometry and the crustal density structure from 3-D gravity inversion, and the SL2013sv tomography model for the upper mantle density structure. The presence of density sources not resolved in the modelling within the sub-lithospheric mantle is confirmed by comparing the crustal and upper mantle (up to 700 km) geoid response below the IOGL with the n = 10 residual geoid anomaly.
Integrated gravity-geoid 2-D modeling of the geometries of the anomalous sources, located at the base of the LAB and at a depth of 320-340 km, respectively, confirms that the density structures up to 700 km explain only about ten percent of the IOGL, which matches well with the spectral decomposition results. This suggests that lower mantle sources, such as paleo-subducted slabs or plume sources from the core-mantle boundary, contribute significantly to the IOGL.
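The spectral decomposition step can be illustrated schematically: a geoid field expressed in spherical harmonics is split into a long-wavelength part (degrees up to n = 10, dominated by deep sources) and the n > 10 residual. The sketch below uses unnormalized Legendre functions and random placeholder coefficients purely for illustration; real computations use fully normalized Stokes coefficients from a global gravity model.

```python
import numpy as np
from scipy.special import lpmv

R = 6371e3  # mean Earth radius, m

def geoid_band(C, S, lat_deg, lon_deg, n_min, n_max):
    """Geoid height contribution from spherical harmonic degrees n_min..n_max.

    C[n, m], S[n, m] are unnormalized coefficients; a real computation would
    use fully normalized Stokes coefficients and stable recursions.
    """
    x = np.sin(np.radians(lat_deg))
    lam = np.radians(lon_deg)
    N = 0.0
    for n in range(n_min, n_max + 1):
        for m in range(n + 1):
            N += R * lpmv(m, n, x) * (C[n, m] * np.cos(m * lam)
                                      + S[n, m] * np.sin(m * lam))
    return N

# Random placeholder coefficients; split the field at degree n = 10.
nmax = 20
rng = np.random.default_rng(0)
C = rng.normal(0.0, 1e-8, (nmax + 1, nmax + 1))
S = rng.normal(0.0, 1e-8, (nmax + 1, nmax + 1))
long_wavelength = geoid_band(C, S, -5.0, 78.0, 2, 10)   # deep-source part
residual = geoid_band(C, S, -5.0, 78.0, 11, nmax)       # n > 10 residual
total = geoid_band(C, S, -5.0, 78.0, 2, nmax)
```

Because the expansion is linear in the coefficients, the modeled crustal and upper mantle geoid response can be subtracted from the observed residual band to isolate the unexplained deep-mantle signal.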
Abstract ID and Title: 1147876, Role of soil in regulating runoff processes in Pine- and Oak-dominated headwater catchments of the Western Himalayas
Final Paper Number: H44H-02
Presentation Type: Online Poster Discussion
Session Number and Title: H44H Runoff Generation Processes: Exploring Thresholds, Sources, and Pathways from Plot to Continental Scales II Online Poster Discussion
Session Date and Time: Thursday, 15 December 2022; 13:45 - 14:45
Location: Online Only
Climate change and a growing global population pose ongoing threats to critical resources. As the resources required by the agriculture sector continue to diminish, it is critical to leverage emerging technologies and new solutions within the sector. New cultivation practices have emerged over the years, allowing food to be grown within urban areas. Greenhouses are versatile both in the resources needed for their operation and in the foods that can be grown. While greenhouses offer the potential for a more constant food supply, there is a lack of optimization between their components. There are benefits to having modular greenhouse components, allowing for adjustments or repairs to individual pieces. However, the system as a whole is inefficient, since each component functions without considering the others. To improve greenhouse efficiency, a closed-loop system can be introduced. A greenhouse is a closed system, and by repurposing, reusing, and recirculating resources, it can evolve into a closed-loop system. This enables the components of the system to share resources more effectively, communicate any required system changes, and minimize waste outputs. This research explores current technology at the intersection of agriculture and computer science to create a fully closed-loop system. The most notable system components are food waste, nutrient systems, water systems, growing media, and heating and energy. Based on existing findings, not all components within a greenhouse can leverage the same artificial intelligence methods and techniques. Methods are in place that allow individual components to interpret data gathered from the greenhouse and alter their operational patterns; however, this information is not yet communicated to the other parts of the system so that they, too, can make informed, data-driven decisions.
Individual components can be optimized to reduce resource reliance, up to a threshold beyond which the plant's development and yield are affected. When the resource needs and outputs of all system components converge, the functionality of the system can be optimized to use resources more efficiently. Our results indicate that research exploring closed-loop systems within greenhouses remains siloed and isolated, and does not yet leverage their full capabilities.
The Hunga Tonga Volcano eruption launched a myriad of atmospheric waves that have been observed to travel around the world several times. These waves generated Traveling Ionospheric Disturbances (TIDs) in the ionosphere, which are known to adversely impact radio applications such as Global Navigation Satellite Systems (GNSS). One such GNSS application is Precise Point Positioning (PPP), which can achieve cm-level accuracy using a single receiver, following a typical convergence time of 30 minutes to 1 hour. A network of ionosondes located throughout the Australian region was used in combination with GNSS receivers to explore the impacts of the Hunga Tonga Volcano eruption on the ionosphere and the subsequent impacts on PPP. It is shown that PPP accuracy was not significantly impacted by the arrival of the TIDs and spread-F, provided that PPP convergence had already been achieved. However, when the PPP algorithm was initiated from a cold start either shortly before or after the TID arrivals, the convergence times were significantly longer. GNSS stations in northeastern Australia experienced increases in convergence time of more than 5 hours. Further analysis reveals that the increased convergence times were caused by a super equatorial plasma bubble (EPB), the largest observed over Australia to date. The EPB structure was found to be ~42 TECU deep and ~300 km across, traveling eastward at 30 m/s. The Hunga Tonga Volcano eruption serves as an excellent example of how ionospheric variability can impact real-world applications and of the challenges associated with modeling the ionosphere to support GNSS.
Thawing of permanently frozen ground (permafrost) has increased in recent decades with negative implications for human and non-human adaptation to climate change. Impacts include reduced ground stability, increased transportation risk, and changes in water availability. Direct measurements of permafrost active layer thickness (the depth of thawed ground overlying permafrost) are sparse. Measurements currently exist for a few hundred sites located primarily in the Northern Hemisphere supported by the Circumpolar Active Layer Monitoring (CALM) Program. The sparsity of direct active layer thickness measurements limits broad-scale understanding of changes in permafrost thaw and confidence in future projections. To address the sparsity of direct active layer thickness measurements, we developed a method to estimate active layer thickness change from streamflow measurements, which integrate processes over broad spatial areas and are more common than point-scale active layer thickness measurements. The method uses classical principles of hydraulic groundwater theory and nonlinear baseflow recession analysis, which sets it apart from prior methods based on linear recession analysis. The method is applied to catchments in the continuous and discontinuous permafrost zone of the North American Arctic containing co-located streamflow and CALM active layer thickness measurements. We find good agreement in the magnitude and direction of measured and predicted active layer thickness trends. This suggests that regional-scale estimates of active layer thickness change can be obtained from streamflow measurements, which may open the door to retrospective estimation of active layer thickness change in data sparse Arctic regions with short, sporadic, or even nonexistent ground-based active layer measurements.
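The nonlinear recession analysis at the core of the method can be sketched as fitting the power-law relation -dQ/dt = aQ^b in log-log space; under hydraulic groundwater theory the fitted parameters relate to aquifer properties of the thawed layer, so changes in them between years carry the active layer thickness signal. The discharge values below are hypothetical, not measurements from the studied catchments.

```python
import numpy as np

# Hypothetical daily streamflow during a recession limb (mm/day) -- the values
# are illustrative, not measurements from the studied catchments.
Q = np.array([5.0, 4.2, 3.6, 3.1, 2.7, 2.4, 2.15, 1.95, 1.78, 1.63])

# Finite-difference recession rate, paired with midpoint discharge.
dQdt = np.diff(Q)                 # per day (negative during recession)
Qmid = 0.5 * (Q[1:] + Q[:-1])

# Fit log(-dQ/dt) = log(a) + b*log(Q), i.e. the nonlinear recession
# relation -dQ/dt = a * Q**b (b = 1 recovers the linear reservoir case).
b, loga = np.polyfit(np.log(Qmid), np.log(-dQdt), 1)
a = np.exp(loga)
```

Repeating this fit for recession events in successive years and tracking the drift in the fitted parameters is what allows a streamflow record to stand in for point-scale active layer thickness measurements.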