Fronts are ubiquitous in the climate system. In the Southern Ocean, fronts delineate water masses, which correspond to upwelling and downwelling branches of the overturning circulation. A robust understanding of Southern Ocean fronts is key to projecting future changes in overturning and the associated air-sea partitioning of heat and carbon. Classically, oceanographers define Southern Ocean fronts as a small number of continuous linear features that encircle Antarctica. However, modern observational and theoretical developments are challenging this traditional framework to accommodate more localized views of fronts [Chapman et al. 2020]. In this work, we present two related methods for calculating fronts from oceanographic data. The first method uses unsupervised classification (specifically, Gaussian Mixture Modeling or GMM) and an interclass metric to define fronts. This approach produces a discontinuous, probabilistic view of front location, emphasizing the fact that the boundaries between water masses are not uniformly sharp across the entire Southern Ocean. The second method uses Sobel edge detection to highlight rapid changes [Hjelmervik & Hjelmervik, 2019]. This approach produces a more local view of fronts, with the advantage that it can highlight the movement of individual eddy-like features (such as the Agulhas rings). The fronts detected using the Sobel method are moderately correlated with the magnitude of the velocity field, which is consistent with the theoretically expected spatial coincidence of fronts and jets. We will present our Python GitHub repository, which will allow researchers to easily apply these methods to their own datasets.

Figure caption: Two methods for interpretable front detection. Solid lines represent classical fronts. (a) The “inter-class” metric, which indicates the probability that a grid cell is a boundary between two classes. The classes are defined by GMM of principal component values (PCs) derived from both temperature and salinity. The different colors indicate different class boundaries. (b) Sobel edge detection: approximately the magnitude of the spatial gradient of the PCs divided by each field’s standard deviation, which highlights locations of rapid change.
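Both detection steps can be prototyped in a few lines of Python. The sketch below is a hypothetical illustration of the workflow, assuming standardized principal-component (PC) fields on a regular grid; the array names and parameter choices are ours, not taken from the authors' repository.

```python
# Hypothetical sketch of the two front-detection steps described above.
# Array shapes and parameter values are illustrative stand-ins.
import numpy as np
from scipy.ndimage import sobel
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
pcs = rng.normal(size=(64, 128, 2))          # stand-in PC fields: (lat, lon, n_pcs)
X = pcs.reshape(-1, pcs.shape[-1])

# (a) GMM classification; posterior probabilities support an inter-class metric:
# a cell is "boundary-like" when its two largest class posteriors are comparable.
gmm = GaussianMixture(n_components=4, random_state=0).fit(X)
post = gmm.predict_proba(X)
top2 = np.sort(post, axis=1)[:, -2:]
interclass = (1.0 - (top2[:, 1] - top2[:, 0])).reshape(64, 128)

# (b) Sobel edge detection on each standardized PC field, then combine.
edges = np.zeros((64, 128))
for k in range(pcs.shape[-1]):
    f = pcs[..., k] / pcs[..., k].std()
    edges += np.hypot(sobel(f, axis=0), sobel(f, axis=1))
```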
It is important that we prepare tomorrow’s scientists, decision makers, and communities to address the societal impacts of a changing climate. In order to respond to, manage, and adapt to those changes, citizens of all ages need accurate, up-to-date information, knowledge of the sciences, and the analytical skills to make responsible decisions and long-term resiliency plans regarding these challenging topics. The Climate Literacy and Energy Awareness Network (CLEAN, http://cleanet.org) is 1) providing teaching resources for educators through the CLEAN Collection and pedagogical support for teaching climate and energy science; and 2) facilitating a professionally diverse community of climate and energy literacy stakeholders, called the CLEAN Network, to share and leverage efforts to extend the reach and effectiveness of climate and energy education. This presentation will provide an overview of the CLEAN web portal and the techniques we have used to market it. We will showcase the CLEAN Collection, which comprises 700+ resources (curricula, activities, videos, visualizations, and demonstrations/experiments) that have been reviewed for scientific accuracy, pedagogical effectiveness, and technical quality. Recent activities of the CLEAN Network will be highlighted. We will present findings from our web analytics work, which monitors visitor use of the CLEAN web portal. Through analytics data, we will show evidence of successful CLEAN marketing efforts. The results of our recent pop-up survey, which has been completed by CLEAN visitors from six continents, will also be discussed. Survey results will provide detailed information about how our audiences use the web portal. We anticipate that our insights from the CLEAN Network can aid other climate and energy education programs in effectively increasing the visibility of their vital work.
In this study, we assess pan-Arctic and regional seasonal sea ice forecast skill in versions 1 and 2 of the Canadian Seasonal to Inter-annual Prediction System (CanSIPSv1 and CanSIPSv2) dynamical seasonal prediction systems. Each version applies a multi-model ensemble approach using two coupled general circulation models. CanSIPSv2 features a new model formulation (one of the underlying models, CanCM3, was replaced with GEM-NEMO) and improved sea ice initialization. We show that the modifications made in the development of CanSIPSv2 substantially enhance forecast skill. For example, the lead time for skillful forecasts of detrended pan-Arctic September sea ice area increases from three months in CanSIPSv1 to seven months in CanSIPSv2. We also show that forecasts of detrended winter sea ice area are improved, with CanSIPSv2 producing skillful forecasts at all considered lead times (up to 11 months) for December, January, and February. We find that improvements in pan-Arctic forecast skill are due primarily to improved initialization methods. Further, a potential predictability experiment is conducted for one of the two CanSIPSv2 models, CanCM4, in order to establish, in conjunction with similar studies, the potential to further increase forecast skill with improved models, observations, and initialization methods.
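As a concrete illustration of the skill measure, here is a minimal sketch assuming linear detrending and anomaly correlation as the skill metric; the abstract does not specify the exact metric, so the function names and choices below are assumptions.

```python
# Minimal sketch of a detrended-skill calculation: remove the linear trend
# from both series, then correlate the residuals.
import numpy as np

def detrend(x):
    t = np.arange(len(x))
    slope, intercept = np.polyfit(t, x, 1)
    return x - (slope * t + intercept)

def detrended_skill(forecast_sia, observed_sia):
    """Anomaly correlation of linearly detrended forecast and observed sea ice area."""
    f = detrend(np.asarray(forecast_sia, dtype=float))
    o = detrend(np.asarray(observed_sia, dtype=float))
    return np.corrcoef(f, o)[0, 1]
```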
Through the PolarTREC program, which pairs US educators with field researchers in polar regions, our team has been collaborating on K-12 and undergraduate curriculum development and outreach activities on Arctic amplification of climate change. We have created new lesson plans and activities focused on how organic carbon from thawing permafrost in the Arctic is turned into carbon dioxide, a greenhouse gas that amplifies climate change. This presentation will cover our collaboration to bring this knowledge and experience to high school science students through classroom activities and projects. The focus will be on laboratory activities designed for the chemistry classroom: the use of spectrophotometry to assess the degree of photobleaching in organic samples, and the evaluation of data from high-resolution mass spectrometry to characterize complex organic mixtures. We will also review lessons learned from our efforts to promote enthusiasm for polar science among the general public and discuss the benefits of the PolarTREC program to researchers, educators, students, and the public.
It is very difficult to understand the mechanism producing solar magnetic fields, as it is mingled with various activities and further obscured by the gaseous model of the Sun; an alternative view is suggested here based on the characteristics of electrons in an electric current. In 1820 Ørsted discovered both the relation between electricity and magnetism and the Circular Magnetic Field (CMF) produced by an electric current; it was later discovered that the CMF is produced by electrons in motion. Thus the bulk rotation of charged particles (electrons, protons, and ions) in tornado mode produces an intense CMF, designated here the Plasma Pillar Intense Magnetic Field (PPIMF), with a magnitude exceeding millions of tesla. Since EUV images in F-A illustrate intense subsurface Magnetic Lines of Force (MLF) and also show Solar Flare (SF) activity, both are suggested to be due to the PPIMF, which accounts for most solar activity. The Active Regions (AR) in F-B are suggested to represent the PPIMF, with ARs near the surface shown as circles and ARs at depth as squares. At depth, the influence of the PPIMF on the photosphere during the quiet Sun results in pairs of negative and positive magnetic fields, represented by the magnetogram in F-C. During the active Sun, the PPIMF rises nearer the photosphere; its negative and positive fields interact with the state of the photosphere, resulting in the pairs of sunspots in F-D, which look like iron filings but are formed by plasma, their shapes determined by proximity to the PPIMF. As charged particles gyrate around the pillar, any increase in the field's intensity reduces the radius of gyration, and hence the distances between adjacent ions; at a critical distance a Solar Flare (SF) is triggered, producing great energy, radiation, and plasma including heavy ions. This knowledge would unlock the dynamics of the Sun, its internal structures, and related mechanisms; it would help attain alternative renewable energy, avert negative consequences of climate change, and improve prediction of solar activity and space weather, among other benefits.
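For reference, the gyration claim above follows from the standard gyroradius relation: at fixed perpendicular speed the radius scales inversely with field strength, so an intensifying field tightens the orbits and shrinks inter-particle distances.

```latex
% Gyroradius of a particle of mass m, charge q, and perpendicular speed
% v_perp in a field of strength B; r_g scales as 1/B at fixed v_perp.
r_g = \frac{m \, v_{\perp}}{\lvert q \rvert \, B}
```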
Concurrent temperature and precipitation extremes during the Indian summer monsoon generally have significant effects on agriculture, society, and ecosystems. Due to climate change, the frequency and spatial extent of concurrent extremes have changed, and there is a need to advance our understanding in this domain. Quantification of individual extremes (temperature and precipitation) during the summer monsoon season and their teleconnections to climate indices has been studied comprehensively, but less attention has been devoted to the quantification of concurrent extremes and their teleconnections to climate indices. In this study, concurrent extremes (dry/hot and wet/cold) based on mean monthly temperature and total monthly precipitation during the Indian summer monsoon season from 1951 to 2019 over the Indian mainland are investigated. Next, the study uses wavelet coherence analysis to unravel the teleconnections of the spatial extent of concurrent extremes to climate indices (Nino 3.4, WEIO SST, and SEEIO SST). Results show that the frequency of dry/hot concurrent extremes has increased significantly, while the frequency of wet/cold concurrent extremes has decreased for the time window 1985-2019 relative to 1951-1984. Also, a statistically significant increase (decrease) in spatial extent exists for concurrent dry/hot (wet/cold) extremes during the July, August, and September months. The findings of this study could advance our understanding of changes in concurrent extremes during the Indian summer monsoon due to climate change.
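A minimal sketch of how concurrent monthly extremes can be flagged is given below; the 75th/25th-percentile thresholds are illustrative assumptions, since the abstract does not state the study's exact definitions.

```python
# Illustrative flagging of concurrent monthly extremes from monthly mean
# temperature and total precipitation series; thresholds are assumptions.
import numpy as np

def concurrent_extremes(temp, precip):
    """Return boolean masks for dry/hot and wet/cold months."""
    temp, precip = np.asarray(temp), np.asarray(precip)
    t_hi, t_lo = np.percentile(temp, [75, 25])
    p_hi, p_lo = np.percentile(precip, [75, 25])
    dry_hot = (precip < p_lo) & (temp > t_hi)
    wet_cold = (precip > p_hi) & (temp < t_lo)
    return dry_hot, wet_cold
```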
We investigate the CO2 flux calculated by the ISBA soil-vegetation-atmosphere transfer model (Noilhan and Planton, 1989) by comparing three different formulations for the plant (dark) respiration scheme applied to a soybean crop. The model includes CO2 flux/photosynthesis based on Jacobs (1994) in a manner similar to Calvet et al. (1998) (ISBA-A-gs). The first respiration scheme (M0) computes the autotrophic respiration Rd similarly to Jacobs (1994) but with an ad hoc temperature correction calibrated by statistical parameter fitting using measured data. For the second model (M1), we implemented the respiration proposed by Joetzjer et al. (2015). Finally, we implemented a third respiration scheme (M2) as in Wang (1996). The three models were calibrated, and the CO2 fluxes were compared with measurements made over a soybean crop using the eddy covariance method between December 2008 and March 2009 at a farm near Buenos Aires, Argentina. The maximum, minimum, and mean measured total CO2 flux values were 0.9890, -0.2479, and 0.3087 mg m-2 s-1, respectively. For the sake of comparison, statistics were computed for the full daily cycle flux (total) and also for the nighttime flux, as a means to avoid masking of the results by the much larger daytime photosynthetic flux. We here present the Nash-Sutcliffe efficiency (NSE) coefficient for each model. M0 gave the best overall performance, with 0.7568 for the total daily CO2 flux and 0.0795 for the dark flux. M1 gave similar predictions for the daily CO2 flux, with 0.7582, but the worst result for the nighttime period, with -0.4965. M2 gave 0.7424 for the full daily flux and 0.0119 for the night CO2 flux. The results show a seemingly better performance of the models in predicting the total CO2 flux compared to the dark CO2 flux. This is due to several factors: respiration is less well understood and harder to predict than photosynthesis; measurements are more difficult at nighttime due to the limitations of the eddy covariance technique under weak turbulence; and in the measured data it is difficult to identify and separate the CO2 flux into soil respiration, autotrophic respiration, and photosynthetic flux without many auxiliary measurements. We also conclude that there is a clear influence of temperature on respiration, which can be suitably incorporated into the models.
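The Nash-Sutcliffe efficiency used to score each scheme is standard and compact enough to reproduce: NSE = 1 is a perfect fit, NSE = 0 matches the skill of predicting the observed mean, and negative values (like M1's nighttime score) are worse than simply predicting the mean.

```python
# Nash–Sutcliffe efficiency:
# NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
import numpy as np

def nse(observed, simulated):
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    return 1.0 - np.sum((observed - simulated) ** 2) / np.sum(
        (observed - observed.mean()) ** 2
    )
```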
From interpreting data to scenario modeling of subduction events, numerical modeling has been crucial for studying tsunami generation by earthquakes. Seafloor instruments in the source region feature complex signals containing a superposition of seismic, ocean acoustic, and tsunami waves. Rigorous modeling is required to interpret these data and use them for tsunami early warning. However, previous studies utilize separate earthquake and tsunami models, with one-way coupling between them and approximations that might limit the applicability of the modeling technique. In this study, we compare four earthquake-tsunami modeling techniques, highlighting assumptions that affect the results, and discuss which techniques are appropriate for various applications. Most techniques couple a 3D Earth model with a 2D depth-averaged shallow water tsunami model. Assuming the ocean is incompressible and that tsunami propagation is negligible over the earthquake duration leads to technique (1), which equates earthquake seafloor uplift to initial tsunami sea surface height. For longer duration earthquakes, it is appropriate to follow technique (2), which uses time-dependent earthquake seafloor velocity as a time-dependent forcing in the tsunami mass balance. Neither technique captures ocean acoustic waves, motivating newer techniques that capture the seismic and ocean acoustic response as well as tsunamis. Saito et al. (2019) propose technique (3), which solves the 3D elastic and acoustic equations to model the earthquake rupture, seismic wavefield, and response of a compressible ocean without gravity. Then, sea surface height is used as a forcing term in a tsunami simulation. A superposition of the earthquake and tsunami solutions provides the complete wavefield, with one-way coupling. The complete wavefield is also captured in technique (4), which utilizes a fully-coupled solid Earth and ocean model with gravity (Lotto & Dunham, 2015). This technique, recently incorporated into the 3D code SeisSol, simultaneously solves earthquake rupture, seismic waves, and ocean response (including gravity). Furthermore, we show how technique (3) follows from (4) subject to well-justified approximations.
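Techniques (1) and (2) can be written schematically as follows; the notation (η for sea surface height, b for seafloor elevation, h for water depth, u for depth-averaged velocity) is ours rather than that of the cited papers.

```latex
% Technique (1): static initialization with the coseismic uplift \Delta b:
\eta(x, y, 0) = \Delta b(x, y)
% Technique (2): time-dependent seafloor velocity forces the
% depth-averaged shallow-water mass balance:
\frac{\partial \eta}{\partial t}
  + \nabla \cdot \left[ (h + \eta) \, \mathbf{u} \right]
  = \frac{\partial b}{\partial t}
```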
Disaster risk reduction relies on quantitative estimates of the future impacts and consequences of known hazard threats in order to evaluate proposed mitigation and adaptation measures. Natural Resources Canada is collaborating with the Global Earthquake Model Foundation on the first-ever national seismic risk assessment in Canada to inform disaster risk reduction planning by individuals, businesses, and organizations working across all jurisdictional levels. The 2020 National Seismic Risk Model incorporates the 6th Generation National Seismic Hazard Map, a novel physical exposure model for the entire country, localized exposure models based on a machine learning approach to building categorization, and HAZUS-based earthquake building performance functions. Before results can be transmitted to end users, the model must be validated in a Canadian context using observations from real-world disaster events or pre-existing catastrophic risk models. This study focuses on benchmarking the 2020 Canadian National Seismic Risk Model using shaking intensities and physical impacts recorded from the 2001 Mw 6.8 Nisqually and 2012 Mw 7.8 Haida Gwaii events, and the results of a 2013 catastrophic risk assessment performed by AIR Worldwide to evaluate the potential impact of major earthquakes in eastern Quebec and Cascadia. We compute anticipated building damage, economic loss, and fatalities for these benchmark scenario earthquakes using the OpenQuake engine and the national exposure dataset. Preliminary results indicate that the model results are largely consistent with observed or predicted impacts of these earthquakes in Canada, after adjusting for economic and population growth. Subsequently, we will evaluate the impact of running the Cascadia scenario using a regional building-level exposure database versus the national-level inventory. Ultimately, this work will assess the ability of the National Seismic Risk Assessment to reproduce expected results, to ensure the applicability of this model in anticipating future outcomes at the national and local levels.
Open source in-situ environmental sensor hardware continues to expand across the geosphere to a variety of applications. These systems typically perform three fundamental tasks: sampling sensors at a specified time or period, saving data onto retrievable media, and switching power to components on and off between sample cycles to conserve battery energy and increase field operation time. This is commonly accomplished by integrating separate off-the-shelf components into the desired system, such as power relays, SD card hardware, Real-Time Clocks (RTCs), and coin cell batteries. To enable faster prototyping, the Openly Published Environmental Sensing Lab abstracted all of these requirements into a single PCB that can be dropped into any project to provide these commonly required capabilities. The hardware is laid out in a “Feather” form factor, a popular configuration in the open-source hardware community, to easily mate with other industry-standard products. The onboard RTC acts as an alarm clock that wakes a user-attached microcontroller from low-power sleep modes between sample cycles. By integrating all of these components into a single PCB, we reduce cost while significantly reducing physical system size. We detail the design as well as a suite of code functions that enable the user to configure all of the Hypnos board's features. For more information, please visit open-sensing.org/projects.
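A back-of-the-envelope duty-cycle calculation shows why switching components off between samples matters; every current draw and timing value below is an assumed illustration, not a Hypnos specification.

```python
# Duty-cycle battery-life estimate: average the current over one
# wake/sleep cycle, then divide battery capacity by that average.
def battery_life_days(capacity_mah=2000.0, active_ma=50.0, sleep_ma=0.05,
                      active_s=10.0, period_s=900.0):
    """Runtime in days for a 15-minute sample cycle with 10 s awake."""
    avg_ma = (active_ma * active_s + sleep_ma * (period_s - active_s)) / period_s
    return capacity_mah / avg_ma / 24.0

print(battery_life_days())   # ~138 days, vs. ~1.7 days if always awake
```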
The community of Kotzebue, located on the coast of Kotzebue Sound, northeast of the Bering Strait adjacent to the Chukchi Sea, relies on the waters around Kotzebue Sound for food and economy. There have been reports of cyanobacterial blooms in the waters around Kotzebue, but they have not yet been systematically studied because the region is sparsely populated, with few in-situ observations. Cyanobacteria often form surface blooms in freshwater and coastal ecosystems, which can be detected using remote sensing techniques. Cyanobacteria have low nutritional value, and many species produce cyanotoxins; they can thus be harmful to aquatic life and pose public health hazards. In addition, consumption of decaying cyanobacterial blooms by microbes depletes oxygen levels, which can lead to hypoxia, adversely impacting the benthic community. As the Arctic is warming twice as fast as the rest of the planet due to climate change, thawing permafrost is releasing nutrients that might be enhancing cyanobacterial blooms in the coastal, marine, and lacustrine waters of Alaska. In this study, we used remote sensing to study phytoplankton biomass, turbidity, and cyanobacterial blooms from mid-June to the end of September each year from 2013 to 2019, when the waters around Kotzebue are ice-free. Using images from Landsat-8 and Sentinel-2, processed with the ACOLITE software, we investigated spatial and temporal changes in water quality parameters such as turbidity and chlorophyll concentration between June and September. We used a combination of true-color images and the floating algal index (FAI) to detect cyanobacterial blooms. There were about two scenes from Sentinel-2 and about one scene from Landsat-8, for a total of about three scenes every week between June and September. Of these, only 49% of the images were cloud-free. Of the cloud-free images, 29% were found to have a cyanobacterial bloom between August and September, for an average of two to four scenes every year. Most of the cyanobacterial blooms were detected in Kobuk Lake near Kotzebue and at nearby sites in Hotham Inlet and Selawik Lake. In 2013, 68% of the images were cloudy, the highest fraction of the observed years, and no cyanobacterial blooms were detected.
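The floating algal index follows Hu (2009): the NIR reflectance minus a linear baseline interpolated between the red and SWIR bands. A minimal sketch, with approximate Landsat-8 band-center wavelengths as defaults:

```python
# Floating algal index (FAI), after Hu (2009). Wavelengths are approximate
# Landsat-8 band centers (nm); exact values depend on the sensor.
def fai(r_red, r_nir, r_swir, lam_red=655.0, lam_nir=865.0, lam_swir=1609.0):
    """NIR reflectance minus the red-to-SWIR linear baseline at lam_nir."""
    baseline = r_red + (r_swir - r_red) * (lam_nir - lam_red) / (lam_swir - lam_red)
    return r_nir - baseline
```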
An upper-crustal intrusive network in the 201.5 Ma, rift-related Central Atlantic Magmatic Province is exposed in the western Newark basin (PA, USA). Alpha-MELTS modeling was used to track magma evolution starting with initial pyroxene crystallization at depth (1000-500 MPa); plagioclase crystallized during ascent in the upper crust. For magma emplaced at 5-6 km depth (170 MPa), six MELTS models were generated to bracket different compositions, H2O contents (1-3 wt.%), and crystallinities (28-49 vol.%). Corresponding magma viscosities evolved from 3 to 1624 Pa s (predicted using Giordano et al. 2008; Moitra and Gonnermann 2014). Detailed crystal mush structures in a diabase sill are revealed in a dimension stone quarry. Ubiquitous asymmetric modal layers a few mm thick, comprising plag-rich layers (PRL, 75% modal plag) overlying more pyx-rich layers, outline the tops of hundreds of dm- to m-scale flow lobes in the quarry. Tabular plag in PRL show shape-preferred orientations, tiling, and pressure shadows around larger pyx that resemble analog experiments on particle slurries and indicate flow with limited mechanical compaction. During magma emplacement, recursive interactions of propagation, sorting, and crystallization self-organized as flow lobes with plag entrained and aligned along lobe tops. Our calculations show plag separation can reduce bimodal suspension viscosity, a positive feedback likely enhanced by shear thinning and crystal alignment. EDS analyses and X-ray maps show that plag has oscillatory-zoned cores (An82-67) with patchy-zoned mantles (An67) filled in by An66-63. In PRL, plag are cemented together by An62-55; Na-rich rims occur next to qtz-Kspar pockets. By the end of cementation, PRL liquid volume was significantly reduced to 11-18%, compared with 28-45% in the overall magma, based on MELTS models for An62-55 plag. Diabase suspension viscosity increased to >6000 Pa s; PRL viscosity cannot be modeled by equations based on random packing. PRL with aligned interlocking crystals were more rigid and less permeable than the surrounding diabase. Upward flow of magma after modal layer development was channelized into pipes truncated and deflected by PRL. Thus, lateral flow during emplacement developed sub-vertical heterogeneities that exemplify complex mush rheology over m-scale distances.
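For readers wanting a feel for the crystallinity-viscosity coupling, the sketch below uses the generic Maron-Pierce suspension relation as a stand-in; it is not the Giordano et al. (2008) or Moitra and Gonnermann (2014) formulation used in the study, and the melt viscosity and maximum packing fraction are assumed values.

```python
# Illustrative crystal-suspension viscosity via the Maron–Pierce relation:
# eta_bulk = eta_melt * (1 - phi/phi_max)^-2. Values below are assumptions.
def suspension_viscosity(melt_pa_s, phi, phi_max=0.6):
    """Bulk viscosity of melt carrying a crystal fraction phi (vol.)."""
    return melt_pa_s * (1.0 - phi / phi_max) ** -2

print(suspension_viscosity(100.0, 0.28))   # ~352 Pa s at 28 vol.% crystals
print(suspension_viscosity(100.0, 0.49))   # ~2975 Pa s at 49 vol.% crystals
```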
We present a new approach for the analysis of high-resolution digital camera photographs taken by photographers who have fortuitously been able to capture rare events such as the glowing sky phenomenon known as STEVE. This method is especially effective with a time-lapse series of images of the night sky taken under constant camera settings with steady pointing. Stars, planets, and satellites seen in such images can be used to determine a precise and accurate registration of camera pixels to coordinates of angular altitude and azimuth. The locations of satellites in the images enable precise and accurate synchronization of the images. We apply these techniques to the series of photographs of STEVE taken on 25 July 2016. We confirm the altitude structure previously found for STEVE. We find it most likely that the green picket fence features often seen during STEVE events are produced by auroral electron precipitation. Under this precipitation assumption, we are able to extract novel information about the energy spectrum of the particles responsible for the production of STEVE luminosity in this particular event. Similar analyses of archived digital photographs may constitute a treasure trove of important data for improved understanding of rare and transient events such as STEVE.
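The registration step can be illustrated with a deliberately simplified least-squares fit from pixel coordinates to altitude/azimuth using identified stars; a real calibration would also model lens distortion and projection geometry.

```python
# Toy plate calibration: fit an affine map [x, y, 1] -> (alt, az) from
# pixel positions of stars with known catalog altitude/azimuth.
import numpy as np

def fit_affine(pix_xy, altaz):
    """Least-squares affine mapping; pix_xy is (n, 2), altaz is (n, 2)."""
    A = np.column_stack([pix_xy, np.ones(len(pix_xy))])
    coef, *_ = np.linalg.lstsq(A, altaz, rcond=None)
    return coef                       # 3x2 coefficient matrix

def pix_to_altaz(coef, pix_xy):
    """Apply the fitted mapping to new pixel coordinates."""
    A = np.column_stack([pix_xy, np.ones(len(pix_xy))])
    return A @ coef
```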
The molecular mechanisms of microbial adaptation in chaotropic and low water activity (aw) environments are poorly understood. Chaotropic environments are characterized as rich in salts such as MgCl2 and CaCl2, which lower the availability of water for biological processes. PATRIC, an integrated genome browsing tool containing vast libraries of sequenced genomes, can help us identify unique genetic markers in chaophilic and xerophilic microbes. Halophilic microbes are obligately adapted to hypersaline conditions, with the ability to tolerate exposure to chaotropic agents. Microbes with the greatest tolerance of these extreme environments must have advanced adaptive mechanisms. Halobacterium salinarum and Haloquadratum walsbyi are chaotolerant and well adapted to low water activity. Haloquadratum walsbyi is unique among the halophiles in having the highest tolerance for chaotropes and in its square shape. Performing comparative genomics on fully sequenced halophilic archaea such as Halobacterium salinarum NRC-1, a model halophile, and Haloquadratum walsbyi C23, we were able to identify genes that confer adaptation to chaotropic and low-aw environments, as well as individual adaptations that may be responsible for the varying levels of tolerance in chaotropic environments. Characterizing genes associated with chaotolerance and low-aw adaptation can help elucidate the cellular functions that make these microbes unique. Chaotropic brines may be used as analogs for studying the origin of life and the possibility of environments hosting extremophilic microbes on other bodies, such as the Martian brines and icy moons like Europa; therefore, studying the microbiome of chaotropic environments is essential in the field of astrobiology.
The potential commonality of prebiotic chemical processes on Titan and the primitive Earth makes Titan a prime body of astrobiological interest. Amino acid synthesis can occur if the abundant simple organics on Titan’s surface can mix with liquid water. Because events that melt surface ice, such as impacts, are rare, it is essential to know how long the synthesized molecules remain intact on Titan’s surface. The degradation of biomolecules in extraterrestrial environments can be estimated by combining theoretical work on energy deposition at the surface with experimental results from irradiation of organic molecules. We modeled the destruction of amino acids on the surface of Titan, something absent from the current literature. We chose glycine, alanine, and phenylalanine as our molecules of interest due to the availability of relevant experimental results on their radiation stability at Titan temperatures. Titan’s thick atmosphere prevents solar radiation and energetic particles trapped in Saturn’s magnetosphere from reaching the surface. The dominant source of energetic radiation at the surface of Titan is the diminished flux of Galactic Cosmic Rays (GCRs) that penetrate the atmosphere. Sittler Jr et al. (Icarus, 2019) modeled the GCR energy deposition rate at the surface to be ~10^-9 ergs/cm^3/s. Using this rate, in conjunction with the half-life doses at T = 100 K from Gerakines et al. (Icarus, 2012), we estimate the half-lives to be 7.69 x 10^12, 5.07 x 10^12, and 5.82 x 10^12 years for glycine, alanine, and phenylalanine, respectively. These extraordinarily long half-lives on Titan’s surface, as compared to similar calculations for amino acids on Mars, Europa, or Pluto, are a direct result of the reduced energy deposition due to the atmosphere. We thus conclude that the degradation of these three amino acids by the GCR flux is insignificant over geological time and will not be an essential factor in interpreting the chemistry of Titan’s surface samples from future missions such as Dragonfly.
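The half-life arithmetic reduces to dividing a laboratory half-life dose by the surface dose rate. In the sketch below, the half-life dose value is back-computed for illustration rather than quoted from Gerakines et al. (2012), and the unit handling is an assumption.

```python
# Order-of-magnitude check: t_half = (half-life dose) / (dose rate).
SECONDS_PER_YEAR = 3.156e7
DOSE_RATE = 1e-9                  # erg cm^-3 s^-1 (Sittler Jr et al., 2019)

def half_life_years(half_life_dose_erg_cm3):
    """Years of GCR exposure needed to accumulate the half-life dose."""
    return half_life_dose_erg_cm3 / (DOSE_RATE * SECONDS_PER_YEAR)

print(half_life_years(2.4e11))    # ~7.6e12 yr, cf. the glycine value above
```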
Diverse, complex data are a significant component of Earth science’s “big data” challenge. Some Earth science data, like remote sensing observations, are well understood, are uniformly structured, and have well-developed standards that are adopted broadly within the scientific community. Unfortunately, for other types of Earth science data, like ecological, geochemical, and hydrological observations, few standards exist and their adoption is limited. The synthesis challenge is compounded in interdisciplinary projects in which many disciplines, each with its own culture, must synthesize data to solve cutting-edge research questions. Data synthesis for research analysis is a common, resource-intensive bottleneck in data management workflows. We have faced this challenge in several U.S. Department of Energy research projects in which data synthesis is essential to addressing the science. These projects include AmeriFlux, Next Generation Ecosystem Experiment (NGEE) - Tropics, the Watershed Function Science Focus Area, Environmental Systems Science Data Infrastructure for a Virtual Ecosystem (ESS-DIVE), and a DOE Early Career project using data-driven approaches to predict water quality. In these projects, we have taken a range of approaches to support (meta)data synthesis. At one end of the spectrum, data providers apply well-defined standards or reporting formats before sharing their data; at the other, data users apply standards after data acquisition. As these projects have evolved, we have gained insights from these experiences, including the advantages and disadvantages of each approach, how project history and resources led to the choice of approach, and how each approach enabled data harmonization. In this talk, we discuss the pros and cons of the various approaches and present flexible applications of standards to support diverse needs when dealing with complex data.
Greenhouse gas (GHG) emission metrics, that is, conversion factors to evaluate the emissions of non-CO2 climate forcers on a common scale with CO2, serve crucial functions upon the implementation of the Paris Agreement. While different metrics have been proposed, they have not been investigated under a range of pathways, including those significantly overshooting the temperature targets of the Paris Agreement. Here we show that cost-effective metrics that minimize the overall cost of climate mitigation are time-dependent, primarily determined by the period remaining before the eventual stabilization, and strongly influenced by temperature overshoot. Our study suggests that flexibility should be maintained to adapt the choice of metrics in time as the future unfolds, if cost-effectiveness is a key consideration for global climate policy, instead of hardwiring the 100-year Global Warming Potential (GWP100) as a permanent feature of the Paris Agreement implementation as is currently under negotiation.
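For reference, the GWP100 named above is the ratio of the time-integrated radiative forcing of a pulse emission of forcer x to that of an equal-mass pulse of CO2, evaluated over a 100-year horizon:

```latex
% GWP of forcer x over horizon H, with radiative efficiency a and the
% pulse-decay function of each gas; GWP100 sets H = 100 yr.
\mathrm{GWP}_x(H) =
  \frac{\int_0^{H} a_x \, x(t) \, \mathrm{d}t}
       {\int_0^{H} a_{\mathrm{CO_2}} \, \mathrm{CO_2}(t) \, \mathrm{d}t}
```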
Managing river temperature in highly urbanized stream systems is critical for maintaining aquatic ecosystems and associated beneficial uses. Elevated river temperatures arise from warm surface inflows from impervious areas, channelization, the absence of riparian forests, and the lack of groundwater-surface water interactions. In the current work, we utilize a mechanistic river temperature model, i-Tree Cool River, to evaluate the cooling impacts of alternative ecological restoration scenarios: a) shading effects of tree planting in riparian areas and b) alternative streambed materials. The model was calibrated and validated on a 4.2 km reach of Compton Creek in the Los Angeles (LA) River watershed, California, for low and high flow periods. The Arroyo Chub and Stickleback were considered the target species for analyzing thermal habitat suitability. River temperature simulations showed that, like the ambient air temperature, the thermal response of the river in high flow periods was a function of upstream river temperature, whereas in low flow periods river water temperature was most affected by the tested restoration scenarios. Tree planting in the riparian corridor decreased the median thermal metrics (Max Weekly Max, Max Weekly Average, and Min Weekly Min Temperatures) by an average of 3 ℃ (13%), to 20.4 ℃, 19.7 ℃, and 17.8 ℃, respectively. Using limecrete as an alternative bed material to the current concrete bottom decreased the median thermal metrics by an average of 0.9 ℃ (4%), to 22.7 ℃, 22 ℃, and 19 ℃, respectively. Combining the two scenarios decreased the river temperature metrics by an average of 4 ℃ (18%), to 18.2 ℃. Besides riparian vegetation, altering the bed material is an impactful option in cases of groundwater contamination and where channelized urban corridors lack the substrate to support vegetation. The ecological restoration scenarios resulted in summertime temperatures within the documented spawning temperature thresholds; temperature would therefore not be a limiting factor in a potential reintroduction of the Arroyo Chub and Stickleback to Compton Creek. This tributary could be considered a potential refuge and improved fish habitat in the LA basin during low flow periods.
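The thermal metrics reported above can be computed from daily temperature series; the sketch below assumes the common definitions (e.g., Max Weekly Maximum Temperature as the largest 7-day running mean of daily maxima), which the abstract does not spell out.

```python
# Thermal habitat metrics from daily stream temperature series, assuming
# standard 7-day running-mean definitions.
import numpy as np

def weekly_running_mean(daily_values, window=7):
    kernel = np.ones(window) / window
    return np.convolve(np.asarray(daily_values, dtype=float), kernel, mode="valid")

def thermal_metrics(daily_max, daily_mean, daily_min):
    """Max Weekly Max, Max Weekly Average, and Min Weekly Min temperatures."""
    return {
        "max_weekly_max": weekly_running_mean(daily_max).max(),
        "max_weekly_avg": weekly_running_mean(daily_mean).max(),
        "min_weekly_min": weekly_running_mean(daily_min).min(),
    }
```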