Although adequately detailed kerosene chemical-combustion Arrhenius reaction-rate suites were not readily available for combustion modeling until roughly the 1990s (e.g., Marinov), it was already known from mass-spectrometer measurements during the early Apollo era that fuel-rich liquid oxygen + kerosene (RP-1) gas generators yield large quantities (e.g., several percent of total fuel flow) of complex hydrocarbons such as benzene, butadiene, toluene, anthracene, and fluoranthene (Thompson), which are formed concomitantly with soot (Pugmire). By the 1960s, virtually every fuel-oxidizer combination for liquid-fueled rocket engines had been tested, and the degree to which gas-phase combustion efficiency governs the rocket-nozzle efficiency factor had been empirically well determined (Clark). Until relatively recently, space-launch and orbital-transfer engines were designed for ever-higher efficiency, to maximize orbital parameters while minimizing fuel and structural masses: preburners and high-energy atomization have been used to pre-gasify fuels and increase gas-phase combustion efficiency, decreasing the yield of complex/aromatic hydrocarbons (which limit rocket-nozzle efficiency and overall engine efficiency) in hydrocarbon-fueled engine exhausts, thereby maximizing launch and orbital-maneuver capability (Clark; Sutton; Sutton/Yang). The combustion community has long been aware that the choice of Arrhenius reaction-rate suite is critical to the outputs of computer engine models: specific combustion suites are required to estimate the yield of high-molecular-weight, reactive, and toxic hydrocarbons in the rocket engine combustion chamber. Nonetheless, such GIGO errors can still be seen in recent documents.
Low-efficiency launch vehicles also need larger fuel loads to achieve the same launched mass, further increasing the yield of complex hydrocarbons and radicals deposited by low-efficiency rocket engines along launch trajectories and into the stratospheric ozone layer, the mesosphere, and above. With increasing launch rates from low-efficiency systems, these persistent (Ross/Sheaffer; Sheaffer), reactive chemical species must have a growing impact on critical, poorly understood upper-atmosphere chemistry.
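For readers unfamiliar with the reaction-rate suites discussed above: each entry in such a suite is a rate constant in the (modified) Arrhenius form k(T) = A · T^b · exp(−Ea/(R·T)), and the steep temperature dependence is one reason cooler, fuel-rich gas-generator flows leave heavy hydrocarbons incompletely oxidized. The sketch below is a minimal illustration; the parameters A, b, and Ea are hypothetical round numbers, not taken from Marinov's or any other published suite.

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def arrhenius_rate(A, b, E_a, T):
    """Modified Arrhenius rate constant k(T) = A * T**b * exp(-E_a / (R*T)).

    A   : pre-exponential factor (units depend on reaction order)
    b   : temperature exponent (dimensionless)
    E_a : activation energy, J/mol
    T   : temperature, K
    """
    return A * T**b * math.exp(-E_a / (R * T))

# Hypothetical, illustrative parameters (not from any published suite):
A, b, E_a = 2.0e12, 0.0, 1.5e5

k_chamber = arrhenius_rate(A, b, E_a, 3500.0)  # hot main-chamber conditions
k_gasgen  = arrhenius_rate(A, b, E_a, 900.0)   # cooler fuel-rich gas generator
```

With these toy numbers the oxidation rate constant in the hot chamber exceeds the gas-generator value by many orders of magnitude, illustrating why accurate rate suites matter for predicting unburned-hydrocarbon yields.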
Key Points:
• We improve a long-standing stratocumulus (Sc) dim bias in a high-resolution Multiscale Modeling Framework.
• Incorporating intra-CRM hyperviscosity hedges against the numerics of its momentum solver, reducing entrainment velocity.
• Further adding sedimentation boosts Sc brightness close to observed, opening a path to more faithful low cloud feedback analysis.

Abstract
High-Resolution Multi-scale Modeling Frameworks (HR), global climate models that embed separate, convection-resolving models with high enough resolution to resolve boundary-layer eddies, have exciting potential for investigating low cloud feedback dynamics due to reduced parameterization and the ability for multidecadal throughput on modern computing hardware. However, low clouds in past HR have suffered a stubborn problem of over-entrainment, due to an uncontrolled source of mixing across the marine subtropical inversion, manifesting as stratocumulus dim biases in the present-day climate and limiting their scientific utility. We report new results showing that this over-entrainment can be partly offset by using hyperviscosity and cloud droplet sedimentation. Hyperviscosity damps small-scale momentum fluctuations associated with the formulation of the momentum solver of the embedded LES. By considering the sedimentation process adjacent to default one-moment microphysics in HR, condensed-phase particles can be removed from the entrainment zone, which further reduces entrainment efficiency. The result is an HR that is able to produce more low clouds with a higher liquid water path and a reduced stratocumulus dim bias. Associated improvements in the explicitly simulated sub-cloud eddy spectrum are observed. We report these sensitivities in multi-week tests and then explore their operational potential alongside microphysical retuning in decadal simulations at an operational 1.5-degree exterior resolution. The result is a new HR having the desired improvements in the baseline present-day low cloud climatology, and a reduced global mean bias and root mean squared error of absorbed shortwave radiation. We suggest it should be promising for examining low cloud feedbacks with minimal approximation.

Plain Language Summary
Stratocumulus clouds cover a large fraction of the globe but are very challenging to reproduce in computer simulations of Earth's atmosphere because of their unique complexity. Previous studies find the model produces too few stratocumulus clouds as we increase the model resolution, which, in theory, should improve the simulation of important motions for the clouds. This is because the clouds are exposed to more conditions that make them evaporate away. On Earth, stratocumulus clouds reflect a lot of sunlight. In the computer model of Earth, too much sunlight reaches the surface because of too few stratocumulus clouds, which makes it warmer. This study tests two methods to thicken stratocumulus clouds in the computer model Earth. The first method smooths out some winds, which helps reduce the exposure of clouds to the conditions that make them evaporate. The second method moves water droplets in the cloud away from the conditions that would otherwise make them evaporate. In long simulations, combining these methods helps the model produce thicker stratocumulus clouds with more water.
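To illustrate the hyperviscosity mechanism invoked above, here is a minimal one-dimensional sketch, not the actual SCREAM/embedded-LES implementation: explicit biharmonic damping, du/dt = −ν₄ ∂⁴u/∂x⁴, on a periodic grid strongly attenuates grid-scale momentum noise while leaving a well-resolved wave almost untouched. The grid size, damping coefficient, and noise amplitude are arbitrary illustrative choices.

```python
import numpy as np

def step_hyperviscosity(u, c, n_steps):
    """Explicit Euler steps of du/dt = -nu4 * d^4u/dx^4 on a periodic grid.

    c : nondimensional coefficient nu4*dt/dx^4; the scheme is stable for
        c < 1/8 (the discrete fourth-derivative stencil has max eigenvalue 16).
    """
    for _ in range(n_steps):
        # Centered 5-point stencil for the fourth derivative (dx scaled into c)
        d4u = (np.roll(u, 2) - 4*np.roll(u, 1) + 6*u
               - 4*np.roll(u, -1) + np.roll(u, -2))
        u = u - c * d4u
    return u

n = 256
x = np.linspace(0, 2*np.pi, n, endpoint=False)
rng = np.random.default_rng(0)
u0 = np.sin(x) + 0.3 * rng.standard_normal(n)  # resolved wave + grid-scale noise
u1 = step_hyperviscosity(u0, 0.05, 200)
```

Because the damping rate scales as the fourth power of wavenumber, the two-grid-point noise is removed almost completely after a few hundred steps, while the wavenumber-one sine component is damped by a negligible factor; this scale selectivity is what makes hyperviscosity attractive for hedging against momentum-solver noise without smearing resolved eddies.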
The urban morphology determined by urban canopy parameters (UCPs) plays an important role in simulating the interaction of the urban land surface and the atmosphere. The impact of urbanization on a typical summer rainfall event in Hangzhou, China, is investigated using the integrated WRF/urban modelling system. Three groups of numerical experiments are designed to assess the uncertainty in parameterization schemes, the sensitivity to UCPs, and the individual and combined impacts of the thermal and dynamical effects of urbanization, respectively. The results suggest that the microphysics scheme has the highest level of uncertainty in simulating precipitation, followed by the planetary boundary layer scheme, whereas the land surface and urban physics schemes have minimal impacts. Simulated precipitation is much more sensitive to the choice of physical parameterization schemes than simulated temperature, mixing ratio, or wind speed. Of the eight selected UCPs, changes in heat capacity, thermal conductivity, surface albedo, and roughness length have a greater impact on temperature, mixing ratio, and precipitation, while changes in building height, roof width, and road width affect the wind speed more. The total urban impact could lead to a higher temperature, a lower mixing ratio, a lower wind speed, and more precipitation in and around the urban area. Comparing the thermal and dynamical effects of urbanization separately, both contribute to an increase in temperature and precipitation, with the thermal effect playing the major role. However, their impacts on mixing ratio and wind speed are opposite in sign, with each effect dominating for one of the two variables.
The El Niño-Southern Oscillation (ENSO) influences climate variability across the globe. ENSO is highly predictable on seasonal timescales and therefore its teleconnections are a source of extratropical forecast skill. To fully harness this predictability, teleconnections must be represented accurately in seasonal forecasts. We find that a multimodel ensemble from five seasonal forecast systems can successfully capture the spatial structure of the late winter (JFM) El Niño teleconnection to the North Atlantic via North America, but the simulated amplitude is half of that observed. We find that weak amplitude teleconnections exist in all five models and throughout the troposphere, and that the La Niña teleconnection is also weak. We find evidence that the tropical forcing of the teleconnection is not underestimated and instead, deficiencies are likely to emerge in the extratropics. We investigate the impact of underestimated teleconnection strength on North Atlantic winter predictability, including its relevance to the signal-to-noise paradox.
Atmospheric radiative transfer calculations are among the most time-consuming components of numerical weather prediction (NWP) models. Deep learning (DL) models have recently been increasingly applied to accelerate radiative transfer modeling. Moreover, a physical relationship exists between the output variables, namely the fluxes and heating rate profiles. Integrating such physical laws into DL models is crucial for the consistency and credibility of DL-based parameterizations. We therefore propose a physics-incorporated framework for the radiative transfer DL model, in which the physical relationship between fluxes and heating rates is encoded as a layer of the network so that energy conservation is satisfied. We also find that prediction accuracy is improved by the physics-incorporated layer. In addition, we trained and compared various deep learning model architectures, including fully connected (FC) neural networks (NNs), convolutional NNs (CNNs), bidirectional recurrent NNs (RNNs), transformer-based NNs, and neural operator networks. The offline evaluation demonstrates that bidirectional RNNs, transformer-based NNs, and neural operator networks significantly outperform the FC NNs and CNNs owing to their capability for global perception. A global perspective of an entire atmospheric column is essential and suitable for radiative transfer modeling because changes in the atmospheric components of one layer/level have both local and global impacts on radiation along the entire vertical column. Furthermore, the bidirectional RNNs achieve the best performance, as they can extract information from both upward and downward directions, similar to the radiative transfer processes in the atmosphere.
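The flux/heating-rate relationship that such a physics layer encodes can be illustrated as follows. Under one common sign convention in hydrostatic pressure coordinates, the layer heating rate is HR = (g/cp) · ΔF_net/Δp, so diagnosing heating rates as a fixed finite difference of the predicted fluxes enforces column energy closure by construction. This is a schematic NumPy sketch under those assumptions, not the paper's actual network code.

```python
import numpy as np

G = 9.81     # gravitational acceleration, m s^-2
CP = 1004.0  # specific heat of dry air, J kg^-1 K^-1

def heating_rate_layer(f_net, p):
    """Diagnose layer heating rates (K/s) from net downward fluxes (W/m^2)
    given at pressure levels p (Pa, increasing downward).

    Because this is a fixed finite-difference of the predicted fluxes,
    column energy closure holds identically:
        sum(HR * dp) * CP / G == f_net[bottom] - f_net[top]
    """
    return (G / CP) * np.diff(f_net) / np.diff(p)

p = np.linspace(10_000.0, 100_000.0, 10)  # 10 pressure levels, top to surface
f_net = np.linspace(340.0, 160.0, 10)     # toy net-flux profile, W/m^2
hr = heating_rate_layer(f_net, p)

# The column-integrated heating equals the net flux absorbed by the column:
column = np.sum(hr * np.diff(p)) * CP / G
```

Because the finite difference telescopes, the column integral equals f_net at the surface minus f_net at the top exactly, independent of the flux values the network predicts; that is the sense in which the layer "satisfies energy conservation" rather than merely penalizing violations in the loss.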
A weather station in Nukuʻalofa (NUKU), Tonga, ~68 km from the epicenter of the 2022 Tonga eruption, recorded exceptional pressure, temperature, and wind data representative of the eruption source hydrodynamics. These high-quality data are available for further source and propagation studies. In contrast to other barometers and infrasound sensors at greater ranges, the NUKU barometer recorded a decrease in pressure during the climactic stage of the eruption. A simple fluid-dynamic explanation of the depressurization is provided, with a commentary on near- versus far-field pressure observations of very large eruptions.
Many regions across the globe have broken their surface temperature records in recent years, further sparking concerns about the impending arrival of “tipping points” later in the 21st century. This study analyzes observed global surface temperature trends in three target latitudinal regions: the Arctic Circle, the Tropics, and the Antarctic Circle. We show that global warming is accelerating unevenly across the planet, with the Arctic warming at approximately three times the global average rate. We further analyzed the reliability of latitude-dependent surface temperature simulations from a suite of Coupled Model Intercomparison Project Phase 6 models and their multi-model mean. We found that GISS-E2-1-G and FGOALS-g3 were the best-performing models based on their statistical ability to reproduce observed, latitude-dependent data. Surface temperatures were projected from ensemble simulations of the Shared Socioeconomic Pathway 2-4.5 (SSP2-4.5). We estimate when the climate will warm by 1.5, 2.0, and 2.5 ℃ relative to the preindustrial period, globally and regionally. GISS-E2-1-G projects that global surface temperature anomalies will reach 1.5, 2.0, and 2.5 ℃ in 2024 (±1.34), 2039 (±2.83), and 2057 (±5.03), respectively, while FGOALS-g3 predicts these “tipping points” will arrive in 2024 (±2.50), 2054 (±7.90), and 2087 (±10.55), respectively. Our results reaffirm a dramatic upward trend in projected climate warming acceleration, with upward concavity in 21st-century projections for the Arctic, which could lead to catastrophic consequences across the Earth. Further studies are necessary to determine the most efficient solutions to reduce global warming acceleration and maintain a low SSP, both globally and regionally.
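The threshold-crossing estimate described above can be sketched as a polynomial trend fit extrapolated to the year at which the fitted anomaly reaches a target value. The example below uses synthetic, noiseless data with a hypothetical 0.02 °C/yr linear trend, not the actual GISS-E2-1-G or FGOALS-g3 output, and omits the ensemble-spread uncertainty bounds quoted in the abstract.

```python
import numpy as np

def crossing_year(years, anomalies, threshold, deg=2):
    """Fit a degree-`deg` polynomial trend to temperature anomalies and
    return the earliest year at which the fitted trend reaches `threshold`
    (degC above preindustrial)."""
    coeffs = np.polyfit(years, anomalies, deg)
    shifted = coeffs.copy()
    shifted[-1] -= threshold                  # roots of p(y) - threshold = 0
    roots = np.roots(shifted)
    real = roots[np.isreal(roots)].real       # keep only real crossings
    candidates = real[real >= years.min()]    # ignore unphysical early roots
    return float(candidates.min())

# Synthetic, noiseless record: 0.02 degC/yr warming since 1970 (illustrative)
years = np.arange(1980, 2021, dtype=float)
anoms = 0.02 * (years - 1970.0)
y15 = crossing_year(years, anoms, 1.5, deg=1)
```

For this toy trend the 1.5 °C threshold is reached when 0.02·(y − 1970) = 1.5, i.e. year 2045; a degree-2 fit would capture the "upward concavity" the study reports for the Arctic, pulling the crossing year earlier than a linear extrapolation.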
The determination of buoyancy flux and its contribution to turbulence kinetic energy (TKE) is a fundamental problem in the planetary boundary layer (PBL). However, owing to the complexity of turbulence, previous studies mainly adopted dimensional analysis and empirical formulas to determine the TKE budget. This study introduces the endoreversible heat engine model concept to convective boundary layer (CBL) TKE analysis and establishes a theoretical model based on first principles. We find that the total contribution of buoyancy to TKE and the heat engine efficiency in the boundary layer increase linearly with boundary layer height. The buoyancy flux derived from our theoretical model is consistent with results from numerical simulation and dimensional analysis. This heat-engine-based theory reveals the physical mechanism behind the power of TKE generated by buoyancy. Our theoretical model can replace empirical values and provides an ideal method for buoyancy flux determination in the PBL.
Using Climate Forecast System Reanalysis (CFSR) data and numerical simulations, the impacts of multi-scale sea surface temperature (SST) anomalies in the North Pacific on boreal winter atmospheric circulations are investigated. The three types of forcing considered are the basin-scale SST anomaly with the Pacific Decadal Oscillation (PDO) pattern, a narrow meridional band of frontal-scale smoothed SST anomaly in the subtropical front zone (STFZ), and the spatially dispersed eddy-scale SST anomalies within the STFZ. Results from the Liang-Kleeman information flow method indicate that all three oceanic forcings may correspond to winter North Pacific jet changes with a similar pattern. Furthermore, several simulations are used to reveal the differences among the three forcings and their detailed processes. The basin-scale cold PDO-pattern SST anomaly first causes negative turbulent heat flux anomalies, atmospheric cooling, and wind deceleration in the lower atmosphere. Subsequently, the low-level cooling, with an amplified temperature gradient and baroclinicity to the south, brings lagged mid-level warming through enhanced atmospheric eddy heat transport. The poleward and upward development of baroclinic fluctuations eventually accelerates the upper-level jet. The smoothed frontal- and eddy-scale SST anomalies in the STFZ cause jet anomalies comparable to those of the basin-scale forcing by changing the upward baroclinic energy and E-P fluxes. The forcing effects of multi-scale SST anomalies coexist simultaneously in the mid-latitude North Pacific and can cause similar anomalous upper atmospheric circulations. This is probably why it is tricky to identify the particular oceanic forcing that leads to a specific atmospheric circulation variation in observations.
In this paper we analyze electric-field and current measurements of competing upward leaders induced by a downward negative lightning flash that struck a residential building. The attachment process was recorded by two high-speed cameras running at 37,800 and 70,000 images per second, and the current was measured in two lightning rods. In contrast to previous works, here we show, for the first time, the behavior of multiple upward leaders that, after initiation, compete to connect to the negative downward-moving leader. At the beginning of the propagation of the leaders that initiate on the instrumented lightning rods, current pulses appear superimposed on a steadily increasing DC current. The upward leader current pulses increase with the approach of the downward leader and are not synchronized but present an alternating pattern. All leader speeds are constant, and the upward leaders are slower than the downward leader. The average time interval between current pulses in upward leaders is close to the interstep time interval found by optical or electric-field sensors for negative cloud-to-ground stepped leaders. The upward leaders respond to different downward-propagating branches and, as the branches alternate in propagation and intensity, so do the leaders. Right before the attachment process, the alternating pattern of the leaders ceases, all downward leader branches intensify, and consequently the upward leaders synchronize and pulse together. The average linear charge densities for upward leaders (49 and 82 µC/m) were obtained for the first time for natural lightning.
This paper is a contribution to the exploration of the parametric Kalman filter (PKF), an approximation of the Kalman filter in which the error covariances are approximated by a covariance model. Here we focus on a covariance model parameterized by the variance and the anisotropy of the local correlations, whose parameter dynamics provide a proxy for the full error-covariance dynamics. For this covariance model, we aim to provide the boundary conditions to specify in the PKF prediction for bounded domains, focusing on Dirichlet and Neumann conditions when they are prescribed for the physical dynamics. An ensemble validation is proposed for the transport equation and for heterogeneous diffusion equations over a bounded 1D domain. This ensemble validation requires specifying the auto-correlation time scale needed to populate boundary perturbations that lead to prescribed uncertainty characteristics. The numerical simulations show that the PKF appropriately reproduces the uncertainty diagnosed from the ensemble of forecasts perturbed on the boundaries, which demonstrates the ability of the PKF to handle boundaries in the prediction of uncertainties. It follows that a Dirichlet condition on the physical dynamics implies Dirichlet conditions on the variance and on the anisotropy.
Recent studies have demonstrated that it is possible to combine machine learning with data assimilation to reconstruct the dynamics of a physical model that is partially and imperfectly observed. The surrogate model can be defined as a hybrid combination in which a physical model based on prior knowledge is enhanced with a statistical model estimated by a neural network. The training of the neural network is typically done offline, once a large enough dataset of model state estimates is available. By contrast, with online approaches the surrogate model is improved each time a new system state estimate is computed. Online approaches naturally fit the sequential framework encountered in the geosciences, where new observations become available over time. In a recent methodology paper, we developed a new weak-constraint 4D-Var formulation which can be used to train a neural network for online model error correction. In the present article, we develop a simplified version of that method, in the incremental 4D-Var framework adopted by most operational weather centres. The simplified method is implemented in the ECMWF Object-Oriented Prediction System, with the help of a newly developed Fortran neural network library, and tested with a two-layer two-dimensional quasi-geostrophic model. The results confirm that online learning is effective and yields a more accurate model error correction than offline learning. Finally, the simplified method is compatible with future applications to state-of-the-art models such as the ECMWF Integrated Forecasting System.
This is a comment on the Boone et al. (2022) article. The authors analyzed spaceborne observations of stratospheric aerosol in 2019-2020 and concluded that the dominant aerosol type was volcanic sulfate aerosol. They criticized the Raman lidar observations of Ohneiser et al. (2021) and Ansmann et al. (2021), in which the aerosol was classified as wildfire smoke, stating that this classification is wrong. In this article, we clearly show that the dominant aerosol type was wildfire smoke.
In-situ measurements of the trade cumulus boundary layer turbulence structure are compared across large-scale circulation conditions and cloud horizontal organizations during the EUREC4A-ATOMIC campaign. The vertical structure of turbulence quantities (e.g., vertical velocity variance, total kinetic energy) and fluxes (e.g., sensible, latent, and buoyancy) is derived and investigated using the WP-3D aircraft stacked level legs (cloud modules). The 16 cloud modules aboard the P-3 were split into three groups according to cloud top height and column-integrated TKE and vertical velocity variance. These groups map onto qualitative cloud features related to object size and clustering over a scale of 100 km. This grouping also correlates with the large-scale forcings of surface wind speed and low-level divergence on the scale of a few hundred km. The ratio of cloud top height to trade inversion base height is consistent across the groups at around 1.18. The altitude of maximum turbulence is 0.75-0.85 of cloud top height. The consistency of these ratios across the groups may point to an underlying coupling between convection, dissipation, and boundary layer thermodynamic structure. The following picture of turbulence and cloud organization is proposed: (1) light surface winds, with turbulence that decreases with height from the sub-cloud mixed layer (ML), generate clouds with generally uniform spacing and smaller features; then (2) as the surface winds increase, convective aggregation occurs; and finally (3), if surface convergence occurs, convection and turbulence reach higher altitudes, producing higher clouds which may precipitate and create cold pools. Observations are compared to a CAM simulation run over the study period, nudged by ERA5 winds and surface pressure.
CAM produces higher column-integrated turbulent kinetic energy and larger maximum values on the days when higher cloud tops are observed from the aircraft, which is likely a factor that influences the development of deeper clouds in the model. However, CAM places the peak turbulence 500 m lower than observed, suggesting there may be a bias in CAM's representation of turbulence and moisture transport. CAM also does not capture the large latent heat fluxes (LHFs) seen on two of the days on which lower cloud tops are observed, which could result in insufficient lower-free-tropospheric moistening in the model during this type of cloud organization. A large and consistent bias between the model and observations for all cloud groups is the negative sensible heat fluxes (SHFs) produced in CAM near 1500 m, which are not seen in the measurements. This leads to a net negative buoyancy flux that is not observed, providing evidence of a specific shortcoming that can be addressed as part of the needed improvement in the representation of clouds by large-scale models.
We develop a doubly periodic version of the Simple Convection-Permitting E3SM Atmosphere Model (SCREAM) to provide an “efficient” configuration for this global storm resolving model (GSRM), akin to a single column model (SCM) often found in conventional general circulation models (GCMs). The design details are explained, in addition to the extensive case library associated with the doubly periodic SCREAM (DP-SCREAM) configuration. We demonstrate that doubly periodic cloud resolving models are useful tools to explore the scale awareness and scale sensitivity of GSRMs, in addition to replicating biases seen in the global models. Using DP-SCREAM, we show that SCREAM is a scale aware model as it is able to realistically partition between sub-grid scale (SGS) and resolved vertical transport across the gray zone of turbulence. We show that SCREAM is reasonably scale insensitive when run at resolutions from 1 to 5 km, but can exhibit sensitivity, particularly for the shallow convective regime, when run at resolutions approaching that of large eddy simulations. We conclude that SGS parameterization improvements are likely needed to reduce this scale sensitivity.
As tsunamis propagate across open oceans, they remain largely unseen due to the lack of adequate sensors, limiting the scope of existing tsunami warnings. A potential alternative method relies on the Global Navigation Satellite Systems to monitor the ionosphere for Traveling Ionospheric Disturbances created by tsunami-induced internal gravity waves (IGWs). The approach has been applied to tsunamis generated by earthquakes but rarely to those from undersea volcanic eruptions, which inject energy into both the ocean and the atmosphere. The large 2022 Hunga Tonga-Hunga Ha’apai volcanic eruption tsunami is thus a challenge for tsunami ionospheric imprint detection. Here, we show that in near-field regions (<1500 km), despite the complex wavefield, we can isolate the tsunami imprint. We also highlight that the eruption-generated Lamb wave’s ionospheric imprints show an arrival time and an amplitude spatial pattern consistent with an internal gravity wave origin.