Podium Presentation Abstracts

Presentation Abstracts are listed in order of occurrence at the conference. 


Plenary


Forecasting atmospheric composition at the European Centre for Medium-Range Weather Forecasts: Achievements and challenges of the global CAMS system

By: Johannes Flemming, European Centre for Medium-Range Weather Forecasts

Summary: To address environmental concerns about atmospheric composition, the European Union funds the Copernicus Atmosphere Monitoring Service (CAMS) as part of its Copernicus programme. CAMS is implemented by ECMWF and delivers a wide range of regional and global products on air quality, stratospheric ozone, emissions, solar radiation and climate forcing. Using ECMWF's operational weather modelling and data assimilation framework, CAMS delivers twice-daily global 5-day forecasts of atmospheric composition. We will give an overview of the current modelling and data assimilation components of the global CAMS forecasting system. Special emphasis will be put on efforts to assimilate satellite retrievals of CO, AOD, ozone and NO2 using ECMWF's 4D-Var system and on recent developments to improve the performance of particulate matter forecasts for air quality applications. We will report on the operational aspects of the implementation of the CAMS system and its potential to improve NWP forecasts by using prognostic aerosol and ozone fields in the NWP radiation scheme.


Modeling of Processes Across Global to Regional and Local Scales


A review of recent advances in climate modeling across scales

By: Paul Ullrich, University of California Davis

Summary: A varied and growing group of users now require accurate climate datasets, particularly high-resolution projections for all manner of impacts, adaptation, and vulnerability assessments. There has recently been explosive growth in the number of regional climate datasets to address these needs, with varied accuracy in different metrics. This talk aims to discuss recent developments in the production of high-resolution climate data at fine scales, including techniques that are currently under development.  In particular, we will review variable-resolution global climate models, regional climate modeling techniques, and hybrid statistical-dynamical downscaling methods.  We will further touch on strategies to evaluate these climate datasets, and discuss how one can assess dataset or model credibility in light of a non-stationary climate system.


Toward the integration of atmosphere and wind plant physics and simulation techniques: An overview of the DOE’s Mesoscale-Microscale Coupling project

By: Jeffrey Mirocha, Lawrence Livermore National Laboratory

Summary: The US Department of Energy’s Mesoscale-Microscale Coupling project is a coordinated, multi-institutional effort to integrate mesoscale atmospheric flow and microscale wind plant physics and simulation techniques, with the goal of enabling robust wind plant assessment, design and operation under the full range of meteorological and environmental conditions experienced over a lifetime of plant operation. Herein, we describe the challenges to this integration, including i) coupling strategies for various atmospheric and environmental conditions, ii) turbulence generation at interfaces between mesoscale and microscale simulation domains, iii) improved turbulence modeling in the gray zone, and iv) improved surface layer physics for high-resolution mesoscale and turbulence-resolving simulations. We will describe how these challenges are being addressed by the project team, highlight successes so far, and discuss remaining challenges and approaches.


Atmospheric Acidity and the Role of Clouds on Air Quality

By: Mary Barth, NCAR

Summary: Clouds affect atmospheric chemistry in many ways. From fair weather cumulus to deep convection, clouds promote transport of boundary layer trace gases and aerosols to the free troposphere. Clouds affect photolysis rates, yet meteorology models do not always predict the timing and location of clouds accurately, which can affect surface ozone concentrations. Precipitation removes soluble trace gases and aerosols, yet the role of ice on wet deposition needs to be better understood. Chemistry in the cloud drops oxidizes sulfur dioxide and organic aldehydes to form sulfate and organic acids, which increases the aerosol mass in the troposphere thereby affecting meteorology. The aqueous-phase chemistry depends upon the acidity of the cloud drops, yet there has been very little effort to evaluate cloud acidity in regional and global chemistry transport models. We produced cloud water and aerosol pH maps for the conterminous U.S. from WRF-Chem model results and global cloud pH maps from the CAM-chem model. These results are compared with measurements, showing reasonable agreement between WRF-Chem and observations, as well as CAM-chem and observations except for regions near deserts where transition metal cations can reduce the cloud acidity. WRF-Chem predictions of fine-mode aerosol pH show good agreement with the few available observations. The WRF-Chem MOSAIC aerosol pH is found to increase as the aerosol size increases. Coarse-mode aerosol pH is near neutral over the oceans.


Forecasting Dust Emissions from Regional to Global Scale using Satellite Data in NOAA FV3

By: Barry Baker, CICS-MD; George Mason University; NOAA ARL

Summary: The NOAA ARL FENGSHA dust emission model has been implemented into the NOAA Next Generation Global Prediction System (NGGPS; FV3-CHEM) and is currently used for the NOAA National Air Quality Forecast Capability (NAQFC). FENGSHA uses threshold velocities derived from wind tunnel and field measurements by Dr. Dale Gillette for each soil type.  The idea is that once mobilization begins the resulting emission is not dependent on the soil size distribution, removing sensitive model parameterizations while making it more flexible for different model resolutions.  A new version of FENGSHA will also be tested that improves prediction of the soil wind stress by redefining the drag partition and eliminating inconsistencies inherent in using the boundary layer friction velocity and z0/h. The threshold friction velocity is also transformed in a similar way that allows for a dynamic surface threshold velocity independent of soil particle size. Results will be shown comparing FENGSHA to existing dust modules within FV3-CHEM, including the AFWA scheme.

Additional Authors: Rick Saylor (NOAA ARL), Daniel Tong (CICS-MD; George Mason University; NOAA ARL)


Connecting Ozone Exceedances in Houston TX to Variability in Emissions and Meteorology: Implications for Federal Attainment

By: William Vizuete, Associate Professor, University of North Carolina - Chapel Hill

Summary: For regulatory purposes, it has been assumed that cities have a stable spatial and temporal distribution of emissions and the dominant factor in determining an ozone exceedance is variability in meteorological conditions. Thus, any analysis that can isolate conducive meteorological conditions should be able to accurately predict the frequency of exceedance days. This is not the case for the Houston-Galveston-Beaumont (HGB) region, where the vast majority of meteorologically conducive ozone days do not produce exceedances. Conducive meteorological conditions are a necessary, but not sufficient, condition for ozone exceedances in HGB. Using an expanded network of 32 monitors in HGB, my team found that the necessary conditions for high ozone were the result of the interaction of synoptic and Coriolis forces at 30 degrees N latitude that produces a rotational wind flow and stagnant morning conditions. This interaction, and resulting daily wind, can be observed across the state including the cities of San Antonio and El Paso. The infrequency of ozone exceedance days under these meteorological conditions suggests an additional variability in emission sources. On exceedance days, the observational data suggests local sources, and the location of origin of the ozone plumes points to sources to the east of the monitor. The variability of both emissions and meteorology presents a challenge for regulatory modeling and to assumptions made in the federal ozone attainment demonstration.


Defining environmental parameter domains for secondary organic aerosol formation

By: William Porter, University of California, Riverside

Summary: Understanding of the fundamental chemical and physical processes that lead to the formation of secondary organic aerosol (SOA) in the atmosphere has been rapidly advancing over the past decades. Many of the advancements have been achieved through laboratory studies, particularly SOA formation studies conducted in environmental chambers. Such studies have been parameterized to represent SOA formation in regional- and global-scale air quality models. In this work, the chemical transport model GEOS-Chem is used to quantitatively define atmospherically relevant ranges of chemical and meteorological parameters critical to the prediction of SOA. For some parameters, atmospherically relevant ranges are generally well represented in laboratory studies. However, for others, significant gaps exist between atmospherically relevant ranges and laboratory conditions. Parameter domains for which there are significant knowledge gaps are identified, and suggestions are made for extending laboratory studies and/or using mechanistic models to bridge existing gaps.

Additional Authors: Jose Jimenez, University of Colorado at Boulder

Kelley Barsanti, University of California, Riverside


Composition and Operational Forecasting from Daily to Seasonal Scales


Routine Multi-model Performance Analysis over North America for Three Operational Air Quality Forecast Systems

By: Mike Moran, Environment and Climate Change Canada

Summary: A number of operational air quality forecast systems are now producing gridded AQ forecasts for North America, but until recently there had not been any side-by-side evaluation and comparison of these forecasts.  Environment and Climate Change Canada (ECCC), the U.S. National Oceanic and Atmospheric Administration (NOAA), and the European Centre for Medium-range Weather Forecasts are now exchanging AQ forecasts, and ECCC has developed a multi-model verification system that receives, ingests, and evaluates North American AQ forecasts from the ECCC regional AQ forecast system, the NOAA-NWS regional AQ forecast system, and the ECMWF-CAMS global AQ forecast system with near-real-time, multi-network North American hourly surface measurements of O3, NO2, and PM2.5.  This new system, which contains daily forecasts from January 2017 onwards, automatically generates monthly multi-model performance statistics for North American daily maximum forecasts of O3, NO2, and PM2.5 at the end of each month.  While the system computes a number of standard statistical metrics, a new, pollutant-specific Air Quality forecast Performance Index (AQPI), which combines unitless measures of model bias, error, and correlation, is used to track and compare overall monthly performance and trends for the three AQ forecast models.  By exchanging these results on a regular basis, this international collaboration is providing useful information on multi-model performance to guide future model development.


Near Real-Time Sub/Seasonal Prediction of Aerosol at NASA Global Modeling and Assimilation Office

By: Andrea Molod, NASA

Summary: A new version of the coupled modeling and analysis system used to produce near-real-time subseasonal to seasonal forecasts was released over a year ago by the NASA/Goddard Global Modeling and Assimilation Office. The model runs at approximately ½ degree globally in the atmosphere and ocean, contains a realistic description of the cryosphere, and includes an interactive aerosol model. The data assimilation used to produce initial conditions is weakly coupled, in which the atmosphere-only assimilated state is coupled to an ocean data assimilation system using a local Ensemble Transform Kalman Filter.

Here we will briefly describe the new system and show results of aerosol-derived air quality from an extensive series of retrospective forecasts. The interactive aerosol is shown to improve seasonal time scale prediction skill during some “forecasts of opportunity”. Plans for a future version of the system with predicted biomass burning from fires will also be discussed.


High Resolution Air Quality Forecasting systems for India and the United States

By: Rajesh Kumar, National Center for Atmospheric Research (NCAR), Boulder, CO, USA

Summary: Air pollution has become one of the most important environmental concerns around the world. Short-term (1-3 days) air quality forecasts can provide timely information about upcoming air pollution episodes that decision-makers and the general public can use to reduce their exposure to air pollution and protect their health. To this end, we have been developing two high resolution (10-12 km) regional air quality forecasting systems to enhance air quality related decision-making activities in India with a focus on Delhi and surrounding regions, and in the contiguous United States (CONUS), respectively. Both of the forecasting systems are based on the Weather Research and Forecasting model coupled with Chemistry (WRF-Chem) and are run daily with predictions out to 48 h in the CONUS and 72 h in India. Publicly accessible websites have been designed to disseminate the forecast products. Aerosol optical depth retrievals from the Moderate Resolution Imaging Spectroradiometer (MODIS) are used to improve initialization of aerosol chemical composition in the Indian air quality forecasting system, and work is in progress on assimilation of MODIS AOD retrievals in the U.S. air quality forecasting system. This presentation will discuss developmental and operational aspects of the two forecasting systems in detail.

Additional Authors: Rajesh Kumar1, Sachin Ghude2, Gabriele Pfister1, Louisa Emmons1, Stefano Alessandrini1, and Guy Brasseur1

1National Center for Atmospheric Research, Boulder, CO, USA

2Indian Institute of Tropical Meteorology, Pune, Maharashtra, India


A Machine Learning Approach for Ozone Forecasting and its Application for Kennewick, WA

By: Kai Fan, Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University

Summary: O3 is one of the criteria air pollutants, which can be harmful to people's health. Many air quality management agencies apply process-based models to predict O3 levels, but it is difficult to predict high O3 events. With the growing application of machine learning (ML) models, we developed a ML modeling framework to forecast O3 AQI and D8M (daily 8-hr maximum) mixing ratios. This modeling framework includes two components. The random forest (RF) classifier model predicts the AQI categories and the multiple linear regression (MLR) model predicts the O3 mixing ratios. Hourly meteorological data from 4 km gridded WRF forecast archives, time information, and the previous day's O3 observations were used to train the RF model. The predicted AQI category from the RF model is added to train the MLR model.

We have applied this forecast framework to Kennewick, WA in the Tri-Cities area, where elevated O3 levels are periodically observed. Compared with our AIRPACT operational CMAQ air quality forecasts, our hybrid forecast framework increases R2 and decreases bias and error for Kennewick. More importantly, it improves peak O3 forecasts. Beginning in May 2019, our ML modeling framework has been used on a daily basis to predict the next 72-hour O3 mixing ratios and AQI with 18 ensemble members of UW-WRF forecasts for Kennewick. When RF and MLR based forecasts are produced for each ensemble member, their D8M and AQI are computed and reported at a public website http://ozonematters.com.
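The two-stage design described above (a classifier whose predicted AQI category becomes an extra predictor for the mixing-ratio regression) can be sketched as follows. This is a minimal illustration with invented stand-in predictors and synthetic data, not the operational WSU configuration:

```python
# Sketch of a two-stage RF-classifier + MLR forecast framework.
# All feature names, category breakpoints, and data here are hypothetical.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 500
# Stand-in predictors: temperature (K), wind speed (m/s), hour, previous-day O3
X = rng.uniform([270, 0, 0, 20], [320, 15, 23, 90], size=(n, 4))
# Synthetic "observed" D8M O3 (ppb), linear in the predictors plus noise
o3 = 0.5 * (X[:, 0] - 270) + 0.3 * X[:, 3] - X[:, 1] + rng.normal(0, 5, n)
aqi_cat = np.digitize(o3, [30, 55])  # crude 3-category AQI stand-in

# Stage 1: random forest predicts the AQI category
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, aqi_cat)
cat_pred = clf.predict(X)

# Stage 2: the predicted category joins the predictors in the MLR for O3
X_mlr = np.column_stack([X, cat_pred])
mlr = LinearRegression().fit(X_mlr, o3)
o3_pred = mlr.predict(X_mlr)
```

In practice the two stages would be trained on historical WRF forecasts and observations and applied separately to each of the 18 ensemble members.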

Additional Authors: Brian Lamb, Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University (WSU), Pullman, WA

Ranil Dhammapala, Washington Department of Ecology, Olympia, WA

Ryan Lamastro, Environmental Geochemical Science, School of Science and Engineering, State University of New York, New Paltz, NY

Yunha Lee, Laboratory for Atmospheric Research, Department of Civil and Environmental Engineering, Washington State University (WSU), Pullman, WA


BL Parameterizations


Modeling Subgrid Transport

By: Jimy Dudhia, NCAR

Summary: The talk will give an overview of subgrid transport processes in various physical parameterizations, such as planetary boundary layer and convective parameterizations, including what types there are and when they are needed to transport chemical species or tracers. These processes in models represent unresolved vertical transports due to thermals or convection. When chemical reactions occur on the same time scales as the dry sub-grid transport, or in cloudy updrafts and rainy regions, extra considerations are needed compared to transporting passive species.


Evaluation of PBL Parameterizations in WRF at Subkilometer Grid Spacings: Turbulence Statistics in the Dry Convective Boundary Layer

By: Hyeyum (Hailey) Shin, NCAR

Summary: This study evaluates the performance of convective PBL parameterizations in the Weather Research and Forecasting (WRF) Model at subkilometer grid spacings. The evaluation focuses on resolved turbulence statistics, considering expectations for improvement in the resolved fields by using the fine meshes. The parameterizations include four nonlocal schemes - Yonsei University (YSU), asymmetric convective model 2 (ACM2), eddy diffusivity mass flux (EDMF), and total energy mass flux (TEMF) - and one local scheme, the Mellor-Yamada-Nakanishi-Niino (MYNN) level-2.5 model.

Key findings are as follows: 1) None of the PBL schemes is scale-aware. Instead, each has its own best performing resolution in parameterizing subgrid-scale (SGS) vertical transport and resolving eddies, and the resolution appears to be different between heat and momentum. 2) All the selected schemes reproduce total vertical heat transport well, as resolved transport compensates differences of the parameterized SGS transport from the reference SGS transport. This interaction between the resolved and SGS parts is not found in momentum. 3) Those schemes that more accurately reproduce one feature (e.g., thermodynamic transport, momentum transport, energy spectrum, or probability density function of resolved vertical velocity) do not necessarily perform well for other aspects.


Accounting for vertical and horizontal turbulent mixing in a three-dimensional planetary boundary layer parameterization

By: Pedro Jimenez, NCAR

Summary: Current planetary boundary layer (PBL) parameterizations are based on the assumption of horizontal homogeneity. This is a convenient assumption since it allows one to neglect the horizontal mixing. As a result, current PBL parameterizations are one dimensional (1D), with turbulent mixing only parameterized in the vertical direction. However, the assumption of horizontal homogeneity is hard to justify at fine grid spacings (e.g. sub-kilometer) wherein surface heterogeneities such as those introduced by topography become relevant. To overcome this limitation we have developed a PBL parameterization that accounts for both horizontal and vertical mixing and thus represents three-dimensional (3D) turbulent mixing.

Our 3D-PBL parameterization is based on first principles. It uses the Mellor and Yamada (MY) model to represent the turbulent fluxes and it has been implemented in the Weather Research and Forecasting (WRF) model. Originally, we implemented the MY level 2 model that only requires algebraic relations to diagnose the turbulent fluxes. The scheme is being upgraded to the MY level 2.5 wherein the turbulent kinetic energy becomes a prognostic variable.

This presentation will illustrate the benefits of moving beyond 1D-PBL parameterizations through a series of idealized and real case simulations using both 1D-PBL simulations and our 3D-PBL.


Scale-aware tests of the MYNN-EDMF PBL, shallow cumulus, and chemical mixing scheme with a novel framework

By: Wayne Angevine, CIRES and NOAA CSL

Summary: Making parameterizations for boundary layer turbulence and cumulus convection behave correctly as model grid spacing decreases from ~10km to a few hundred meters is a topic of current research.  At these scales, turbulence and cumulus are partially resolved, so the parameterized proportion should decrease. We present tests of the scale-aware aspects of the MYNN-EDMF PBL and shallow cumulus scheme in WRF with a novel framework.  The multi-column model (MCM) is a partially-convection-permitting setup that allows for changing grid spacing while testing the scheme's behavior in a well-constrained, idealized situation.  MCM consists of a grid with many columns (32x32) using doubly-periodic boundary conditions (like LES).  The grid is initialized and forced like a single-column model, but with tiny perturbations to the initial soil moisture to break symmetry.  The MYNN-EDMF scheme is available in WRF, is used in the operational RAP and HRRR models in the U.S., and is under active development. It provides fully consistent chemical and tracer mixing.  The MCM results point out the important distinction between grid spacing and resolution:  The effective resolution of a mesoscale model is 4-8 * deltax, whereas the coarsened LES used to formulate previous scale-aware functions have 2 * deltax resolution.

Additional Authors: Joseph Olson, CIRES and NOAA GSL


Complex Terrain and Coastal Zone Meteorology


Implications of Soil Moisture on Modeled Land-Atmosphere Interactions over Heterogeneous Terrain

By: Aaron Alexander, University of California, Davis

Summary: During the summer, California's Central Valley is subject to complex atmospheric dynamics characterized by weak synoptic forcing, coupled valley-mountain and land-sea breeze circulations, and air stagnation in the deep valley. Additionally, soil moisture, due to land use heterogeneity, drives diverse land-atmosphere interactions, surface meteorology, and planetary boundary layer properties. This heterogeneity presents unique challenges to numerical simulations, especially due to the widespread use of time-varying irrigation in the Central Valley. We analyze the performance of the Weather Research and Forecasting (WRF) model with high resolution satellite remote sensing observations of soil moisture (SM-WRF) assimilated in the Central Valley during summer. These SM-WRF runs are compared to control runs to investigate how soil moisture affects model performance, particularly on surface energy fluxes, near surface meteorology (i.e. 2-meter relative humidity), and planetary boundary layer heights. Observations from the California Baseline Ozone Transport Study airplane flights, the NOAA Earth Systems Research Laboratory wind profiler network, and the California Irrigation Management Information System are used for comparison. Model simulations utilize the Noah land-surface model and two representative boundary layer parameterizations, a TKE closure scheme (MYNN 2.5) and a K-theory closure scheme (YSU), to illustrate the importance of improved representation of the soil moisture.

Additional Authors: Xia Sun, Justin Trousdell, Ian Faloona, Heather A. Holmes, Holly J. Oldroyd


Daytime, anabatic winds over a steep Alpine slope: Turbulence structure and modeling implications

By: Holly J. Oldroyd, University of California, Davis

Summary: Anabatic winds occur over slopes and in mountain valleys under weak synoptic and clear-sky conditions. These flows are buoyantly driven by daytime surface heating and gradients in the near-surface virtual potential temperature field. The turbulence structure of anabatic flows has received much less attention than that of their nocturnal, katabatic counterparts, yet these winds are important drivers of convergence at peaks and ridges, cloud formation, and convective precipitation during summer. Hence, a better understanding of the physical mechanisms driving heat and momentum transport is important to meteorological forecasting, pollutant transport, and hydrologic modelling in mountainous regions.  We present observations of the mean flow and turbulence structure over a steep (35.5 deg) slope in a narrow Alpine valley in Val Ferret, Switzerland. Here, the anabatic winds are characterized by a multi-scale superposition of upslope and up-valley flows with oscillatory wind directions and wind speeds increasing throughout the afternoon. The near-surface virtual potential temperature profiles generally indicate a shallow convective layer. However, sensible heat fluxes tend to oscillate between positive and negative throughout the day and throughout the convective layer, indicating non-local drivers. Given the strong surface heating, which would be a boundary condition in numerical models, the observed heat fluxes hold important ramifications for simulating these flows.


Diagnosing and Mitigating Errors in Boundary Layer Structure

By: Robert Fovell, University at Albany SUNY

Summary: Along with the surface layer treatment, the planetary boundary layer (PBL) scheme represents one of the most important parameterizations in a numerical weather prediction (NWP) model.  Among other things, PBL schemes are responsible for determining the depth, stability and vertical profiles of humidity and horizontal wind in the boundary layer. These are subject to observational uncertainty and also the validity of assumptions that can vary widely among parameterizations.  One major problem, however, is the relative lack of high-resolution and high-quality observations that can be used to verify and evaluate these schemes, as compared to the amount of surface information that is readily available.

In this presentation, forecasts of boundary layer wind, temperature, and specific humidity are evaluated using 60 high-resolution radiosondes across the contiguous US (CONUS) that can provide roughly 5 m vertical resolution near the surface.  Challenges in employing this underutilized resource have been mitigated or overcome and the spatially- and temporally-averaged composites are revealing shortcomings in the forecasts that can be addressed with further improvements to the scheme.  Implications for wind forecasting in complex terrain are considered.


The Impacts of Wildland Fires and Lower Troposphere Ozone in relation to Air Quality during CABOTS 2016

By: Sen Chiao, San Jose State University

Summary: The use of potential vorticity (PV) as a tracer for stratospheric air intrusions into the upper and middle troposphere is well known and supported in the literature. This study examines anomalies of this well-known tracer, along with humidity, in the lower levels of the troposphere to investigate the spatial and temporal influence of the filaments associated with upper-level stratospheric intrusions. A regional average value is calculated for a 15-km vertical column during late July and early August 2016 encasing all sites of interest within the middle of California. These PV averages display the main synoptic features present over the region for the time period, and the presence of the previously defined stratospheric intrusion cases of Clark (2018) is noticeable. The deviation from this average for the vertical column above each fire ignition and ozone monitoring site indicates the timing and depth of stratospheric air influence on the local regions of interest. Negative PV anomalies and positive humidity anomalies in the lower 5 km vertical column would indicate a plausible low-level high ozone and dry stratospheric air influence on the region. The Soberanes Fire of Monterey County grew rapidly on July 26 and a State of Emergency was declared. The lowest 300-m vertical column exhibited a negative PV anomaly. At the location of the Cold Fire on August 2, 2016, the date of the fire outbreak in Yolo County, the 1 km vertical column of air exhibited the negative PV and positive humidity anomalies. This indicates that stratospheric air likely entered the regions, allowing for prime fire conditions. This study could help develop a regional forecasting tool for low-level stratospheric intrusion high ozone events and spare-the-air no-burn days during the spring and summer.
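The low-level anomaly criterion described above (negative PV anomaly together with a positive humidity anomaly in the lowest levels, relative to the regional-average column) reduces to a simple check. The function below is an illustrative sketch with hypothetical inputs, not the study's actual analysis code:

```python
# Illustrative check of the paper's anomaly criterion: a column is flagged
# when PV anomalies are negative and humidity anomalies are positive over
# the lowest n levels. Profiles and level counts here are invented.
import numpy as np

def low_level_intrusion_flag(pv_col, q_col, pv_ref, q_ref, n_low_levels):
    """Return True when the lowest-level anomalies match the criterion.

    pv_col, q_col  : PV and humidity profiles above one site (surface first)
    pv_ref, q_ref  : regional-average reference profiles on the same levels
    n_low_levels   : number of near-surface levels to test (e.g. lowest 5 km)
    """
    pv_anom = np.asarray(pv_col)[:n_low_levels] - np.asarray(pv_ref)[:n_low_levels]
    q_anom = np.asarray(q_col)[:n_low_levels] - np.asarray(q_ref)[:n_low_levels]
    return bool(np.all(pv_anom < 0) and np.all(q_anom > 0))
```

Applied hour by hour above each fire ignition or ozone monitoring site, such a flag would indicate the timing and depth of the influence the abstract describes.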

Additional Authors: Jodie Clark, San Jose State University


Diablo Winds in the Bay Area California:  Their climatology, extremes, and behavior

By: Yi-Chin (Karry) Liu, California Air Resource Board

Summary: The Diablo winds, which are characterized as hot, dry, and gusty northeasterly winds in the San Francisco Bay Area (Bay Area), have been linked to occurrences of several intense firestorms in Northern California, such as the devastating wine country wildfire in 2017. Despite their strong linkage to wildfires in Northern California, very few studies have examined the Diablo winds, especially compared to their close cousin, the Santa Ana winds in Southern California. This study investigates possible mechanisms affecting the Diablo winds in the Bay Area from a climatological perspective. 3-hourly NCEP North American Regional Reanalysis (NARR) data from 1979 to 2018 are used for all the analyses. The months of interest are September through February, and the area of interest covers the Bay Area air basin. As there is not yet a consensus on the definition of a Diablo wind event (DWE), in this study a DWE is identified when three criteria are met: 1) the area-averaged Fosberg fire weather index is larger than 30; 2) the area-averaged wind direction is northerly to northeasterly (350° to 135°); and 3) the two aforementioned conditions are satisfied for six or more consecutive hours.  The long-term trend of DWEs in the Bay Area over the past four decades, as well as potential climate variability modes that might influence DWEs and the mechanisms behind them, are examined in detail.
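As a rough sketch, the three DWE criteria might be coded as below, here applied to an hourly series for simplicity (the study itself uses 3-hourly NARR fields); all function names and data beyond the stated criteria are assumptions:

```python
# Sketch of the three-part Diablo wind event (DWE) test: Fosberg index > 30,
# wind direction in the 350-135 deg sector (wrapping through north), both
# holding for at least 6 consecutive hours. Inputs are area-averaged series.
import numpy as np

def in_north_sector(wdir_deg):
    """True where wind direction falls in the 350-135 deg sector (wraps N)."""
    wdir = np.asarray(wdir_deg) % 360.0
    return (wdir >= 350.0) | (wdir <= 135.0)

def diablo_wind_hours(ffwi, wdir_deg, min_hours=6):
    """Boolean mask of hours belonging to a DWE: both criteria met for at
    least min_hours consecutive hours."""
    ok = (np.asarray(ffwi) > 30.0) & in_north_sector(wdir_deg)
    event = np.zeros_like(ok)  # bool array, all False
    run = 0
    for i, flag in enumerate(ok):
        run = run + 1 if flag else 0
        if run >= min_hours:
            # mark the whole qualifying run, not just its last hour
            event[i - min_hours + 1 : i + 1] = True
    return event
```

With 3-hourly data the consecutive-hours requirement would instead be expressed as a minimum number of consecutive records.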

Additional Authors: Pingkuan Di (California Air Resources Board), Shu-Hua Chen (University of California, Davis), John DaMassa (California Air Resources Board)


LES, CFD, and Urban Canopy Modeling


Modeling variations in ozone dry deposition - what is important for ozone pollution?

By: Olivia Clifton, NCAR

Summary: Spatiotemporal variability in dry deposition of ozone is often overlooked despite its implications for interpreting and modeling tropospheric ozone concentrations accurately. Advancing understanding of depositional processes and their influence on ozone deposition velocity is key to estimating changes in the ozone depositional sink and associated damage to ecosystems with confidence. However, strong observed variations in ozone deposition velocities are not well understood, which challenges mechanistic modeling of ozone dry deposition and tropospheric ozone concentrations. In this talk I will discuss my work using a model hierarchy, including observation-driven process models, multilayer canopy large eddy simulation, and a global earth system model, to pinpoint the causes of variations in ozone dry deposition, changes in the depositional sink with climate and land use, and implications for ozone pollution.


Large-Eddy Simulation and Lagrangian Two-Particle Modeling of Mean and Fluctuating Concentrations in the Atmospheric Boundary Layer

By: Jeffrey Weil, NCAR

Summary: Dispersion in the atmospheric boundary layer (ABL) is characterized by large fluctuations in concentration over short time periods, e.g., a few seconds to minutes. This is due to the random nature of the ABL turbulence field, which leads to especially large fluctuations in the convective boundary layer (CBL). Such fluctuations are caused by the meandering of a small "instantaneous" plume by the large CBL eddies. The root-mean-square (rms) concentrations are often as large as or larger than the mean value at short distances from a source and are important for health effects.  In this talk, we investigate rms fluctuations and peak concentrations from sources in the ABL using a new Lagrangian two-particle dispersion model (L2PDM) driven by large-eddy simulations (LESs). In this approach, we track the motion of two particles that start from a small source, spread due to inertial-subrange turbulence, and result in relative dispersion about the local plume centerline. We extend Thomson's (1990) L2PDM for homogeneous turbulence to more complex ABL flows by linking the model with LES, where the total particle velocity is divided into "resolved" and "subfilter-scale" (SFS) components. We find that the L2PDM-LES mean and rms concentration results agree well with earlier convection tank experiments over a range of source heights.  These results, the variation of rms and peak concentrations with averaging time, and probability distributions will be presented and discussed.


 

Analyzing and improving turbulence characterization in a multiscale atmospheric model of transport and dispersion through an urban area

By: David Wiersema, University of California, Berkeley

Summary: Recent multiscale simulations of transport and dispersion during the Joint Urban 2003 (JU2003) field campaign demonstrate the feasibility and benefits of dynamically downscaling from the mesoscale to microscale within the Weather Research and Forecasting (WRF) model. A major challenge of multiscale simulation is the accurate modeling of turbulent flow at a variety of grid resolutions. Here, we analyze the observed and modeled turbulent flow and transport within Oklahoma City during JU2003. Two model improvements, the cell perturbation method (CPM) and the dynamic reconstruction method (DRM) turbulence closure, are investigated with an emphasis on the observed and modeled turbulence.

An immersed boundary method is used in WRF to enable multiscale simulations over urban terrain. Vertical grid nesting provides control of grid properties for the nested domains. CPM helps to generate inflow turbulence for the LES domains following grid refinements. DRM improves representation of turbulent structures and allows backscatter of turbulent energy. Impacts of CPM and DRM are evaluated by comparing multiscale simulations to JU2003 observations. Statistical measures of model skill indicate each configuration's accuracy in replicating wind speeds/directions and tracer concentrations. Modeled and observed turbulent velocity spectra are calculated and evaluated at several JU2003 observation locations.

Additional Authors: Katherine A. Lundquist, Lawrence Livermore National Laboratory

Fotini Katopodes Chow, University of California Berkeley


 

 

Convection


The Shallow-to-Deep Convective Transition:  A Modeling Challenge

By: David Adams, Universidad Nacional Autonoma de Mexico

Summary: The shallow-to-deep convective transition has been a particularly vexing problem for models to replicate and will continue to be so given that the responsible mechanisms continue to be debated. In this presentation, proposed mechanisms are briefly reviewed. Observational data which provide metrics are necessary to ascertain model fidelity in the reproduction of the shallow-to-deep transition, as well as to corroborate or provide evidence for the mechanisms proposed. GPS precipitable water vapor, given its high temporal resolution and all-weather capacity, provides a particularly useful and simple variable for challenging models. Here, timescale metrics derived from a long-term GPS meteorological site and the Amazon Dense GNSS Meteorological Network, both located in the Central Amazon, are presented. A water vapor convergence timescale of ~4 hours is shown to characterize the shallow-to-deep transition. Likewise, a spatial decorrelation timescale, also of ~4 hours, provides a particularly useful metric for higher-resolution (~5 to 10 km) models to ascertain their fidelity in reproducing the transition. We wrap up the presentation with an overview of useful variables for model verification derived from our GPS meteorological campaigns in the North American Monsoon region.


Current Developmental Activity on the Grell-Freitas Cumulus Parameterization Including the Addition of Number Concentrations and Storm Motion

By: Hannah Barnes, NOAA ESRL

Summary: We will present recent improvements to the Grell-Freitas (GF) cumulus parameterization. The main focus will be on two features that were added to improve the representation of the particle size distribution and to allow parameterized deep convection to propagate.

Estimates of cloud water and ice crystal number concentrations are added to GF based on the water-friendly aerosol content, temperature, and the cloud water and ice crystal mixing ratios. This modification is designed to diminish the artificial modification of the particle size distribution that occurs when single-moment cumulus schemes are used with double-moment microphysics schemes. Simulations demonstrate that the addition of GF ice number concentrations substantially increases ice content aloft in the tropics, which shifts the outgoing longwave radiation distribution towards colder brightness temperatures.

The key modification used to enable the propagation of parameterized deep convection is the addition of an advected scalar that represents the cloud base mass flux associated with GF downdrafts. Our implementation of this advected scalar allows the impact of downdrafts from previous time steps to foster propagation. Evaluation and tuning of the new downdraft mass advection term is ongoing.


Improvement of parameterized convective transport and wet scavenging of trace gases in the WRF-Chem model

By: Yunyao Li, University of Maryland - Presented by Dr. Kenneth Pickering

Summary: Deep convection can transport surface moisture and pollution from the planetary boundary layer (PBL) to the upper troposphere (UT) within a few minutes. The convective transport of precursors of both ozone and aerosols from the PBL affects concentrations of these species in the UT, which influences the Earth's radiation budget and climate. Some of the precursors are soluble and reactive in the aqueous phase. This study uses WRF-Chem to simulate at cloud-parameterized resolution the convective transport of CO and O3 and wet scavenging of soluble precursors of both ozone and aerosol (i.e. CH2O, CH3OOH, H2O2, and SO2) in a supercell system observed on May 29, 2012, during the 2012 Deep Convective Clouds and Chemistry (DC3) field campaign. The default WRF-Chem subgrid convective transport scheme was replaced with a scheme to compute convective transport within the Grell-Freitas subgrid cumulus parameterization, which resulted in improved transport simulations. Furthermore, in order to improve the model simulation of cloud-parameterized wet scavenging, we added appropriate ice retention factors to the cloud-parameterized wet scavenging module, adjusted the conversion rate of cloud water to rainwater in the cloud parameterization and in the subgrid wet scavenging calculation, and added a subgrid-scale lightning NOx (LNOx) scheme to the model. The introduction of these model modifications greatly improved the model simulation of trace gas mixing ratios in the UT relative to aircraft observations.

Additional Authors: Kenneth E. Pickering (University of Maryland, College Park)

Mary C. Barth (National Center for Atmospheric Research, Boulder)

Megan M. Bela (University of Colorado, Cooperative Institute for Research in Environmental Sciences, Boulder)

Kristin A. Cummings (National Aeronautics and Space Administration (NASA), Kennedy Space Center, Florida)

Dale J. Allen (National Aeronautics and Space Administration, Kennedy Space Center, Florida)


Plenary


Connecting Ozone Exceedances in Houston TX to Variability in Emissions and Meteorology: Implications for Federal Attainment 

By: William Vizuete, University of North Carolina - Chapel Hill

Summary: For regulatory purposes, it has been assumed that cities have a stable spatial and temporal distribution of emissions and that the dominant factor in determining an ozone exceedance is variability in meteorological conditions. Thus, any analysis that can isolate conducive meteorological conditions should be able to accurately predict the frequency of exceedance days. This is not the case for the Houston-Galveston-Beaumont (HGB) region, where the vast majority of meteorologically conducive ozone days do not produce exceedances. Conducive meteorological conditions are a necessary, but not sufficient, condition for ozone exceedances in HGB. Using an expanded network of 32 monitors in HGB, my team found that the necessary conditions for high ozone were the result of the interaction of synoptic and Coriolis forces at 30 degrees N latitude that produces a rotational wind flow and stagnant morning conditions. This interaction, and the resulting daily wind, can be observed across the state, including the cities of San Antonio and El Paso. The infrequency of ozone exceedance days under these meteorological conditions suggests an additional variability in emission sources. On exceedance days, the observational data suggest local sources, and the location of origin of the ozone plumes points to sources to the east of the monitor. The variability of both emissions and meteorology presents a challenge for regulatory modeling and to the assumptions made in the federal ozone attainment demonstration.

Additional Authors: Harvey E. Jeffries, Othree Chemistry LLC


Aerosol Direct & Indirect Feedbacks and Aerosol Aware Microphysics


Effects of GHG mitigation strategies on future California climate

By: Mike Kleeman, UC Davis

Summary: California has committed to reduce greenhouse gas (GHG) emissions by 80% relative to 1990 levels by the year 2050.  This effort will require adoption of low-carbon energy sources across all economic sectors, transforming the airborne PM in California's atmosphere at the same time that it reduces GHG emissions. In this study, we examine the effects of changing PM composition on radiative forcing under two energy scenarios: business-as-usual (BAU) and a low-carbon energy scenario (GHG-Step). Calculations are performed using the source-oriented WRF/Chem (SOWC) model, which can track a six-dimensional aerosol variable (X, Z, Y, size bin, source type, species) through explicit simulations of atmospheric chemistry and physics. This approach allows particles with the same size from different sources to age into different chemical compositions that depend on the chemical and hygroscopic properties of the primary seed particles.

The SOWC model is applied for the year 2054 with 12 km resolution over California. Meteorological initial and boundary conditions are updated using the Community Earth System Model (CESM) under the Representative Concentration Pathway 8.5 (RCP8.5) future scenario. Surface temperature, precipitation, and top of the atmosphere (TOA) forcing will be compared in the BAU and GHG-Step scenarios.  Implications for future climate in California will be discussed.


Substantial Convection and Precipitation Enhancements by  Ultrafine Aerosol Particles

By: Jiwen Fan, Pacific Northwest National Laboratory

Summary: Aerosol-cloud interactions remain the largest uncertainty in climate projections. Ultrafine aerosol particles smaller than 50 nanometers (UAP<50) can be abundant in the troposphere, but are conventionally considered too small to affect cloud formation. Observational evidence and numerical simulations of deep convective clouds (DCCs) over the Amazon show that DCCs forming in a low aerosol environment can develop very large vapor supersaturation because fast droplet coalescence reduces integrated droplet surface area and subsequent condensation. UAP<50 from pollution plumes that are ingested into such clouds can be activated to form additional cloud droplets on which excess supersaturation condenses and forms additional cloud water and latent heating, thus intensifying convective strength. This mechanism suggests a strong anthropogenic invigoration of DCCs in previously pristine regions of the world.


An Investigation of Proposed Aerosol Indirect Effect Mechanisms in Deep Convection

By: Adele Igel, UC Davis

Summary: We examine the impacts of Aitken- and accumulation-mode aerosol particles on isolated deep convective storms in thermodynamic environments characteristic of the tropics and midlatitudes. Recently published work has suggested that aerosol particles play a large role in determining the supersaturation and condensation rates in clouds, and therefore also the updraft speeds in storms. Alternatively, competing theories have suggested that precipitation formation, unloading, and/or ice processes may be responsible for setting the magnitude of aerosol indirect effects in deep convection. We use cloud-resolving simulations of deep convective storms to investigate these hypotheses. We find that the magnitude of the updraft response to aerosol concentrations is primarily a result of changes to supersaturation, and that it depends on the presence of both Aitken- and accumulation-mode particles. However, environmental conditions modulate the response, and in general the response is small and likely insignificant in most cases.

Additional Authors: Amy Yu, UC Davis


Medium Complexity Aerosol Treatment Coupled with Clouds/Precipitation/Radiation in a USA Operational NWP Model

By: Gregory Thompson, NCAR-RAL

Summary: Specific new advancements using the WRF Thompson-Eidhammer aerosol-aware scheme will be discussed, including the addition of a time-varying surface aerosol flux and a simple dust emissions scheme that permits wind-driven dust storms in dry soil areas to eject new dust into the category used for ice nucleation. Additionally, new research with the scheme includes prototype real-time forecasts of supercooled liquid water (SLW) and supercooled large drops (SLD) such as freezing drizzle and freezing rain. Results of the scheme are now being compared against aircraft, ground, satellite, and radar observations from an FAA-funded field project (In-Cloud Icing and Large Drop Experiment, ICICLE) based in Rockford, IL (28 Jan to 6 Mar 2019). We evaluate how well the real-time forecasts differentiate between small- and large-droplet icing, and compare predicted liquid water content and droplet number concentrations against the Convair-580 aircraft measurements, among other data resources.


The Comparison of Dust-Radiation versus Dust-Cloud Interactions on the Development of a Modeled Mesoscale Convective System over North Africa

By: Chu-Chun Huang, UC Davis

Summary: This study evaluates the impact of dust-radiation-cloud interactions on the development of a mesoscale convective system (MCS) by comparing numerical experiments run with and without dust-radiation and/or dust-cloud interactions. An MCS that developed over North Africa on 4-6 July 2010 is used for this study. The CloudSat and CALIPSO satellites passed over the center of the MCS after it reached maturity, providing valuable profiles of aerosol backscatter and cloud information for model verification. Our results indicate that the dust radiative effect has a far greater influence on the MCS's development than the dust microphysical effect. The dust-radiation interaction, both with and without the dust-cloud interaction, briefly delays the MCS's formation, but ultimately produces a stronger storm with a more extensive anvil cloud. This is caused by dust-radiation-induced changes to the MCS's environment. The dust microphysical effect on the MCS, on the other hand, is greatly affected by the presence of the dust-radiation interaction. The dust microphysical effect alone slows initial cloud development but enhances heterogeneous ice nucleation and extends cloud lifetimes. When the dust-radiation interaction is added, increased transport of dust into the upper portions of the storm allows dust-cloud processes to more significantly enhance heterogeneous freezing earlier in the storm's development, increasing updraft strength, hydrometeor growth, and rainfall.

Additional Authors: Shu-Hua Chen [Department of Land, Air, and Water Resources, University of California, Davis, CA]

Yi-Chiu Lin [Research Center of Climate Change and Sustainable Development, National Taiwan University, Taiwan]

Kenneth Earl [Department of Land, Air, and Water Resources, University of California, Davis, CA]

Toshihisa Matsui [NASA Goddard Space Flight Center, Greenbelt, Maryland]

Hsiang-He Lee [Atmospheric, Earth, and Energy Division, Lawrence Livermore National Laboratory, Livermore, CA]

I-Chun Tsai [Research Center for Environmental Changes, Academia Sinica, Taiwan]

Jen-Ping Chen [Department of Atmospheric Sciences, National Taiwan University, Taiwan]

Chao-Tzuen Cheng [National Science and Technology Center for Disaster Reduction (NCDR), Taiwan]


Model Evaluation Using Meteorological and Chemical Observations


CAMS Forecast and Reanalysis Evaluation using Chemical Observations

By: Henk Eskes, KNMI

Summary: The Atmosphere Monitoring Service of the European Copernicus Programme (CAMS) is an operational service providing analyses, reanalyses and daily forecasts of aerosols, reactive gases and greenhouse gases on a global scale, and air quality forecasts and reanalyses on a regional scale. In CAMS, data assimilation techniques are applied to combine in-situ and remote sensing observations with global and European-scale models of atmospheric reactive gases, aerosols and greenhouse gases. The global component is based on the Integrated Forecast System of the ECMWF, and the regional component on an ensemble of seven to nine European air quality models. CAMS is implemented by ECMWF.

CAMS has a dedicated validation activity - implemented by a partnership of 13 institutes co-ordinated by KNMI - to document the quality of the atmospheric composition products. In our contribution we discuss this validation activity, including the measurement data sets, validation requirements, the operational aspects, the upgrade procedure, the validation reports and scoring methods, and the model configurations and assimilation systems validated. Of special concern are the forecasts of high pollution concentration events (fires, dust storms, air pollution events, volcano eruptions).


Regional and hemispheric evaluation of the new Community Multiscale Air Quality Model (CMAQ) version 5.3

By: K. Wyat Appel, U.S. EPA

Summary: In summer 2019, the United States Environmental Protection Agency (USEPA) will release the latest version of the Community Multiscale Air Quality (CMAQ) model. This latest version, CMAQ v5.3, includes a wide range of scientific and structural updates to the version of the model released several years ago. Examples of these updates include a new version of the aerosol module (AERO7), which improves the simulation of secondary organic aerosol (SOA) particle formation in the atmosphere. Dimethyl sulfide (DMS) chemistry has also been added as an available option in v5.3. The detailed halogen chemistry available in CMAQv5.2.1 has been updated and is now compatible with the CB6r3 chemical mechanism. The simple halogen chemistry available in CMAQv5.2.1 has also been updated. The M3DRY deposition scheme has been updated to improve the deposition of gases and the bi-directional exchange of ammonia at the surface. A new dry deposition scheme, the Surface Tiled Aerosol and Gaseous Exchange (STAGE), is also available for the first time in CMAQv5.3. STAGE allows for land-use-specific dry deposition estimates, which are very important for terrestrial and aquatic ecosystem health applications. A comprehensive evaluation of the new model using observations from the Northern Hemisphere and contiguous United States will be presented, comparing the new model version to the previous version and to observations, focusing primarily on the performance of ozone and PM2.5.


Seasonality and Trends of Modeled PM2.5 using WRF-CMAQ using Empirical Mode Decomposition

By: Marina Astitha, University of Connecticut

Summary: Regional air quality models have been widely used in studying the sources, composition, transport and transformation of PM2.5 as well as their adverse environmental and health impacts. The emergence of decadal air quality simulations allows more sophisticated model evaluation beyond the traditional operational evaluation. We propose a new framework of process-based model evaluation of speciated PM2.5 using Empirical Mode Decomposition (EMD) to assess how well regional-scale air quality models simulate the time-dependent long-term trend and cyclic variations in daily average PM2.5 and its species, including SO4, NO3, NH4, Cl, OC and EC. Amplitudes of the annual cycles of total PM2.5, SO4 and OC are well reproduced. However, the time-dependent phase difference in the annual cycles for total PM2.5, OC and EC reveals a shift of up to half a year, indicating a potential challenge in the allocation of emissions during the study period and the urgent need for the recently completed model updates in the treatment of organic aerosols compared to the version employed for this set of simulations. Evaluation of several intra-annual and interannual variations indicates that the model is better at replicating the intra-annual cycles. In addition, we investigate the role of species other than those in the available dataset in driving agreements or discrepancies between model simulations and observations.
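As a schematic of the decomposition step, the following sketch applies a deliberately simplified sift (linear rather than the cubic-spline envelopes of standard EMD) to a synthetic PM2.5-like series with a trend, an annual cycle, and fast variability; it is not the EMD implementation used in the study:

```python
import numpy as np

def local_extrema(x, comparator):
    """Indices of interior local maxima (np.greater) or minima (np.less),
    padded with the series endpoints so envelopes span the whole record."""
    idx = [i for i in range(1, len(x) - 1)
           if comparator(x[i], x[i - 1]) and comparator(x[i], x[i + 1])]
    return np.array([0] + idx + [len(x) - 1])

def sift_once(t, x):
    """One sifting pass: subtract the mean of the upper and lower envelopes."""
    imax = local_extrema(x, np.greater)
    imin = local_extrema(x, np.less)
    upper = np.interp(t, t[imax], x[imax])   # linear envelopes (simplification)
    lower = np.interp(t, t[imin], x[imin])
    return x - 0.5 * (upper + lower)

def first_imf(t, x, n_sift=8):
    """Repeated sifting extracts the fastest intrinsic mode function."""
    h = x.copy()
    for _ in range(n_sift):
        h = sift_once(t, h)
    return h

t = np.linspace(0.0, 10 * 365, 3650)           # ~10 years of daily samples
trend = 0.002 * t                              # slow secular trend
annual = 2.0 * np.sin(2 * np.pi * t / 365.0)   # annual cycle
fast = 0.5 * np.sin(2 * np.pi * t / 7.0)       # weekly-scale variability
x = trend + annual + fast

imf1 = first_imf(t, x)    # captures the fastest oscillation
residue = x - imf1        # trend and slower cycles remain
```

In a full EMD, sifting is repeated on the residue to extract successively slower intrinsic mode functions, whose time-dependent amplitudes and phases support the trend and annual-cycle comparisons between modeled and observed PM2.5 described above.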

Additional Authors: Huiying Luo1, Christian Hogrefe2, Rohit Mathur2, S.T. Rao1,3

1 University of Connecticut, 2 US Environmental Protection Agency, 3 North Carolina State University


WRF-Chem Modeling of Summertime Ozone during the Long Island Sound Tropospheric Ozone Study

By: Brian McDonald, NOAA Earth System Research Laboratory, Chemical Sciences Division, Boulder, CO USA & Cooperative Institute for Research in Environmental Sciences, University of Colorado, Boulder, CO USA

Summary: During the summer of 2018, the New York City region experienced several high-ozone episodes, with concentrations of up to 143 ppb (1-hour maximum). Here we use the Weather Research and Forecasting with Chemistry (WRF-Chem) model to simulate ground-level ozone during the Long Island Sound Tropospheric Ozone Study (LISTOS). We utilize multiple observational datasets to evaluate meteorology and chemistry in the model. Meteorological measurements include a microwave radiometer and wind lidar operated at the City College of New York, measuring vertical profiles of temperature, water vapor, horizontal wind speed, and wind direction. Chemical observations include volatile organic compound (VOC) measurements by the NOAA Chemical Sciences Division, including integrated whole air samples analyzed by gas chromatography-mass spectrometry (GC-MS) and proton transfer reaction time-of-flight mass spectrometry (PTR-ToF-MS). Carbon monoxide (CO), carbon dioxide (CO2), and methane (CH4) were measured concurrently. Columns of nitrogen dioxide (NO2) were measured by Pandora spectrometers located around the Long Island Sound. Lastly, ozone was measured routinely by air quality stations, as well as by an ozone lidar located downwind of New York City in Westport, CT. Utilizing these measurements and the WRF-Chem model, we investigate the sensitivity of ground-level ozone to anthropogenic emission sources, including mobile sources and volatile chemical products, as well as uncertainties associated with biogenic VOC emissions.


Challenges in simulating high air pollution concentrations during persistent cold air pool events

By: Xia Sun, University of Nevada, Reno. Atmospheric Science

Summary: Persistent cold air pool (PCAP) events are accompanied by a stably stratified atmospheric boundary layer and limited mixing, both of which lead to an accumulation of air pollution in valleys during winter. The elevated air pollutant concentrations during PCAPs have been difficult to capture using chemical transport models (CTMs). The objective of this research is to understand how well the meteorological fields, especially the boundary layer structure, can be captured by the Weather Research and Forecasting (WRF) model and how this impacts the performance of the Community Multiscale Air Quality Modeling System (CMAQ) during PCAPs. We found that the temporal variability of the elevated air pollution concentrations during PCAPs was captured by the CMAQ model, but CMAQ underestimated the magnitudes. The underestimation of atmospheric stability, accompanied by more vertical mixing in the WRF model simulations, contributed to the underestimation of PM2.5 concentrations. This research highlights the importance of meteorological uncertainties that contribute to air quality modeling deficiencies during PCAP events.

Additional Authors: Cesunica E. Ivey, University of California, Riverside; Heather A. Holmes,  University of Nevada, Reno


Data Assimilation & Inverse Modeling


Navy Ensemble Aerosol Forecasting and Data Assimilation

By: Juli Rubin, U.S. Naval Research Laboratory, Remote Sensing Division

Summary: In order to monitor aerosol impacts on air quality and climate and to quantify the impact of aerosol on numerical weather prediction (NWP) radiance assimilation, there has been rapid development in aerosol forecasting systems at many of the world's NWP centers.  Daily aerosol forecasts, initialized with analysis fields from a data assimilation (DA) system, are produced at the centers, with the employed DA systems ranging from 2-dimensional variational (2DVar) to 4-dimensional variational (4DVar) to ensemble methods.  Currently, operational aerosol forecasts for the United States Navy make use of a deterministic version of the Navy Aerosol Analysis Prediction System (NAAPS), with initial conditions produced using the Navy Variational Data Assimilation System for Aerosol Optical Depth (NAVDAS-AOD).  However, there has been increased interest in exploring ensemble systems as a means to produce better forecast guidance with probabilistic information.  In this talk, ensemble systems for Navy forecasting will be presented, including an ensemble version of NAAPS coupled to an Ensemble Adjustment Kalman Filter data assimilation system, and the International Cooperative for Aerosol Prediction Multi-Model Ensemble (ICAP-MME).  The development and applications of these systems will be discussed.
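The scalar essence of the ensemble adjustment filter mentioned above can be sketched as follows, for a single directly observed AOD value (illustrative numbers only; this is not the NAAPS/EAKF implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 20-member forecast ensemble of AOD at one grid point,
# plus one collocated satellite AOD retrieval with error variance r.
ens = rng.normal(loc=0.30, scale=0.08, size=20)
y_obs, r = 0.45, 0.02 ** 2

x_mean = ens.mean()
p = ens.var(ddof=1)                  # forecast-error variance from the ensemble
k = p / (p + r)                      # Kalman gain for a directly observed scalar
new_mean = x_mean + k * (y_obs - x_mean)

# Deterministic (EAKF-style) contraction of the spread about the new mean,
# so the analysis variance equals the Kalman posterior p*r/(p+r):
alpha = np.sqrt(r / (p + r))
analysis = new_mean + alpha * (ens - x_mean)
```

The analysis mean lands between the forecast mean and the observation, weighted by their relative error variances, and the ensemble spread is contracted deterministically rather than by perturbing the observation.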


Leveraging deep learning hyperparameter tuning frameworks for intelligent WRF ensembles

By: Derek Jensen, Lawrence Livermore National Laboratory

Summary: Ensemble forecast methods produce more accurate forecasts than individual realizations and provide an important measure of forecast uncertainty.  However, for model physics uncertainty, the number of possible model configurations grows exponentially with the number of ensemble parameters under consideration, limiting our ability to explore the ensemble space.  Similarly, deep neural networks are defined by a set of hyperparameters that describe the network architecture and training strategy.  The optimal set of hyperparameters is not known a priori and is expensive to tune.  To address this, the deep learning community is actively developing strategies and tools to efficiently and probabilistically determine the optimal set of hyperparameters.  We are leveraging such strategies to generate intelligent WRF ensembles that efficiently explore the high-dimensional space of plausible WRF configurations.  We are developing an intelligent WRF ensemble tool (iWet) that automates the entire WRF program flow, including fetching initial and boundary condition datasets.  A job scheduler samples a set of WRF parameters and executes a WRF simulation.  An intelligent search then selects the next set of WRF parameters based on an acquisition function that jointly optimizes the exploitation of high-reward areas in the search space and the importance of exploring the unknown.
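A minimal sketch of the acquisition idea follows, using a simple upper-confidence-bound rule over a toy discrete configuration space. The scheme names and skill values are invented stand-ins for actual WRF runs and verification scores, and iWet itself is not shown; production systems typically use a Gaussian-process surrogate instead of this bandit rule:

```python
import math
import random
import statistics

random.seed(1)

# Toy "WRF configuration" space: one knob per physics family.
CONFIGS = [(pbl, mp) for pbl in ("MYJ", "YSU", "MYNN")
                     for mp in ("Thompson", "Morrison")]

def skill(cfg):
    """Stand-in for running WRF and scoring the forecast (values invented)."""
    base = {"MYJ": 0.5, "YSU": 0.7, "MYNN": 0.6}[cfg[0]]
    bonus = 0.1 if cfg[1] == "Thompson" else 0.0
    return base + bonus + random.gauss(0.0, 0.02)   # noisy verification score

def ucb_pick(history, c=0.3):
    """Acquisition: exploit high mean skill, explore rarely-run configs."""
    total = sum(len(runs) for runs in history.values()) or 1
    def ucb(cfg):
        runs = history[cfg]
        if not runs:
            return float("inf")          # every config gets at least one run
        return statistics.mean(runs) + c * math.sqrt(math.log(total) / len(runs))
    return max(CONFIGS, key=ucb)

history = {cfg: [] for cfg in CONFIGS}
for _ in range(60):                      # budget of 60 "WRF runs"
    cfg = ucb_pick(history)
    history[cfg].append(skill(cfg))

best = max((c for c in CONFIGS if history[c]),
           key=lambda c: statistics.mean(history[c]))
```

Replacing `skill()` with a real WRF run plus verification, and the UCB rule with a probabilistic surrogate's acquisition function, recovers the ensemble-search workflow described above: the budget concentrates on high-skill configurations without abandoning the unexplored corners of the space.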

Additional Authors: Lucas, Donald D. - LLNL

Anderson-Bergman, Clifford I. - LLNL

Wharton, Sonia - LLNL


A biomass burning smoke prediction system including near-real time constraints on emissions over the Western U.S.

By: Pablo Saide, UCLA

Summary: Biomass burning is one of the major air pollutant sources, with significant global, regional and local impacts on air quality, public health, and climate. Reducing the uncertainty of biomass burning emissions predictions is critical to improving air quality forecasts and assessments of their various impacts. This work will show the development and application of a system to forecast smoke, applied experimentally during the NOAA/NASA FIREX-AQ (Fire Influence on Regional to Global Environments and Air Quality) field campaign planned for summer 2019. A unique feature of the system is that biomass burning emissions are constrained using inverse modeling techniques and near-real-time satellite observations. We will compare the system performance against an ensemble of national and international air quality models which will be compiled during the deployment. This work will facilitate near-real-time quantification of fire emissions and is expected to provide improved predictions and better estimates of smoke impacts.


Errors in top-down estimates of emissions using a known source

By: Wayne Angevine, CIRES and NOAA CSL

Summary: Emissions estimates by top-down methods use observed concentrations, often from aircraft sampling, and models of varying complexity.  Top-down estimates are often higher than bottom-up (inventory) estimates.  Robust characterization of errors (biases) and uncertainties in top-down methods can be difficult to achieve.  Here we use a known source (the Martin Lake power plant in Texas) to eliminate several sources of uncertainty, allowing for more robust estimates of error and uncertainty in the model and methods.  We use forward runs of HYSPLIT driven by ERA5 meteorology, and analyze the resulting concentrations with several mass-balance techniques.  ERA5 provides a 10-member ensemble to aid our analysis.  Results are compared to sulfur dioxide measured by NOAA in the years 2000, 2006, 2013, and 2015.  We find errors and uncertainties comparable to those estimated previously for mass-balance techniques, and rather larger than uncertainties determined from purely numerical aspects of inversion methods.
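For reference, the core mass-balance calculation behind such estimates can be sketched as follows (all numbers are invented for illustration, not the Martin Lake data): the emission rate is the transect-normal wind times the crosswind-and-depth integral of the concentration enhancement on a downwind transect.

```python
import numpy as np

# Hypothetical downwind crosswind transect through a power-plant plume.
y = np.linspace(-5000.0, 5000.0, 201)             # crosswind coordinate (m)
enhancement = 40e-9 * np.exp(-(y / 1500.0) ** 2)  # SO2 above background (mol/mol)
u_normal = 6.0                                    # transect-normal wind (m/s)
pbl_depth = 1200.0                                # assumed well-mixed depth (m)
rho_air = 1.2                                     # air density (kg/m^3)
M_SO2, M_air = 0.064, 0.029                       # molar masses (kg/mol)

# Convert the mixing-ratio enhancement to a mass concentration (kg SO2 / m^3).
conc = enhancement * rho_air * M_SO2 / M_air

# Mass balance: E = u * H * integral of the enhancement across the transect
# (trapezoidal rule over the crosswind samples).
crosswind_integral = np.sum(0.5 * (conc[1:] + conc[:-1]) * np.diff(y))
emission = u_normal * pbl_depth * crosswind_integral          # kg/s
```

With these invented numbers the estimate comes out to roughly 2 kg/s; in practice the spread across wind-field ensemble members, mixed-layer depth assumptions, and transect choices dominates the uncertainty, which is exactly what the known-source comparison quantifies.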

Additional Authors: Jeff Peischl, CIRES and NOAA CSL


Top-down N2O emission estimation in California using tower measurements and an inverse modeling technique

By: Yu Yan Cui, California Air Resources Board

Summary: Nitrous oxide (N2O) is a long-lived climate pollutant with a high global warming potential, and is a strong agent for stratospheric ozone depletion. Recent studies have found that N2O emissions are significantly higher than current estimates in the bottom-up emissions inventories in California. Top-down methods, such as inverse modeling techniques that use atmospheric transport models and ambient greenhouse gas (GHG) measurements, can provide an effective tool for evaluating the bottom-up estimates, understanding emission sources, and tracking emission trends to evaluate the effectiveness of mitigation efforts. In this study, we estimate N2O emissions in California during a four-year period (2015-2018) by using a mesoscale inverse modeling system and tower measurements of N2O mixing ratio. High-quality measurements from the California Air Resources Board (CARB) Statewide GHG Monitoring Network and the LA Megacities project were used for analysis. We also used an improved process-level N2O emissions inventory for fertilizer and crop-residue sources (DeNitrification-DeComposition model), and incorporated rigorous transport model uncertainty analysis by evaluating simulated planetary boundary layer heights using three types of retrievals, including a new CARB ground-based LiDAR network. The study evaluated seasonal and inter-annual variations of N2O emissions and combined the results with previously published top-down studies to evaluate the N2O emissions in California.

Additional Authors: Yu Yan Cui (1), Lei Guo (1), Ying-Kuang Hsu (1), Matthias Falk (1), Ken Stroud (1), Jorn Herner (1), Abhilash Vijayan (1)

1. California Air Resources Board, 1001 I Street, Sacramento, USA


New and Innovative Modeling Techniques: Machine Learning, New Computation Methods/GPUs, Exposure Estimate Improvement, Data Simulation


Using Machine Learning to Assess Parameters Associated with Harmful Algal Blooms and Hypoxia for Lake Erie

By: Christina Feng Chang, University of Connecticut

Summary: Lake Erie is an essential ecosystem for approximately 12 million people, as well as 17 metropolitan areas in the United States and Canada, but excessive algal growth poses threats to the ecosystem and human health. We implement a novel approach that integrates numerical modeling, observations, and machine learning to assess chlorophyll-a (chlor-a) and dissolved oxygen (DO) as proxies for the occurrence of harmful algal blooms (HABs) and hypoxia. Observations of chlor-a and DO are provided by the Lake Erie Committee Forage Task Group and the Great Lakes National Program Office. Meteorological variables from the WRF model, hydrological variables from the VIC model, atmospheric nitrogen deposition from the CMAQ model, and agricultural management practice variables from the EPIC model, provided by the US EPA for the time period 2002-2012, are used to fit a random forest model and predict concentrations of these response variables. We evaluate the importance of predictors, with special interest in the role of oxidized vs. reduced nitrogen deposition. We also analyze the contribution of each covariate in the model with Accumulated Local Effect (ALE) plots to better understand the occurrence of HABs and hypoxia.
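The model-fitting step can be sketched as follows, using synthetic stand-in predictors and an invented response (the real study uses the WRF/VIC/CMAQ/EPIC fields and observed chlor-a):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(42)
n = 500
# Synthetic stand-ins for the meteorological, hydrological, deposition,
# and agricultural covariates described above.
X = np.column_stack([
    rng.normal(20, 5, n),     # surface temperature (WRF-like)
    rng.normal(50, 20, n),    # runoff (VIC-like)
    rng.normal(5, 2, n),      # oxidized N deposition (CMAQ-like)
    rng.normal(3, 1, n),      # reduced N deposition (CMAQ-like)
    rng.normal(100, 30, n),   # fertilizer application (EPIC-like)
])
# Invented chlor-a response, dominated here by temperature and fertilizer.
y = 0.5 * X[:, 0] + 0.05 * X[:, 4] + rng.normal(0, 1, n)

rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = rf.feature_importances_   # ranks predictors of chlor-a
```

Accumulated Local Effect or partial-dependence plots computed from the fitted `rf` then show how each covariate shapes the predicted response, which is the covariate-contribution analysis described above.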


Machine Learning for Air Quality Applications

By: David Lary, University of Texas at Dallas

Summary: Machine learning is of considerable utility for a variety of air quality applications, from the accurate calibration of low-cost sensors, which can then be used to provide a dense sensor network at the neighborhood scale, to the creation of new data products such as estimating airborne pollen from weather radars. We will present a set of examples from our networks across the Dallas-Fort Worth Metroplex.  Any sensor system benefits from calibration, but low-cost sensors are typically in particular need of it, and the inter-sensor variability among low-cost nodes can be substantial. Using machine learning to individually calibrate low-cost sensors can often turn a carefully chosen low-cost sensor into a pseudo high-end sensor. We illustrate how machine learning can be used to readily calibrate low-cost sensors in an easy-to-deploy mesh network with a cascade of standards (primary reference, secondary, tertiary), with a two-orders-of-magnitude decrease in price as we go from primary, to secondary, to tertiary sensors.  In addition to this pre-deployment calibration, once the sensors have been deployed, the paradigm we first developed for satellite validation, constructing probability distribution functions of each sensor's observation stream, can be used both to monitor the real-time calibration of each sensor in the network by comparing its readings to those of its neighbors, and to examine the representativeness uncertainty at the neighborhood scale.
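The individual-calibration idea can be sketched as a regression against a co-located reference instrument. Everything below (the sensor response model, the environmental drivers, the choice of regressor) is an invented illustration, not the authors' network:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(7)
n = 2000
pm_ref = rng.gamma(2.0, 6.0, n)        # reference PM2.5 (ug/m3), synthetic
temp = rng.uniform(0.0, 40.0, n)       # deg C
rh = rng.uniform(10.0, 95.0, n)        # relative humidity, %

# Invented low-cost sensor response: under-reads PM and drifts with
# humidity and temperature, plus noise.
raw = 0.6 * pm_ref + 0.15 * rh + 0.1 * temp + rng.normal(0.0, 1.0, n)

# Train on a co-location period against the reference, then apply the
# learned correction to later readings.
X = np.column_stack([raw, temp, rh])
cal = RandomForestRegressor(n_estimators=200, random_state=0)
cal.fit(X[:1500], pm_ref[:1500])
corrected = cal.predict(X[1500:])

rmse_raw = np.sqrt(np.mean((raw[1500:] - pm_ref[1500:]) ** 2))
rmse_cal = np.sqrt(np.mean((corrected - pm_ref[1500:]) ** 2))
print(f"raw RMSE {rmse_raw:.1f}, calibrated RMSE {rmse_cal:.1f}")
```

Including temperature and humidity as inputs is what lets the model remove environmental cross-sensitivities rather than just rescale the raw reading.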


AI for Science: Deep Learning for improved Satellite Observations and Numerical Modeling

By: Craig Tierney, NVIDIA

Summary: In this session, we will present applications of NVIDIA GPUs for numerical weather prediction and satellite data analysis. NVIDIA's GPUs are driving the performance improvements in most modern and emerging supercomputers, and it is important to learn to use these resources effectively. Luckily, deep learning is a perfect fit for GPUs, providing a new path for accelerating existing routines and building powerful new software capabilities. Deep learning has the potential to improve all aspects of numerical weather prediction, and we will provide examples of how it may be applied to extreme event detection, data assimilation, satellite loop enhancement, model acceleration, physical parameterization, and more.


A Deep Learning Parameterization for Ozone Dry Deposition Velocities

By: Sam Silva, Massachusetts Institute of Technology

Summary: The loss of ozone to terrestrial and aquatic systems, known as dry deposition, is a highly uncertain process governed by turbulent transport, interfacial chemistry, and plant physiology. We demonstrate the value of deep neural networks (DNNs) for predicting ozone dry deposition velocities. We find that a feedforward DNN trained on observations from a coniferous forest site (Hyytiälä, Finland) can predict hourly ozone dry deposition velocities at a mixed forest site (Harvard Forest, Massachusetts) more accurately than modern theoretical models, with a reduction in the normalized mean bias (0.05 versus ~0.1). The same DNN model, when driven by assimilated meteorology at 2° × 2.5° spatial resolution, outperforms the Wesely scheme as implemented in the GEOS-Chem model. With more available training data from other climate and ecological zones, this methodology could yield a generalizable DNN suitable for global models.
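A minimal sketch of this kind of emulator, using a small feedforward network and the normalized mean bias (NMB) as the skill metric. The meteorological drivers and the "true" deposition relationship below are synthetic assumptions for illustration, not the Hyytiälä or Harvard Forest data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n = 2000
temp = rng.uniform(0.0, 30.0, n)     # deg C
rad = rng.uniform(0.0, 800.0, n)     # shortwave radiation, W/m2
rh = rng.uniform(20.0, 100.0, n)     # relative humidity, %

# Invented "true" deposition-velocity relationship (cm/s) plus noise.
v_d = 0.2 + 0.01 * temp + 0.0005 * rad - 0.001 * rh + rng.normal(0.0, 0.02, n)

# Scale inputs to comparable ranges before the feedforward network.
X = np.column_stack([temp / 30.0, rad / 800.0, rh / 100.0])
mlp = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
mlp.fit(X[:1500], v_d[:1500])        # "training site"
pred = mlp.predict(X[1500:])         # held-out "evaluation" sample

# Normalized mean bias, the metric quoted in the abstract.
nmb = np.sum(pred - v_d[1500:]) / np.sum(v_d[1500:])
print(f"NMB = {nmb:+.3f}")
```

NMB measures systematic over- or under-prediction of the deposition flux, which is what matters most when the velocities feed an ozone budget in a chemical transport model.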


A Mass-Conserving Machine Learning Algorithm for Atmospheric Chemistry

By: Anthony Wexler, UC Davis

Summary: Gas phase chemistry and aerosol particle dynamics consume the majority of computer time in Chemical Transport Models (CTMs) of urban and regional air pollution. Likewise, General Circulation Models (GCMs) heavily parameterize these subgrid processes to keep them computationally tractable, compromising the detailed description of the physics and chemistry. If machine learning techniques could be used to relate the concentrations of the chemical constituents at one operator-splitting time step to those at the next time step, the computational speed of these models would be dramatically increased.

CTMs and GCMs use operator splitting to solve the operators that describe the governing physics and chemistry. In CTMs the operator-splitting time step typically ranges from 0.1 to 1 hour. If a machine learning algorithm were used to describe the chemistry with 99% accuracy, then over a 10-hour run with a 0.1-hour time step, the results could be 100% off. Even if the machine learning algorithm were 99.9% accurate, the errors can come to dominate. This is especially a problem if mass balance is violated: if the 1% or 0.1% error resulted in the systematic creation or destruction of one or more chemical constituents, the answers produced could be dubious at best.
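The compounding argument above can be checked with a few lines of arithmetic: a 10-hour run at a 0.1-hour step is 100 operator-splitting steps, so a systematic per-step error grows geometrically:

```python
n_steps = 100                        # 10 h run / 0.1 h operator-splitting step
loss_1pct = 1.0 - 0.99 ** n_steps    # systematic 1% destruction per step
gain_1pct = 1.01 ** n_steps - 1.0    # systematic 1% creation per step
loss_01pct = 1.0 - 0.999 ** n_steps  # even 0.1% per step accumulates
print(f"1% loss/step -> {loss_1pct:.0%} of the species destroyed; "
      f"1% gain/step -> inflated by {gain_1pct:.0%}; "
      f"0.1% loss/step -> {loss_01pct:.1%} destroyed")
```

A 1% systematic loss per step destroys about 63% of a species over the run, and a 1% systematic gain inflates it by about 170%, consistent with the "results could be 100% off" claim; even a 0.1% per-step loss removes roughly 10%.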

In this talk, we will describe a mathematical framework for assuring 100% mass balance regardless of the machine learning algorithm employed or the accuracy of this algorithm.
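One generic way to guarantee exact mass balance regardless of the regressor (a sketch of the idea, not necessarily the authors' exact formulation) is to project the ML-predicted concentration change onto the null space of an atom-balance matrix, so that elemental totals are unchanged by construction. The toy four-species system below is invented for illustration:

```python
import numpy as np

# Atom-balance matrix A for a toy system [O3, NO, NO2, O2]:
# rows count oxygen and nitrogen atoms per molecule.
A = np.array([[3.0, 1.0, 2.0, 2.0],   # O atoms
              [0.0, 1.0, 1.0, 0.0]])  # N atoms

def conserve(dc, A):
    """Project a predicted concentration change dc onto null(A),
    removing any component that creates or destroys atoms."""
    P = np.eye(A.shape[1]) - A.T @ np.linalg.inv(A @ A.T) @ A
    return P @ dc

# Raw ML prediction, close to the reaction O3 + NO -> NO2 + O2 but with
# small errors that violate atom balance.
dc_ml = np.array([-0.10, -0.09, 0.08, 0.11])
dc = conserve(dc_ml, A)
print("before:", np.round(A @ dc_ml, 3))   # nonzero atom imbalance
print("after: ", np.round(A @ dc, 12))     # balanced to fp precision
```

Because the correction is an orthogonal projection, it is the smallest least-squares adjustment that restores exact balance, and it works no matter how accurate, or inaccurate, the underlying machine learning prediction is.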

Additional Authors: Patrick O. Sturm, Michael J. Kleeman and Anthony S. Wexler, UC Davis