Technical Support Document for Technical Products Prepared by the
Western Regional Air Partnership in Support of Western Regional Haze
Plans.  

DRAFT:  October 4, 2010

Prepared and Reviewed by:

Michael Feldman, EPA Region 6

Erik Snyder, EPA Region 6

Kevin Golden, EPA Region 8

Aaron Worstell, EPA Region 8

Larry Biland, EPA Region 9

Carol Bohnenkamp, EPA Region 9

Scott Bohning, EPA Region 9

Gregory Nudd, EPA Region 9

Robert Kotchenruther, EPA Region 10

Keith Rose, EPA Region 10

Tom Moore, WRAP

WRAP Regional Haze Technical Support Document (TSD)

Table of Contents

1.	Background and Introduction

2.	WRAP Development of Visibility Baselines and Planning Goals

3.	WRAP Emissions Inventories

4.	WRAP Meteorological Modeling

5.	WRAP Visibility Modeling

6.	WRAP Source Apportionment Modeling

7.	WRAP BART Modeling

8.	Conclusions

A.	Appendix: Accessing WRAP Technical Products

 

Background and Introduction

A.   Background

Regional haze is visibility impairment that is caused overwhelmingly by
fine particulates (PM2.5).  Visibility impairment occurs when PM2.5 in
the atmosphere scatters and absorbs light, thereby creating haze. PM2.5
can be emitted into the atmosphere directly as primary particulates, or
it can be produced in the atmosphere from photochemical reactions of
gas-phase precursors and subsequent condensation to form secondary
particulates. Examples of primary PM2.5 include crustal materials and
elemental carbon; examples of secondary PM include ammonium nitrate,
ammonium sulfates, and secondary organic aerosols (SOA). Secondary PM2.5
is generally smaller than primary PM2.5. Because light scattering is
more efficient for fine particles than for coarse particles, secondary
PM2.5 plays an especially important role in visibility impairment.
Moreover, the smaller secondary particles can remain suspended in the
atmosphere for longer periods and be transported long distances, thereby
contributing to regional-scale impacts of pollutant emissions on
visibility.

The sources of PM2.5 are difficult to quantify because of the complex
nature of their formation, transport, and removal from the atmosphere.
This makes it difficult to simply use emissions data to determine which
pollutants should be controlled to most effectively improve visibility.
Photochemical air quality models offer an opportunity to better understand
the sources of PM2.5 by simulating the emissions of pollutants and the
formation, transport, and deposition of PM2.5. If an air quality model
performs well for a historical episode, the model may then be useful for
identifying the sources of PM2.5 and helping to select the most
effective emissions reduction strategies for attaining visibility goals.
Although several types of air quality modeling systems are available,
the gridded, three-dimensional, Eulerian models provide the most
complete spatial representation and the most comprehensive
representation of processes affecting PM2.5, especially for situations
in which multiple pollutant sources interact to form PM2.5.

In Section 169A of the 1977 Amendments to the Clean Air Act (CAA), Congress set forth a
program for protecting visibility in the nation’s national parks and
wilderness areas.  This section of the CAA establishes as a national
goal the “prevention of any future, and the remedying of any existing,
impairment of visibility in mandatory Federal Class I areas which
impairment results from manmade air pollution.” EPA promulgated a rule
to address regional haze on July 1, 1999 (64 FR 35713), the Regional
Haze Rule (RHR).  The RHR established the goal of achieving
“natural” visibility conditions in all 156 Federal Class I areas by
2064.

Because the pollutants that lead to regional haze can originate from
sources located across broad geographic areas, EPA has encouraged the
States and Tribes across the United States to address visibility
impairment from a regional perspective.  Five regional planning
organizations (RPOs) were established to address regional haze and related
issues.  One of the main objectives of the RPOs is to analyze available
data and conduct pollutant transport modeling to assist the States in
developing their regional haze plans. 

The Western Regional Air Partnership (WRAP) RPO is a collaborative effort of
State governments, tribal governments, and various federal agencies
established to conduct data analyses, conduct pollutant transport
modeling, and coordinate planning activities among the western States. 
WRAP member States include Alaska, Arizona, California, Colorado,
Idaho, Montana, New Mexico, North Dakota, Oregon, South Dakota, Utah,
Washington, and Wyoming. Tribal board members
include Campo Band of Kumeyaay Indians, Confederated Salish and Kootenai
Tribes, Cortina Indian Rancheria, Hopi Tribe, Hualapai Nation of the
Grand Canyon, Native Village of Shungnak, Nez Perce Tribe, Northern
Cheyenne Tribe, Pueblo of Acoma, Pueblo of San Felipe, and
Shoshone-Bannock Tribes of Fort Hall.

B.   Technical Requirements for Regional Haze SIPs 

The RHR does not mandate specific milestones or rates of progress, but
instead calls for States to establish goals that provide for
“reasonable progress” toward achieving natural visibility
conditions.  In setting Reasonable Progress Goals (RPGs), States must
provide for an improvement in visibility for the most impaired days over
the ten-year period of the SIP, and ensure no degradation in visibility
for the least impaired days over the same period. In setting the RPGs
for each 10-year period covered by a SIP, States must also compare the
RPGs to the uniform rate of progress needed to reach natural visibility
conditions by 2064, referred to as the  “glide path”, which is the
linear rate of reduction in visibility impairment (in deciviews) needed
to achieve natural conditions by 2064. 

According to the RHR, Regional Haze SIPs must specifically identify and
address the following elements:

Baseline Visibility Conditions

Natural Visibility Conditions

Uniform Rate of Progress

Best Available Retrofit Technology (BART)

Current and Future (2018) Emission Inventories

Source Contribution to Haze 

Reasonable Progress Goals

The purpose of this TSD is to review the technical products developed by
the WRAP for the western regional States in support of their RH SIPs.
This TSD evaluates the methods and procedures used by the WRAP to develop
products that assisted the western regional States in addressing the
required elements of a RH SIP. Specifically, this TSD reviews the
meteorological, visibility impairment, source apportionment, and BART
modeling conducted by the WRAP, and determines whether these models met
applicable guidelines or protocols and the modeling standards in effect
at the time they were conducted.

C.  Format and Structure of This Document

Many portions of this text come directly from existing WRAP
documentation of their technical work.  To delineate explanations of
WRAP’s work from EPA review of that work, this document is structured
such that EPA comments appear in bold face as either ‘Review
Comments’ or ‘Explanatory Comments’.

WRAP Development of Visibility Baselines and Planning Goals

A.   Introduction

Under the Regional Haze Rule, each State is required to demonstrate
reasonable progress in visibility conditions for each of its Class I
areas.  The State is to determine a uniform rate of progress ("glide
path", "glide slope") toward the goal of natural visibility conditions
in 2064.  Considering various statutory factors, the State is also to
define a reasonable rate of progress, and compare this to the benchmark
uniform rate; if projected progress is less than the uniform rate, then
the State is to explain why.  Procedures for assessing progress are
described in the Regional Haze Rule and EPA guidance documents.

In brief, the guidance defines a metric to quantify visibility
conditions, together with procedures for determining a starting point
and an ending point, between which progress is to be made.  The metric
used is the Haze Index, measured in deciviews, which is designed to
correspond to human perception of visibility changes.  It is defined as
10*ln(bext/10), where bext is the extinction coefficient, the fraction
of light scattered or absorbed out of a sight path due to pollutants per
unit distance (with units of Mm-1, or "inverse megameters"); it is
inversely related to visual range.  A 24-hour average is used, so there
is a deciview value
for each day of the year; the average of the 20% most-impaired days and
of the 20% least-impaired days are to be assessed.  The Regional Haze
Rule goal is to improve visibility on the worst 20% of days, while
having no degradation on the best 20%.
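
For illustration, the Haze Index calculation described above can be sketched in code (a minimal sketch; the function names are ours, not from EPA or WRAP documentation):

```python
import math

def haze_index_dv(bext):
    """Haze Index in deciviews from a 24-hour average extinction (Mm-1).

    Defined as 10 * ln(bext / 10); 10 Mm-1 is roughly clean-air (Rayleigh)
    extinction, so pristine conditions score near 0 deciviews.
    """
    return 10.0 * math.log(bext / 10.0)

def mean_worst_20pct(daily_dv):
    """Average deciviews over the 20% most-impaired days of a year."""
    ranked = sorted(daily_dv, reverse=True)
    n = max(1, round(0.2 * len(ranked)))
    return sum(ranked[:n]) / n
```

Because the scale is logarithmic, any doubling of extinction adds the same amount, 10*ln(2), or about 6.9 deciviews, which is what makes the index roughly uniform with respect to perceived changes in haziness.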

The starting point for progress is current or baseline visibility
conditions, as monitored by the Interagency Monitoring of PROtected
Visual Environments  (IMPROVE) monitoring network.  Monitored pollutant
concentrations are converted to visibility extinction using the IMPROVE
equation, which adds up the contribution of each pollutant to
extinction, while accounting for the effect of relative humidity; this
is then converted to deciviews in the Haze Index.  For each of the years
2000-2004 of the baseline period, the average of the deciviews on the
worst 20% of days is calculated; the five-year average of these defines
the baseline.  This procedure is described in detail in EPA's "Guidance
for Tracking Progress Under the Regional Haze Rule" (hereafter
“GTP”).  The guidance also makes provision for dealing with missing
data, since monitoring instrument maintenance and malfunctions mean that
data is not available for every scheduled measurement.

The end point for progress is the goal of natural visibility conditions
in 2064.  The default approach for determining these is described in
EPA's "Guidance for Estimating Natural Visibility Conditions Under the
Regional Haze Program" (hereafter “GENVC”).  Starting with annual
average natural background pollutant concentrations as estimated by
Trijonis et al. (1990) under NAPAP for the East and West parts of the
country, deciviews are calculated with the IMPROVE equation, using the
monthly relative humidity for each specific Class I area.  These annual
averages were then translated into estimates for the best 20% and worst
20% days needed for the progress assessment.  Extinction was assumed to
have a lognormal frequency distribution; deciviews would then have a
normal distribution, and its 10th and 90th percentiles were used as
estimates of the average of the best 20% and worst 20% of days,
respectively.  The result is a table of  best and worst 20% deciview
values for each Class I area, which appears in Appendix B of the
guidance.  The guidance also allows States to use a refined alternative
to this default approach for estimating natural conditions.

Finally, the uniform rate of progress is the difference between the
baseline and natural conditions, spread over the 60 years between 2004
and 2064: uniform deciviews per year improvement = (current 2004
deciviews - natural 2064 deciviews) / 60.  This rate is the benchmark
against which visibility improvement is to be compared by the State; the
first planning period envisaged by the Regional Haze Rule is through
2018, so this uniform rate is multiplied by 14 to determine the first
benchmark.
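
The glide-path arithmetic above is simple enough to state directly in code (the values in the example are hypothetical, not taken from any actual Class I area):

```python
def uniform_rate(baseline_dv, natural_dv):
    """Uniform rate of progress: deciviews of improvement per year,
    spreading the baseline-to-natural gap over the 60 years 2004-2064."""
    return (baseline_dv - natural_dv) / 60.0

def glidepath_2018(baseline_dv, natural_dv):
    """Glide-path benchmark for 2018, 14 years into the 60-year span."""
    return baseline_dv - 14.0 * uniform_rate(baseline_dv, natural_dv)

# Hypothetical area: baseline 15.0 dv, natural conditions 7.0 dv.
# Uniform rate = 8/60, about 0.13 dv/yr; 2018 benchmark about 13.1 dv.
```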

B.   WRAP Procedures

The WRAP procedure for developing a uniform rate of progress (URP, also
known as "glide path" or "glide slope") differed from the EPA default
procedure in three principal ways: (1) a revised IMPROVE equation, (2)
alternative data substitution methods, and (3) a refined estimate of
natural conditions.  These are discussed in the TSS Roadmap document,
especially its Appendix A (Roadmap pp. 34-71).

The procedures used by the WRAP for determining the uniform rate of
progress or "glide path" are consistent with those in the Regional Haze
Rule and EPA guidance.  They varied in some particulars, in order to
address some issues raised about the EPA default approaches.  These
issues were thought to be especially important for the western states
comprising the WRAP, for which visibility is generally already fairly
good, and for which natural visibility conditions are excellent.

C.   Revised IMPROVE Equation

WRAP used the revised IMPROVE equation for estimates of both baseline
and natural conditions.  The IMPROVE equation is used to convert
measured concentrations into extinction for each pollutant chemical
species, and then total them up, accounting for the effect of relative
humidity, and including the Rayleigh scattering that occurs in pure air.
 The extinction total is then used to calculate deciviews for use in
visibility progress assessments.  As summarized in the Roadmap, in
December 2005 the IMPROVE Steering Committee revised the IMPROVE
equation after a scientific assessment of its implications for regional
haze planning.  In particular, when compared to nephelometer direct
measurements of visibility extinction, the original IMPROVE equation
over-predicts for low extinction conditions and under-predicts for high
extinction.  These biases have direct relevance for estimates for the
best 20% and worst 20% visibility days that are used to assess progress.
 

Old IMPROVE equation:

bext 	= 3 * f(RH) * [sulfate]

 + 3 * f(RH) * [nitrate]

 + 4 * [organic mass]

 + 10 * [elemental carbon]

+ 1 * [fine soil]

+ 0.6 * [coarse mass]

+ 10

Each term in the equation is the extinction due to a particular measured
component; bracketed quantities are concentrations as measured at
IMPROVE monitors.  The organic mass is assumed to be 1.4 times the
organic carbon mass that is measured by IMPROVE monitors.  The 10 is
Rayleigh scattering, which is due to the interaction of light with
molecules of air itself in the absence of pollutants, and is assumed to
be the same for all locations.  The f(RH) is a water growth factor for
sulfate and nitrate, which are hygroscopic (their particles tend to
attract water).  Its value depends on relative humidity, ranging from 1
at low humidity to 18 at 98% humidity.
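
The old equation translates directly into code; the sketch below (our own naming, not from IMPROVE documentation) simply sums the terms listed above:

```python
def bext_old_improve(c, f_rh):
    """Old IMPROVE equation: total extinction in Mm-1.

    `c` maps species names to measured concentrations; `f_rh` is the
    relative-humidity water growth factor applied to sulfate and nitrate.
    """
    return (3.0 * f_rh * c["sulfate"]
            + 3.0 * f_rh * c["nitrate"]
            + 4.0 * c["organic_mass"]        # 1.4 x measured organic carbon
            + 10.0 * c["elemental_carbon"]
            + 1.0 * c["fine_soil"]
            + 0.6 * c["coarse_mass"]
            + 10.0)                          # network-wide Rayleigh term
```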

New IMPROVE equation:

bext 	= 2.2 * fs(RH) * [small sulfate] + 4.8 * fL(RH) * [large sulfate]

+ 2.4 * fs(RH) * [small nitrate] + 5.1 * fL(RH) * [large nitrate]

+ 2.8 * [small organic mass] + 6.1 * [large organic mass]

+ 10 * [elemental carbon]

+ 1 * [fine soil]

+ 1.7 * fss(RH) * [sea salt]

+ 0.6 * [coarse mass]

+ Rayleigh scattering (site-specific)

+ 0.33 * [NO2 (ppb)]

Sulfate is assumed to be all “large sulfate” if total sulfate
exceeds 20 μg/m3; otherwise, the large fraction of the total is assumed
to increase linearly from 0 to 1 as the total ranges from 0 to 20
μg/m3, i.e., large sulfate = ([total sulfate]/20) * [total sulfate].
Similar definitions apply for nitrate and for organic mass.  The
organic mass is assumed to be 1.8 times the organic carbon mass that is
measured by IMPROVE monitors, an increase over the old factor of 1.4.
Sea salt is estimated as 1.8 * [chloride] (or chlorine, if chloride is
not available).  Finally, fS, fL, and fSS are water growth factors for
the small (“S”) and large (“L”) fractions of sulfate and nitrate,
and for sea salt (“SS”).  Their values depend on relative humidity,
ranging from 1 at low humidity to over 5 at 95% humidity.
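
The small/large split can be made concrete with a short sketch (our own naming; the same split applies to sulfate, nitrate, and organic mass):

```python
def large_small_split(total):
    """Split a total concentration into (large, small) fractions per the
    revised IMPROVE equation: all large at or above 20 ug/m3; otherwise
    the large part is (total/20) * total and the remainder is small."""
    if total >= 20.0:
        return total, 0.0
    large = (total / 20.0) * total
    return large, total - large

# At 10 ug/m3 total sulfate, half is treated as large and half as small.
```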

The new equation has five changes: 1) greater completeness through the
inclusion of sea salt, which can be important for coastal sites; 2) an
increased organic carbon mass estimate, based on more recent data for
remote areas; 3) Rayleigh scattering using site-specific elevation and
temperature, a refinement over the older network-wide constant; 4)
separate estimates, for small and large particles, of visibility impacts
and humidity-dependent particle size growth rates, which could affect
estimates at the low and high ends; and 5) greater completeness through
the inclusion of NO2 (Pitchford, 2006).

The new equation shows broader scatter overall, but less bias in
matching visibility measurements under high and low visibility
conditions.  That is, though it has a somewhat worse fit considering all
the data, it has a better fit under visibility conditions most relevant
to regional haze planning, the best and worst 20% of days.  The looser
overall fit can cause a slightly different set of days to be the ones
chosen as the 20% worst, but the chemical species composition for such
days is little changed (IMPROVE technical subcommittee for algorithm
review, 2001, pp. 11-12), and so this makes little difference for
assessing the contribution of emission sources to current conditions,
and for projecting the effect of emission controls.  The split between
small and large particles was the main factor in reducing the biases.

The organic carbon (OC) measured by the IMPROVE network does not include
all organic matter (OM); based on 1970's urban data, a scaling factor of
1.4 is embedded in the old equation to account for the full mass.  Based
on recent data more relevant to relatively remote Class I areas, the
revised IMPROVE equation embeds an OM/OC factor of 1.8.  In practice,
for the worst days the biggest effect of switching to the revised
IMPROVE equation is this increased organic carbon mass, since the worst
days are dominated by organic carbon from fires, rather than the
sulfates and nitrates that come more from anthropogenic sources.

Review Comment:  The revised IMPROVE equation has less bias, is more
refined, accounts for more pollutants, incorporates more recent data,
and is based on considerations of relevance for the calculations needed
for assessing progress under the RHR.  EPA believes it is appropriate
for the WRAP States to use the revised IMPROVE equation.

D.   Baseline Visibility Conditions

EPA's "Guidance for Tracking Progress Under the Regional Haze Rule"
("GTP")  describes a step-by-step process for calculating the visibility
metric to be used in tracking progress, including that for the baseline
period 2000-2004 (GTP, sec. 2).  The steps involve assembly of daily
species concentration data from the IMPROVE network, inclusion of
substitutions for missing data; assessment of site data completeness;
calculation of extinction via the IMPROVE equation and then the deciview
Haze Index; calculation of average deciviews for the 20% best and 20%
worst days; averaging these over the 5-year period.  These steps are
mostly straightforward, and the WRAP stated that it followed them,
though with little documentation of that.  The main documentation
focused on the differences between EPA guidance and WRAP procedures,
which involved substitution for missing data.

For data substitution, the EPA guidance describes two procedures.  The
first is the use of quarterly median concentrations.  This is the median
concentration from the quarter the data value is missing from, averaged
with similar medians from the preceding four years, using only quarters
having at least 50% of the days available and no more than 10
consecutive missing days  (GTP, "Step 3: Determine Quarterly Median
Concentrations for Missing Variables", p. 2-5).  The second substitution
involves using quarterly averages, as long as the substitution changes
extinction values by less than 10% in 90% of the data (GTP, "Step 5 -
Evaluate Feasibility of Substituting Average Values", pp. 2-7 - 2-8).

Completeness requirements are also given in the EPA guidance: to be
complete and included in the progress calculation, a year must have data
for 50% of the sampling days in every quarter, 75% of the days for the
year overall, and no more than 10 consecutive missing sampling days; the
overall 5-year period must have at least 3 complete years of data (GTP,
"Step 7 - Check Data Completeness", pp. 2-8 - 2-9).
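
The completeness rules above reduce to a simple check, sketched here (our own naming; the fractions are the share of scheduled sampling days with valid data):

```python
def year_is_complete(quarterly_fracs, annual_frac, max_consec_missing):
    """EPA completeness test for one year of IMPROVE data (GTP Step 7):
    at least 50% of sampling days in every quarter, at least 75% for the
    year overall, and no gap longer than 10 consecutive sampling days."""
    return (all(q >= 0.50 for q in quarterly_fracs)
            and annual_frac >= 0.75
            and max_consec_missing <= 10)

def period_is_complete(years_complete):
    """The 5-year baseline period needs at least 3 complete years."""
    return sum(years_complete) >= 3
```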

In order to meet the completeness requirement for 13 IMPROVE sites that
either did not have 3 complete years, or that were missing the year 2002
(needed for applying modeled relative response factors or RRFs to
predict future conditions), WRAP used two additional procedures, carbon
substitution and donor sites (WRAP, 2007).  Under carbon substitution,
WRAP used hydrogen as a surrogate for missing carbon, based on the
historical relationship between carbon and hydrogen.  The relationship
was a quarterly regression between organic carbon (OC) and the fraction
of hydrogen assumed to be associated with organic carbon (measured
hydrogen, less 24% of the measured sulfur to reflect hydrogen in
inorganic compounds such as ammonium sulfate).  In turn, elemental
carbon (EC) was estimated from a quarterly regression with OC.

A second substitution method was the use of donor sites.  For missing
species, correlations between the missing-data site and nearby candidate
sites were calculated, and donor sites were then selected in consultation
with the State.  Despite these two additional substitution methods, data
did not meet the completeness requirements for four sites in 2000, and
one site in 2001.  However, at all sites data was considered complete
for 2002, and also met the overall completeness requirements for the
five-year period 2000-2004.

Review Comment:  The WRAP mainly followed EPA guidance for estimating
baseline visibility conditions, but did use two additional data
substitution methods.  EPA believes these are reasonable extensions to
the EPA approach, appropriate for the WRAP States to use, especially in
view of the importance of the year 2002 for projecting future visibility
conditions via modeling.

E.   Natural Visibility Conditions

EPA guidance set out a default procedure for estimating natural
conditions, but also describes circumstances when States might want to
use a more refined approach, such as to reduce uncertainty when baseline
visibility is already near natural conditions, or when there is marked
seasonality; these might be accomplished via alternative estimates of
natural concentrations, or use of temporally varying estimates (GENVC
sec. 3.1 and 3.2).

A Natural Haze Levels II Committee ("NH2C") was established with
participation of several Regional Planning Organizations (RPOs) that are
responsible for regional haze planning.  This committee recommended a
refined approach for estimating natural visibility conditions, which was
used by the WRAP (Inter-RPO Natural Haze Levels II Committee, 2006;
hereafter “NH2C report”).  The natural conditions approach used the
revised IMPROVE equation, so that progress between baseline conditions
and natural conditions can be calculated on a consistent basis.  In
brief, the approach estimated the best and worst 20% day visibility
using the distribution of measured actual concentration data to reflect
temporal variation; however these data were scaled so that their annual
average matched the Trijonis values in the EPA guidance.

As discussed by the NH2C (NH2C report, notes to slide 5), the EPA
default approach assumed that extinction is lognormally distributed, as
is the case for the distributions of concentration for many pollutants. 
Since the deciview Haze Index is based on the logarithm of extinction,
this implies that deciviews are normally distributed.  Also, empirically
this assumption looks reasonable (GENVC sec. 2.6).  However, since
Rayleigh scattering is essentially constant for a given monitoring site,
and is thus a constant term in the IMPROVE equation for extinction, its
logarithm cannot be normally distributed.  This is important for natural
conditions, when the Rayleigh term will be a substantial fraction of the
total extinction.

This constant Rayleigh term was also included in calculations of mean
and standard deviation of deciviews for each site.  On scatter plots of
standard deviation against mean using data from multiple sites, this led
to an apparent decrease of standard deviation as mean decreases. 
Extrapolating down to a mean equal to the annual deciviews derived from
the Trijonis/NAPAP natural concentrations led to a corresponding natural
standard deviation, which was 2 deciviews for Western sites.  However,
for the least hazy sites, this decrease with mean was mainly due to the
Rayleigh term.  So, including the latter distorted the estimate of
standard deviation, which in turn distorted the estimates of the 10th
and 90th percentiles (NH2C report, notes to slide 5).

A further assumption in the EPA approach was that the average of the
normally distributed deciviews of the 20% best or worst days can be
respectively represented by the 10th or 90th percentile.  (These are
1.28 standard deviations below or above the mean, respectively.) 
Actually, the 8th and 92nd percentiles are better estimators; but since
the distribution is not normal, this criticism is less relevant (NH2C
report, notes to slide 5).
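
The committee's point can be checked numerically for a normal distribution: the conditional mean of the worst 20% of days sits about 1.40 standard deviations above the overall mean, closer to the 92nd percentile than to the 90th. A sketch using the Python standard library:

```python
from statistics import NormalDist

nd = NormalDist()  # standard normal: mean 0, standard deviation 1

# Conditional mean above the 80th percentile (a standard truncated-normal
# result): E[Z | Z > z80] = pdf(z80) / 0.20.
z80 = nd.inv_cdf(0.80)
worst20_mean = nd.pdf(z80) / 0.20

print(round(worst20_mean, 3))        # ~1.400
print(round(nd.inv_cdf(0.90), 3))    # 90th percentile: ~1.282
print(round(nd.inv_cdf(0.92), 3))    # 92nd percentile: ~1.405
```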

To address these issues, the NH2C developed a refined approach (NH2C
report, notes to slide 6; Copeland et al., 2008) that did not depend on
any statistical assumptions.  Instead, actual monitored concentrations
were used as the basis for the best 20% and worst 20% deciview days. 
For each monitoring site and each chemical species, actual IMPROVE
concentrations were scaled so that the annual average was equal to
Trijonis/NAPAP natural concentrations of the EPA default approach. 
These "Trijonis-adjusted" concentrations reflect the observed relative
temporal variation of each IMPROVE site, yet are consistent with the
natural concentrations in the EPA guidance.  For each day, the adjusted
concentrations were used in the revised IMPROVE equation to calculate
extinction, and converted to deciviews.  The respective averages of the
20% best and the 20% worst days' deciviews were then calculated directly,
without assumptions on the statistical distributions as were needed in
the EPA approach to translate extrapolated standard deviations into
percentiles.  Sites with annual average concentrations already below the
Trijonis levels were not scaled upward, as that would imply current
visibility would worsen in the future despite emission reductions.  This
procedure is consistent with the refined "sample-period-by-sample-period
basis" described in EPA guidance (GENVC sec. 3.6).
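
The scaling step can be sketched as follows (our own naming; the one-sided rule in the first branch reflects the decision not to scale clean sites upward):

```python
def trijonis_adjust(daily_conc, natural_annual_avg):
    """Scale one site's daily species concentrations so their annual
    average equals the Trijonis/NAPAP natural level, preserving the
    observed day-to-day variation.  Sites whose averages are already at
    or below the natural level are left unscaled."""
    observed_avg = sum(daily_conc) / len(daily_conc)
    if observed_avg <= natural_annual_avg:
        return list(daily_conc)  # never scale upward
    scale = natural_annual_avg / observed_avg
    return [c * scale for c in daily_conc]
```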

The procedure used has several acknowledged limitations.  One it shares
with the EPA approach: each chemical species can have one of only two
possible background concentrations, one for the East and one for the
West.  Future efforts may provide for a larger number of geographic
zones with differing concentrations.

Two other limitations were discussed by the NH2C along with alternative
procedures to address them.  First, the split between small and large
particles in the revised IMPROVE equation depends on a relationship to
concentration that is based on today's conditions.  With future emission
reductions, concentrations will be lower, and the relationship may
change.  The NH2C decided to avoid the significant effort and time
needed to modify and validate changes to the IMPROVE equation that this
would entail.  In addition, future lower concentrations will likely
involve relatively more particles in the smaller sizes already
accommodated in the revised IMPROVE equation (NH2C report, notes to
slide 6).  The performance of the IMPROVE equation will need to be
assessed over time as emission reductions occur.

A second potential limitation is that the same approach is used for both
natural- and anthropogenic-dominated species components; EPA guidance
mentions the possibility of treating these separately (GENVC sec. 3.4). 
For the most part the variations considered by the NH2C kept some or all
assumed natural species components constant, e.g. carbon assumed to come
from natural fires.  However, these alternative assumptions were
rejected as not valid for all sites, and/or requiring more site-specific
analysis than time permitted or than was warranted given the overall
uncertainties (NH2C report, notes to slide 6).  This, too, is a
potential area for future efforts.

Review Comment:  EPA guidance allows for a more refined approach to
estimating natural visibility conditions (GENVC 3.1, 3.3) as was used
here.  This is especially appropriate for areas in the West, some of
which already have fairly good visibility.  The method used by the WRAP
matches EPA's general approach, but is more refined and does not rely on
some of the statistical assumptions used in the default EPA procedure.

Review Comment:  The approach used to estimate natural visibility seems
superior to the EPA default approach in a number of ways, and EPA
believes it is appropriate for the WRAP States to use.

WRAP Emissions Inventories

The WRAP emissions inventory work was divided into three broad
categories: point and area source projections developed by ERG, a fire
emissions inventory developed by Air Sciences, and mobile source
emissions inventories developed by ENVIRON. This chapter reviews the
methodologies behind the WRAP analysis of each of these broad source
categories to determine whether they are consistent with applicable EPA
guidance and are therefore likely to be accurate, current, and complete.

A.  Point and Area Sources Projections

The WRAP technical work for this category may be found in the final
Technical Memorandum developed by ERG, titled “WRAP PRP18b Emissions
Inventory – Revised Point and Area Sources Projections,” revised
October 16, 2009.  The purpose of this evaluation is to determine if the
WRAP Inventory meets the CAA requirement for a comprehensive, accurate,
and current inventory and can be used by States and Tribes for inclusion
in their Regional Haze State Implementation Plans.  The WRAP Inventory
covers the most significant sources of stationary point and area source
visibility-impairing pollution within each of the following States:
Alaska, Arizona, California, Colorado, Idaho, Montana, Nevada, New
Mexico, North Dakota, Oregon, South Dakota, Utah, Washington, and
Wyoming, plus tribal sources within these States.

Scope of the Point and Area Source Projections

The Technical Memorandum identifies all stationary point and area
sources located within the WRAP states of Alaska, Arizona, California,
Colorado, Idaho, Montana, Nevada, New Mexico, North Dakota, Oregon,
South Dakota, Utah, Washington, and Wyoming, as well as tribal sources
within those states. Certain area sources (i.e., fugitive dust, fires,
area ammonia sources, and on-road and off-road mobile sources) are
excluded because they are addressed by other contractor teams. Updates to
emissions were developed by ENVIRON under another project, and provided
to ERG for inclusion with the other point and area sources in the WRAP
PRP18 emissions inventory. The California emissions were not modified in
any way, and were simply retained as-is from the previous WRAP 2018 base
case version 1 inventory.

In some limited cases, changes were also made to the 2002 baseline
inventory when these changes were straightforward and did not affect
emission values (e.g., correction of missing or erroneous source
classification codes [SCCs], changing facility names to be consistent
across records, etc.). In only a few instances were records added to the
2002 inventory, and subsequently, were projected to 2018 and added to
the WRAP PRP18 inventory. Additional 2002 changes were identified, but
were deferred until the next update of these 2002 and 2018 WRAP
inventories.

The following categories were not captured by the Technical Memorandum
developed by ERG: fugitive dust; fires (natural, prescribed, and managed
wildfire); area ammonia sources; and on-road and off-road mobile sources.

Point and Area Source Methodology

ERG updated the existing WRAP 2018 version 1 database to be consistent
with the 2018b “base case” emissions inventory used by the WRAP
Regional Modeling Center (RMC) for the 2018 base case modeling analysis
completed in June 2006. These changes are itemized in Appendix A of the
RMC 2006 final report. After discussion with the RMC and comparison to
the 2018b base case emissions inventory, a few other changes were
necessary to make the 2018 version 1 inventory consistent with the 2018b
base case inventory, including the following: 

Added NOx and CO emissions for four new (post-2002) compressor stations
in ND;

Added SO2 emissions for four facilities in WY; and

Added NOx, VOC, and CO emissions for seven compressor stations in WY.

Using the updated 2018b base case emissions database, spreadsheets were
developed showing the top 80% of point source and area source emissions,
by SCC, for each state. In addition, a spreadsheet entitled
“Proposed Changes & Feedback Request” itemized other potential
changes that might be needed to update each jurisdiction’s 2018b base
case inventory for PRP18 purposes. The types of changes
proposed and feedback requested from each state and local (S/L) agency
included the following:

Confirmation of changes in SO2 emissions from major sources located in
§309 states (i.e., AZ, NM, UT, and WY);

Request for BART emission limits;

Request for information on any new post-2004 “on-the-books” controls
or permit limits;

Request to provide missing SCCs (an itemized list of records with
missing SCCs was provided); and

Confirmation that all dual-fueled sources were accounted for in the
emissions inventory.

These spreadsheets were e-mailed to each S/L agency for review prior to
holding conference calls to explain the spreadsheets and receive initial
comments from the agencies.

The basis of the WRAP 2002 baseline emissions inventory, and thus of the
WRAP 2018 version 1 projections, was the data submitted by state and
local (S/L) agencies and tribes for the U.S. EPA National Emissions
Inventory (NEI). During development of the NEI, ERG (as a U.S. EPA
contractor) was requested to use NOx and SO2 emissions data from the
U.S. EPA Clean Air Markets Division (CAMD) database for subject EGUs,
instead of the emissions submitted by the S/L agencies for those EGUs
reporting to CAMD. When these NOx and SO2 emissions data were placed
into the NEI, they were assigned a unique facility ID beginning with
“EGU” in order to distinguish those records from the records submitted
by the states. Generally, the NOx and SO2 emissions values from CAMD
varied only slightly from those submitted by the S/L agency (ERG
observed about a 1-2% difference in emissions between the CAMD and S/L
agency values). However, the use of the CAMD “EGU” numbering scheme
resulted in two different facility IDs for the same EGU in the resulting
dataset. This was eventually recognized as a problem when these data
were used in the WRAP 2002 and 2018 pivot tables and in the WRAP
Emissions Data Management System (EDMS). These issues were corrected in
the final memorandum. 
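The duplicate-ID problem described above can be illustrated with a short sketch; the record layout, facility IDs, and tonnages are hypothetical:

```python
# Sketch: flag EGUs that appear under two facility IDs -- one
# state-submitted and one CAMD-derived ID prefixed "EGU". The record
# layout and values here are hypothetical, not NEI data.

records = [
    {"facility_id": "0040011", "plant": "Example Station", "nox_tpy": 1200.0},
    {"facility_id": "EGU0040011", "plant": "Example Station", "nox_tpy": 1215.0},
    {"facility_id": "0050022", "plant": "Other Plant", "nox_tpy": 300.0},
]

def find_duplicates(recs):
    """Pair CAMD 'EGU'-prefixed records with their state-submitted twins."""
    by_id = {r["facility_id"]: r for r in recs}
    pairs = []
    for fid, rec in by_id.items():
        if fid.startswith("EGU") and fid[3:] in by_id:
            state = by_id[fid[3:]]
            diff = abs(rec["nox_tpy"] - state["nox_tpy"]) / state["nox_tpy"]
            pairs.append((fid[3:], fid, round(diff, 4)))
    return pairs

print(find_duplicates(records))  # [('0040011', 'EGU0040011', 0.0125)]
```

The relative difference confirms the two records describe the same unit (here 1.25%, consistent with the 1-2% range noted above), so one of the pair can be dropped or merged.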

Review comments: EPA finds that the methodologies used to develop the
stationary point and area source inventory are consistent with EPA’s
guidance document, “Emissions Inventory Guidance for Implementation of
Ozone and Particulate Matter National Ambient Air Quality Standards
(NAAQS) and Regional Haze Regulations,” dated August 2005.

B.  Fire Emission Inventories

The technical work for this category may be found in the document
Development of 2000-04 Baseline Period and 2018 Projection Year Emission
Inventories, FINAL, dated May 2007, by Air Sciences Inc.  

Scope and Methodology of the Fire Emission Inventories

The Fire Emissions Joint Forum (FEJF) of the Western Regional Air
Partnership (WRAP) completed an air emission inventory for fire in 14 of
the 15 states in the WRAP region. (Hawaii is not in the inventory.) The
inventory includes emission estimates and activity data for wildfire,
prescribed fire, wildland fire use (WFU), agricultural burning, and
non-federal prescribed rangeland (NF rangeland) burning for the calendar
year 2002. The FEJF collected fire activity data from federal, tribal,
and state agencies; arrived at data quality objectives; culled data from
the database that did not meet the data quality objectives; allocated
summary data to realistic fire events where necessary; devised emission
calculation routines; estimated emissions for all fire events; and
published an emission inventory database and dispersion model-ready
digital files.

Many of the techniques utilized for developing the fire emission
inventory are based on the WRAP technical report entitled “1996 Fire
Emission Inventory.” The FEJF produced an event-based emission
inventory, placing all fire emissions at coordinate locations on
specific days. Federal and state records of individual fire events were
collected. For agricultural and nonfederal rangeland burning, county
level data on a monthly basis was collected and allocated to the
coordinate level on a daily basis for jurisdictions where event level
data was not available. In general, burning activity data was not
available directly from tribal agencies. Federal land manager
(especially Department of the Interior – Bureau of Indian Affairs) data
included in the federal databases, as well as state data, may include
burning in Indian Country.

Quality Control

Packets were sent to states, tribes, and Federal Land Management
agencies (FLMs) to solicit corrections to the 2002 activity data for
wildland burning and agricultural burning.

Twelve pollutants are included in the fire inventory. They are:  total
suspended particulate matter (TSP), particulate matter less than 10
microns in diameter (PM10), particulate matter less than 2.5 microns in
diameter (PM2.5), elemental carbon (EC), organic carbon (OC),
non-methane volatile organic compounds (VOC), methane (CH4), ammonia
(NH3), oxides of nitrogen (NOx), carbon monoxide (CO), sulfur dioxide
(SO2), and coarse particulate matter defined as the difference between
PM10 and PM2.5 (PMC). Activity records were used and checked for
completeness for fire size, fuel loading, date, and location.  Activity
records deemed incomplete and therefore not useable in an emission
calculation were culled from the database (and retained in a companion
database for documentation purposes).  Fuel loading and emission factor
tables along with diurnal consumption and plume profiles were developed
from the literature, expert and professional judgment, and stakeholder
input.  Spreadsheet and geographic information system software was used
to store the data, perform data augmentation and quality control
functions, calculate emissions, and produce the strictly formatted
National Emission Inventory Format (NIF) 3.0 and SMOKE/IDA text export
files of the inventories.
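A fire emission calculation of the general form described above (burned area, fuel loading, fraction consumed, emission factor) can be sketched as follows; all numeric values are illustrative, not the FEJF's actual fuel-loading or emission-factor tables:

```python
# Sketch of an event-level fire emission calculation of the general form
# used for such inventories: emissions = burned area x fuel loading x
# fraction consumed x emission factor. All numbers are illustrative.

def fire_emissions_tons(area_acres, fuel_load_tons_per_acre,
                        consumption_fraction, ef_lb_per_ton):
    fuel_consumed = area_acres * fuel_load_tons_per_acre * consumption_fraction
    return fuel_consumed * ef_lb_per_ton / 2000.0  # lb -> tons

# Hypothetical 100-acre prescribed burn, 15 tons/acre fuel loading,
# 50% consumed, PM2.5 emission factor of 20 lb per ton of fuel consumed.
print(fire_emissions_tons(100, 15.0, 0.5, 20.0))  # 7.5 tons PM2.5
```

Each factor in this chain (fuel loading by vegetation type, consumption fraction, pollutant-specific emission factor) corresponds to one of the lookup tables the FEJF developed from literature and expert judgment.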

Limitations of the Fire Emission Inventory

Limitations of the fire emission inventory include the omission of fire
events (e.g., tribal burning, in general, not accounted for in
non-tribal data sets and individual fire activity records deemed to be
incomplete) and variable data quality due to the variety of data sources
used. 

Estimating emissions from fire events involves considerable scientific
uncertainty. Historic data are of varying quality and for some areas
unavailable. Activity records were not ground-truthed and, other than
the quality control steps described in the Air Sciences report, were generally
accepted “as is.” Parameters such as the vegetation type of a burn,
the vegetation-specific fuel loading, pollutant specific emission
factors, and combustion efficiencies, to name a few, all have
uncertainties associated with them which may influence emission
estimates and regional modeling results.  Fire has been traditionally
treated as an “area” source, but in the WRAP’s regional dispersion
model, it is treated as a point source.  Therefore, fire emissions were
placed at a latitude/longitude coordinate location for each day. From
the daily and spatially resolved emission inventory, hourly consumption
and plume rise were estimated.  Finally, the inventory states that
professional judgment was used to select the best available or most
representative parameters or methods to estimate emissions. However,
other parameters and methods could have been chosen and could also be
considered “reasonable” for estimating emissions from fire.

Review comments: EPA finds that the methodologies used to develop the
fire sources inventory are consistent with EPA’s guidance document,
“Emissions Inventory Guidance for Implementation of Ozone and
Particulate Matter National Ambient Air Quality Standards (NAAQS) and
Regional Haze Regulations,” dated August 2005.  

C.  Mobile Source Inventories

The technical work for these source categories may be found in the
document titled Final Report, WRAP Mobile Source Emission Inventories
Update, dated May 2006, prepared by ENVIRON.  The purpose of this
evaluation is to determine whether the methodologies used to develop the
mobile source emission inventories are adequate for a Regional Haze SIP.

Scope of the Mobile Source Inventories

ENVIRON has produced methodologies for the following mobile sources: 
On-Road, Off-Road, Locomotive, Aircraft, Commercial Marine, and Road
Dust.  

The scopes of these inventories are as follows:

Geographic domain:  Emissions were estimated by county for all counties
in the following 14 states:  Alaska, Arizona, California, Colorado,
Idaho, Montana, Nevada, New Mexico, North Dakota, Oregon, South Dakota,
Utah, Washington, and Wyoming.  Hawaii was not included.

Temporal resolution:  Emissions were estimated for an average day in
each of the four seasons, and for an average annual weekday.  Seasons
are defined as three-month periods: spring is March through May; summer
is June through August; fall is September through November; and winter
is December through February.  Emissions were estimated for the 2002
base year and for three future years – 2008, 2013, and 2018.

Pollutants:  Emissions were estimated for primary Particulate Matter
(PM10 and PM2.5), Nitrogen Oxides (NOx), Sulfur Dioxide (SO2), Volatile
Organic Compounds (VOCs), Carbon Monoxide (CO), Ammonia (NH3),
Elemental and Organic Carbon (EC/OC), and Sulfate (SO4).

Sources:  For all pollutants, emissions were estimated separately by
vehicle class for on-road sources and by equipment type/engine type for
off-road sources.  Emissions were summarized for gasoline and
diesel-fueled engines.  

Mobile Source Inventory Methodologies 

As with most emissions sources, on-road and off-road mobile source
emissions are estimated as the products of emission factors and activity
estimates.  With the exception of California, all on-road mobile sources
emission factors were derived from EPA’s MOBILE6 model.  Activity for
on-road mobile sources is vehicle miles traveled (VMT).  State
and local agencies were provided default modeling inputs and VMT levels
for base and future years for review and update; all states and several
local agencies provided updates.  The California Air Resources Board (CARB)
provided on-road emissions estimates by county and vehicle class
directly; these were based on CARB’s in-house version of their EMFAC
model for the State of California.
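The basic product of emission factors and activity can be sketched as follows; the vehicle classes are MOBILE6-style labels, but the factors and VMT are placeholders, not model outputs:

```python
# Sketch of the basic on-road estimate: emissions = emission factor x VMT,
# summed by vehicle class. Emission factors (g/mi) are placeholders, not
# MOBILE6 outputs; VMT values are invented.

GRAMS_PER_TON = 907_184.74  # grams per short ton

def onroad_tons_per_day(vmt_by_class, ef_g_per_mile):
    return sum(vmt_by_class[vc] * ef_g_per_mile[vc]
               for vc in vmt_by_class) / GRAMS_PER_TON

vmt = {"LDGV": 2_000_000, "HDDV": 150_000}         # daily VMT by class
ef_nox = {"LDGV": 1.0, "HDDV": 10.0}               # hypothetical g/mi
print(round(onroad_tons_per_day(vmt, ef_nox), 2))  # 3.86 tons NOx/day
```

Keeping the calculation by vehicle class is what allows the updated state- and county-level VMT described below to be folded in without re-running the emission factor model.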

For all states except California, EPA’s draft NONROAD2004 model was
used to estimate so-called traditional off-road sources (all off-road
sources except aircraft, commercial marine, and locomotives).  The
NONROAD model includes estimates of emission factors, activity levels,
and growth factors for all traditional off-road sources.  The default
activity levels were provided to state agencies for input and update;
however, no state provided updated off-road activity data.  Emissions
estimation methods for aircraft, commercial marine, and locomotives were
similar to approaches EPA has recently used in developing national
emission inventories.  For California, CARB provided off-road emissions
estimates by source category and county directly.

The following methodologies were used to create the mobile source inventories:

On-Road:

The 2002 base year and 2008/2013/2018 future year on-road emissions were
estimated using EPA’s MOBILE6 model.  SMOKE-ready files were also
generated for input into the air quality model.  To estimate on-road
emissions, defaults were established for all mobile source model input
parameters and for VMT estimates.  These default inputs were sent to
state and local air quality planning agencies in the WRAP states (except
for California), which were requested to provide the most up-to-date
modeling and VMT inputs.  

Updated data and model inputs provided by state and local contacts are
shown below.  All state agencies responded to the survey requests for
base and future year modeling inputs.  In addition, survey responses
were received from the local agencies for the following counties:

Arizona – Pima and Maricopa 

Colorado – Denver nonattainment area counties

Nevada – Clark and Washoe

New Mexico – Bernalillo 

For California, the Air Resources Board provided emissions estimates
directly for all years, estimated using an internal version of the
EMFAC2002 model, with updated activity data for some areas.  

The default VMT data used as the starting point in this analysis were
the annual VMT estimates that EPA had compiled for the 2002 National
Emission Inventory (NEI2002), in the county database for the National
Mobile Inventory Model (NMIM).   EPA had derived the VMT estimates from
Highway Performance Monitoring System (HPMS) data from the Federal
Highway Administration (FHWA).  State and local data provided to EPA as
part of the June 2004 Consolidated Emissions Reporting Rule (CERR)
submittals were incorporated into the default VMT estimates.  These
default VMT estimates, by county, vehicle type, and roadway type, were
posted to the WRAP Mobile Sources Update Project web page, and a survey
form was sent to state and local air quality planners asking them to
review the estimates and provide updates where available.  By default,
the annual VMT was assumed to be allocated evenly across the seasons;
two states (WA and UT) had provided seasonal VMT allocation data that
were included in the posted default modeling input files.  
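The default even allocation of annual VMT to seasons can be sketched as follows (season lengths as defined in the text, assuming a non-leap year):

```python
# Sketch: the default seasonal allocation spreads annual VMT evenly
# across the four seasons, then converts each quarter to an average
# daily value. Season day counts follow the three-month definitions
# in the text for a non-leap year.

SEASON_DAYS = {"spring": 92, "summer": 92, "fall": 91, "winter": 90}

def default_daily_vmt(annual_vmt):
    """Even allocation: each season gets one quarter of annual VMT."""
    return {s: (annual_vmt / 4.0) / days for s, days in SEASON_DAYS.items()}

daily = default_daily_vmt(3_650_000)  # hypothetical annual VMT
print(round(daily["winter"]))  # 10139
```

States that supplied their own seasonal allocation fractions (WA and UT) would replace the uniform one-quarter split with their reported shares.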

Most of the state and local agencies that responded to the survey
provided updated VMT estimates.  The areas for which updates were not
provided and the state or local agency accepted the NMIM defaults were:
Arizona - all counties except Maricopa; Denver area; North Dakota; South
Dakota; and Washington.  In most of these cases, the VMT estimates had
been previously provided to EPA under the CERR and had already been
incorporated into the NMIM database.

Off Road:

This section describes the methods for estimating 2002 base year and
2008/2013/2018 future year emissions for so-called traditional off-road
equipment.  Equipment types included here are in the following
categories:

airport ground support, such as terminal tractors;

agricultural equipment, such as tractors, combines, and balers;

construction equipment, such as graders and back hoes;

industrial and commercial equipment, such as fork lifts and sweepers;

recreational vehicles, such as all-terrain vehicles and off-road
motorcycles;

residential and commercial lawn and garden equipment, such as leaf and
snow blowers; 

logging equipment, such as shredders and large chain saws;

recreational marine vessels, such as power boats;

underground mining equipment; and

oil field equipment.

Seasonal average daily emissions were estimated for the thirteen
non-California western states using EPA’s NONROAD model. To estimate
emissions from these sources, defaults were established for all NONROAD
model input parameters.  These default inputs were sent to state and
local air quality planning agencies in the WRAP states (except for
California), and they were requested to provide the most up-to-date
modeling inputs.  For California, the Air Resources Board (CARB)
provided all off-road emissions estimates from their own modeling
system, which includes similar equipment types.  

Locomotive:

County-level locomotive emissions were estimated as the product of
locomotive fuel consumption and average locomotive emission factors. 
Previous WRAP locomotive emissions estimates (Pollack et al., 2004)
allocated national fuel consumption estimates to counties using
emissions data from the National Emissions Inventory.  For this project,
a detailed revision to that allocation method was developed for
allocating 2002 national fuel consumption estimates.  Emission factors
were also revised to combine line-haul and switching engines, because
only national total fuel consumption was available.  Additional emission
factors for ammonia and fuel sulfur provided by EPA were also
incorporated and form the basis from which sulfur dioxide was estimated.
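The fuel-based locomotive estimate can be sketched as follows; the NOx factor, fuel density, and sulfur content shown are illustrative placeholders, not the values used in the inventory:

```python
# Sketch of the fuel-based locomotive estimate: county emissions =
# allocated fuel consumption x a fleet-average emission factor; SO2
# follows from fuel sulfur content by mass balance. All numeric
# factors below are illustrative.

def locomotive_nox_tons(fuel_gal, ef_g_per_gal):
    return fuel_gal * ef_g_per_gal / 907_184.74  # grams -> short tons

def locomotive_so2_tons(fuel_gal, fuel_density_lb_per_gal, sulfur_wt_frac):
    # Assume all fuel sulfur is emitted as SO2 (MW ratio SO2/S = 64/32).
    sulfur_lb = fuel_gal * fuel_density_lb_per_gal * sulfur_wt_frac
    return sulfur_lb * (64.0 / 32.0) / 2000.0

# Hypothetical county: 1,000,000 gallons of diesel allocated to it.
print(round(locomotive_nox_tons(1_000_000, 270.0), 1))       # NOx, tons
print(round(locomotive_so2_tons(1_000_000, 7.1, 0.003), 1))  # SO2, tons
```

The sulfur mass balance is why a fuel sulfur content from EPA, together with allocated fuel consumption, is sufficient to form the basis of the SO2 estimate without a separate SO2 emission factor.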

Aircraft:

County-level aircraft emissions for 2002 for the WRAP states were
obtained from work performed for EPA’s 2002 National Emissions
Inventory (NEI2002).  Activity data for aircraft emissions are landing
and takeoff cycles (LTOs), and emission factors are primarily from the Federal
Aviation Administration (FAA) Emissions and Dispersion Modeling System
(EDMS).   The 2002 emissions were projected to future years using
forecast LTOs available from the FAA.  More detailed estimates were
provided for some states.

The FAA EDMS model combines specified aircraft and activity levels with
default emissions factors in order to estimate annual inventories for a
specific airport.  Aircraft activity levels in EDMS are expressed in
terms of LTOs, which consist of the four aircraft operating modes:  taxi
and queue, take-off, climb-out, and landing.  Default values for the
amount of time a specific aircraft spends in each mode, or the
time-in-modes (TIMs), are coded into EDMS.
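An LTO-based estimate using the four operating modes can be sketched as follows; the times-in-mode, fuel flows, and emission indices are placeholders, not EDMS defaults:

```python
# Sketch of an LTO-based aircraft estimate: per-LTO emissions are the
# sum over the four operating modes of time-in-mode x fuel flow x
# emission index, scaled by annual LTOs. The four modes match the text;
# the numeric values are placeholders, not EDMS defaults.

MODES = {  # (time-in-mode s, fuel flow kg/s, NOx emission index g/kg)
    "taxi_and_queue": (1560.0, 0.10, 4.0),
    "takeoff":        (42.0,   1.00, 30.0),
    "climb_out":      (132.0,  0.80, 25.0),
    "landing":        (240.0,  0.30, 10.0),
}

def nox_tons_per_year(annual_ltos, modes=MODES, engines=2):
    grams_per_lto = sum(tim * flow * ei for tim, flow, ei in modes.values()) * engines
    return annual_ltos * grams_per_lto / 907_184.74  # grams -> short tons

print(round(nox_tons_per_year(10_000), 1))  # 115.6 tons NOx/yr (hypothetical)
```

Because activity is expressed in LTOs, projecting to future years reduces to substituting the FAA's forecast LTO counts, as described above.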

Aircraft emissions were estimated for four aircraft categories:

Air carriers, which are larger turbine-powered commercial aircraft with
at least 60 seats or 18,000 lbs payload capacity;

Air taxis, which are commercial turbine or piston-powered aircraft with
less than 60 seats or 18,000 lbs payload capacity; 

General aviation aircraft, which are small piston-powered,
non-commercial aircraft; and 

Military aircraft.   

Commercial Marine:

Commercial marine emissions comprise a wide variety of vessel types and
uses.  Table 3.1 describes the different types of commercial marine
vessel activity.  In the previous WRAP mobile sources emission inventory
work, emissions were estimated for most types of vessels (Pollack et
al., 2004).  

Table 3.1.  Commercial Vessel Types.

Source Definition	Purpose	Geographic Area

Deep draft	Ocean-going large vessels	Ocean Traffic; Near port

Tow or Push Boats	Barge Freight	River Traffic; Ocean Traffic

Tugs	Vessel assist and support functions	Near port

Ferries	River or lake ferrying	Regular routes

Other Commercial Vessels	Smaller support or excursion boats	Near dock

Dredges	Dredging projects	Varies

Commercial Fishing	Market fishing	Ocean

Military	Coast Guard and Navy	Ocean & Port



For this inventory, emissions were estimated for deep draft vessels
in-shore and near port using port call data, with offshore emissions
generated from ship location data.  The most important revision for
commercial marine emissions leading to regional haze (PM, SOx, and NOx)
was the estimation of emissions for offshore activity, primarily of
ocean-going vessels.  This activity was not previously estimated for the
WRAP emission inventory, and has been a subject of concern as vessel
traffic passes out from, along, and upwind of the western coast of the
US.  The other revision conducted here was to update in-shore deep-draft
vessel emissions to reflect changing fleet mix, especially the
retirement of steamship-powered vessels. 

One issue for modelers was the vertical grid layer into which to
introduce the deep draft emissions.  The stack heights of 34 to 58
meters (Starcrest, 2004) and the plume rise for ocean-going (deep draft)
vessels indicated that the emissions should be placed in the second
vertical layer (above 36 and below 73 meters).  The plume rise was
estimated at 2 meters using standard plume rise models, with the vessel
speed of 17 to 25 knots treated as the wind speed, an exhaust exit rate
of 35 to 40 meters per second, and an average stack diameter of about
1.3 meters.
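The layer-placement logic can be sketched as follows, using the layer interface heights given in the text (layer 2 spans roughly 36 to 73 meters); the third interface height is an assumed placeholder:

```python
# Sketch: choosing the vertical grid layer for deep-draft vessel
# emissions. The first two interface heights follow the text (layer 2
# spans 36-73 m); the 150 m value is a placeholder for the next layer.
# Stack height plus plume rise gives the effective release height.

LAYER_TOPS_M = [36.0, 73.0, 150.0]  # layer interface heights

def release_layer(stack_height_m, plume_rise_m):
    """Return the 1-based vertical layer containing the effective height."""
    effective = stack_height_m + plume_rise_m
    for layer, top in enumerate(LAYER_TOPS_M, start=1):
        if effective < top:
            return layer
    return len(LAYER_TOPS_M) + 1

# Stack heights of 34-58 m with ~2 m plume rise fall in layer 2.
print(release_layer(34.0, 2.0), release_layer(58.0, 2.0))  # 2 2
```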

Road Dust:

In the previous WRAP mobile source emissions inventory work, fugitive
road dust emissions for unpaved roads were revised from the traditional
EPA estimates with updated silt loading values, updated and revised
activity estimates, and the application of transport fractions (Pollack
et al., 2004).  The aim of that work was to resolve large differences in
road dust emissions in adjacent counties, and to use a consistent
methodology across the WRAP region.  The revised road dust emissions
were estimated for 1996, the base year for the original WRAP modeling
work, and 2018.  

For this inventory, paved and unpaved road dust emissions were updated
using the updated VMT for the base and future years provided by state
and local contacts as part of the base and future year survey work.  Any
updated road dust controls provided were also incorporated into the
estimates.  

Road dust emissions in the previous work included application of a
factor to account for deposition and other removal mechanisms that tend
to lower the amount of dust that is transported on a regional basis
(i.e., across the 36 km grid cells in the WRAP modeling domain).   The
county-specific transport fractions that were applied depend on the
vegetative characteristics of each county, and were calculated as the
weighted average of vegetation-specific transport fractions in each
county.  For the current work, updated transport fractions were
available, but were applied to the road dust emissions (and other dust
sources) in the SMOKE emissions processing rather than in the
development of the county-level emissions.
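The county transport fraction as a vegetation-weighted average can be sketched as follows; the vegetation classes and fraction values are illustrative:

```python
# Sketch: a county transport fraction computed as the area-weighted
# average of vegetation-specific transport fractions, then applied to
# raw dust emissions. Vegetation classes and fractions are illustrative.

def county_transport_fraction(veg_area_frac, veg_transport_frac):
    """Weighted average over vegetation classes covering the county."""
    return sum(veg_area_frac[v] * veg_transport_frac[v] for v in veg_area_frac)

area = {"barren": 0.2, "shrubland": 0.5, "forest": 0.3}  # share of county area
tf   = {"barren": 0.9, "shrubland": 0.6, "forest": 0.2}  # transport fractions

ctf = county_transport_fraction(area, tf)
print(round(ctf, 2))  # 0.54

raw_dust_tons = 100.0  # hypothetical raw road dust emissions
print(round(raw_dust_tons * ctf, 1))  # tons surviving regional transport
```

A heavily forested county yields a low fraction (most dust deposits near the road), which is why applying the fraction removes the large emission discontinuities between adjacent counties.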

Limitations of the Methodologies

Commercial Marine:

This inventory did not consider ocean-going vessels heading to and from
Vancouver, B.C., or other Canadian traffic that may pass into or near US
waters.  Emissions from these vessels were estimated out to 25 miles
from the Pacific Ocean coast and used for comparison with the estimates
from the method used to estimate emissions off the coast and in the open
ocean.  For Alaska, some ports are not included in the inventory, such
as Juneau, the Alaska state capital.  Juneau has no motor vehicle access
to the surrounding area and therefore relies upon marine shipping and
transportation, as well as air traffic, for its goods and services.  It
is strongly advised that Alaska review the emissions carefully, as there
may be numerous omissions, Juneau being one of them.  Military emissions
were also not estimated because the activity data were not available to
ENVIRON, and offshore emissions (e.g., offshore oil wells) were not
included.

Road Dust Methodologies:

While this inventory was being developed, EPA’s guidance on estimating
paved and unpaved road dust emissions was updated.  For this inventory,
the road dust emissions reflect only the updated VMT and controls
supplied by state and local agencies, and do not reflect the updated EPA
guidance methodology.

Road dust emissions estimates in the earlier WRAP work did not include
Alaska, as Alaska was not a WRAP member at the time.  Therefore, road
dust emissions are not estimated for Alaska.  For California, road dust
emissions provided by CARB were used.

 

Review Comments: The methodologies used to determine the 2002 base year
and three future year (2008, 2013, and 2018) inventories are correct. 
EPA finds that the methodologies used to develop the mobile source
inventory are consistent with EPA’s guidance, with the above-mentioned
limitations.

D. Emissions Inventory Versions

WRAP developed several emissions inventories over the course of their
technical work. This section describes only the final versions of these
inventories that were used to develop the 2018 visibility projections
and one of the source apportionment analyses. The focus here is on the
final emissions inventories that served as the basis for technical work
that was then incorporated into state regional haze plans. 

Two models were used in WRAP’s technical work. The Community
Multiscale Air Quality Modeling System (CMAQ) was used to estimate
visibility impairment at Class 1 areas in 2018. The Comprehensive Air
Quality Model with extensions (CAMx) was used to quantify the relative
contributions of sources of pollution causing visibility impairment at
Class I areas in 2018. These two efforts are described in more detail
later in this document. 

Table 3.2 shows the final inventory versions used in these CAMx and CMAQ
analyses. All of these inventories are complete; that is, they include
all of the relevant point, area, and mobile sources of pollution that
have been shown to contribute to visibility impairment. 

Table 3.2  – Inventories Used in WRAP Modeling

Purpose	CMAQ Model	CAMx Model

Model performance evaluation	Base02b	Base02b

2018 Visibility Projections	Plan02d	Prp18b

Particulate Source Apportionment Technology	Plan02c	Base18b

The first step in modeling is to evaluate the performance of the model
to verify that it is representative of real world conditions. WRAP
selected particular days where the ambient concentrations of pollutants
at Class I areas are known and evaluated how well the model predicts
those concentrations. One of the inputs to this model performance
evaluation is an emissions inventory that accurately reflects actual
emissions on the days in question. The WRAP used the Base02b inventory
to evaluate the performance of both models.
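A model performance evaluation of this kind typically compares predicted and observed concentrations using summary statistics such as fractional bias and fractional error; a minimal sketch, with invented paired values:

```python
# Sketch of a typical model performance check: fractional bias and
# fractional error between predicted and observed concentrations.
# The paired values below are invented for illustration, not WRAP data.

def fractional_bias(pred, obs):
    return 2.0 * sum(p - o for p, o in zip(pred, obs)) / \
           sum(p + o for p, o in zip(pred, obs))

def fractional_error(pred, obs):
    return 2.0 * sum(abs(p - o) for p, o in zip(pred, obs)) / \
           sum(p + o for p, o in zip(pred, obs))

pred = [2.0, 3.0, 5.0, 4.0]   # modeled sulfate, ug/m3 (hypothetical)
obs  = [2.5, 2.5, 4.0, 5.0]   # monitored values (hypothetical)
print(round(fractional_bias(pred, obs), 3),
      round(fractional_error(pred, obs), 3))  # 0.0 0.214
```

Fractional bias near zero with low fractional error indicates the model reproduces the observed concentrations on the selected days without systematic over- or under-prediction.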

The Plan02 inventories are designed to be representative of baseline
conditions before the Regional Haze plans are put into place. They are
different from the Base02 inventory in that multi-year averages are used
for some source categories that are highly variable from year to year
(e.g. wildfires and electric generating unit emissions). Additional
refinements and data corrections were incorporated into the Plan02
inventories as they evolved in order to ensure the most accurate,
representative 2002 inventories possible at the time of the modeling. 
Details on the inventories up until the Plan02c may be found in the WRAP
summary document. The differences between the Plan02c and Plan02d
inventories may be found in the WRAP Regional Modeling Center
specification for Plan02d. 

The two 2018 inventories used by WRAP in their final work are the
Base18b inventory used in the particulate source apportionment
technology (PSAT) analysis and the Prp18b inventory used to estimate 2018
visibility levels at Class I areas. The differences between the Prp18b
and Base18b are discussed previously in this chapter. They are further
detailed in the WRAP Regional Modeling Center specification for Prp18b. 

Review Comment: It is reasonable and appropriate for WRAP to develop the
Plan02 inventories to better represent baseline conditions and to
incorporate the latest data into their emissions inventories.

The different versions of the 2002 and 2018 inventories used in the PSAT
and 2018 visibility projections are not significant for the WRAP area as
a whole. In the future, it would be better to use identical inventories
so that there is no perceived inconsistency. 

These emissions inventories are sufficiently complete, current and
accurate for the purposes of inclusion in state regional haze plans.

WRAP Meteorological Modeling

Photochemical grid models, such as the Community Multiscale Air Quality
Modeling System (CMAQ) and the Comprehensive Air Quality Model with
extensions (CAMx), require inputs of three-dimensional gridded
meteorological data, including wind, temperature, humidity,
cloud/precipitation, and boundary layer parameters.  The
Fifth-Generation Penn State/NCAR Mesoscale Model (MM5) was used to
develop these input fields for the WRAP visibility modeling.  MM5 is a
state-of-the-science atmosphere model that has proven useful for air
quality applications and has been used extensively in past local, state,
regional, and national modeling efforts.  MM5 has undergone extensive
peer-review, with all of its components continually undergoing
development and scrutiny by the modeling community.  In-depth
descriptions of MM5 can be found in Dudhia (1993) and Grell et al.
(1994).  All meteorological data used for the WRAP air quality modeling
efforts are derived from MM5 model simulations.  

The SMOKE emissions processor also requires meteorological inputs
derived from MM5, most notably for temperatures needed for mobile and
biogenic emissions processing.  MM5 derived windfields were also used in
the creation of emission inventories for wind-driven emission sources.

The WRAP formed the Regional Modeling Center (RMC), consisting of the
University of California at Riverside (UCR), ENVIRON International
Corporation, and the University of North Carolina (UNC).  The RMC
completed all MM5 modeling necessary to support analysis for the Section
308 SIPs/TIPs.  Initially, the RMC completed meteorology modeling for
the entire year of 2002 on two grids:  a continental-scale domain with
36-km grid spacing, and a regional-scale domain with 12 km grid spacing
covering the western Class I areas.  After concluding that visibility
modeling had acceptable performance at both grid resolutions, the 12-km
modeling was discontinued.  Therefore, this document only discusses the
development of meteorological inputs for the 36-km domain. 

 

In addition to development of meteorological inputs for CMAQ and CAMx,
MM5 was also used to develop meteorological inputs for the
CALMET/CALPUFF modeling system.  As discussed further in Section 7 of
this TSD, CALMET/CALPUFF was used to determine whether a BART eligible
source contributes to visibility impairment at a Class I area.  Refer to
Section 7 of this TSD for further information on the use of MM5 for BART
modeling.

The WRAP meteorological modeling is described fully in the RMC report
entitled Annual 2002 MM5 Meteorological Modeling to Support Regional
Haze Modeling of the Western United States (Kemball-Cook et al., 2005).

Meteorological Modeling Protocol

The Revised Draft Protocol: 2002 Annual MM5 Simulations to Support WRAP
CMAQ Visibility Modeling for the Section 308 SIP/TIP (ENVIRON and UCR,
2004) describes in detail the MM5 model and the setup and evaluation
methods used by the RMC in the 2002 modeling effort. The Modeling
Protocol provides a brief description of MM5, the WRAP modeling domain,
the MM5 physical configuration, and model application. The Modeling
Protocol also presents the plan for evaluating the performance of the
model in replicating the evolution of observed winds, temperature,
humidity, and boundary layer morphology.  The model performance
evaluation serves as the primary approach to assess the reliability of
the meteorological fields to adequately characterize the state of the
atmosphere for input to CMAQ/CAMx.  Key elements of the MM5 model
configuration and evaluation are discussed briefly below.

Meteorological Modeling Domain and Vertical Layer Structure

In the WRAP 36-km run, MM5 was configured to run on the standard
continental-scale Regional Planning Organization (RPO) National Grid
with 36-km grid point spacing (Figure 4.1). The RPO National Grid is
defined on a Lambert conformal projection, with true latitudes at 33°N
and 45°N, and the central latitude and longitude at 40°N and 97°W,
respectively. The continental expanse
of this domain results in a grid of 165 (east-west) by 129 (north-south)
dot points, and 164 (east-west) by 128 (north-south) cross points.
Overall, the domain covers 5904 km by 4608 km.  The MM5 domain provides
overlap with the CMAQ or CAMx air quality modeling grid to alleviate any
numerical boundary artifacts that may be present in the MM5 output
fields.
numerical boundary artifacts that may be present in the MM5 output
fields.
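As a consistency check on the grid definition above, the stated domain extent follows directly from the grid point counts and spacing. The sketch below is illustrative arithmetic only, not WRAP configuration code:

```python
# Consistency check of the RPO National 36-km grid dimensions.
# Dot points mark cell corners and cross points mark cell centers,
# so a 165 x 129 dot-point grid implies 164 x 128 grid cells.
DX_KM = 36                     # grid point spacing (km)
DOT_EW, DOT_NS = 165, 129      # dot points (east-west, north-south)

cells_ew = DOT_EW - 1          # 164 cross points east-west
cells_ns = DOT_NS - 1          # 128 cross points north-south

extent_ew_km = cells_ew * DX_KM
extent_ns_km = cells_ns * DX_KM
print(extent_ew_km, extent_ns_km)   # -> 5904 4608
```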

The vertical layer structure of the WRAP domain consists of 34 layers, a
top level at 100 millibars, and increasing layer thickness with
altitude. The vertical layer structure is further detailed in the
Modeling Protocol.

Figure 4.1.  RPO National 36-km MM5 Grid

Model Configuration

The final WRAP MM5 modeling system configuration for the 2002 annual
simulation is provided in the Modeling Protocol.  RMC conducted an
initial simulation with a configuration taken from earlier MM5
simulations by the State of Iowa and LADCO. Then, further sensitivity
tests were made to identify an MM5 configuration that would address
performance issues identified from the initial annual run.

The initial 2002 36-km WRAP simulation results showed that MM5 performed
better in the Central and Eastern U.S. than in the West, and performed
generally better in winter than in summer (Kemball-Cook et al., 2005).
In the western U.S., the amplitude of the diurnal temperature cycle was
persistently underestimated during the summer, especially in the
southwest. In the desert southwest, the humidity was greatly
overestimated during the summer as well, and there was a pronounced cold
bias. Some of these problems appeared to be linked to the excessive
simulated precipitation generated by MM5 during the summer, especially
in the southwest. This can have serious repercussions for CMAQ modeling
since too much rain can “wash out” pollutants, while the too cool,
humid and cloudy environment may lead to incorrect pollutant chemistry
and aerosol thermodynamics. Wind performance improved quickly with
height above the surface, suggesting that regional transport
speeds/directions were reasonably represented. 

A number of sensitivity tests were performed by the RMC to resolve the
performance issues identified in the initial simulation.  In particular,
the RMC performed sensitivity tests related to four-dimensional data
assimilation (i.e., grid or analysis nudging), as well as to a number of
physics options associated with cumulus parameterization, cloud
microphysics, land surface models, and planetary boundary layer models. 


The result of sensitivity tests was a revised MM5 model configuration
that showed:

A dramatic reduction in the summertime cold, wet bias in the desert
southwest;

Surface temperature and humidity performance now within benchmarks for
all WRAP subdomains except desert southwest for temperature;

More accurate representation of the diurnal temperature cycle in the
desert southwest;

Improvements in the index of agreement for temperature, humidity and
wind speed for all WRAP subdomains;

A more realistic precipitation pattern over the western U.S.;

Better model performance in the eastern U.S.; and

An improvement in performance throughout the year, making it unnecessary
to select different physics schemes for different seasons.

A full discussion of MM5 sensitivity testing and optimization can be
found in Kemball-Cook et al. (2004).

Model Performance

The model performance of MM5 in the 36-km simulations is described by
the final RMC modeling report (Kemball-Cook et al., 2005).  The goal of
the evaluation was to determine whether the meteorological fields are
sufficiently accurate to properly characterize the transport, chemistry,
and removal processes in CMAQ. If errors in the meteorological fields
are too large, the ability of the air quality model to replicate
regional pollutant levels over the entire base year will be severely
hampered and the predicted impacts from future year growth and controls
will be highly questionable. To provide a reasonable meteorological
characterization to the photochemical/visibility model, MM5 must
represent with some fidelity the:

Large-scale weather patterns (i.e., synoptic patterns depicted in the
850-300 mb height fields), as these are key forcings for mesoscale
circulations;

Mesoscale and regional wind, temperature, PBL height, humidity, and
cloud/precipitation patterns;

Mesoscale circulations such as sea breezes and mountain/drainage
circulations;

Diurnal cycles in PBL depth, temperature, and humidity.

For visibility applications, the moisture and condensate fields are
particularly important as they significantly impact PM chemical
formation, removal, and light scattering efficiency. In addition, cloud
and precipitation fields are a good measure of the integrated
performance of the model since these are model-derived quantities and
not nudged to observations. Because of the model’s coarse resolution
of 36-km, the model cannot be expected to faithfully simulate the
pattern or variability of the convective precipitation, but should
reproduce the synoptic precipitation and cloud patterns.

The RMC evaluation of the MM5 model performance was limited to
operational testing of the model and did not include a scientific
evaluation. 
Previous peer-reviewed documentation of MM5 formulation, testing, and
evaluation provide the basis for its scientific validity. An operational
evaluation entails an assessment of the model's ability to correctly
estimate surface and boundary layer wind, temperature, and moisture
largely independent of whether the actual process descriptions in the
model are accurate. The operational evaluation essentially tests whether
the predicted meteorological fields are reasonable, consistent, and
agree adequately with available observations in time and space. The
process provides only limited information about whether the results are
correct from a scientific perspective or whether they are the fortuitous
product of compensating errors; thus a “successful” operational
evaluation is a necessary but insufficient condition for achieving a
sound, reliable performance testing exercise.

The basis for the RMC operational performance assessment entailed a
comparison of the predicted meteorological fields to available surface
and aloft data that are collected, analyzed, and disseminated by the
National Weather Service.  It was carried out both graphically and
statistically to evaluate model performance for winds, temperatures,
humidity, and the placement, intensity, and evolution of key weather
phenomena.  The MM5 results were compared to a specific set of
statistics that have been identified for use in establishing benchmarks
for acceptable MM5 model performance (Emery et al., 2001).

The RMC concluded, based on the results of the performance evaluation,
that the final 36 km WRAP MM5 simulations exhibit reasonably good
performance that is within the bounds of other meteorological databases
used for prior air quality modeling efforts.  It was therefore deemed
reasonable to proceed with their use as inputs for visibility modeling.

Review Comment:  The MM5 meteorological model used by WRAP was
state-of-the-science at the time the modeling was conducted and the
performance of the model was adequate for the purposes for which it was
used and on par with other studies at the time (Emery et al., 2001).  

MM5 Processing and Application

Several preprocessing steps are necessary to prepare input data for an
MM5 simulation. The MM5 modeling system provides all of the tools
necessary to prepare topographic, vegetative, initial condition,
boundary condition, and FDDA nudging input files.

Global topographic data at 10-minute (latitude/longitude) resolution
were used to define terrain elevations on the 36-km grid. Land use
distribution on the MM5 domains was defined from the 24-category USGS
vegetation data with a resolution of 10 minutes.

The underlying objective analyses used for the 2002 WRAP simulation were
taken from the Eta Data Assimilation System (EDAS). EDAS is available
from NCAR, and contains 3-hourly objective analysis initialization and
forecast fields from the National Center for Environmental
Prediction’s (NCEP) ETA model, which is the current short-term
national operational forecasting platform. The EDAS analyses are
provided on a standardized continental-scale pressure-level Lambert grid
with ~40 km horizontal grid point spacing. The EDAS analyses are
developed from a wide variety of observational sources, including
standard surface and upper air measurements, profiler networks, radar-
and satellite-derived measurements, and ship and aircraft reports. The
wide array of data sources, coupled with the high time and spatial
resolution provided by EDAS, result in an analysis product that far
exceeds the level of detail found in traditional global-scale analyses.

The EDAS analyses were processed for use by MM5 as initial/boundary
conditions, and for analysis nudging in the FDDA package.  In addition,
the raw EDAS was enhanced for non-standard pressure levels. Sea surface
temperatures (SSTs) were approximated by ETA skin temperatures, and the
SSTs were allowed to vary over the course of the simulation.   In
addition, the NCEP ADP Global Surface Observations and the NCEP ADP
Upper Air Observations were used to develop the 2-D FDDA fields for use
in MM5.

The annual simulation was made in sequential 5-day run segments, each
with an initial spinup period of 12 hours that overlapped the last 12
hours of the preceding run.   The model was spun up for the final two
weeks of December 2001 to allow for photochemical/visibility
applications with start dates at the beginning of January 2002.
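One plausible reading of this segmented run strategy can be sketched as follows. The function below is an illustrative reconstruction of the scheduling logic (segment length and overlap as stated above), not the actual RMC run scripts:

```python
from datetime import datetime, timedelta

def run_segments(start, end, run_days=5, spinup_hours=12):
    """Yield (segment_start, segment_end) windows in which each
    segment's first `spinup_hours` overlap the last hours of the
    preceding segment, serving as the spin-up period."""
    seg_start = start
    while seg_start < end:
        seg_end = seg_start + timedelta(days=run_days)
        yield seg_start, min(seg_end, end)
        # next segment begins spinup_hours before this one ends
        seg_start = seg_end - timedelta(hours=spinup_hours)

# First segments of the 2002 annual run (illustrative dates)
segs = list(run_segments(datetime(2002, 1, 1), datetime(2002, 1, 31)))
```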

WRAP Visibility Modeling

The WRAP has created a document titled Air Quality Modeling that
describes the visibility modeling tools that they used to develop
technical products in support of SIP development.  

Excerpts of WRAP’s Air Quality Modeling document referenced here were
drafted prior to the development of the final base case planning
emissions inventory called ‘Plan02d’, and prior to the development
of the final 2018 future-year emissions inventory called ‘prp18b’. 
However, the excerpts of Air Quality Modeling below remain applicable to
those later emissions scenarios, and where appropriate, EPA has added
explanatory as well as review comments.  

Air Quality Models

The WRAP RMC utilized two regulatory air quality modeling systems to
conduct all regional haze modeling.  A brief discussion of each of these
models is provided below.

Community Multi-Scale Air Quality Model 

EPA initially developed the Community Multi-Scale Air Quality (CMAQ)
modeling system in the late 1990s. The model source code and supporting
data can be downloaded from the Community Modeling and Analysis System
(CMAS) Center, which is funded by EPA to distribute and provide limited
support for CMAQ users. CMAQ was designed as a “one atmosphere”
modeling system to encompass modeling of multiple pollutants and issues,
including ozone, PM, visibility, and air toxics. This is in contrast to
many earlier air quality models that focused on single-pollutant issues
(e.g., ozone modeling by the Urban Airshed Model). CMAQ is an Eulerian
model—that is, it is a grid-based model in which the frame of
reference is a fixed, three-dimensional (3-D) grid with uniformly sized
horizontal grid cells and variable vertical layer thicknesses. The
number and size of grid cells and the number and thicknesses of layers
are defined by the user, based in part on the size of the modeling
domain to be used for each modeling project. The key science processes
included in CMAQ are emissions, advection and dispersion, photochemical
transformation, aerosol thermodynamics and phase transfer, aqueous
chemistry, and wet and dry deposition of trace species. CMAQ offers a
variety of choices in the numerical algorithms for treating many of
these processes, and it is designed so that new algorithms can be
included in the model. CMAQ offers a choice of three photochemical
mechanisms for solving gas-phase chemistry: the Regional Acid Deposition
Mechanism version 2 (RADM2), a fixed coefficient version of the SAPRC90
mechanism, and the Carbon Bond IV mechanism (CB-IV). 

Comprehensive Air Quality Model with Extensions 

The Comprehensive Air Quality Model with extensions (CAMx) model was
initially developed by ENVIRON in the late 1990s as a nested-grid,
gas-phase, Eulerian photochemical grid model. ENVIRON later revised CAMx
to treat PM, visibility, and air toxics. While there are many
similarities between the CMAQ and CAMx systems, there are also some
significant differences in their treatment of advection, dispersion,
aerosol formation, and dry and wet deposition.

Explanatory Comment:  WRAP used CMAQ modeling for model performance
evaluations with the Base02b emissions inventory, and then used CMAQ
with Plan02d and prp18b emissions inventories to generate the relative
response factors (RRF) needed to project 2018 future-year visibility. 
WRAP used CAMx modeling for model performance evaluations with the
Base02b emissions inventory, and then used CAMx with its Particulate
Source Apportionment Technology (PSAT) tool to provide source
apportionment of nitrate and sulfate aerosol with both the Plan02c and
Base18b emissions inventories.  

Review Comment:  States and Tribes are likely to rely heavily on the
CAMx PSAT results to quantify the sources of nitrate and sulfate
impacting their Class I Areas and, therefore, to shape their approach to
emissions controls.  However, WRAP has provided visibility projections
for 2018 using the CMAQ model, not CAMx.  WRAP compared the model
performance of CMAQ and CAMx using the Base02b emissions inventory and
found them comparable, but not identical.  While WRAP’s use of
different models for different products in this case is not considered
significant, it is advised that in the future there be better alignment
in the choice of models so that there is no perceived inconsistency.  

Model Versions

Both EPA and ENVIRON periodically update and revise their models as new
science or other improvements to the models are developed. WRAP elected
to operate CMAQ v4.5 using the MM5 data processed using MCIP v2.3 and
the AE3 aerosol module.  The version used for the comparison of CMAQ and
CAMx was CAMx v4.3.  

Review Comment:  The versions of CMAQ and CAMx used by WRAP in its
visibility modeling were the state-of-the-science at the time they were
implemented.  

Major differences between the two models that still exist are in the
basic model code, in the treatment of horizontal diffusion, in SOA
formation mechanisms, and in grid nesting. The publicly released version of CAMx
supports ozone and PM source apportionment through its Ozone and PM
Source Apportionment Technology (OSAT/PSAT) probing tools. 

Model Simulations

In support of the WRAP Regional Haze air quality modeling efforts, the
RMC developed air quality modeling inputs including annual meteorology
and emissions inventories for a 2002 actual emissions base case, a
planning case to represent the 2000-04 regional haze baseline period
using averages for key emissions categories, and a 2018 base case of
projected emissions. All emission inventories were developed using the
Sparse Matrix Operator Kernel Emissions (SMOKE) modeling system. Each of
these inventories has undergone a number of revisions throughout the
development process to arrive at the final versions used in CMAQ and
CAMx air quality modeling.  The development of each of these emission
scenarios is summarized as follows: 

The 2002 base case emissions scenario is referred to as the “2002 Base
Case” or “Base02”.   The purpose of the Base02 inventory is to
represent the actual conditions in calendar year 2002 with respect to
ambient air quality and the associated sources of criteria and
particulate matter air pollutants.  The Base02 emissions inventories are
used to validate the air quality model and associated databases and to
demonstrate acceptable model performance with respect to replicating
observed particulate matter air quality. 

Explanatory Comment:  The final base case emissions inventory used was
called ‘base02b’.  For a review of this emissions inventory see the
emissions inventory development section of this document.  

The 2000-04 baseline period planning case emissions scenario is referred
to as “Plan02”. The purpose of the Plan02 inventory is to represent
baseline emission patterns based on average, or “typical”,
conditions.  This inventory provides a basis for comparison with the
future year 2018 projected emissions, as well as to gauge reasonable
progress with respect to future year visibility.


Explanatory Comment:  The final baseline planning case used was called
‘plan02d’.  For a review of this emissions inventory see the
emissions inventory development section of this document.

The 2018 future-year base case emissions scenario is referred to as the
“2018 Base Case” or “Base18”.  These emissions are used to
represent conditions in future year 2018 with respect to sources of
criteria and particulate matter air pollutants, taking into
consideration growth and controls. Modeling results based on this
emission inventory are used to define the future year ambient air
quality and visibility metrics.  

Explanatory Comment:  The final future-year scenario used was called
‘prp18b’.  For a review of this emissions inventory see the
emissions inventory development section of this document.  

Data Sources

The CMAQ model requires inputs of three-dimensional gridded wind,
temperature, humidity, cloud/precipitation, and boundary layer
parameters.  All meteorological data used for the WRAP air quality
modeling efforts are derived from MM5 model simulations.  

Review Comment:  Meteorological modeling is reviewed elsewhere in this
document.

Review Comment:  WRAP conducted sensitivity simulations comparing
model performance at 36 and 12 km horizontal resolution and found no
appreciable performance improvement in the model outputs at 12 km
resolution over those at 36 km resolution.  At the time, given resource
constraints and the lack of improved performance, WRAP decided to
conduct all emissions, meteorological, and visibility modeling at 36 km
horizontal resolution.  WRAP’s choice of 36 km horizontal resolution
was appropriate given the computer and resource limitations and the lack
of improved performance at 12 km resolution. 

Emission inventories for all WRAP air quality simulations were developed
using the Sparse Matrix Operator Kernel Emissions (SMOKE) modeling
system.  

Initial conditions (ICs) are specified by the user for the first day of
a model simulation. For continental-scale modeling using the RPO Unified
36-km domain, the ICs can affect model results for as many as 15 days,
although the effect typically becomes very small after about 7 days. A
model spin-up period is included in each simulation to eliminate any
effects from the ICs. For the WRAP modeling, the annual simulation was
divided into four quarters, with a 15-day spin-up period included for
the quarters beginning in April, July, and October. For the quarter
beginning in January 2002, a spin-up period covering December 16-31,
2001, using meteorology and emissions data developed for CENRAP, was
used.

Review Comment:  The 15 day spin-up period employed by WRAP was
sufficient given the size of the modeling domain.  

Boundary conditions (BCs) specify the concentrations of gas and PM
species at the four lateral boundaries of the model domain. BCs
determine the amounts of gas and PM species that are transported into
the model domain when winds flow into the domain. Boundary conditions
have a much larger effect on model simulations than do ICs. For some
areas in the WRAP region and for clean conditions, the BCs can be a
substantial contributor to visibility impairment.  For this study, BC
data were taken from an annual calendar-year 2002 simulation of the
global-scale GEOS-Chem model completed by Jacob et al.
(http://www-as.harvard.edu/chemistry/trop/geos/). 

Review Comment:  Boundary conditions employed by WRAP were
state-of-the-science at the time they were implemented.  

The CMAQ model options and configuration used for the WRAP 36-km model
simulations are described in Tonnesen et al. (2006).

2002 Base Case Modeling

The purpose of the 2002 Base Case modeling efforts was to evaluate air
quality/visibility modeling systems for a historical episode—in this
case, for calendar year 2002—to demonstrate the suitability of the
modeling systems for subsequent planning, sensitivity, and emissions
control strategy modeling. Model performance evaluation is performed by
comparing output from model simulations with ambient air quality data
for the same time period. 

Model Performance Evaluation

The objective of a model performance evaluation (MPE) is to compare
model-simulated concentrations with observed data to determine whether
the model’s performance is sufficiently accurate to justify using the
model for simulating future conditions. There are a number of challenges
in completing an annual MPE for regional haze. The model must be
compared to ambient data from several different monitoring networks for
both PM and gaseous species, for an annual time period, and for a large
number of sites. The model must be evaluated for both the worst
visibility conditions and for very clean conditions. Finally, final
guidance on how to perform an MPE for fine-particulate models is not yet
available from EPA. Therefore, the RMC experimented with many different
approaches for showing model performance results. The plot types that
were found to be the most useful are the following:

Time-series plots comparing the measured and model-predicted species
concentrations

Scatter plots showing model predictions on the y-axis and ambient data
on the x-axis

Spatial analysis plots with ambient data overlaid on model predictions

Bar plots comparing the mean fractional bias (MFB) or mean fractional
error (MFE) performance metrics 

“Bugle plots” showing how model performance varies as a function of
the PM species concentration

Stacked-bar plots of contributions to light extinction for the average
of the best-20% visibility days or the worst-20% visibility days at each
site; the higher the light extinction, the lower the visibility
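The mean fractional bias and error metrics used in the bar plots above are commonly computed with symmetric normalization, which bounds MFB to +/-200% and MFE to 0-200%. The sketch below assumes those standard definitions:

```python
def mean_fractional_stats(model, obs):
    """Mean fractional bias and error, in percent:
    MFB = (2/N) * sum((M - O) / (M + O)) * 100
    MFE = (2/N) * sum(|M - O| / (M + O)) * 100"""
    n = len(model)
    mfb = 200.0 / n * sum((m - o) / (m + o) for m, o in zip(model, obs))
    mfe = 200.0 / n * sum(abs(m - o) / (m + o) for m, o in zip(model, obs))
    return mfb, mfe

# A perfect model has MFB = MFE = 0; overprediction gives positive MFB
mfb, mfe = mean_fractional_stats([3.0], [1.0])   # -> (100.0, 100.0)
```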

Explanatory Comment:  The following plots depict summary model
performance for WRAP 2002 Base Case modeling using the base02b emissions
inventory and were downloaded from the RMC web site.  Below are three
sets of model bias and model error plots.  Each set of plots compares
the measured chemically speciated aerosol data from a monitoring network
with the corresponding model output.  The monitoring networks used for
comparison are IMPROVE, CASTNET, and STN, and are treated separately
because each monitoring network has different goals, siting criteria,
and data collection protocols.  The model performance plots depicted
here are “bugle plots”, and depict model performance (symbols) and
model performance standards (curves) on the y axis relative to measured
concentration on the x axis.  Model performance standards are of greater
latitude at lower concentrations because of the higher relative
uncertainties in the data at lower concentrations.  There are twelve
symbols for each chemical species, each representing model performance
for one month of 2002.  Model performance at IMPROVE monitors is of
highest importance, because these monitors are sited to be
representative of the visibility conditions impacting each Class I Area.
 The CASTNET monitoring network is more sparse than the IMPROVE network,
but is also mostly sited at Class I Areas and as such, model performance
at CASTNET sites should also be considered important.  The STN
monitoring network is an urban network, and model performance relative
to this network should be given less importance.  

Figure 5.1.  WRAP model performance (fractional bias and error) of the
Base02b modeling scenario for chemically speciated aerosol data from the
IMPROVE monitoring network.  The 12 symbols for each chemical species
represent monthly average model performance for the year 2002, averaging
all monitors in the WRAP region.  Solid lines represent WRAP modeling
goals and criteria.  For chemical species acronym explanations, see text
below.   

Figure 5.2.  WRAP model performance (fractional bias and error) of the
Base02b modeling scenario for chemically speciated aerosol data from the
CASTNET monitoring network.  The 12 symbols for each chemical species
represent monthly average model performance for the year 2002, averaging
all monitors in the WRAP region.  Solid lines represent WRAP modeling
goals and criteria.  For chemical species acronym explanations, see text
below.

Figure 5.3.  WRAP model performance (fractional bias and error) of the
Base02b modeling scenario for chemically speciated aerosol data from the
STN monitoring network.  The 12 symbols for each chemical species
represent monthly average model performance for the year 2002, averaging
all monitors in the WRAP region.  Solid lines represent WRAP modeling
goals and criteria.  For chemical species acronym explanations, see text
below.

Review Comment:  The model performance goals and criteria used by WRAP
were appropriate at the time the modeling was conducted.  

Review Comment:  For IMPROVE and CASTNET monitoring sites, model
performance was adequate based on stated criteria except for coarse
particulate matter (CM).  WRAP addressed this model performance issue by
setting all CM relative response factors (RRF) to a value of 1 when
projecting coarse matter visibility impacts to the future year 2018,
regardless of the future year modeling results.  This is considered
acceptable for most sites because CM that is both anthropogenic and
controllable is not believed to be of major importance for visibility
impairment at most Class I Areas.  However, where anthropogenic coarse
PM is suspected of being a significant contributor, SIP developers
should not rely on WRAP CMAQ model projections and should provide
alternate technical analyses.  

Review Comment:  For STN monitoring sites, WRAP modeling showed
significant underpredictions for all species and was beyond the criteria
for nitrate and organic carbon.  However, these model underpredictions
are expected given that WRAP’s model resolution was 36 km and urban
areas are known to have significantly inhomogeneous PM concentrations
over a scale of 36 km.  Typically, all model emissions are immediately
diluted into the grid box they are assigned to.  Hence, monitors that
experience emissions at time scales that don’t allow for the dilution
that is assumed in the model are likely to show model underpredictions. 
 

Review Comment:  The above model performance summary includes all sites
within the WRAP.  However, a model performance summary over such a
diverse geographic area may mask model performance issues occurring in
smaller geographic sub-regions.  WRAP performed some limited experiments
analyzing model performance on a sub-regional basis; however, WRAP
technical staff never arrived at an agreed-upon methodology to define
the sub-regions.  There were many factors to consider when defining
sub-regions for model performance, some of which were conflicting. 
Additionally, including sub-regional model performance analysis was
considered resource prohibitive at the time.  In future Regional Haze
modeling and analyses, it is recommended that more work be done to
define coherent sub-regions for model performance analysis so that this
issue is addressed.  Special attention should be paid to meteorological
model performance in the desert southwest, as this was a recognized
problem in the WRAP modeling.  If possible, also consider including
model performance plots similar to the bugle plots above on a
site-by-site basis.    

2002 Planning Scenario

Input data used for the 2002 Planning model simulations consisted of the
same meteorology as for the 2002 Base Case and the Plan02 emission
inventories described under the Emissions Modeling section of the TSS.  

The setup of the CMAQ model (including science options, run scripts,
simulation periods, and ancillary data) for the Plan02 cases was
identical to that used in the Base02 modeling, as  described in the 2002
MPE report (Tonnesen et al., 2006).

Comparison With Base02 Simulations

For each of the Plan02 emissions datasets, annual visibility modeling
was performed using the CMAQ model. This was a key aspect of the QA
procedure, since errors in the emissions inventories that might not be
apparent during the emissions QA steps might be more readily detected in
the results from the CMAQ modeling. 

Explanatory Comment:  WRAP RMC compared CMAQ output using Plan02d
emissions with CMAQ output using Base02b emissions by plotting the
differences between the two air quality model runs for daily, monthly,
and annual averages for each chemical species predicted.  For a
description and comparison of the Base02b and Plan02d emissions
inventories, see section 3 above.  

Note that these plots are not useful for visibility planning purposes,
but are being provided to show the magnitudes of changes when moving
from the 2002 Base Case to the 2002 Planning Case—in other words, from
the actual emissions for the year 2002 to the “typical-year”
emissions created for the final Plan02 scenario. The primary analysis
“product” from the Plan02 CMAQ modeling is the use of its output in
combination with the CMAQ output from the 2018 modeling to develop the
visibility progress calculations and glide path plots, described below.

2018 Model Simulations

The 2018 future-year base case scenario is referred to as “2018 Base
Case” or “Base18”.  The purpose of the Base18 scenario is to
simulate air quality representative of conditions in future year
2018 with respect to sources of criteria and particulate matter air
pollutants, taking into consideration growth and controls. Modeling
results based on this emission inventory are used to define the future
year ambient air quality and visibility metrics.

Input data used for the 2018 Base Case model simulations consisted of
the same meteorology as for the 2002 Base Case and the Base18 emission
inventories described under the Emissions Modeling section of the TSS.  

The setup of the CMAQ model (including science options, run scripts,
simulation periods, and ancillary data) for the Base18 cases was
identical to that used in the Base02 modeling, as  described in the 2002
MPE report (Tonnesen et al., 2006).

The purpose of modeling 2018 visibility is to compare the 2018
visibility predictions to the 2002 typical-year visibility modeling
results, as discussed below. Some improvements in visibility by 2018 are
expected because of reductions in emissions due to currently planned
regulations and technology improvements. 

Visibility Projections

The Regional Haze Rule (RHR) goals include achieving natural visibility
conditions at 156 Federally mandated Class I areas by 2064. In more
specific terms, that RHR goal is defined as (1) visibility improvement
toward natural conditions for the 20% of days that have the worst
visibility (termed “20% worst,” or W20%, visibility days) and (2) no
worsening in visibility for the 20% of days that have the best
visibility (“20% best,” or B20%, visibility days). One component of
the states’ demonstration to EPA that they are making reasonable
progress toward this 2064 goal is the comparison of modeled visibility
projections for the first milestone year of 2018 with what is termed a
uniform rate of progress (URP) goal. As explained in detail below, the
2018 URP goal is obtained by constructing a “linear glide path” (in
deciviews) that has at one end the observed visibility conditions during
the mandated five-year (2000-2004) baseline period and at the other end
natural visibility conditions in 2064; the visibility value that occurs
on the glide path at year 2018 is the URP goal. 
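The glide path construction reduces to linear interpolation in deciviews between the baseline period value and natural conditions in 2064. A minimal sketch, assuming the baseline is anchored at 2004 (the end of the 2000-2004 period) and using hypothetical deciview values:

```python
def urp_goal(baseline_dv, natural_dv, target_year,
             baseline_year=2004, natural_year=2064):
    """Deciview value on the uniform rate of progress glide path
    at target_year, interpolating linearly from baseline W20%
    conditions to natural conditions in 2064."""
    frac = (target_year - baseline_year) / (natural_year - baseline_year)
    return baseline_dv - frac * (baseline_dv - natural_dv)

# Hypothetical Class I area: 15.0 dv baseline, 7.0 dv natural.
# By 2018, 14 of the 60 years have elapsed, so the URP goal is
# 15.0 - (14/60) * 8.0 dv.
goal_2018 = urp_goal(15.0, 7.0, 2018)
```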

Explanatory Comment:  WRAP made its 2018 visibility projections using the Plan02d and prp18b CMAQ 36-km modeling results, following EPA guidance that recommends applying modeling results in a relative sense to project future-year visibility conditions (U.S.
EPA, 2001, 2003a, 2006). Projections are made using relative response
factors (RRFs), which are defined as the ratio of the future-year
modeling results to the current-year modeling results. The calculated
RRFs are applied to the baseline observed visibility conditions to
project future-year observed visibility. These projections can then be
used to assess the effectiveness of the simulated emission control
strategies that were included in the future-year modeling. The major
features of EPA’s recommended visibility projections are as follows
(U.S. EPA, 2003a,b, 2006):

Monitoring data should be used to define current air quality.

Monitored concentrations of PM10 are divided into six major components;
the first five are assumed to be PM2.5 and the sixth is PM2.5-10.

SO4 (sulfate)

NO3 (particulate nitrate)

OC (organic carbon)

EC (elemental carbon)

OF (other fine particulate or soil)

CM (coarse matter).

Models are used in a relative sense to develop RRFs between future and
current predicted concentrations of each component.

Component-specific RRFs are multiplied by current monitored values to
estimate future component concentrations.

Estimates of future component concentrations are consolidated to provide
an estimate of future air quality.

Future estimated air quality is compared with the goal for regional haze
to see whether the simulated control strategy would result in the goal
being met.

It is acceptable to assume that all measured sulfate is in the form of
ammonium sulfate [(NH4)2SO4] and all particulate nitrate is in the form
of ammonium nitrate [NH4NO3].
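The RRF arithmetic described above can be sketched with hypothetical concentrations (ug/m3) for one Class I area; all numbers below are illustrative, not WRAP results:

```python
# Modeled concentrations of the six PM components (hypothetical, ug/m3).
model_2002 = {"SO4": 1.50, "NO3": 0.40, "OC": 1.20,
              "EC": 0.25, "OF": 0.80, "CM": 2.00}
model_2018 = {"SO4": 1.20, "NO3": 0.36, "OC": 1.10,
              "EC": 0.20, "OF": 0.80, "CM": 2.00}

# Component-specific RRF = future-year model / current-year model.
rrf = {s: model_2018[s] / model_2002[s] for s in model_2002}

# Applying each RRF to the monitored (IMPROVE) baseline concentration
# projects the future-year concentration of that component.
monitored = {"SO4": 2.10, "NO3": 0.55, "OC": 1.40,
             "EC": 0.30, "OF": 0.90, "CM": 2.50}
projected_2018 = {s: rrf[s] * monitored[s] for s in rrf}
```

Note that a component whose modeled concentration is unchanged (here CM) keeps its monitored baseline value in the projection, since its RRF is 1.0.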

To facilitate tracking the progress toward visibility goals, two
important visibility parameters are required for each Class I area:

Baseline Conditions: “Baseline Conditions” represent visibility for
the B20% and W20% days for the initial five-year baseline period of the
regional haze program. Baseline Conditions are calculated using
monitoring data collected during the 2000-2004 five-year period and are
the starting point in 2004 for the uniform rate of progress (URP) glide
path to Natural Conditions in 2064 (U.S. EPA, 2003a).

Natural Conditions: “Natural Conditions,” the RHR goal for 2064 for
the Federally mandated Class I areas, represent estimates of natural
visibility conditions for the B20% and W20% days at a given Class I
area.

Mapping Model Results to IMPROVE Measurements

As noted above, future-year visibility at Class I areas is projected by
using modeling results in a relative sense to scale current observed
visibility for the B20% and W20% visibility days. This scaling is done
using RRFs, the ratios of future-year modeling results to current-year
results. Each of the six components of light extinction in the IMPROVE
reconstructed mass extinction equation is scaled separately. Because the
modeled species do not exactly match up with the IMPROVE measured PM
species, assumptions must be made to map the modeled PM species to the
IMPROVE measured species for the purpose of projecting visibility
improvements. 

Projecting Visibility Changes Using Modeling Results

RRFs calculated from modeling results can be used to project future-year
visibility. For the current modeling efforts, RRFs are the ratio of the
2018 modeling results to the 2002 modeling results, and are specific to
each Class I area and each PM species. RRFs are applied to the Baseline
Condition observed PM species levels to project future-year PM levels,
which are then used with the IMPROVE extinction equation to assess
visibility. The following six steps are used to project future-year
visibility for the B20% and W20% visibility days (the discussion below
is for W20% days but also applies to B20% days):

For each Class I area, daily visibility is ranked using IMPROVE data and the IMPROVE extinction equation for each year of the five-year baseline period (2000-2004) to identify the W20% visibility days in each year.

Use an air quality model to simulate a base-year period (ideally
2000-2004, but in reality just 2002) and a future year (e.g., 2018),
then apply the resulting information to develop Class-I-area-specific
RRFs for each of the six components of light extinction in the IMPROVE
aerosol extinction equation.

Multiply the RRFs by the measured 24-h PM data for each day from the
W20% days for each year from the five-year baseline period to obtain
projected future-year (2018) 24-h PM concentrations for the W20% days.

Compute the future-year daily extinction using the IMPROVE aerosol
extinction equation and the projected PM concentrations for each of the
W20% days in the five-year baseline from Step 3.

For each of the W20% days within each year of the five-year baseline,
convert the future-year daily extinction to units of deciview and
average the daily deciview values within each of the five years
separately to obtain five years of average deciview visibility for the
W20% days.

Average the five years of average deciview visibility to obtain the
future-year visibility Haze Index estimate that is compared with the
2018 progress goal.
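The arithmetic in Steps 4-6 can be sketched as follows. All concentrations and the f(RH) factor are hypothetical, only two baseline years are shown for brevity, and the original IMPROVE extinction equation is used for simplicity (the WRAP projections used the revised IMPROVE equation, which adds terms such as sea salt):

```python
import math

F_RH = 2.5  # hypothetical site-specific relative-humidity factor

def bext(c):
    # Original IMPROVE reconstructed-extinction equation (Mm^-1),
    # shown here as a simplified illustration.
    return (3.0 * F_RH * c["SO4"] + 3.0 * F_RH * c["NO3"]
            + 4.0 * c["OC"] + 10.0 * c["EC"]
            + 1.0 * c["OF"] + 0.6 * c["CM"] + 10.0)  # 10 = Rayleigh

def deciview(b):
    # Haze Index in deciviews from extinction in Mm^-1.
    return 10.0 * math.log(b / 10.0)

# Hypothetical projected W20%-day concentrations (ug/m3), keyed by year.
w20_days = {
    2000: [{"SO4": 2.0, "NO3": 0.5, "OC": 1.5,
            "EC": 0.3, "OF": 1.0, "CM": 3.0}],
    2001: [{"SO4": 1.8, "NO3": 0.6, "OC": 1.4,
            "EC": 0.2, "OF": 0.9, "CM": 2.5}],
}

# Steps 4-6: daily extinction -> daily deciviews -> annual means ->
# multi-year average Haze Index compared with the 2018 progress goal.
annual_means = [sum(deciview(bext(day)) for day in days) / len(days)
                for days in w20_days.values()]
haze_index = sum(annual_means) / len(annual_means)
```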

Review Comment:  The six steps listed above are an appropriate
application of national EPA modeling guidance for regional haze at the
time the modeling was performed.  

Glide Path to Natural Conditions

A linear URP from the Baseline Conditions in 2004 to Natural Conditions
in 2064 is assumed, and the value on the glide path at 2018 is the
presumptive URP visibility target that the modeled 2018 projections are
compared against to judge progress.
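With hypothetical baseline and natural deciview values for one Class I area, the glide-path interpolation is:

```python
# Linear glide path in deciviews from 2004 Baseline Conditions to 2064
# Natural Conditions (both endpoint values are hypothetical).
baseline_dv = 15.0  # W20% Baseline Conditions (2000-2004 average)
natural_dv = 7.0    # estimated Natural Conditions for the W20% days

def urp_target(year, base_year=2004, end_year=2064):
    """Deciview value on the uniform-rate-of-progress glide path."""
    frac = (year - base_year) / (end_year - base_year)
    return baseline_dv - frac * (baseline_dv - natural_dv)

# The 2018 milestone sits 14/60 of the way down the glide path.
target_2018 = urp_target(2018)
```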

Visibility Projection Results

For all of the WRAP Class I areas, the RMC performed preliminary 2018
visibility projections and compared them to the 2018 URP goals.

Explanatory Comment:  WRAP performed visibility projections to 2018 and
compared them to the 2018 URP goals using the Plan02d scenario for the
base year period and the prp18b scenario for 2018.  These emissions
scenarios were modeled with CMAQ and the resulting RRFs were applied to
baseline monitoring data using the new IMPROVE equation.  For a review
of the 6 steps listed above to project future year visibility, see the
review comments in Chapter 5.  

WRAP Source Apportionment Modeling

The WRAP applied five types of attribution analyses to visibility-related data in an effort to better understand the impacts of emissions from different source regions on visibility.  Discussed in the following subsections, those analyses are:

PM Source Apportionment Technology (PSAT) analysis

Weighted Emissions Potential analysis

Organic Aerosol Tracer analysis

Positive Matrix Factorization analysis

Causes of Dust analysis

A.  PM Source Apportionment Technology (PSAT) Analysis

Visibility impairment in Class I areas is the result of local air
pollution as well as transport of regional pollution across long
distances.  The relative contributions to visibility impairment from each source region and category are needed to develop effective control strategies to improve visibility.  WRAP used CAMx Version 4.30 with its
Particulate Source Apportionment Technology (PSAT) tool to provide
source apportionment by geographic regions and major source category. 
CAMx was run with similar options and inputs as the CMAQ modeling with
both the 2002 baseline and 2018 future case emission inventories.  PSAT
uses reactive tracers that operate in parallel to the CAMx host model
using the same emissions, transport, chemical transformation and
deposition rates as the host model to account for the contributions of
user-specified source regions and categories to PM concentrations throughout the modeling domain. Details on the formulation of CAMx PSAT source apportionment can be found in the CAMx user's guide. 
The goals of the PSAT assessment are to evaluate the contributions of
different geographic regions and source categories to visibility
impairment at Class I areas in 2002 and the projected 2018 case in order
to identify those regions and source categories that, if controlled,
would produce the greatest improvements in visibility.  Further
information regarding the PSAT analysis technique can be found in the TSS document PM Source Apportionment Technology (PSAT) Analysis.
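The parallel-tracer bookkeeping can be illustrated with a deliberately simplified box model. The emission rates are hypothetical, and a single uniform loss rate stands in for the full transport, chemistry, and deposition treated by CAMx; the point is only that region-tagged tracers which experience the same rates as the host model always sum to the host-model concentration:

```python
# Hypothetical per-region emission rates (ug/m3 added per hour) into one
# receptor grid cell; region labels follow Table 6.1.
emissions = {"AZ": 2.0, "CA": 5.0, "MX": 1.0}
loss = 0.10  # uniform fractional loss per hour (stand-in for all sinks)

tracer = {region: 0.0 for region in emissions}
bulk = 0.0
for hour in range(24):
    for region, e in emissions.items():
        tracer[region] = (tracer[region] + e) * (1.0 - loss)
    bulk = (bulk + sum(emissions.values())) * (1.0 - loss)

# Because every tracer sees the same rates as the host model, the
# regional contributions add up exactly to the bulk concentration.
shares = {region: t / bulk for region, t in tracer.items()}
```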

The WRAP defined 18 geographical source regions consisting of the 13 individual WRAP states, the CENRAP states, Canada, Mexico, the Pacific off-shore region, and the remainder of the eastern U.S. including the Gulf of Mexico and the Atlantic (Table 6.1, Figure 6.1).  Six source
categories (point, mobile, area, anthropogenic wildfires (WRAP), natural
or non-anthropogenic sources (WRAP), and sources outside of the modeling
domain) were tracked separately.  The PSAT modeling focuses on sulfate and nitrate contributions only and takes chemistry and deposition into account. Contributions for the 20% worst and 20% best days at each
WRAP and nearby Class I area were extracted from the PSAT results.  A
PSAT Visualization Tool was developed that can be used by States, Tribes
and others to generate displays of the contributions of source regions
and categories to visibility impairment for the average of the worst 20
percent and best 20 percent days at each WRAP and nearby Class I areas.

PSAT results rely on the model’s value for each of the individual
modeled species in calculating the contribution.  Therefore, the PSAT
results are directly impacted by model performance issues.  In reviewing
PSAT results, an evaluation of model performance for that species for
the specific Class I monitor that is being evaluated should be taken
into consideration.   EPA does not have any specific guidance on how to
conduct source apportionment modeling.  The WRAP used
state-of-the-science source apportionment tools within a widely used
photochemical model.  EPA has reviewed the PSAT tools and techniques
that were used in the PSAT analysis and considers the analysis
acceptable. 

For sites where the state establishes a reasonable progress goal (RPG) that does not meet the uniform rate of progress toward natural conditions by 2064, the state must demonstrate that attaining natural conditions by 2064 is not a reasonable rate of progress for the implementation plan and that the progress goal it has adopted is reasonable.  PSAT results, along with other source apportionment techniques and analysis of the emission inventories, are required to determine the causes of future visibility impairment and to support the conclusion that the proposed RPG is reasonable.

Table 6.1. WRAP CAMx/PSAT source regions.

Source Region ID	Source Region Description	Source Region ID	Source Region Description

1	Arizona (AZ)	10	South Dakota (SD)

2	California (CA)	11	Utah (UT)

3	Colorado (CO)	12	Washington (WA)

4	Idaho (ID)	13	Wyoming (WY)

5	Montana (MT)	14	Pacific off-shore & Sea of Cortez (OF)

6	Nevada (NV)	15	CENRAP states (CE)

7	New Mexico (NM)	16	Eastern U.S., Gulf of Mexico, & Atlantic Ocean (EA)

8	North Dakota (ND)	17	Mexico (MX)

9	Oregon (OR)	18	Canada (CN)



Figure 6.1. WRAP CAMx/PSAT source region map. Table 6.1 defines the source region IDs.

Review Comment: The CAMx model selection and performance are reviewed
elsewhere in this document.  

Review Comment:  The CAMx PSAT analysis has been tested and evaluated against other apportionment techniques.  

Review Comment: PSAT results from CAMx utilized the Plan02c and Base18b
emission inventories, while CMAQ modeling to derive RRFs and the WEP
analyses are performed with the Plan02d and Prp18b emission inventories.
There are a number of differences between these emission inventories (e.g., Plan02c does not include any biogenic emissions of NOx or VOC from Mexico, while Prp18b includes projected emissions for Mexico) that affect visibility projections and must be considered when interpreting results.
 

B.  Weighted Emissions Potential

The Weighted Emissions Potential (WEP) analysis was developed as a
screening tool for states to decide which source regions have the
potential to contribute to haze formation at specific Class I areas,
based on both the Baseline and 2018 emissions inventories.  Unlike the
PSAT analysis described above, this method does not account for
chemistry and removal processes.  Instead, the WEP analysis relies on an
integration of gridded emissions data, meteorological back trajectory
residence time data, a one-over-distance factor to approximate
deposition, and a normalization of the final results.  Residence time
over an area is indicative of general flow patterns, but does not
necessarily imply the area contributed significantly to haze at a given
receptor.  Therefore, users are cautioned to view the WEP analysis as
one piece of a larger, more comprehensive weight of evidence analysis. 
Further information regarding the WEP analysis technique can be found in
the TSS document Weighted Emissions Potential Analysis.

The emissions data used were the annual, 36km grid SMOKE-processed,
model-ready emissions inventories provided by the WRAP Regional Modeling
Center (RMC).  The analysis was performed for nine pollutants (maps were generated for all but the last three): sulfur oxides, nitrogen oxides, organic carbon, elemental carbon, fine particulate matter, coarse particulate matter, ammonia, volatile organic compounds, and carbon monoxide.  The following source categories for each pollutant were identified and preserved through the analysis: Biogenic, Natural fire, Point, Area, WRAP oil and gas, Off-shore, On-road mobile, Off-road mobile, Road dust, Fugitive dust, Windblown dust, and Anthropogenic fires.

The back trajectory residence times were provided by the WRAP Causes of
Haze Assessment (COHA).  The COHA project used NOAA’s HYSPLIT model to
generate eight (8) back trajectories daily for each WRAP Class I area
for the entire five-year baseline period (2000-04).  The major model
parameters selected for this analysis are presented in Table 6.2.  From
these individual trajectories, residence time fields were generated for
one-degree latitude by one-degree longitude grid cells.  Residence time
analysis computes the amount of time (e.g., number of hours) or percent
of time an air parcel is in a horizontal grid cell.  Plotted on a map,
residence time is shown as percent of total hours in each grid cell
across the domain, thus allowing an interpretation of general air flow
patterns for a given Class I area.  The residence time fields for the
20% worst and best IMPROVE-monitored extinction days were selected for
the WEP analysis to highlight the potential emissions sources during
those specific periods.
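A minimal sketch of the residence-time binning follows. The trajectory endpoints are hypothetical and far fewer than in the actual analysis, which pools eight HYSPLIT trajectories per day over the full five-year baseline:

```python
import math
from collections import Counter

# Hypothetical hourly back-trajectory endpoints (lat, lon) arriving at
# one Class I area.
endpoints = [(39.2, -105.6), (39.8, -106.3), (40.1, -107.0),
             (40.1, -107.4), (40.6, -108.2), (40.9, -108.7)]

# Bin each endpoint into a one-degree by one-degree grid cell.
cells = Counter((math.floor(lat), math.floor(lon))
                for lat, lon in endpoints)

# Residence time: percent of total trajectory hours in each cell.
total_hours = sum(cells.values())
residence_pct = {cell: 100.0 * n / total_hours
                 for cell, n in cells.items()}
```

Plotted across the domain, these percentages give the general flow pattern for the site.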

Table 6.2.  Back Trajectory Model Parameters Selected for WEP Analysis

Model Parameter	Value

Trajectory duration	192 hours (8 days) backward in time

Top of model domain	14,000 meters

Vertical motion option	used model data

Receptor height	500 meters

Meteorological Field	EDAS and FNL (location dependent)



Review comment:  The receptor height of 500 meters may be more appropriate for secondary particulate matter than for coarse particulate matter.  

The WEP analysis consisted of weighting the annual gridded emissions (by
pollutant and source category) by the worst and best extinction days
residence times for the five-year baseline period.  To account for
deposition along the trajectories, the result was further weighted by a
one-over-distance factor, measured as the distance in km between the
centroid of each emissions grid cell and the centroid of the grid cell
containing the Class I area monitoring site under investigation.  (The
“home” grid cell of the monitoring site was weighted by one fourth
of the 36km grid cell distance, or one-over-9km, to avoid a large
response in that grid cell.)  The resulting weighted emissions field was
normalized by the highest grid cell to ease interpretation.  
Interpretation of the results should focus on which grid cells (or
larger regions) have significant potential to affect the Class I area,
and on changes between 2002 and 2018.
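A toy numeric sketch of this weighting scheme, using hypothetical 3x3 emissions and residence-time fields with the monitor in the center ("home") cell:

```python
import math

cell_km = 36.0  # grid cell size (km)

# Hypothetical annual emissions (tons/yr) and W20%-day residence times
# (percent of hours) on a 3x3 grid; the monitor is at row 1, col 1.
emissions = [[10.0, 50.0, 5.0],
             [20.0, 80.0, 15.0],
             [5.0, 30.0, 10.0]]
residence = [[1.0, 4.0, 0.0],
             [2.0, 10.0, 1.0],
             [0.0, 3.0, 1.0]]

# Weight each cell's emissions by residence time and one-over-distance
# to the home cell; the home cell itself uses 9 km (one fourth of the
# 36-km cell size) to avoid a large response there.
wep = [[0.0] * 3 for _ in range(3)]
for i in range(3):
    for j in range(3):
        d = cell_km * math.hypot(i - 1, j - 1)
        if d == 0.0:
            d = cell_km / 4.0  # home-cell distance per the WEP method
        wep[i][j] = emissions[i][j] * residence[i][j] / d

# Normalize by the largest-valued grid cell, as in the WEP analysis.
peak = max(max(row) for row in wep)
wep = [[v / peak for v in row] for row in wep]
```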

The WEP is not a rigorous, stand-alone analysis, but a simple,
straightforward use of existing data.  As such, there are several
caveats to keep in mind when using WEP results as part of a
comprehensive weight of evidence analysis: 

This analysis does not take into account any emissions chemistry.

While actual emissions may vary considerably throughout the year, this
analysis pairs up annual emissions data with 20% worst/best extinction
days residence times – this is likely most problematic for carbon and
dust emissions, which can be highly episodic.

Coarse particle and some fine particle dust emissions tend not to be
transported long distances due to their large mass.

The WEP results are unitless numbers, normalized to the largest-valued
grid cell.  Effective use of these results requires an understanding of
actual emissions values and their relative contribution to haze at a
given Class I area.

Review comment: The Weighted Emissions Potential (WEP) analysis has a
number of limitations, listed above.  However, it may be useful as a
screening tool for states to decide which source regions have the
potential to contribute to haze formation at specific Class I areas.

C.  Organic Aerosol Tracer

Contributors to organic carbon aerosols (OC) were evaluated as part of
the weight of evidence for the clean conditions scenario.  The CMAQ
model results were analyzed to identify primary organic carbon aerosol
source contributions as originating in one of three categories:

Primary anthropogenic OC resulting from direct organic mass emissions,
such as primary organic aerosol (POA).  In CMAQ these species are lumped
into the term AORGPA.

Secondary anthropogenic OC resulting from aromatic VOCs, such as xylene,
toluene, and cresols.  In CMAQ these species are lumped into the term
AORGA.

Secondary biogenic OC resulting from biogenic VOCs, such as terpenes. 
In CMAQ these species are lumped into the term AORGB.

This analysis did not include identification of emissions source regions
or detailed source category information. Because it was not cost
effective to carry out CAMx/PSAT simulations with OC, the explicit OC
results for the clean conditions case were analyzed, and then compared
to the Base02b case in an attempt to infer the relative contributions of
biogenic and anthropogenic VOCs to OC. These results are difficult to
interpret for at least two reasons: 

Because of the simplified approach used by CMAQ and the Carbon Bond
Mechanism version 4 (CB4) to represent these species, it is not possible
to accurately classify all emissions into the CMAQ model as either
biogenic or anthropogenic based simply on the species name. Thus, some
biogenic OC might be included with AORGA, and some anthropogenic OC
might be included in AORGB. 

Some fire emissions are classified as anthropogenic, but these emissions
might include species such as terpenes that are typically considered
biogenic. Hence, using this simplified approach can be misleading. 

In spite of these difficulties, however, the results should classify the
majority of the emissions correctly as either biogenic or anthropogenic.
For each of the above three components of OC, plots of the annual average mass in the Base02b case were prepared, and the controllable mass was then estimated as the difference between the Base02b case and the Base02nt clean emissions scenario.  Comparing these two scenarios
indicates that in the western U.S. there is considerable AORGPA mass
that is not controllable. It is likely that much of this mass is from
fires, since uncontrollable AORGPA mass is present at the site of large
fires in southern Oregon and north of Tucson, AZ. It might be difficult
for the WRAP states and tribes to use these results quantitatively in
developing emissions control strategies for visibility SIPs and TIPs.
However, the results do provide some insight into the relative
contributions of biogenic and anthropogenic OC as well as the amount of
each that is controllable in the model simulations.  

There are uncertainties in the modeled emissions of anthropogenic VOCs,
and larger uncertainties in the modeled emissions of biogenic VOCs. It
is not possible to evaluate the model performance individually for
biogenic and anthropogenic OC because the OC measurements do not
distinguish between those two forms. Instead, we can only compare total
modeled OC to total measured OC. Therefore, even when the model achieves
good performance for total OC, it is possible that the model may be
overpredicting one component of total OC and underpredicting the other.
The inability to evaluate model performance for each component of OC
increases the uncertainty of the results, so caution should be used when
drawing conclusions about the sources of OC based on these results. 

Review comment:  The WRAP model achieves good performance for total OC, as discussed in Section 5, Figure 5.1, of this document.  However, as
stated above, because of the potential for compensatory errors, caution
should be used when drawing conclusions regarding the source of OC based
on these results. 

D.	Positive Matrix Factorization

As part of their Causes of Haze Analysis (CoHA) project for WRAP, Desert
Research Institute (DRI) performed a Positive Matrix Factorization (PMF)
analysis using the IMPROVE aerosol data set and meteorological back
trajectories.  PMF is one of several receptor modeling methods.  Receptor modeling is a fundamentally different approach to determining source contributions to visibility impairment in that it is based on a statistical analysis of the monitoring data collected at the IMPROVE monitoring sites rather than on dispersion modeling.  The purpose of this
analysis was to distinguish chemical source profiles which could
describe aerosol contributions to IMPROVE monitoring sites within the
WRAP region.    Through a review of source profile characteristics,
profiles were identified with general or specific emissions source
categories, such as “Smoke” or “Urban/Diesel” or any of several
other categories.  Percent contributions of each profile to each IMPROVE
site’s aerosol concentrations were derived and compared with emissions
inventories to evaluate the level of confidence in the results.

To identify the sources of aerosols in the western United States, the Positive Matrix Factorization (PMF) receptor model (http://www.coha.dri.edu/web/general/PMF%20modeling%20for%20WRAP%20COHA.pdf) was applied to the 24-hr integrated aerosol chemical composition data obtained at the Class I areas of the Western Regional Air Partnership (WRAP) region through the Interagency Monitoring of Protected Visual Environments (IMPROVE) program. The IMPROVE sites in the WRAP region were grouped into 18 sub-regions (including a West Texas region). PMF was applied to each group to generate profiles of
source factors. Normalized source profiles and the quantitative source
contributions for each resolved factor were calculated. The major
sources that contribute to the aerosol loadings in the western United
States were identified. The similarities and differences of chemical
source profiles, and the major aerosol source contributors in different
regions of the western United States were investigated. Based on the
profile and the daily contribution to aerosol concentration of each
source factor, the contributions of source factors to the aerosol light
extinction coefficients were estimated using the IMPROVE method. The
importance of the major aerosol sources to regional haze and visibility
in the Class I areas of the western United States is discussed. 

A trajectory analysis was used to identify the major source regions of the source factors defined by PMF. These results were compared with the emissions inventory data to evaluate the level of confidence in the results.
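Conceptually, PMF factors the samples-by-species data matrix into nonnegative source contributions and source profiles. The sketch below substitutes plain nonnegative matrix factorization with Lee-Seung multiplicative updates for DRI's actual PMF implementation (which additionally weights each measurement by its uncertainty); all data are synthetic:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic IMPROVE-style data: 60 samples x 5 species, built from two
# known "source" profiles so the factorization has structure to find.
true_profiles = np.array([[0.8, 0.1, 0.05, 0.03, 0.02],   # smoke-like
                          [0.1, 0.6, 0.10, 0.10, 0.10]])  # urban-like
contribs = rng.uniform(0.0, 5.0, size=(60, 2))
X = contribs @ true_profiles + rng.uniform(0.0, 0.01, size=(60, 5))

# PMF solves X ~= G @ F with G (contributions) and F (profiles) >= 0;
# here we use simple multiplicative-update NMF as a stand-in.
k = 2
G = rng.uniform(0.1, 1.0, size=(60, k))
F = rng.uniform(0.1, 1.0, size=(k, 5))
for _ in range(500):
    F *= (G.T @ X) / (G.T @ G @ F + 1e-12)
    G *= (X @ F.T) / (G @ F @ F.T + 1e-12)

# Relative reconstruction error of the fitted factorization.
recon_err = np.linalg.norm(X - G @ F) / np.linalg.norm(X)
```

In a real application the rows of F would be inspected and matched to emission source categories such as "Smoke" or "Urban/Diesel".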

Review comment: The PMF and trajectory analysis may be fairly robust for
the source factors identified.  However, EPA believes that the grouping
of IMPROVE sites into 18 sub-regions, while less resource intensive,
gives results that are less useful than conducting PMF modeling for
individual IMPROVE sites.   If PMF modeling will be used in future
regional haze technical analyses, then each individual IMPROVE site
should be modeled.  In future applications, PMF contribution results
should be compared to other modeling methods such as CAMx using PSAT. 
Such a comparison could be of significant utility in evaluating upwind
emission inventories used as inputs to models such as CAMx and CMAQ.    

E.	Causes of Dust Analysis

As part of their CoHA project for WRAP, DRI performed a Causes of Dust
Analysis, designed to characterize aerosol sampling days when coarse
mass and fine soil combined constituted the dominant aerosol extinction
species.  In addition to categorizing dust events, the project was also
able to identify a number of temporal trends.

The principal aim of the study was to specifically identify the primary
causes of dust measured in the WRAP region by: 

developing a methodology for assigning worst days when dust constituted
the largest contributor to aerosol visibility extinction (worst dust
days, hereafter) at IMPROVE monitors within the WRAP domain to a set of
source classes;

using the methodology to categorize worst dust days over the period 2001
– 2003. 

The methodology employs several existing tools in novel ways, including air mass backward trajectories, land use maps, and soil characteristics
maps.  In addition, two new methods have been developed as part of this
work. The first is a metric for estimating the contribution of Asian
dust to IMPROVE-measured dust on worst dust days. The second utilizes
multivariate linear regression of measured dust concentrations vs.
nominally local surface meteorological data. These tools were combined
using a semi-quantitative approach to preliminarily determine the likely
source of dust on a worst dust day at a given site.  Due to limitations of the information and the capabilities of the tools, the causes of some worst dust days could not be determined with confidence.  Using
2001-2003 data from IMPROVE (and some protocol) monitors in the WRAP
regions, each worst dust day was associated with one of these events:

Transcontinental transport of large scale events from Asia

Windblown dust events

Transport of windblown dust from sources upwind (i.e. not from immediate
vicinity of site). 

Further specification of whether windblown and upwind transport events appear to be regional in nature, based on the scale of the meteorological phenomenon causing the dust and the number of sites affected

Undetermined Events

This study focused on 71 sites from the IMPROVE network (and protocol
sites) located in the WRAP domain. These sites were selected based on
availability of data over the 2001 – 2003 period and the availability
of a nearby surface meteorological station over the same period. 

The transport of airborne dust emitted from high wind events originating
in China to the west coast of the US (about 7 – 10 days en route) has
received considerable attention in recent years (Cheng et al., 2005;
Park et al., 2005; Zhang et al., 2005; Darmenova et al., 2005). Large
“Asian dust” events can contribute significantly to haze over large
portions of the western US. These large Asian dust episodes are
initiated by low pressure systems in the Gobi desert region of Mongolia
and northwest China. Once elevated to the troposphere, Asian dust can
move fast under zonal flow due to the jet stream. Under high pressure
ridge conditions, large-scale exchange of dust from the troposphere to
the boundary layer may occur resulting in elevated ground-level mineral
aerosol concentrations. Although it is difficult to quantitatively
separate the influence of the Asian dust from dust that is generated on
the North American continent or transported from other regions of the
world (e.g. Africa), some chemical markers can help identify dust of
Asian origin. Perry et al. (1997) and VanCuren and Cahill (2002)
suggested that Al/Ca and K/Fe ratios are useful for identifying Asian
and African dust. African dust is associated with Al/Ca ratios greater
than 3.8, while those ratios for Asian dust are generally less than 2.6.
The K/Fe ratio is consistently above 0.5 for Asian dust, while African
dust exhibits lower values for this ratio. Similar chemical markers have
been adopted to help distinguish Asian dust from dust generated on the
North American continent for this study. The large Asian dust storm on
April 19, 1998 was used as a benchmark for establishing these markers.
The dust plume from the 4/19/1998 storm crossed the Pacific Ocean, and
subsided to the surface of the western United States around 4/29/1998.
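The ratio thresholds above can be folded into a simple classifier; the function and its decision rules for the ambiguous middle range are a hypothetical illustration, not the study's actual scoring method:

```python
def dust_origin(al, ca, k, fe):
    """Classify dust origin from elemental ratios, using the cited
    thresholds: Asian dust has Al/Ca generally below 2.6 and K/Fe
    consistently above 0.5; African dust has Al/Ca above 3.8 and a
    lower K/Fe ratio."""
    al_ca, k_fe = al / ca, k / fe
    if al_ca < 2.6 and k_fe > 0.5:
        return "Asian"
    if al_ca > 3.8 and k_fe < 0.5:
        return "African"
    return "indeterminate"
```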

A total of 644 worst dust days, defined as 20% worst visibility days on which the sum of extinction from coarse mass (CM) and fine soil (FS) was larger than any other component, were observed during the period 2001-2003.
Using the tools described in the report, it was found that:

approximately 50% (318 cases) of worst dust days were attributable, with a moderate (***) to high (*****) degree of confidence, to the following events/classes:

Transcontinental transport from Asia: 48 cases (7.5%);

Windblown dust (generated locally in the vicinity – nominally within
10 km - of the site): 125 cases (19.4%);

Upwind transport (does not involve significant windblown dust from
sources local to the site): 145 cases (22.5%);

Approximately 30% (190 cases) of worst dust days were attributed to the following events/classes with a low (*) to moderate (***) degree of confidence:

Transcontinental transport from Asia: 7 cases (1.1%);

Windblown dust: 76 cases (11.8%);

Upwind transport: 107 cases (16.6%);

The remaining 21% of worst dust days (136 cases) were not attributable to any events/classes using the tools employed in this study.

A number of temporal trends were also observed, both in terms of
frequency of event occurrence and in terms of worst dust days resulting
from undetermined sources. The impact of transcontinental transport from Asia was observed only during spring (100% of 48 cases). Windblown
dust as a dust causing event was most important in spring (56.8% of 125
cases), while transport from upwind sources did not vary significantly
by season except for a notable decrease in the winter months (spring:
35% of 145 cases, summer: 28%, fall: 31%, winter: 6%).

For the sites considered in this study, worst dust days exhibited a
seasonal pattern, with the most frequent occurrences in summer (246 out
of 644) and spring (241). The fall (115) and winter (43) were associated
with significantly fewer worst dust days. Of the 644 total worst dust
days, a total of 136 were a result of events/sources that could not be
determined using the tools employed in this study. The greatest number
of undetermined events occurred in the summer, corresponding to 79 cases
(32% of summer worst dust days), followed by spring (23 cases, 10% of
all spring worst dust days) and fall (22 cases, 19% of all fall worst
dust days), and winter (12 cases, 28% of all winter worst dust days).

The results of four example case studies were examined and are presented
below:

April 16, 2001  

On April 16, 2001, 29 sites were classified as worst dust days. For 22
sites, the Asian Dust Score indicated a strong Asian signature.
Satellite and Naval Research Laboratory model results corroborated a
large Asian dust plume engulfing a large portion of the West coast. 

September 10, 2001 

On September 10, 2001, 5 sites located in Arizona were classified as
worst dust days. For all sites, the ADS was low (or not calculated),
suggesting a negligible contribution of transported Asian dust. 
Trajectory analysis for all sites indicated moderate- to high-speed trajectories over areas with moderate-to-high erodibility in southeast Arizona, southern New Mexico, and east/southeast Texas.

July 06, 2001 

On July 06, 2001, 4 sites were classified as worst dust days. At Columbia River Gorge (CORI) the LWD to total measured dust ratio was 40.5%. However, back trajectories did not show sustained high winds over moderately (or highly) erodible terrain. Thus, CORI was assigned to a windblown event.  At Bandelier (BAND) the LWD to total measured dust ratio was ~6%, and trajectories showed some high winds over moderately erodible terrain. BAND was also associated with a windblown event.  The information available for nearby San Pedro (SAPE) and Gila (GICL) did not provide any indication of the event that may have caused a worst dust day there.

April 03, 2003 

Great Sand Dunes (GRSA), Weminuche Wilderness (WEMI), and Rocky Mountain
(ROMO) had worst dust days on April 03, 2003. For all three sites, the
event was flagged as a regional scale event since the same general flow
pattern caused all three sites to have worst dust days.

Review comment: The Assessment of the Principal Causes of
Dust-Resultant Haze at IMPROVE Sites in the Western United States
provides a reasonable basis for the attribution of transcontinental
transport from Asia. The techniques employed were more successful for
spring events (10% of events could not be determined) than for fall
(19%), winter (28%), and summer events (32%).

WRAP BART modeling 

 

The Clean Air Act establishes the national goal of eliminating man-made
visibility impairment from all Class I areas. As a part of the plan for
achieving this goal, Section 169A(b)(2)(A) of the Act requires certain
major stationary sources in existence between 1962 and 1977 to be
reviewed for BART. Pursuant to federal regulations, states have the
option of exempting a BART-eligible source from the BART requirements
based on dispersion modeling demonstrating that the source cannot
reasonably be anticipated to cause or contribute to visibility
impairment in a Class I area. According to 40 CFR Part 51, Appendix Y, a
BART-eligible source is considered to “contribute” to visibility
impairment in a Class I area if the modeled 98th percentile change in
deciviews is equal to or greater than the “contribution threshold.”
Any BART-eligible source determined to cause or contribute to visibility
impairment in any Class I area is subject to BART. The EPA BART
Guidelines recommend that a contribution threshold of 0.5 deciviews be
used, although States have the option to establish alternative
thresholds.

To determine whether a source exceeds the BART contribution threshold,
EPA recommends use of the CALMET/CALPUFF modeling system; the main
components of this modeling system are CALMET (a diagnostic
three-dimensional meteorological model), CALPUFF (an air quality
dispersion model), and CALPOST (a post-processing package). Six WRAP
States (Arizona, Montana, New Mexico, Nevada, South Dakota, and Utah)
requested that the WRAP regional modeling center perform BART exemption
screening modeling to help determine whether potential BART-eligible
sources in their states contribute significantly to visibility
impairment at a Class I area. In response, WRAP conducted a modeling
analysis and provided spreadsheets of the CALPUFF-modeled 24-hour
average change in deciviews from each potential BART-eligible source at
each Class I area. That CALMET/CALPUFF modeling effort is described on
the RMC and WRAP websites. Note that in several of these States the
WRAP modeling discussed below provided a foundation for more refined or
supplemental BART-exemption modeling conducted by the individual
States. A description and evaluation of these additional modeling
analyses is contained in the individual Regional Haze SIPs submitted by
each State.

Overview of WRAP Modeling Approach

To conduct the modeling, WRAP followed the EPA BART guidelines (U.S.
EPA, 2005) and the applicable CALMET/CALPUFF modeling guidance (e.g.,
IWAQM, 1998; FLAG, 2000; EPA, 2003c) in effect at the time the analysis
was conducted. This included EPA’s March 16, 2006, memorandum
“Dispersion Coefficients for Regulatory Air Quality Modeling in
CALPUFF” (Atkinson and Fox, 2006). This memo was written by the EPA
Office of Air Quality and Planning Standards (OAQPS) Model Clearinghouse
Director in response to questions from EPA Region 4 on what constitutes
the regulatory version of CALPUFF and recommended CALPUFF options for
BART modeling in their region. The memo’s modeling recommendations are
followed in WRAP’s protocol for conducting the modeling, except as
noted below. Using the procedures in the modeling protocol, WRAP
conducted initial subject-to-BART screening analysis modeling to
determine which sources contribute significantly to visibility
impairment at Class I areas. WRAP then provided the modeling results to
the affected states, and the states used those results, in conjunction
with other information, to determine whether each individual
BART-eligible source would be subject to BART control requirements. For
some parameters WRAP modeled multiple scenarios and deferred to the
States to select which modeled parameter is most appropriate. For
example, WRAP modeled background visibility conditions corresponding to
both the 20% best visibility days and annual average visibility days;
EPA guidance allows States to select either characterization of
background conditions for their BART modeling. WRAP also provided
modeling results showing both the highest 24-hour average change in
visibility and the 8th-highest (98th percentile) predicted change.
States may utilize the maximum result if a conservative screening
analysis is deemed appropriate. The Regional Haze SIPs submitted by
each State should be reviewed to determine which of the WRAP BART
modeling options was selected for use by the State.
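The relationship between the two screening metrics WRAP reported (the maximum and the 8th-highest, i.e., 98th percentile, 24-hour change in deciviews) and the 0.5-deciview contribution threshold can be sketched as follows. The function name, structure, and daily impact values are illustrative assumptions, not WRAP’s actual post-processing code:

```python
def bart_screening(daily_dv, threshold=0.5):
    """Compare the maximum and the 8th-highest (approximately the 98th
    percentile of a year of daily values) 24-hour deciview changes at a
    Class I area against the BART contribution threshold."""
    ranked = sorted(daily_dv, reverse=True)
    return {
        "max": ranked[0],
        "98th_percentile": ranked[7],  # 8th highest of ~365 daily values
        "contributes_max": ranked[0] >= threshold,
        "contributes_98th": ranked[7] >= threshold,
    }

# Hypothetical year of modeled 24-hour deciview impacts: mostly small
# values, with ten elevated days.
impacts = [0.1] * 355 + [0.8, 0.7, 0.7, 0.6, 0.6, 0.55, 0.52, 0.4, 0.3, 0.2]
result = bart_screening(impacts)
```

In this hypothetical case the source exceeds the threshold on the maximum day (0.8 dv) but not at the 98th percentile (0.4 dv), illustrating why the maximum is the more conservative screening metric.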

Model Selection and Applicability

Relevant guidance (IWAQM, 1998; FLAG, 2000; EPA, 2003) states that the
CALPUFF model is generally applicable at distances from 50 km to at
least 200-300 km downwind of a source. Given the large number of Class I
areas in the West, most BART-eligible stationary sources modeled by WRAP
were located within 300 km of at least one Class I area. In some cases
where the transport distance to the nearest Class I area exceeded 300
km, the States, in consultation with EPA and the FLMs, have conducted
supplemental analyses for their regional haze BART submittals.

The WRAP modeling was conducted using CALPUFF version 6.0. At the time
of the analysis the CALPUFF regulatory version was 5.711; however,
discussions with the Federal Land Managers (FLMs) and others revealed
that this version of the model contained software bugs and that a newer
version should be used. EPA’s regulatory version has subsequently been
revised to version 5.8, which should be used in new regulatory
applications. CALPUFF version 6.0 (and version 5.711) is now considered
obsolete; however, it was considered appropriate at the time of the
analysis and is therefore acceptable for use in State regional haze
plans that have relied on WRAP’s modeling.

Modeling Domain and Meteorology Inputs

The WRAP RMC developed a set of CALMET/CALPUFF modeling domains in the
contiguous U.S. that focus on the following states and nearby Class I
areas: Arizona, Montana, New Mexico, Nevada, South Dakota, and Utah.  In
addition, there is an Alaska CALMET/CALPUFF modeling domain.  The
dimensions of the domain were selected to include the State of interest,
plus Class I areas in nearby states, and to provide a sufficient buffer
between the Class I areas and the domain boundaries (e.g., 50 km) to
assure that CALPUFF puffs are not eliminated that may temporarily leave
the domain and later reemerge and cause visibility impacts at a Class I
area. WRAP applied CALPUFF/CALMET for each of these domains. For the
five continental-U.S. domains, WRAP used a 4-km grid and 11 vertical
layers with a Lambert conformal conic (LCC) map projection identical to
that used for the WRAP MM5 and CMAQ modeling (see Section 1.3 of the
WRAP 2002 MPE report). The CALMET modeling for these five states used MM5
meteorological data at 36-km resolution and terrain and land use data at
4-km resolution, along with surface meteorological and precipitation
observations. For the WRAP Alaska CALMET/CALPUFF modeling, a 2-km grid
was used, along with 15-km-resolution MM5 data, surface meteorological
and precipitation data, and 1-km-resolution land use and terrain data.
For CALMET modeling for the continental-U.S. states, WRAP used MM5
meteorology data for the years 2001, 2002 and 2003. EPA recommends
performing three years of CALMET/CALPUFF modeling when using MM5 data,
and 2001 through 2003 were the most recent three years with MM5 data
available at the time this modeling was initiated. For Alaska CALMET
modeling, WRAP used one year of data (2002) because only one year of
historical MM5 modeling is available. 

Note that although surface meteorological and precipitation observations
were used as input to CALMET, upper-air meteorological observations were
not. EPA and the FLMs generally recommend that both surface and
upper-air information be blended into the CALMET modeling, although in
some applications upper-air observations may be redundant with the MM5
data, which already include upper-air information.

Emissions Input

According to the EPA BART Guidelines: “The emissions estimates used in
the models are intended to reflect steady-state operating conditions
during periods of high capacity utilization. We do not generally
recommend that emissions reflecting periods of start-up, shutdown, and
malfunction be used, as such emission rates could produce higher than
normal effects than would be typical of most facilities. We recommend
that States use the 24 hour average actual emission rate from the
highest emitting day of the meteorological period modeled, unless this
rate reflects periods of start-up, shutdown, or malfunction.” (EPA,
2005).

As a first approach to identifying BART-eligible sources and the
appropriate modeling emission rates, WRAP used SCC-based BART
eligibility criteria and information provided by ERG, Inc., in their
report “Identification of BART-eligible Sources in the WRAP Region”
(ERG, 2006). WRAP then provided spreadsheets with this information to
the states and tribes for comment. The States/Tribes reviewed this
information along with NSR permitting and other records and determined
which sources are BART eligible. Sources determined to be BART eligible
were then generally modeled using emission estimates based on the
maximum 24-hour emission rates during the 2001-2003 meteorological
period modeled. In-stack continuous emission monitoring (CEM) data were
used where available. For sources where CEM data were not available,
permitted allowable or calculated AP-42 emission rates were used. For
source-specific emissions information the individual State regional
haze SIP should be consulted.
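The emission-rate selection described above can be sketched in a few lines: take the highest 24-hour average rate over the modeled period, skipping days flagged as start-up, shutdown, or malfunction (SSM), per the EPA guideline quoted earlier. The CEM records, values, and flags below are hypothetical and for illustration only:

```python
# Hypothetical daily CEM summaries over the 2001-2003 modeled period:
# (date, 24-hr average SO2 rate in lb/hr, flagged as an SSM day?)
cem_days = [
    ("2001-07-14", 1180.0, False),
    ("2002-01-03", 1420.0, True),   # boiler start-up day: excluded
    ("2002-08-22", 1355.0, False),
    ("2003-03-09", 1290.0, False),
]

# Highest non-SSM 24-hour average rate becomes the modeling rate.
modeling_rate = max(rate for _, rate, ssm in cem_days if not ssm)
print(modeling_rate)  # 1355.0, not the SSM-inflated 1420.0
```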

CALPUFF/CALMET Model Settings

The CALMET and CALPUFF model settings used by WRAP complied with EPA’s
recommendations for regulatory application of the CALPUFF modeling
system in effect at the time of the analysis, except as noted below:

The maximum allowable mixing height (Z1MAX) for the WRAP CALMET modeling
was 4,500 m AGL, versus the EPA regulatory default of 3,000 m AGL.
Vertical temperature soundings indicate higher summertime mixing heights
are appropriate in many areas of the intermountain west. 

The EPA default assumes no MM5 data will be used (IPROG=0). In the RMC
BART screening analysis, MM5 data were used as an initial-guess field
(IPROG=14). 

The EPA default values for IEXTRP (-4) and RMIN2 (4) are incompatible
with each other in some applications; WRAP therefore used values of 1
and 4, respectively. Because WRAP used the hourly 36-km MM5 data to
define the upper-level winds, extrapolation of the surface wind data
aloft as recommended by EPA (IEXTRP = -4) was not needed.

WRAP set IAVET to 0 to turn off spatial averaging of the temperature
interpolation, since the MM5 temperatures are already fairly smooth. 

The beginning and ending water land use categories were changed to 51
and 55, respectively, rather than using the EPA default (999) that
assumes no water land use categories. 

Model Evaluation

The CALMET meteorological fields were spot-checked for reasonableness
using visualization and animation tools, but no comprehensive evaluation
was undertaken. The main inputs to CALMET are the hourly 36-km MM5 data
(15-km MM5 data for Alaska), surface meteorological and precipitation
measurements, terrain data, and land use data. The 36-km MM5 fields were
evaluated previously (McNally, 2003; Kemball-Cook et al., 2005; Baker,
2004a,b), and the terrain and land use inputs were evaluated by
comparing spatial plots of the inputs against topographic and land use
maps. The Alaska 15-km MM5 data were also evaluated (Kemball-Cook et
al., 2005). EPA has reviewed the WRAP CALMET fields for the 6 modeling
domains and found them adequate for initial use by States in BART
CALPUFF modeling applications. Several States have modified the WRAP
CALMET analysis in their BART modeling to account for site-specific
issues such as nearby terrain features. The individual State regional
haze plans provide details on these modifications.

CALPUFF has been evaluated nationally by EPA and is considered
appropriate for long-range transport applications; therefore, no
site-specific performance evaluations were performed.

Review Comment (summary of EPA comments on WRAP BART Modeling):  WRAP
conducted initial subject-to-BART screening analysis modeling for six
western States to determine which sources contribute significantly to
visibility impairment in Class I areas. The overall approach followed
EPA guidance and modeling procedures in effect at the time of the
analysis and is therefore acceptable for use in the current regional
haze plans. In several of these States the WRAP modeling provided a
foundation for more refined or supplemental BART–exemption modeling
conducted by the individual States. A description and evaluation of
these additional modeling analyses is contained in the individual
Regional Haze SIPs submitted by each State.

Conclusions 

The technical work of the WRAP was consistent with the best practices
and science at the time it was performed. It is appropriate for the
western states to use it as the basis for complying with the
requirements of the Regional Haze Rule.

The EPA’s evaluation of the WRAP’s work is summarized below, organized
by topic area.

A. Baseline Visibility Conditions

The WRAP followed EPA guidance in developing the baseline visibility
conditions for the Class I areas in their region. They appropriately
extended EPA-recommended methods in some cases in order to ensure a
sufficiently large data set for determining baseline conditions.

B. Natural Visibility Conditions

WRAP used a refined method to estimate natural visibility conditions.
The states are able to do this under the rule. WRAP’s method matches
EPA’s general approach, but is more refined and statistically
sophisticated than the default EPA procedure.

C. Source Contribution to Haze

PM Source Apportionment Technology Analysis:  The PSAT analysis provides
useful qualitative information about source contributions to visibility
impairment. It is appropriate for states to consider the results of
this analysis in their reasonable progress demonstrations.

Weighted Emissions Potential Analysis:  The Weighted Emissions Potential
analysis is a simple integration of emissions and wind data. It is
potentially useful as a screening tool, but has a number of limitations.
It should be used only in conjunction with other methods.

Organic Aerosol Tracer Analysis:  The organic aerosol tracer analysis
made a good case for the fact that there is a significant amount of
uncontrollable, organic aerosols contributing to haze in the West.
However, further analysis would be required in order to use this
knowledge to change assumptions about natural conditions or to make the
case that it is unreasonable to further control anthropogenic organic
aerosols.

Positive Matrix Factorization analysis:  The positive matrix
factorization analysis is useful for identifying source categories
impacting Class I areas as long as those source categories were
evaluated in the analysis.

Causes of Dust Analysis:  The Causes of Dust Analysis provides a
reasonable account of transcontinental transport of dust. The techniques
were more successful for spring events than other times of the year. If
a particular Class I area is significantly impacted by coarse mass, this
analysis could be useful in characterizing the extent to which those
coarse mass impacts are controllable by the state.

D. Reasonable Progress Goals.

The reasonable progress goals are based on the WRAP’s projection of
visibility impairment at Class I areas in 2018. This projection is
derived from the results of the visibility modeling. The visibility
modeling depends on the meteorological modeling and the emissions
inventory.

Emissions Inventory:  The emissions inventory was completed in
accordance with EPA guidance and the final versions of the 2002 and 2018
inventories were complete, recent and accurate. The inventories should
be adequate for all of the Class I areas covered by the WRAP analysis. 

Meteorological Modeling:  The meteorological model and methodology used
by WRAP was state-of-the-science at the time the modeling was conducted.
The performance of the model was adequate for use in developing the
reasonable progress goals. For future modeling, additional attention
should be paid to performance in the desert southwest. The performance
of the model could be further improved for this area.

Visibility Modeling:  The visibility model and methodology used by WRAP
was state-of-the-science at the time the modeling was conducted. The
performance of the model was adequate for use in developing the
reasonable progress goals. For future modeling, the performance should
be analyzed at the geographic sub-region level. Evaluating performance
over such a large area can mask performance problems at particular
Class I areas. Any sub-region performance analysis should examine the
effects of any remaining performance concerns with the meteorological
modeling. Given the broad geographic scope of the emissions inventories
and of the model performance analysis, a more detailed analysis of the
model performance and relevant emissions inventory source categories
may be required for some Class I areas.

Appendix: Accessing WRAP Technical Products

The following text in italics is the executive summary of a document
created by WRAP titled “TSS Roadmap and Users Guide”.  The WRAP
Technical Support System (TSS) is publicly accessible over the internet
and is the primary method WRAP uses to convey technical products to
States and Tribes in support of their RH SIP development efforts.

The Western Regional Air Partnership’s (WRAP) Technical Support System
(TSS) is intended to present and disseminate technical regional haze
planning results for the 116 Class I areas in the WRAP region on an
ongoing basis, identify associated technical data and analysis products
and resources, and describe how these data and results were derived and
relate to one another.  The TSS presents air quality indicators for
regional haze using a “weight-of-evidence” approach as suggested in
EPA guidance – monitoring, emissions, source apportionment, and
visibility modeling results.  The TSS data displays and analysis results
are formatted to comply with the metrics and specific requirements in
the EPA Regional Haze Rule (RHR) and are being used by states and EPA
regional offices in the WRAP region, in the technical support documents
for State or Federal Implementation Plans (SIPs and FIPs) required for
each Class I area under the RHR.  The methodologies for technical data
collection and processing, quality assurance and control, and analysis
activities are documented on the TSS.  

Much of the data and regional analysis results on the TSS are also
suitable, and have been used for other air quality analysis and planning
purposes, because they were developed at the direction of, and through
the collaborative committee and workgroup efforts within the WRAP
organization.  A significant amount of more detailed data and/or
analysis results also exist in data support systems (VIEWS, EDMS, FETS)
or separate projects (RMC, CoHA), all of which support the TSS summary
data developed to support RHR air quality planning.  These systems or
projects are listed, described, and linked to on the Projects section of
the TSS.  Also, many related reports for specific source sectors,
analysis of regional impacts to Class I areas, and control strategy
analyses are found under individual Committees, Forums, and Workgroups
on the WRAP website – access to these results is generally not found
on the TSS due to their geographic- and/or source-specific nature, and
may be applied by the appropriate regulatory jurisdiction for RHR
planning as that agency wishes.  This version of the WRAP website moved
into archive status in early 2010, but the links will remain accessible.

On the TSS under the “Home” section, links to administrative and
reference information are provided to assure feedback to keep the TSS
current.  Under the “Resources” section, descriptions,
documentation, and access to tools and data are provided, for the Haze
Planning, Monitoring, Emissions, Modeling, and Apportionment results
required in a State or Federal Implementation Plan under the Regional
Haze Rule, as well as the subsequent implementation of those plans. 

Required Technical Elements on TSS for Regional Haze Implementation
Plans

Monitored baseline visibility conditions (2000-04) for each Class I
area, plus subsequent data

Natural conditions estimates for each Class I area (2064 target)

Uniform glide slope for each Class I area

Baseline emissions (2000-2004)

Projected 2018 emissions

Source apportionment at each Class I area for 2000-04 baseline and 2018
projections

Projected 2018 visibility conditions at each Class I area
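The uniform glide slope listed above is a linear interpolation in deciviews from the baseline period (2000-2004, anchored at 2004) down to the estimated natural conditions in 2064. A minimal sketch, using hypothetical baseline and natural-condition values for an example Class I area:

```python
def glide_slope_dv(baseline_dv, natural_dv, year,
                   base_year=2004, target_year=2064):
    """Deciview value on the uniform rate-of-progress (glide slope)
    line for a given year, interpolating linearly from the baseline
    (20% worst days) to the 2064 natural-conditions target."""
    frac = (year - base_year) / (target_year - base_year)
    return baseline_dv - frac * (baseline_dv - natural_dv)

# Hypothetical area: 14.6 dv baseline, 7.4 dv natural estimate.
dv_2018 = glide_slope_dv(14.6, 7.4, 2018)
print(round(dv_2018, 2))  # 12.92 dv on the glide slope in 2018
```

Comparing the projected 2018 visibility against this line is one way the glide slope informs the reasonable progress goals.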



The “Projects” section of the TSS provides brief descriptions of the
data systems and projects, and links to the more detailed and voluminous
results found in each system or project.  Finally, the “Partners”
section provides links to the team behind the TSS, including
descriptions of their expertise and experience.

All data visualization and analysis tools on the TSS have integrated
access to “Help” for the user, via the master TSS “Getting
Started” document, an HTML page.  The Monitoring, Emissions, Modeling,
and Source Apportionment subsections contain detailed “Methods”
descriptions, identifying how datasets and analysis tools were prepared
and how they are used on the TSS.

http://www.epa.gov/air/caa/

http://www.epa.gov/air/visibility/regional.html#thefive

http://www.epa.gov/fedrgstr/EPA-AIR/1999/July/Day-01/a13941.pdf

EPA, 2003, Guidance for Tracking Progress Under the Regional Haze Rule,
EPA-454/B-03-004, September 2003, EPA OAQPS; web page:
http://www.epa.gov/ttn/oarpg/t1pgm.html

direct link: http://www.epa.gov/ttn/oarpg/t1/memoranda/rh_tpurhr_gd.pdf

EPA, 2003, Guidance for Estimating Natural Visibility Conditions Under
the Regional Haze Program, EPA-454/B-03-005, September 2003, EPA OAQPS;
web page: http://www.epa.gov/ttn/oarpg/t1pgm.html

direct link: http://www.epa.gov/ttn/oarpg/t1/memoranda/rh_envcurhr_gd.pdf

Trijonis, J.C., et al., 1990, "Visibility: Existing and Historical
Conditions-Causes and Effects", chapter 24 in NAPAP State of Science &
Technology, Vol. III; web page:
http://vista.cira.colostate.edu/improve/Publications/Principle_pubs.htm

Pitchford, Marc, 2006, "New IMPROVE algorithm for estimating light
extinction approved for use", The IMPROVE Newsletter, Volume 14, Number
4, Air Resource Specialists, Inc.; web page:
http://vista.cira.colostate.edu/improve/Publications/news_letters.htm

direct link:
http://vista.cira.colostate.edu/improve/Publications/NewsLetters/IMPNews4thQtr2005.pdf

 "WRAP IMPROVE Data Substitutions", April 3, 2007;   

web page:  HYPERLINK
"http://vista.cira.colostate.edu/TSS/Results/Monitoring.aspx"
http://vista.cira.colostate.edu/TSS/Results/Monitoring.aspx    

direct link  HYPERLINK
"http://vista.cira.colostate.edu/docs/wrap/Monitoring/WRAP_Data_Substitu
tion_Methods_April_2007.doc"
http://vista.cira.colostate.edu/docs/wrap/Monitoring/WRAP_Data_Substitut
ion_Methods_April_2007.doc 

 "Natural Haze Levels II: Application of the New IMPROVE Algorithm to
Natural Species Concentrations Estimates; Final Report by the Natural
Haze Levels II Committee to the RPO Monitoring/Data Analysis Workgroup",
presentation at WRAP Attribution of Haze Workgroup Meeting, July 26-27,
2006, Denver, CO. web page:  HYPERLINK
"http://vista.cira.colostate.edu/improve/Publications/GrayLit/gray_liter
ature.htm"
http://vista.cira.colostate.edu/improve/Publications/GrayLit/gray_litera
ture.htm   direct link:  HYPERLINK
"http://vista.cira.colostate.edu/improve/Publications/GrayLit/029_Natura
lCondII/naturalhazelevelsIIreport.ppt"
http://vista.cira.colostate.edu/improve/Publications/GrayLit/029_Natural
CondII/naturalhazelevelsIIreport.ppt  

EPA’s MOBILE6 model is available at http://www.epa.gov/OMSWWW/m6.htm.

The final version of NONROAD (NONROAD2005, available at
http://www.epa.gov/otaq/nonrdmdl.htm) was released after the work in
this project was completed.

National Mobile Inventory Model is available at
http://www.epa.gov/OMSWWW/nmim.htm.

Pollack, A.K., R. Chi, C. Lindhjem, C. Tran, P. Chandraker, P.
Heirigs, L. Williams, S. S. Delaney, M. A. Mullen, and D. B. Thesing.
2004. “Development of WRAP MOBILE Source Emission Inventories.”
Prepared for Western Governors’ Association, Denver, Colorado.

 Port of Los Angeles. 2001 Baseline Emissions Inventory, prepared by
Starcrest Consulting Group, LLC, June, 2004.

http://vista.cira.colostate.edu/docs/wrap/emissions/OffshoreEmissions.doc

see http://www.epa.gov/ttn/chief/ap42/ch13/index.html.

“Emissions Modeling” at
http://vista.cira.colostate.edu/docs/wrap/emissions/EmissionsOverview.doc

Plan02d specification at
http://pah.cert.ucr.edu/aqm/308/spec_sheets/SpecSheet_Plan02d_11_03_2008.doc

Prb18b specification at
http://pah.cert.ucr.edu/aqm/308/spec_sheets/SpecSheet_PRP18b_Aug11_2009final.doc

http://www.mmm.ucar.edu/mm5/

 Dudhia, J., 1993. “A non-hydrostatic version of the Penn State/NCAR
Mesoscale Model: validation tests and simulation of an Atlantic cyclone
and cold front.” Mon. Wea. Rev. 121, pp. 1493-1513.

 Grell, G.A., J. Dudhia, and D.R. Stauffer, 1994.  “A description of
the Fifth Generation Penn State/NCAR Mesoscale Model (MM5).” NCAR
Technical Note, NCAR TN-398-STR, 138 pp.

Kemball-Cook, S., Y. Jia, C. Emery, R. Morris, Z. Wang and G. Tonnesen.
2005. “Annual 2002 MM5 Meteorological Modeling to Support Regional Haze
Modeling of the Western United States.” Draft Final Report. ENVIRON
International Corporation and UC Riverside. Available at:
http://pah.cert.ucr.edu/aqm/308/mm5.shtml

ENVIRON and UCR. 2004. “2002 Annual MM5 Simulations to Support WRAP
CMAQ Visibility Modeling for the Section 308 SIP/TIP.” Draft Protocol.
ENVIRON International Corporation and the University of California at
Riverside. April. Available at:
http://pah.cert.ucr.edu/aqm/308/mm5.shtml

Kemball-Cook, S., Y. Jia, C. Emery, R. Morris, Z. Wang and G. Tonnesen.
2004. “MM5 Sensitivity Simulations to Identify a More Optimal MM5
Configuration Sensitivity Testing.” Revised Report. ENVIRON
International Corporation and UC Riverside. Available at:
http://pah.cert.ucr.edu/aqm/308/mm5.shtml

 Emery, C.A. and E. Tai. 2001. “Enhanced meteorological modeling and
performance evaluation for two Texas ozone episodes.” Prepared for the
Texas Natural Resource Conservation Commission, by ENVIRON International
Corporation.

A list of the run segments and their date/time durations is provided at
http://pah.cert.ucr.edu/aqm/308/2002met/mm5/2002_MM5_dateskey_UCR.xls

http://vista.cira.colostate.edu/docs/wrap/Modeling/AirQualityModeling.doc

http://www.cmascenter.org/

http://pah.cert.ucr.edu/aqm/308/reports/final/2002_MPE_report_main_body_FINAL.pdf

 U.S. EPA. 2001. “Guidance for Demonstrating Attainment of Air Quality
Goals for PM2.5 and Regional Haze”, Draft Report, U.S. Environmental
Protection Agency, Research Triangle Park, NC.

 U.S. EPA. 2003a. “Guidance for Estimating Natural Visibility
Conditions under the Regional Haze Rule.” EPA-454/B-03-005. September
2003

U.S. EPA. 2006. Guidance on the Use of Models and Other Analyses for
Demonstrating Attainment of Air Quality Goals for Ozone, PM2.5, and
Regional Haze – Draft 3.2. U.S. Environmental Protection Agency,
Office of Air Quality and Planning Standards, Research Triangle Park,
North Carolina. September.
(http://www.epa.gov/scram001/guidance/guide/draft_final-pm-O3-RH.pdf).

 U.S. EPA. 2003b. “Guidance for Tracking Progress under the Regional
Haze Rule.” U.S. EPA, EPA-454/B-03-004. September 2003.

ENVIRON. 2006. “User’s Guide – Comprehensive Air-quality Model
with extensions, Version 4.30.” ENVIRON International Corporation,
Novato, California. (available at http://www.camx.com).

http://vista.cira.colostate.edu/docs/wrap/attribution/PSATMethods.doc

available at http://vista.cira.colostate.edu/TSS/Results/SA.aspx

 Morris, R.E., G.Y., C.E., G.W., B.K. 2005. “Recent Advances in
One-Atmospheric Modeling Using the Comprehensive Air-quality Model with
Extensions.” Presented at the 98th Annual Air and Waste Management
Conference, Minneapolis, MN. June. 

 Yarwood, G., R.E. Morris, G. Wilson. 2004. “Particulate Matter Source
Apportionment Technology (PSAT) in the CAMx Photochemical Grid Model."
Presented at the ITM 27th NATO Conference- Banff Centre, Canada,
October. 

  HYPERLINK
"http://vista.cira.colostate.edu/docs/wrap/attribution/WEPMethods.doc"
http://vista.cira.colostate.edu/docs/wrap/attribution/WEPMethods.doc 

  HYPERLINK "http://pah.cert.ucr.edu/aqm/308/"
http://pah.cert.ucr.edu/aqm/308/  

  HYPERLINK "http://coha.dri.edu/index.html"
http://coha.dri.edu/index.html  

 Further information regarding the Organic Aerosol Tracer analysis
technique can be found in the TSS document
http://pah.cert.ucr.edu/aqm/308/reports/final/2006/WRAP-RMC_2006_report_FINAL.pdf,
page 73.

 Further information regarding the PMF analysis can be found on the CoHA
web site: http://www.coha.dri.edu/web/general/tools_PMFModeling.html.

 Further information regarding the Causes of Dust analysis can be found
on the CoHA web site: http://www.coha.dri.edu/dust/index.html.

 Cheng, TT, Lu, DR, Wang, GC, Xu, YF. Chemical characteristics of Asian
dust aerosol from Hunshan Dake sandland in Northern China. Atmospheric
Environment, 2005, 2903-2911.

 Park, SU, Chang, LS, Lee, EH. Direct radiative forcing due to aerosols
in East Asia during a Hwangsa (Asian dust) event observed on 19-23 March
2002 in Korea. Atmospheric Environment, 2005, 2593-2606.

 Zhang, RJ, Arimoto, R, An, JL, Yabuki, S, Sun, JH. Ground observations
of a strong dust storm in Beijing in March 2002. Journal of Geophysical
Research, 2005, D18S06.

 Darmenova, K, Sokolik, IN, Darmenov, A. Characterization of east Asian
dust outbreaks in the spring of 2001 using ground-based and satellite
data. Journal of Geophysical Research, 2005, D02204.

 Perry, KD, Cahill, TA, Eldred, RA, Dutcher, DD, Gill, T. Long-range
transport of North African dust to the eastern United States. Journal of
Geophysical Research, 1997, 102 (D10): 11225-11238.

 VanCuren, RA, Cahill, TA. Asian aerosols in North America: Frequency
and concentration of fine dust. Journal of Geophysical Research, 2002,
107 (D24): 4804.

 http://pah.cert.ucr.edu/aqm/308/bart.shtml

 http://pah.cert.ucr.edu/aqm/308/reports/final/2006/WRAP-RMC_2006_report_FINAL.pdf

 http://pah.cert.ucr.edu/aqm/308/bart/WRAP_RMC_BART_Protocol_Aug15_2006.pdf

 Interagency Workgroup on Air Quality Modeling (IWAQM) Phase 2 Summary
Report and Recommendations for Modeling Long Range Transport Impacts
(EPA-454/R-98-019) 

 http://www.nature.nps.gov/air/Pubs/pdf/flag/FlagFinal.pdf

 EPA. 2003. “Revisions to the Guideline on Air Quality Models:
Adoption of a Preferred Long Range Transport Model and Other
Revisions”; Final Rule. Fed. Reg./Vol. 68, No. 72/Tuesday, April 15,
2003/Rules and Regulations. 40 CFR Part 51.

 http://pah.cert.ucr.edu/aqm/308/docs.shtml

 EPA. 2005. “Regional Haze Regulations and Guidelines for Best
Available Retrofit Technology (BART) Determinations”. Fed. Reg./Vol.
70, No. 128/Wednesday, July 6, 2005, Rules and Regulations, pp.
39104-39172. 40 CFR Part 51, FRL-7925-9, RIN AJ31.

 ERG, Inc., 2006: Identification of BART-eligible Sources in the WRAP
Region, Draft Report. Prepared for the Western Regional Air Partnership
by ERG, Inc., Sacramento, CA. April 4, 2006.

 McNally, D.E. 2003. Annual Application of MM5 for Calendar Year 2001.
Prepared for U.S. EPA, Office of Air Quality and Planning Standards.
Prepared by Alpine Geophysics, Arvada, CO.

 Kemball-Cook, S., Y. Jia, C. Emery and R. Morris. 2005. Alaska MM5
Modeling for the 2002 Annual Period to Support Visibility Modeling.
Western Regional Air Partnership, Regional Modeling Center. 

 Baker, K. 2004a. Summer MM5 Performance. Midwest Regional Planning
Organization.

 Baker, K. 2004b. Monthly Rainfall Evaluation. Midwest Regional Planning
Organization.

 http://www.wrapair2.org/

 http://vista.cira.colostate.edu/tss/

 http://www.wrapair.org/RH_Rule_P51/index.html

 http://views.cira.colostate.edu/web/

 http://www.wrapedms.org/

 http://www.wrapfets.org/

 http://pah.cert.ucr.edu/aqm/308/

 http://www.coha.dri.edu/
