Summary of Comments and Responses on Update of Continuous Instrumental Test Methods

Prepared March 2006

Proposed in the Federal Register on October 10, 2003

TABLE OF CONTENTS

1.0  Summary
2.0  List of Commenters
3.0  Public Comments and Responses
     3.1   Rationale for Revisions
     3.2   Uncertainty of Data
     3.3   Sampling System Bias, Calibration Drift, and Analyzer Calibration Error
     3.4   Interference Test
     3.5   Alternative Dynamic Spike Check
     3.6   Sampling Traverse Points
     3.7   Sampling Dilution Systems
     3.8   Moisture Removal System
     3.9   Equipment Heating Specifications
     3.10  Technology-Specific Analyzers
     3.11  Calibration Gases
     3.12  NO2 Converter Efficiency Test in Method 7E
     3.13  General Comments
           3.13.1  Supportive Comments
           3.13.2  Adverse Comments
           3.13.3  Other Comments
     3.14  Specific Comments on Method 3A
     3.15  Specific Comments on Method 6C
     3.16  Specific Comments on Method 7E
     3.17  Specific Comments on Method 10
     3.18  Specific Comments on Method 20

1.0  SUMMARY

On October 10, 2003, the U.S. Environmental Protection Agency (EPA) proposed amendments to five instrumental test methods in Appendix A of 40 CFR Part 60. Methods 3A, 6C, 7E, 10, and 20 determine diluent (oxygen and carbon dioxide), sulfur dioxide, nitrogen oxides, and carbon monoxide emissions from stationary sources. The methods were developed for boilers, electric utility plants, refinery catalytic cracking catalyst regenerators, and gas turbines covered under the New Source Performance Standards (NSPS) in 40 CFR Part 60. They were later adopted into the Clean Air Markets (Acid Rain) regulations and State and regional programs.

The test methods were not developed at the same time and do not contain consistent equipment and performance requirements. Currently, some methods require more up-to-date equipment than others and some have more stringent performance requirements than others. These dissimilarities have hampered the current trend of using the methods together in the field. The proposal noted collective changes that would render the methods easier to use by harmonizing their requirements. Obsolete requirements were also updated and flexibility was added by allowing alternatives to various equipment and performance specifications.

On August 27, 1997 (62 FR 45369), many of the updates in this action were proposed as part of a larger action that amended the stationary source testing and monitoring rules in 40 CFR Parts 60, 61, and 63. In that proposal, minor revisions and updates were made to all test methods and performance specifications, and they were revised into the new Environmental Monitoring Management Council (EMMC) format. Several commenters asserted that the preamble gave inadequate notice of the changes being made to the instrumental methods. They argued that the proposal provided an inadequate basis and purpose statement and that it misled readers into thinking that no substantive changes were being made to the methods. Due to the large number of changes we were making in the regulations at that time, and in light of the Section 307(d) requirements, the commenters requested that we address the instrumental method revisions through a separate proposal and not promulgate them with the rest of the package that was proposed on August 27, 1997.

We understood these concerns and stated our intention in the final rule (65 FR 61744) to repropose the revisions to the instrumental methods as a separate rule. The reproposal was published on October 10, 2003. We have considered the comments we received pertinent to these methods in the first proposal and are combining and addressing them in this document with the public comments received on the October 10, 2003 proposal. This comment summary and the Agency's responses serve as the basis for the revisions made between proposal and promulgation.

2.0  LIST OF COMMENTERS

First Proposal on August 27, 1997

Item Number in
Docket A-97-12    Commenter and Affiliation

IV-D-03           Kevin L. Kitchen, Lehi, Utah
IV-D-05           Mel S. Schulze, Hunton & Williams
IV-D-06           Phillip Juneau, Emission Monitoring, Inc.
IV-D-08           R. T. Shigehara, Vice President, Environmental Monitoring, Inc.
IV-D-18           Ed S. Surla, Chief, Indiana Dept of Environmental Management
IV-D-19           Raymond A. Walters, New Hampshire Dept of Environmental Services
IV-D-20           R. W. Orchowski, Duquesne Light
IV-D-21           Debra J. Jezouit, Baker & Botts L.L.P.
IV-D-24           Frances Cameron, California EPA
IV-D-25           Peter Ironwode, Missouri Dept of Natural Resources
IV-D-26           Ray Walters, New Hampshire Air Resources Division
IV-D-28           Stephen C. Anderson, CT Dept of Environmental Protection
IV-G-01           Richard L. White, Vice President, TU Services
IV-G-02           Ed S. Surla, Chief, Indiana Dept of Environmental Management
IV-G-04           Billy J. Mullins, President, METCO Environmental

2.0  LIST OF COMMENTERS (Cont'd)

Re-proposal on October 10, 2003

Item Number in
Docket OAR-2002-0071    Commenter and Affiliation

OAR-2002-0071-0002      Tom Gasloli, Michigan Dept of Environmental Quality
OAR-2002-0071-0003      Tom Gasloli, Michigan Dept of Environmental Quality
OAR-2002-0071-0004      Mark Patrick, TRC Environmental, Salt Lake City, UT
OAR-2002-0071-0005      Anonymous
OAR-2002-0071-0006      Anonymous
OAR-2002-0071-0007      Anonymous
OAR-2002-0071-0008      Brian Glendening, Consumers Energy Company
OAR-2002-0071-0009      Reggie Davis, Vice Pres & Gen Mgr, Spectrum Systems, Inc.
OAR-2002-0071-0010      Bruce Rising, Gas Turbine Association
OAR-2002-0071-0011      Kevin L. Kitchen
OAR-2002-0071-0012      Bob Finken
OAR-2002-0071-0013      Suzanne Blackburn, San Diego County Air Pollution Control District
OAR-2002-0071-0014      Peter Pakalnis, LEHDER Environmental Services
OAR-2002-0071-0015      Phillip Juneau, B3 Systems, Inc.
OAR-2002-0071-0016      Phillip Juneau, B3 Systems, Inc.
OAR-2002-0071-0017      Anonymous
OAR-2002-0071-0018      Anonymous
OAR-2002-0071-0019      Anonymous
OAR-2002-0071-0020      Michael A. Klein, New Jersey Dept of Environmental Protection
OAR-2002-0071-0021      Lee Cecchi, Precision Emissions, Inc.
OAR-2002-0071-0022      Scott Evans, Clean Air Engineering
OAR-2002-0071-0023      Joe Gregoria, M&C Products Analysis Technology, Inc.
OAR-2002-0071-0024      Phillip Buillemette, Flint Hill Resources
OAR-2002-0071-0025      Barbara A. Kwetz, Massachusetts Dept of Environmental Protection
OAR-2002-0071-0026      Mark W. Bailey, Oregon Dept of Environmental Quality
OAR-2002-0071-0027      Bighorn Environmental
OAR-2002-0071-0028      GE Mostardi Platt
OAR-2002-0071-0029      John E. Pinkerton, National Council for Air & Stream Improvement, Inc.
OAR-2002-0071-0030      Daniel E. Fitzgerald, ARI Environmental, Inc.
OAR-2002-0071-0031      John Johnston/Russell DiRaimo, Stork Southwestern Laboratories
OAR-2002-0071-0032      Lisa Beal, Interstate Natural Gas Association of America
OAR-2002-0071-0033      Dominion, Emission Monitoring Support Group
OAR-2002-0071-0034      Kenneth R. Loder, TRC Environmental Solutions
OAR-2002-0071-0035      Michael W. Hartman, Air-Tech Environmental
OAR-2002-0071-0036      Scott Evans, Clean Air Engineering
OAR-2002-0071-0037      John Smith, Texas Commission for Environmental Quality
OAR-2002-0071-0038      Lauren Freeman, Hunton & Williams for UARG
OAR-2002-0071-0039      Bill Mayhew, Source Test and Consulting Services, Inc.
OAR-2002-0071-0040      Anonymous
OAR-2002-0071-0041      Stephen E. Woock, Weyerhaeuser Company
OAR-2002-0071-0042      David P. Duncan, TXU Energy
OAR-2002-0071-0043      David Rossman, Horizon Engineering
OAR-2002-0071-0044      Jack Herbert, Oregon Dept of Environmental Quality
OAR-2002-0071-0045      David Rossman, Horizon Engineering
OAR-2002-0071-0046      Kenneth R. Loder, TRC Environmental
OAR-2002-0071-0047      Jim Steiner, TRC Environmental
OAR-2002-0071-0048      Calvin Loomis, Bison Engineering
OAR-2002-0071-0049      Elvin D. Lang, Chief, Emissions Measurement Section, Alabama Department of Environmental Management
OAR-2002-0071-0050      Michael W. Stroben, EHS Director, Duke Energy
OAR-2002-0071-0051      Barbara A. Kwetz, Massachusetts Dept of Environmental Protection
OAR-2002-0071-0052      Barbara A. Kwetz, Massachusetts Dept of Environmental Protection
OAR-2002-0071-0053      David C. Foerter/Richard A. Hovan, Institute of Clean Air Companies
OAR-2002-0071-0054      David C. Foerter/Richard A. Hovan, Institute of Clean Air Companies
OAR-2002-0071-0055      Gregory R. Sims, Sr. Project Manager/Technical Director, Weston Solutions
OAR-2002-0071-0056      Gregory R. Sims, Sr. Project Manager/Technical Director, Weston Solutions
OAR-2002-0071-0057      Gregory R. Sims, Sr. Project Manager/Technical Director, Weston Solutions
OAR-2002-0071-0058      Gregory R. Sims, Sr. Project Manager/Technical Director, Weston Solutions
OAR-2002-0071-0059      Michael Harley, Harley Engineering and Technologies (HEAT)
OAR-2002-0071-0060      Leanne J. Tippett, Director, State of Missouri Department of Natural Resources
OAR-2002-0071-0061      Elvin D. Lang, Chief, Emissions Measurement Section, Alabama Department of Environmental Management

3.0  PUBLIC COMMENTS AND RESPONSES

3.1  Rationale for Revisions
1. Comment: We do not believe the preamble language for the proposal or reproposal adequately reflected the changes being proposed. No supporting documents, technical analysis, discussion, or data to support the proposed methods were included in the electronic docket, nor were adequate basis and purpose statements supplied. Section 307(d) of the Clean Air Act (CAA), which applies to this rulemaking, requires that the Agency provide a statement of basis and purpose for its proposal. The inadequate preamble misled readers into thinking that the proposal contained no substantive changes to the test methods. One purpose of this provision is to provide adequate notice to potential commenters regarding the content of and rationale underlying the proposal. The EPA should not move forward to final rulemaking but should withdraw these proposed revisions and develop a complete technical, legal, and policy record to support the proposed revisions, with full public participation. The EPA has not provided the support CAA Section 307(d) requires for a meaningful opportunity to comment or to support (with actual data and technical documentation) a final rule. One commenter felt that research to validate the proposed procedures under Method 301 or any other means should have been supplied. (IV-D-05, IV-D-20, IV-D-21, 0032, 0038, 0039, 0050)

Response: We made an effort to provide an adequate rationale and basis for the proposal since it was a reproposal of what was published in an October 17, 2000 notice that public commenters felt lacked adequate rationale. Our goal was to harmonize, update, and simplify the instrumental test methods. In the proposal, we enumerated the major revisions and explained that most were minor in nature and based on practical, common sense needs that are generally understood by affected parties. That is why the preamble gave a general discussion of the revisions and did not include supporting data and documentation. We believe this provided adequate public notice for the revisions. However, we understand the concern that some of the performance test limits and the use of uncertainty to determine compliance may constitute major changes. In these instances, we were seeking public input on these ideas. We have formulated the final rule so that these points will not result in major changes.

2. Comment: Implementation of the new requirements will likely increase the cost of established methods. EPA has not demonstrated the necessity of the new requirements to achieve improved emission data or justified the costs associated with the new requirements. (0032, 0038)

Response: We believe the changes we have made to add consistency and streamline the methods alone should facilitate the methods' use and reduce costs. We have added flexibility in the choice of analyzer technology where we had only allowed specific types to be used. We believe the overall benefits the revisions bring in enhancing the certainty of data and tester efficiency will justify any additional costs incurred.

3. Comment: The proposed revisions appear to be a collection of many different ideas and concepts, some of which are conflicting, and contain numerous apparent errors, misstatements, and/or inconsistencies, as are detailed later. There are so many unresolved issues and deficiencies that it is impossible to determine the impacts that the proposed revisions will have on the regulated industries or the organizations that perform such tests. It is our position that the proposed rule changes, while a step in the right direction, are not sufficiently complete that minimal changes based on comments will allow for final promulgation. With widespread access to the internet, it would seem beneficial to try to come to a broad consensus between industry, testing organizations, and regulators prior to publishing another draft in the Federal Register. (0009)

Response: We received a large number of comments in 61 comment letters that addressed most aspects of the methods and made recommendations for improvement. We have made major revisions in the final rule to reflect these recommendations and believe the resulting methods meet the primary objectives to update and harmonize them. Issues of concern on the proposal must be addressed during the public comment period or through the requesting of a public hearing. The closing of the public comment period ends the opportunity for public interchange, and further collaboration in designing the final rule beyond this point is legally precluded.

3.2  Uncertainty of Data
4. Comment: The EPA proposes to add Section 1.3 to each of the methods to address the data quality objectives (DQO) which "define the quality of data you need for the test." The only DQO actually identified in this section, however, is the requirement to evaluate data relative to the applicable emission standard. The remainder of the section allows a "data user" (which EPA explains is a regulating agency) to exercise discretion regarding whether or not to accept data that has been collected in accordance with the method under some undefined DQO. To provide information related to this undefined DQO, EPA requires that sources calculate the "uncertainty estimate." Significantly, the method contains no performance specification against which to compare this calculated "uncertainty," but apparently leaves it up to the regulating agency to determine whether data are of sufficient quality to meet the agency's "testing objectives." EPA must identify the acceptable level of uncertainty, and assess the impact of that criterion on the underlying requirements for use of the data. The sole purpose of including QA/QC procedures in the method is to define the acceptability of the data. Because these are compliance test methods, sources relying on the methods must have certainty that properly obtained results will be sufficient to satisfy their compliance obligations. (0008, 0038)

Response: The DQOs for a specific test are defined by the data user since these objectives will vary according to the purpose of the collected data. For example, the quality of data collected to show compliance with an emission standard is more important in the concentration vicinity of the limit than at other concentration levels. Data quality in a market-based trading program is most important wherever the emission concentrations may fall. Other data quality needs may differ and the DQOs are set accordingly. Data user discretion becomes a part of the data quality assessment (DQA) of the test results.

The proposed requirement to calculate the uncertainty of each test run has been dropped. At a future date, EPA may issue a guidance document quantifying the errors associated with the instrumental methods in their present form. A similar exercise was undertaken in 1970 for the original reference methods (see Shigehara, Todd, and Smith, "Significance of Errors in Stack Sampling Measurements" as cited in the Method 2 bibliography). The objective of such guidance would be to encourage advances in emission testing technology.

5. Comments: It is imperative to preserve the concept that properly quality assured reference method data is the real value. Using uncertainty produces a grey area that will incite many frivolous lawsuits and is ultimately confusing and incorrectly applied in this case. The tester and facility need to have a reasonable assurance that they have met the requirements based on a properly quality assured test. Data Quality Objectives and uncertainty present a number of very serious issues, with a Pandora's box related to use of continuous emissions monitoring systems (CEMS) in the Clean Air Markets programs and other compliance programs. There are too many individuals ready to use the perception of uncertainty to continuously challenge any decision made related to compliance. Uncertainty is a statistical concept that is too easy for those without a thorough understanding of statistics to misapply, and these calculations should remain in the ivory tower.

The instrumental methods are "compliance" methods rather than "violation" methods. While the concern may appear to be merely one of "semantics," it is both very real and very significant. The instrumental methods included in this notice are the federally defined methods for the determination of compliance with the requirements of various federal NSPS and Clean Air Markets programs as well as federally enforceable State Implementation Plans. The requirement and concept of the federal test methods are to provide a means of demonstrating "compliance" with the applicable requirements on the basis of the test method. The addition of an uncertainty calculation opens the door for third parties and regulatory agencies to challenge the validity of compliance demonstrations where the results of a test (using the federally required method) are below or very close to the standard.

The application of "uncertainty" calculations will subject those who have made a "good faith" demonstration of compliance (based on one of the instrumental methods in this notice) to the possibility of enforcement. In addition, the inclusion of an "uncertainty" evaluation would increase the cost and duration of enforcement actions, to the detriment of both the parties and the environment. Since CEMS are evaluated against instrumental methods through RATAs and most CEMS use the same technology as the reference methods, the uncertainties of measurements with compliance CEMS might be even larger. The application of uncertainty calculations could result in many cases similar to the Louisiana Pacific opacity case and instability in federal Clean Air Market or other emission allowance programs. Please do not include any requirements or references concerning uncertainty in the final instrumental methods. (0039, 0059)

Response: See response to Comment 4.

6. Comment: The proposed requirements regarding measurement uncertainty and the use of uncertainty to determine compliance status are inappropriate. In particular, we object to EPA's recommendations in the methods that permitting authorities may use the uncertainty data to ascertain the compliance status of a source. We note that this approach does not address the fact that existing emission limitations have been developed using emission data derived principally from these same test methods with no consideration of uncertainty. We also note that the proposed revisions failed to provide a definition for uncertainty, and the proposed equation for uncertainty (Equation 7E-5) reflects only two factors, related to bias, rather than the true uncertainty of the measurement. (0028, 0032)

Response: See response to Comment 4.

7. Comment: It was asserted in the proposal that the use of uncertainty estimates encourages better technology. This assertion is unfounded and appears to be only the opinion of the author. The use of uncertainty estimates may assist in the evaluation of data; however, as applied in this method it only serves to complicate implementation of the method significantly. The entire section on data quality objectives should be removed from the method. Further, this is in direct opposition to the goals of simplifying and reducing costs. (0032, 0043)

Response: See response to Comment 4.

8. Comment: (a) The preamble to the reproposal noted that "many commenters objected to the proposed bias correction equation and argued it was too complicated. We are proposing to drop the bias correction requirement in favor of calculating the level of uncertainty for a run." The proposed bias correction equation used a least squares calculation, which did complicate bias corrections without significant benefits. Furthermore, the previously proposed bias correction was based on an incorrect statistical premise. Rather than going back to the original, simple bias correction equation (which corrected for both bias and drift), EPA has decided to drop altogether any bias correction and has chosen to complicate enforcement by giving the regulating agency the option to determine whether the test results are over or under the emission limit. By adjusting the data for bias, the questions in Section 1.3.1 become moot. (b) The issue of whether to accept data of lesser quality has nothing to do with the determination of bias. The issue is how much is a small fraction? How will knowing the bias (which is generally negative) help the agency to decide whether to accept data of lesser quality, as defined by EPA? The issue exists whether the bias is zero or not. Results with bias correction are of better quality than results without bias correction, and the bias correction should be retained. Will EPA now calibrate the dry gas meter for conventional methods and not apply the correction factor, but instead determine an uncertainty? (c) It is interesting to note that EPA has chosen to consider uncertainty for only two measurements (bias and converter efficiency) and has chosen to ignore the uncertainties incurred in all the rest of the measurements (calibration gas accuracy, calibration error, gas concentrations, F-factor (including Fo) variations, temperature, pressure, velocity, etc.). Why? (0009, 0032)

Response: See response to Comment 4.
9. Comment: The uncertainty calculation indicates that the bias tests will tell the user what the uncertainty of the data is. The bias test only tells the user how much bias is being imparted by the sampling system. The uncertainty of the data should be calculated using the 1-minute averages obtained during the test. It involves calculating the variation of the data (i.e., significant deviations) and plotting a confidence interval. At the end, the user could say he was 95 percent confident that all or some of the data fell below the emission limit. A bias test won't do this. (0035)

Response: See response to Comment 4.
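The confidence-interval approach the commenter describes can be sketched numerically. This is an illustration of the commenter's suggestion, not a procedure from the methods; the data are hypothetical, and the large-sample z value of 1.96 stands in for a proper t statistic at small sample counts.

```python
import statistics

def ci_upper_bound(minute_averages, z=1.96):
    """Upper bound of a two-sided 95% confidence interval on the run mean,
    using a large-sample (normal) approximation."""
    n = len(minute_averages)
    mean = statistics.mean(minute_averages)
    sem = statistics.stdev(minute_averages) / n ** 0.5  # standard error of the mean
    return mean + z * sem

# Hypothetical 1-minute NOx averages (ppm) from a single test run
run = [92.0, 95.5, 97.1, 94.8, 96.2, 93.9, 95.0, 96.8, 94.1, 95.6]
limit = 100.0
upper = ci_upper_bound(run)
if upper < limit:
    verdict = "below the limit with roughly 95% confidence"
else:
    verdict = "cannot conclude compliance at 95% confidence"
```

For this hypothetical run the interval's upper bound stays below the 100 ppm limit, which is exactly the kind of statement the commenter says a bias test alone cannot support.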

10. Comment: The wording in this section is confusing in a procedural sense, but, more importantly, the section is technically incorrect. It is not clear what EPA's uncertainty term really means, since it is undefined; it is inappropriately used and should be removed. The current uncertainty term does not consider data system resolution and accuracy, linearity, calibration gas accuracy, ASME or NIST type of analyses, etc., in the analysis. If EPA wants to use uncertainty, it is a complicated enough subject to dedicate an entire method to, as ASME has done, and it should apply to all test methods, not just instrumental test methods. (0039)

Response: See response to Comment 4.

11. Comment: The reason for having performance and design specifications is to limit uncertainties. If the uncertainties are too large, then EPA should revise the specifications. (0032)

Response: We agree. See the response to Comment 4.

12. Comment: Now that EPA has required the uncertainty to be determined, how will it be used? Will EPA now revise the various subsections of the applicable subparts to legalize its use? The application of uncertainty estimates is unclear, subjective, and unnecessary. We cannot find any criteria for the uncertainty values, and it is not clear how they might be used. (0032, 0038)

Response: See the response to Comment 4.

13. Comment: A complete explanation of upper and lower uncertainty, to include the acceptability requirements, is needed. (0049)

Response: See response to Comment 4.

14. Comment: It is not obvious how the calculated uncertainty is to be used if the measured concentration does not exceed an emission limit, but the adjusted concentration with uncertainty does. For example, assume a source with an emission limit of 100 ppm nitrogen oxides (NOx) conducts a 3-run test with results of 95, 98, and 101 ppm. If the bias factor is -0.05, then the uncertainty values would be 99.9, 103.1, and 106.3 ppm, assuming the 2nd term in the equation is zero. The measured values show a 3-run average below 100 ppm, while the 3-run average with uncertainty is greater than 100. It is unclear what conclusion the regulatory agency would make in this case. (0029)

Response: See the response to Comment 4.
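The commenter's arithmetic can be reproduced under one plausible reading of the proposed calculation, namely dividing each run average by (1 + bias factor) with the equation's second term set to zero. Both this reading and the numbers below are illustrative; this is not the proposed Equation 7E-5 itself, and the results differ slightly from the commenter's rounded figures.

```python
def uncertainty_adjusted(run_ppm, bias_factor):
    # Assumed form of the adjustment: divide the measured run average by
    # (1 + bias factor), with the equation's second term taken as zero.
    return run_ppm / (1.0 + bias_factor)

runs = [95.0, 98.0, 101.0]                    # measured run averages, ppm NOx
limit = 100.0                                 # emission limit, ppm
adjusted = [uncertainty_adjusted(r, -0.05) for r in runs]

measured_avg = sum(runs) / len(runs)          # 98.0 ppm: passes on its face
adjusted_avg = sum(adjusted) / len(adjusted)  # ~103.2 ppm: fails after adjustment
```

The ambiguity the commenter raises is visible here: the same three runs average below the limit as measured but above it once adjusted, with no rule saying which number governs.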

15. Comment: The sampling system bias adjustment in existing Methods 3A, 6C, and 7E should be retained and included in the other instrumental test methods. Removal of the bias correction factor from Methods 3A, 6C, and 7E and replacement with a statement of uncertainty is a very bad idea and would: (a) decrease the quality of the measurement results, (b) complicate and confuse compliance determinations, and (c) significantly detract from determination of bias correction factors and thus the resulting data obtained in market-based programs, including the Acid Rain and NOx Budget trading programs. The Method 3A, 6C, and 7E sampling system bias correction factors are specifically that which the name implies. They adjust for the bias demonstrated to occur in the sampling system due to adsorption, absorption, and residual moisture and are based on the differences observed from the injection of calibration standards directly into the analyzer and the same standards injected through the entire sampling and analysis system. The observed deviations are not statistical uncertainty but are definite biases of known direction and magnitude that can and should be corrected. This approach is well understood and widely used. Within the instrumental methods, these corrections are justified because the analyzer calibration error test demonstrates that the system is linear and accurately measures the standards. After the analyzer calibration error specification has been performed, the sampling system bias checks are performed before and after each sample run and used to compensate for the observed bias and drift during the run. This approach is technically correct. This part of the instrumental methods is not broken and does not require fixing. (0009, 0016, 0021, 0032, 0034, 0038)

Response: See the response to Comment 4.
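The check the commenters describe, comparing direct and through-the-system injections of the same calibration gas, can be sketched as follows. The percent-of-span form and the example numbers are assumptions for illustration; the governing equation is the one in the promulgated methods.

```python
def system_bias_percent(system_response, direct_response, span):
    # Sampling system bias: the analyzer's response to a calibration gas
    # routed through the entire sampling system, minus its response to the
    # same gas injected directly, expressed as a percent of span.
    return 100.0 * (system_response - direct_response) / span

# Hypothetical SO2 check on a 250 ppm span: the upscale gas reads 201.0 ppm
# injected directly at the analyzer and 197.5 ppm through the probe, filter,
# and moisture-removal system.
bias = system_bias_percent(197.5, 201.0, 250.0)   # -1.4 percent of span
```

A negative result, as here, is the commenters' typical case: the sampling system loses a known, correctable amount of analyte rather than introducing statistical uncertainty.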

3.3  Sampling System Bias, Calibration Drift, and Analyzer Calibration Error
16. Comments: We disagree with eliminating the system bias correction and calibration drift test before and after each run. How about correcting for drift in a worst-case way and calling the run good if it still meets emissions standards? The analogy is correcting for a bad post-test leak check using Method 5. (0013, 0021, 0034, 0043)

Response: We agree and have retained the determination of calibration drift and system bias as it existed before.
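The retained correction uses the zero and upscale system responses taken before and after each run. A sketch of that calculation follows; the symbol roles mirror the Methods 3A/6C/7E convention, but the exact equation and notation should be taken from the promulgated methods, and the numbers here are hypothetical.

```python
def corrected_concentration(run_avg, zero_avg, upscale_avg, upscale_gas):
    # Drift/bias-corrected run concentration: subtract the average zero
    # response, then rescale by the known upscale gas value relative to the
    # average upscale system response.
    return (run_avg - zero_avg) * upscale_gas / (upscale_avg - zero_avg)

# Hypothetical run: measured average 84.0 ppm; pre/post zero system responses
# average 1.0 ppm; pre/post upscale system responses average 93.0 ppm for a
# 95.0 ppm calibration gas.
c = corrected_concentration(84.0, 1.0, 93.0, 95.0)   # about 85.7 ppm
```

Because the system read the 95.0 ppm upscale gas low, the correction adjusts the run average upward, compensating for both the zero offset and the system's low bias in one step.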

17. Comment: Performance criteria should be left relative to span, rather than modified to be a percent of the emission standard. Using a percent of the emission standard has no technical basis. Performance as a percentage of span is the way instrument manufacturers evaluate their instruments. The current limits are based on what a reasonably well-operated measurement system should be capable of. The new approach may also result in performance criteria that cannot be met under certain conditions and would relax the bias specification for sources significantly below the emission standard. We believe the current span specification limits how the tester chooses a span, and this specification ties the span to the emission standard. One commenter suggested that making the span no greater than twice the concentration of the emission standard would be more appropriate than the current limit of the span not being more than three times the concentration of the emission standard. The proposed requirement that emissions be between 20 and 100 percent of span should be sufficient to add uniformity to the span selection. Another commenter urged the retaining of the current span-based performance criteria and suggested adding safeguards to prevent the selection of an inappropriate instrument range that would relax the performance requirement. (0016, 0032)

Response: We believe the current allowable range of span choices is too broad to result in a fair stringency of performance criteria among testers choosing significantly different spans. Defining the span as twice the concentration of the emission standard, as the commenter suggested, appears to be a good idea and would effectively reduce this range. However, in the final rule, we have opted for a different approach that will achieve the desired objective. The term "span" has been replaced with the term "calibration span," which is synonymous with the high calibration gas concentration. Further, the revised rule requires that the calibration span value be selected to ensure that the majority of the emissions fall between 20 and 80 percent of this value. This approach offers several advantages. First, it alleviates concern about the QA status of data points that exceed the high level gas value but are below the current span. Second, if properly chosen, the calibration span should not be exceeded, and the "20-to-80 percent" guideline will prevent an inordinately high calibration span from being chosen. Third, it allows you freedom to set the calibration span equal to the full scale of the analyzer (if the full scale is the right size for the application) or to quality-assure a segment of the range, if the range is over-sized.

18. Comments: Many of us use STRATA DAQs, and this proposal renders this software useless because of the change from percent of range to percent of emission standard. The provision for low-emitting sources is only good for 3 years, which means another software change. This becomes quite expensive. If testers are abusing the analyzer range selection to make the bias test easier to pass, why not have the methods state that the lowest analyzer range consistent with maintaining the measured concentration between 20 percent and 100 percent of that range must be used? Then the bias check can remain as is. (0047)

Response: See the response to Comment 17.

19. Comment: The EPA proposes to redefine sampling system bias in terms of the applicable emission limit. What is the definition of emission standard? Is this the same as the emission limit in the site's operating permit? In any case, using the emission standard as a basis for determining system bias is problematic because the units are frequently not in concentration (e.g., lb/hr, lb/mmBtu, or lb/ton stone feed), and there may be multiple limits at a site. This is a can of worms. Testers do not always know the emission standard, analyzers have fixed ranges, and new standards may require calibration gases for each source, resulting in more added costs and lead time. Conversion tables would require flows, which are not always available or needed, and conversion tables would not work well with production-based limits. The EPA did not explain why some of the performance specifications are based on the emission standard, while others are based on the span. We do not believe that use of the emission standard is appropriate for any of the criteria. Analyzer performance and accuracy are a function of analyzer span. Use of the emission standard is also problematic in cases where concentrations are corrected to some dilution standard, such as ppm NOx at 15 percent oxygen. (0008, 0038, 0043, 0048)

Response: See the response to Comment 17. The sampling system bias and analyzer calibration drift will continue to be calculated based on the span and not on the emission standard as proposed. The definition of span has been revised to limit tester selection of an inappropriate value.

20. Comment: The alternative sampling system bias specification of "5 percent of the span if not subject to an emission standard" does not provide appropriate criteria for sources subject to emission standards that are also included in a market-based trading program. Given the tolerances of gases and the allowable calibration error of the analyzer, it is nearly impossible to meet a bias of 5 percent of an emission standard; the bias should instead be based on the upper range of the analyzer. For most sources the span is set at 200 to 250 percent of the standard to capture short-term peaks, so the proposed change would reduce the allowed error by 40 to 60 percent and would result in the failure of many tests. When not subject to an emission standard, this is too lenient: a 1000 ppm span would have an allowable bias of 50 ppm, so it is recommended to use a bias limit of 2 percent of span. (0011, 0026, 0029, 0032, 0038)

Response: See the response to Comment 19.

21. Comment: There are problems with using the emission allowable to establish allowable bias. First, not all processes tested are combustion processes and, second, combustion processes require one to assume an O2 level, and it does not seem appropriate to determine test run acceptance based on an estimate. Basing drift on the range or span is a better approach. (0020)

Response: See the response to Comment 19.

22. Comment: Why change the calibration error and bias calculations to a percentage of the calibration standard? How do the proposed revisions address using a true zero gas for calibration, as is most commonly done? Only Method 25A uses this percentage-of-calibration-standard way of calculating the percent errors of calibration, and in so doing, no error can be associated with the zero calibration point. Currently, all 5 methods (3A, 6C, 7E, 10 and 20) calculate calibration error as a percent of span of the instrument. Why change this? Calculating everything from the span allows an assessment of the accuracy of the zero gas and does not make meeting the specification harder to accomplish the closer the calibration gas is to zero. The instruments don't get more accurate the closer the output gets to zero, which is what, in effect, the proposed revisions would require. To simplify the reference methods, bias should be based on the full-scale range of the analyzer. However, the range of the analyzer should be limited so that the majority of the readings are 20 to 80 percent of full scale. (0034, 0038)

Response: See the response to Comment 17.

23. Comment: Recently we tested a source with an emission limit of 50 ppm, but had spikes up to 1500 ppm. Basing the bias on percent of the standard would be nearly unachievable. Everything should be based on percent of range, perhaps with the clarification that the range be no greater than 200 percent of the standard. Exceptions should include sources with emission standards less than 15 ppmv, where the minimum required range should be 10 ppmv. For sources that occasionally exceed 200 percent of the emission limit, the range should be considered to be the peak value observed, and the upper limit should not exceed the analyzer range or the span gas by more than 10 percent. Calibration gases should be spaced accordingly, and at least one calibration gas should be within 10 percent of the calculated emissions standard; for sources with emission standards less than 15 ppmv, the calibration standards should be spaced equally (within 15 percent) over the measurement range. (0039)

Response: We understand that accurately testing a complying source that has an emission limit of 50 ppm but spikes up to 1500 ppm is difficult. However, we believe this to be a calibration challenge rather than a question of what should be acceptable bias. The use of a dual-range analyzer may be appropriate in such cases. See the response to Comment 17.

24. Comment: Several commenters thought it was inappropriate and unwarranted to base performance criteria on a percentage of the emission standard. The following points were made:
a. The definition should instead be the system calibration result divided by the direct calibration result.
b. The definition precludes the alternative approach suggested in Section 1.3 for those cases where an emission limit does not exist or should not be used.
c. There are concerns about sites without a specific standard for each pollutant, or where there are multiple standards based on different programs. It is not clear what an emission standard is in many cases. For example, for a gas turbine it may mean the Part 60, Subpart GG emission standard, the permit limit in ppm at 15 percent oxygen, or the permit limit in lb/hr.
d. The definition should revert to what is current in Method 6C. Bias is a measure of system recovery efficiency and should not be measured in terms of the emission standard.
e. To accommodate both dilution and direct systems, the term in the denominator needs to be the concentration of span gas introduced to the system.
(0013, 0032, 0033, 0038, 0059)

Response: See the response to Comment 17.

25. Comment: Changing the bias calculations from the current procedures is arbitrary and will not increase accuracy. Time and training are needed to implement these changes, which are not better, just different, and eliminating the calibration error correction will reduce accuracy. Since the span is currently set at 1.5 to 2.5 times the expected emissions, the proposed rules reduce the allowable error by 40 to 60 percent and combine analyzer error and system bias error. Many monitoring systems will fail to meet this requirement. Analyzer performance is a function of the range/span setting and not a permit limit. We recognize that sometimes unnecessarily high spans are chosen to loosen the specs, but this can be easily corrected by requiring the tester to use a scale such that the majority of readings are between 20 and 80 percent of range. (0028, 0038, 0041)

Response: See the response to Comment 17.

26. Comment: We see that the definition for span (Method 7E Section 3.12) has changed. According to the proposed change, the span will be based on the calibration gas instead of the range that the analyzer is set on. So, if the high-level calibration gas is changed, so is the span value. Why would the span be based on the calibration gas instead of the analyzer range? (0008)

Response: See the response to Comment 17.

27. Comments: Basing the reference method analyzer range on the applicable standard is anticipated to force the replacement of existing CO analyzers or the addition of new ones with different ranges. This will have a major cost for what otherwise appears to be a minor change in the method procedures. The existing reference methods appear to provide satisfactory and reliable means for obtaining representative measurements. (0042)

Response: See the response to Comment 17.

28. Comments: Overall, it appears that EPA is complicating the QA/QC requirements with the use of two separate criteria (for the analyzer and for the sampling system), and it is not evident how this will improve acceptable measurement system performance. We believe the use of the appropriate certified gas concentration value (or tag value) as the tolerance criterion for both the analyzer and sampling system bias checks would be a better approach. Then potential emission limits will be incorporated into the tolerance criteria. (0029)

Response: We agree that the commenter proposes a viable approach and have revised the performance requirement for both the calibration error and system bias to be relative to the high calibration gas (calibration span).

29. Comment: At this time we do not perform a bias test as described in Section 8.2.5. We perform the analyzer error and system drift test by introducing calibration gas into the system at the 3-way valve located on the probe. We have had conversations with an EPA representative on this topic who agrees that this method is acceptable. Calibrating through the entire sampling system, eliminating the need for a bias test, should be included as an option in the test method. (0008)

Response: The procedure described by the commenter is a variation from the method and requires Administrator approval. The method requires that the initial calibration be performed directly on the analyzer and that a bias test of the entire system be performed before sampling begins. In this way, the bias of the system can be determined instead of being adjusted out at the calibration. Alternatively, Section 8.2.3 allows the calibration error test gas to be introduced at a point upstream of the analyzer, provided that it is downstream of all sample conditioning equipment.

30. Comment: The proposed requirement to limit calibration error to within 2 percent of the certified gas concentration is unnecessarily restrictive compared to the existing specification of 2 percent of span. The additional restriction is not necessary and does not improve data quality when sampling system bias adjustments are applied to a demonstrated linear system. The EPA has provided no technical basis for such increased restriction, and this requirement should be removed. This requirement may also prove impossible to meet if there is any error in the measurement system, because the calibration gases themselves are allowed to have, and often do have, a 2 percent error from the tag value. One commenter believed that EPA has now specified an accuracy that is 50 times tighter than the analyzer linearity specification and does not even consider calibration gas error. (0012, 0026, 0028, 0029, 0031, 0032, 0038, 0041)

Response: We have revised the requirement that calibration error for the low-, mid-, and high-level gases be within 2 percent of their certified tag values in favor of each gas being within 2 percent of the calibration span (the high-level certified tag value).

31. Comment: According to Section 7.1.3, the low-level gas may contain a zero concentration. Since the result of division by zero is undefined, calibration error at the zero level cannot be calculated. (0012, 0030)

Response: An alternative calibration error of ± 0.5 ppm has been added to grant relief for low measurements and to accommodate the use of zero gas as the low-level calibration gas.

32. Comment: Why must the same gases be used to set up the analyzer as for the calibration error test? It appears as if we must perform the calibration error test with the same gases used to calibrate the analyzer, the analyzer calibration error check and bias checks being there to determine how well this was accomplished. The proposed procedure is a waste of time, and the current procedure is much preferred. (0009, 0013, 0038)

Response: We have dropped the parenthetical note in Section 8.2.3 stating that the calibration gases and the calibration error test gases are the same.

33. Comment: The procedure is acceptable, but for the case of direct calibrations, the resultant concentrations should still be corrected for sampling system bias. Considering the duration of sample runs (i.e., RATAs at 21 minutes), an option of conducting multiple runs between bias determinations should be included. The risk would be the tester's, and failure would void all runs performed since the last acceptable bias check. For bias corrections, all runs conducted within a set would be corrected with the same pre- and post-test results. (0056)

Response: We believe the commenter makes a good point. We have retained the correction of data for bias as is currently done in the methods.
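For readers unfamiliar with the bias correction retained above, the correction used in these instrumental methods takes the general form of scaling the run average by the system's zero and upscale calibration responses (cf. the emission calculation in Method 7E). The sketch below illustrates that form; the function name and example values are illustrative only, and the method text governs the exact equation and symbols.

```python
def bias_corrected_concentration(c_avg, c_zero, c_upscale, c_cert):
    """Correct a run-average concentration for sampling system bias/drift.

    c_avg:     average measured concentration over the run
    c_zero:    mean system response to the zero/low-level gas (pre/post run)
    c_upscale: mean system response to the upscale gas (pre/post run)
    c_cert:    certified concentration of the upscale gas
    """
    return (c_avg - c_zero) * c_cert / (c_upscale - c_zero)

# Example: a 48.0 ppm run average with a 1.0 ppm zero offset and a 96.0 ppm
# response to a 100.0 ppm certified gas corrects upward to about 49.47 ppm.
print(round(bias_corrected_concentration(48.0, 1.0, 96.0, 100.0), 2))
```

A system that under-recovers the upscale gas (response below the tag value) thus has its run data corrected upward, rather than the bias being silently adjusted out at calibration.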

34. Comment: The analyzer calibration error section is flawed. Allowing two percent of the manufacturer-certified concentration for the low-, mid-, and span-level calibration gases is too stringent. Assume the low-, mid-, and span-level calibration gases are 10, 50 and 90 ppm. According to the wording, the analyzer must read between 9.8 - 10.2 ppm, 49.0 - 51.0 ppm, and 88.2 - 91.8 ppm. In this case, the low-level tolerance is 0.2 ppm and the high-level tolerance is 1.8 ppm, all for the same range setting. Calibration error must be consistent with the industry standard, which is based upon a percentage of the selected range, such as 2 percent of 100 ppm, or 2 ppm. It also should be noted that the calibration gases have a tolerance from the certified concentration. Typically, the gas manufacturers guarantee within 1 or 2 percent of concentration. This means that, in the above case, the low-level calibration gas concentration can potentially be 9.8 - 10.2 ppm. Since the calibration gas already has a 2 percent tolerance, the analyzer would need to be perfect in its detection.

The same section states that zero gas must be within 0.25 percent of the analyzer upper range. If the range is 100 ppm, the analyzer response to zero gas must be less than 0.25 ppm. This is impractical; a reasonable analyzer response to zero gas would be within 2.0 percent of the analyzer upper range limit. (0011)

Response: The commenter's points are well taken. We have revised the allowable analyzer calibration error to within 2 percent of the calibration span or 0.5 ppm, whichever is less restrictive. References to a zero gas and the corresponding analyzer calibration error criterion have been dropped.
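The revised "2 percent of calibration span or 0.5 ppm, whichever is less restrictive" criterion can be sketched as follows. This is a minimal illustration, not the rule text; "less restrictive" is taken to mean the larger of the two allowances, and the names are illustrative.

```python
def calibration_error_ok(analyzer_reading, cal_gas_ppm, calibration_span_ppm):
    """Check calibration error against 2% of calibration span or 0.5 ppm,
    whichever is less restrictive (i.e., the larger allowance)."""
    allowance = max(0.02 * calibration_span_ppm, 0.5)
    return abs(analyzer_reading - cal_gas_ppm) <= allowance

# For a 10 ppm calibration span, 2% is only 0.2 ppm, so the 0.5 ppm
# alternative governs: a 0.4 ppm error passes.
print(calibration_error_ok(10.4, 10.0, 10.0))   # True
# For a 100 ppm span the 2% (2.0 ppm) allowance governs: 2.5 ppm fails.
print(calibration_error_ok(52.5, 50.0, 100.0))  # False
```

Note how this resolves Comment 34's objection: the allowance is uniform across the low-, mid-, and high-level gases for a given calibration span, and low-span applications get relief from the absolute 0.5 ppm alternative.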

35. Comments: Section 8.2.5 refers to a zero gas. What if the calibration gas is a low-level non-zero gas? A specification for a non-zero gas is needed for a stable response. Since zero times zero is zero and 97 percent of zero is zero, the instrument is doomed if the response ever hits zero. Perhaps this should be a zero-level value or a stable response to the zero (or low-level) gas.

Response: The mention of "zero" gas in Section 8.2.5 is in error. It has been corrected to "low-level" gas.

36. Comment: Where performance specifications are included, we suggest that EPA consider addressing significant figures by expanding these to another decimal place, such as 2.0 percent of span, 5.0 percent of tag value, etc. If this is not done, then, technically, these specifications could round up to 0.4 percent higher than originally intended. (0038)

Response: The commenter raises a valid point. The performance specifications now contain 2 significant figures.

37. Comment: The EPA has proposed a number of QA requirements that are unreasonable and unnecessary and will result in the failure of many otherwise valid tests. In most cases, the requirements the tester must meet are more stringent than those the manufacturer must meet. We believe these issues clearly pose a significant burden on small entities, such as ours, who have invested everything into their businesses in good faith and with good intentions. (0039)

Response: The changes we have made in the final rule will mitigate the commenter's concerns.

38. Comment: Would it be plausible to add an alternative to the sample line test? In extreme conditions, when it is difficult or impossible to run a sample line to a source, can one take a 30-minute sample in a Tedlar bag, then transport it to an analyzer? (0048)

Response: We believe the commenter's suggestion has technical merit. However, it is more appropriate to obtain approval for this alternative under 40 CFR Part 60.8 on a case-by-case basis to determine if the bag sample meets the sampling time needs of the underlying regulation.

39. Comment: For hot-wet systems performing the response time test, is it 0.5 percent of the certified dry gas value or 0.5 percent of the certified dry gas plus water value? If the system measures both gas and diluent simultaneously, shouldn't the bias specification consider the combination? Considering the bias separately will require a moisture determination (with its inaccuracies) to adjust the measured concentration to dry conditions. In some cases, 0.5 percent of the certified value may be unattainable. (0012, 0032, 0049)

Response: The system response time criterion has been changed in the final rule. The response time is now defined as the time it takes for the measured gas concentration to increase to a value that is within 95 percent or 0.5 ppm (whichever is less restrictive) of the certified upscale gas concentration, or until the response is 0.5 ppm or 5.0 percent of the upscale gas concentration when the low-level gas is used. The system bias test uses dry cylinder gas to test the measurement system. The hot-wet sampling system surfaces may be wet during the bias test, but there is no way to measure this residual moisture or correct its effect on the test gas. The bias test determines the recovery of test gas under simulated test conditions. Since the normal operation of hot-wet systems handles moisture by keeping it in the vapor state during sampling and not by correcting the results, moisture correction during the bias test of such systems is not appropriate. The bias test only evaluates the pollutant-measuring component of a system since the calculation is based on the recovery of the test gas, not the test gas corrected for diluent.
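The revised response time definition above can be made concrete with a short sketch. This assumes the reading rises monotonically toward the certified upscale gas value; the names are illustrative, and "less restrictive" is taken as the easier (lower) of the two thresholds.

```python
def response_time_seconds(samples, certified_ppm):
    """Time until the reading first reaches 95 percent of the certified
    upscale gas value, or comes within 0.5 ppm of it, whichever threshold
    is less restrictive (easier to reach).

    samples: sequence of (time_s, reading_ppm) pairs in time order.
    Returns the first qualifying time, or None if never reached.
    """
    threshold = min(0.95 * certified_ppm, certified_ppm - 0.5)
    for t, reading in samples:
        if reading >= threshold:
            return t
    return None

# Example: a 50 ppm certified gas; 95% (47.5 ppm) governs, reached at t=20 s.
trace = [(0, 0.0), (10, 30.0), (20, 48.0), (30, 49.8)]
print(response_time_seconds(trace, 50.0))  # 20
```

For very low certified values the 0.5 ppm alternative becomes the easier threshold, which is why the rule states "whichever is less restrictive."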
3.4 Interference Test

40. Comments: The EPA's rationale for conducting the interference test on an annual basis has not been shown. Please provide data showing that an annual interference test is necessary. Today's generation of analyzers does not have many, if any, interference problems. Furthermore, the manufacturer should perform these tests, or they should only be required after major instrument modifications. Annual interference testing puts a major burden on the testing industry, causes many practical problems, and results in unclear issues for implementation. An initial certification along the lines of 40 CFR 53 with a 5-yr phase-in is a more sensible alternative. The requirement needs to be modified and reduced in scope. The proposed test has a very high cost relative to its low benefit and would require many gases that would only be used once a year. (0008, 0011, 0012, 0013, 0016, 0027, 0029, 0030, 0032, 0033, 0034, 0035, 0038, 0047, 0056)

Response: The commenters have raised valid concerns. The proposed requirement to conduct the interference test on an annual basis has been dropped. The interference check will be a one-time test, except for major instrument modifications, as is the current requirement. The tester must have information available for inspection showing that potential interferences encountered during a test do not affect the test results.

41. Comment: With respect to interference checks, the burden of performing annual checks at every type of source and for each specific instrument would be costly and time consuming. This is of minimal concern after the initial factory certification because these things do not really change, and no component is actively eliminating interferences from stack or calibration gas. Many of our sources are operating at low emission rates, even at zero. Has EPA looked at the quality of performing interference checks on very low concentrations, and what should we do when the emissions are at or near zero? Our last source tested at 0.3 ppmv sulfur dioxide (SO2), and this is difficult to get with the barium-thorin titration. Maybe issuing a reference document on when interferences are of concern would be better. Why even do instrumental testing when you have to do wet chemistry interference tests every year? It should be a one-time test for the life of the instrument. (0028, 0048)

Response: We have dropped the requirement to conduct an annual interference test. However, the interference test must be repeated whenever major component repairs are made. Instruments that measure emissions below 15 ppm on a routine basis must pass a manufacturer's stability test.

42. Comment: The EPA has not provided data (or technical discussion) to support the Agency's apparent concern that many of the substances listed in Table 7E-3 of Method 7E could cause interferences in the method measurements or that interferences by those substances would differ by source category. The proposed interference check significantly expands the scope of testing and substantially increases the costs of the methods. (0038)

Response: The Agency has not evaluated the interference concerns of different analyzers and their use at the various regulated facilities. Therefore, the proposed interference check was an attempt to include all suspected interferents under what was believed to be the best evaluation conditions. From the public comments, it appears the interference test is better served through a general performance requirement that can be individually suited to the analyzer type and testing needs than through the specific requirements of the proposal. We therefore give Table 7E-3 as a list of potential interferents that the tester can use to evaluate the analyzer for future test sites. The tester is required to have documentation available showing that the analytical system used is not affected (beyond 2.5 percent of the analytical span) by potential interferents at the sources tested.

43. Comment: The number of gases now needed for interference checks has been increased significantly (Table 7E-3 lists 11 gases). Costs will be increased due to the number of gases needed to complete these checks. Can you provide guidance as to which gases must be introduced into which analyzers? Is not oxygen an interfering gas for some analyzers? If no oxygen is present, how would this affect the readings? (0014, 0025, 0032)

Response: The number of gases needed in a particular interference test will vary according to analyzer type. Each listed test gas may not be appropriate for each analyzer type. The tester or manufacturer must evaluate the potential of the listed test gases, or other possible interferents, to affect the analyzer performance and plan the test accordingly. The tester must have documentation available to show that potential interferences at tested sources do not exceed 2.5 percent of the calibration span.
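For illustration, the 2.5 percent criterion referenced above might be checked as follows. This sketch assumes the observed interference responses are summed in absolute value against the limit; whether the criterion applies per gas or in total is governed by the method text, not this example, and all names are illustrative.

```python
def interference_ok(responses_ppm, calibration_span_ppm):
    """Check that total absolute interference response does not exceed
    2.5% of the calibration span.

    responses_ppm: mapping of interferent gas -> observed analyzer
                   deviation (ppm) when that gas is introduced.
    """
    total = sum(abs(r) for r in responses_ppm.values())
    return total <= 0.025 * calibration_span_ppm

# Example: three interferents against a 100 ppm calibration span
# (limit = 2.5 ppm); total deviation 2.2 ppm passes.
print(interference_ok({"CO2": 0.8, "O2": -0.5, "H2O": 0.9}, 100.0))  # True
```

Only the few gases that actually affect a given analyzer type need be included, consistent with the response above.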

44. Comment: Some instruments use selective scrubbing to remove interferences. In these cases, the removal efficiency should be verified after each test using the same principle as that of the NO2 to NO converter test. They should be tested using those gases that the scrubber claims to remove. (0016)

Response: Instruments that use selective scrubbing to remove interferences should test the scrubber at appropriate intervals and have documentation to verify that it meets the 2.5 percent interference criterion.
45. Comment: The method states that the interference test should be performed with and without NOx (NO and NO2). The procedure is not clear. Does this imply the checks must be completed with NOx mixtures for all interferent gases? This makes a total of 8 mixtures and 8 single-component cylinders. Why is the interference test required with and without NOx present and at a concentration of at least 80 percent of span? (0014, 0033, 0036, 0047)

Response: A particular interference check should be tailored to fit the needs of the specific analyzer type and may be designed to evaluate the potential interferents singly, collectively through a manifold, or in blends. For some analyzers, the appropriate pollutant will be NOx; for others, it may be NO or NO2. We have reduced the specifics of the proposed interference test to allow the tester or manufacturer flexibility in choosing what is appropriate for their testing needs. Instead of requiring that analyzers be evaluated with specific interferents at specific concentrations, both with and without NOx or a single NOx component, the methods now require that the tester possess documentation showing that interference from potential interferents at the test location is within 2.5 percent of the calibration span.

46. Comment: May a correction factor be used to correct the data if the interference can be quantified? (0036)

Response: Interference effects beyond 2.5 percent of the calibration span are unacceptable and may not be corrected for.

47. Comment: The proposed interference check with its 11 gases is extensive and would be better left to the analyzer manufacturer, possibly conducted along with the stability test. Manufacturers should be urged to conduct this procedure and provide a copy of the data and a signed certification for the purchaser. Analyzers can be type-certified for specific source categories. (0014, 0033, 0041)

Response: The interference test may be designed and conducted by the manufacturer.

48. Comment: Not all sources are going to have ammonia, hydrogen chloride, hydrogen, or nitrous oxide present in the effluent. The interference test should only be done for those compounds present in the gas stream to be tested. (0047)

Response: Table 7E-3 is a listing of potential interference gases that may be encountered at various test sites. The table is not exhaustive, and the interference test may include other compounds not listed. Most analyzers are affected by only a few of the listed gases. The interference check should be designed to evaluate the potential interferences at facilities subsequently tested.

49. Comment: Since all monitors have multiple ranges, the 80 percent of analyzer range requirement implies a test for every source category and analyzer range. Is this necessary? (0047)

Response: The requirement to test at 80 percent of the analyzer range has been dropped.
50. Comment: Method 6C contains too many interference tests. The interference test comparing Method 6C to Method 6 is obsolete and should be deleted to meet the stated intention of the rule: removing obsolete specifications, harmonizing similar requirements, and simplifying the procedures to enhance their utility and reduce the costs of testing. (0041)

Response: The Method 6C/Method 6 comparison test is now listed as an alternative interference check procedure and is discussed completely in Section 16. This simplifies the main method text, where the interference check against interference gases is discussed.

51. Comment: Section 16 of Method 7E should be revised to incorporate an annual primary interference gas recheck. (0041)

Response: We have dropped the requirement to repeat the instrument interference test on a yearly basis.

52. Comments: In Method 7E, the per-source-category interference test criterion is not listed as it is in Method 6C, Section 8.3. Is there a way to include a list of source categories in the document? (0014)

Response: We have revised the interference check in Method 6C to conform to that of Method 7E. The Method 6C/6 comparison interference check is now given as an allowable alternative to the interference check using interferent gases. The requirement to conduct the interference check at each different source category only applies to the alternative (Method 6C/6 comparison) interference check.

53. Comment: It is not necessary to perform the interference response tests for individual source categories; for example, 250 ppm of an interferent emitted from a coal-fired boiler would behave more or less identically if it were measured at a hazardous waste incinerator. Interference response could be accurately quantified over a range of different concentrations, rather than by source categories. A one-time check on each model by the manufacturer should demonstrate that the analyzer technique does not show interference. (0038)

Response: See the response to the previous comment.

54. Comment: In the preamble section to the proposed rule, under the comments received from the first proposal of revisions, the comments on the interference test only apply to Method 6C. That was not clear until I read all the methods. (0020)

Response: The commenter is correct. The comments refer to the Method 6C interference test as it was published in the first proposal.

3.5 Alternative Dynamic Spike Check

55. Comment: Our association supports including an alternative dynamic spike procedure for calibration error tests and sampling system bias checks; however, the procedures are overly burdensome, internally inconsistent, and technically unsupported. As proposed, the procedure discourages testers from using it. The procedure should only require the tester to follow a standard written procedure without having to show certified proficiency with the procedure within the previous year. The analyte spiking method is itself self-certifying. The EPA already allows it as one option in validating new test methods in Method 301. Procedures for analyte spiking are in practice and have been used for many years. The procedure should be revised to reflect approaches that have been successfully implemented and accepted by EPA.

In addition, the proposed pre-test spike accuracy of ± 5 percent is not achievable in many cases for sources subject to low-concentration standards. In all cases, the pre- and post-test spike criterion should be ± 10 percent. We also think it ill-advised to defer to the manufacturer's stability test in place of a pre-test spike and to confirm that an array of instrument operating parameters are within manufacturer specifications for certain reporting scenarios. (0032)

Response: The commenter made valid points and offered useful suggestions. We have dropped the requirement that testers be certified for their procedure. The procedures have been revised to allow testers to use any documented procedures. The spike recovery accuracy has been changed to ± 10 percent for both pre-test and post-test spikes. The manufacturer's stability test has been dropped as an alternative to the pre-test spike.
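The ± 10 percent spike recovery criterion adopted above can be sketched in simplified form. This is an illustration only: a real dynamic spike calculation also accounts for the dilution of the native stack gas by the spike flow, which is omitted here, and the names are illustrative.

```python
def spike_recovery_ok(spiked_reading, native_reading, expected_increase,
                      tolerance=0.10):
    """Simplified dynamic spike recovery check.

    spiked_reading:    measured concentration with the spike flowing
    native_reading:    measured native (unspiked) concentration
    expected_increase: concentration increase the spike should produce
    Recovery must be within +/- 10% of unity (the adopted criterion).
    """
    recovery = (spiked_reading - native_reading) / expected_increase
    return abs(recovery - 1.0) <= tolerance

# Example: native 40 ppm, spike expected to add 20 ppm, measured 58.5 ppm
# -> recovery 0.925, within the +/- 10% criterion.
print(spike_recovery_ok(58.5, 40.0, 20.0))  # True
```

Under the prior ± 5 percent proposal the same 0.925 recovery would have failed, which is the relief low-concentration sources asked for.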

56. Comment: The EPA has not provided sufficient explanation of the procedure for us to understand why the Agency believes the procedure is helpful, or even how the procedure would actually be performed. No data obtained using this procedure have been provided. Because EPA's description and supporting information for the procedure are so inadequate, we do not even have enough information to comment on the proposal, let alone determine whether our members would benefit from finding someone who did meet EPA's requirements. Dynamic spiking procedures are not commonly performed, and thus most testers will not be able to meet the proposed requirement of having demonstrated the ability to follow a written procedure within the last calendar year. Until we see a written procedure and data, we strenuously object to including the dynamic spike procedure, even as an option. It is an unknown and undemonstrated procedure that has no place in a reference method. (0038)

Response: The dynamic spiking procedure is not a requirement, but is offered as an alternative to the traditional procedures for validating results for testers who have the desire and proficiency. This procedure will not be allowed under Part 75 applications, except where Administrator approval is granted. Dynamic spiking may not be commonly performed, but the technique is widely recognized as a superior means of evaluating methodology in the field. For a number of years, analyte spiking has been an accepted procedure in Method 301 of Part 63 for validating new methods. The proposed spiking procedure in Method 7E was purposely written in general terms to make it performance-based and allow for tester flexibility in the choice of procedure (as in Method 301). We have included an example of an acceptable procedure as guidance for testers. We believe the testing community is better served in this way without any loss of testing quality.

57.
Comment:
The
dynamic
spike
test
is
referred
to
by
several
names
in
the
methods,
most
commonly
the
Alternative
Dynamic
Spiking
Procedure
or
the
Alternative
Dynamic
Spike
Check
(
ADSC).
We
assume
that
all
these
names
are
the
same
check,
and
recommend
giving
the
procedure
a
single
name
that
is
used
consistently.
(
0025)

Response:
Both
terms
are
synonymous.
We
used
the
term
Alternative
Dynamic
Spike
Procedure.

58.
Comment:
Dynamic
spiking
is
a
good
idea
if
implemented
properly.
It
is
not,
however,
a
reliable
method
for
calibration,
and
is
not
a
good
substitute
for
cal
error
and
bias
tests.
The
10
percent
limit
on
spike
recovery
could
easily
mask
significant
bias
and/
or
non­
linearity
and
greatly
increase
the
allowed
uncertainty
of
the
method.
The
spiking
option
alone
would
not
be
appropriate
for
CEMS
RATAs
because
of
the
level
of
uncertainty
it
introduces.
EPA
should
not
remove
the
bias
test
requirement
when
dynamic
spiking
is
used.
(
0016)

Response:
The
concept
of
dynamic
spiking
is
not
new;
another
name
for
it
is
the
method
of
standard
additions.
As
such,
if
properly
implemented,
it
can
be
a
reliable
method
of
calibration.
We
do
not
allow
the
ADSC
as
an
alternative
to
the
analyzer
calibration
error
test.
We
believe
the
ADSC
is
a
better
indicator
of
bias
and
interferences
than
the
sampling
system
bias
check
and
the
interference
test
normally
required.
The
ADSC
determines
the
pollutant
in
its
native
environment
(
in
the
stack),
and
as
such,
better
simulates
a
method's
capabilities.

59.
Comment:
The
new
dynamic
spike
and
interference
tests
are
especially
confusing.
We
are
especially
concerned
about
the
lack
of
a
timetable
for
phasing
in
the
new
methods.
(
0025,
0033)
(
0043)

Response:
The
public
will
be
given
a
90­
day
period
after
publication
of
the
final
rule
to
implement
the
revisions
to
the
methods.
We
have
dropped
the
proposed
requirement
to
redo
the
interference
check
on
a
yearly
basis.
The
interference
check
may
now
be
performed
by
the
instrument
manufacturer.
We
have
removed
the
proposed
language
that
the
alternative
dynamic
spiking
procedure
be
performed
by
a
person
certified
within
the
past
year
for
the
particular
procedure
used.
This
should
add
significant
clarity
to
the
procedures.

60.
Comments:
Two
commenters
thought
the
application
of
dynamic
spiking
to
diluent
methods
(
Method
3A)
was
inappropriate.
One
of
the
commenters
thought
that
if
dynamic
spiking
is
used
in
Method
3A,
there
should
be
separate
procedures
primarily
because
it
deals
with
diluent
gases
as
opposed
to
primary
pollutants.
(
0032,
0035)

Response:
We
agree
that
the
dynamic
spiking
procedure
will
be
rarely
used
for
diluent
methods,
and
have
removed
all
references
to
the
ADSC
from
Method
3A.

61.
Comment:
For
the
spike
procedure
to
work
as
it
is,
the
process
emissions
must
be
100
percent
constant.
The
requirements
are
too
stringent
for
processes
with
temporal
variability.
We
recommend
removing
this
procedure
or
increasing
the
allowable
to
15
percent.
(
0041)

Response:
The
dynamic
spike
procedure
may
not
be
appropriate
at
sources
where emissions vary over time.
The
tester
should
consider
this
before
opting
to
use
this
alternative
procedure.
We
have
increased
the
pre­
test
spike
recovery
allowable
from
5
percent
to
10
percent.
We
do
not
believe
the
raising
of
this
level
to
15
percent
is
justified.
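The revised recovery criterion can be illustrated with a hypothetical calculation. The variable names and the simple flow-ratio dilution model below are assumptions for illustration only; they are not the Method 7E spiking equations.

```python
# Hypothetical dynamic spike recovery check (illustrative only; not the
# official Method 7E procedure). Assumes a simple flow-ratio dilution model.

def spike_recovery_error(native_ppm, spike_gas_ppm, spike_flow, total_flow,
                         measured_spiked_ppm):
    """Return percent recovery error for a dynamic spike.

    Expected spiked reading = diluted native concentration plus the
    contribution of the spike gas, each weighted by its share of flow.
    """
    dilution = spike_flow / total_flow
    expected = native_ppm * (1.0 - dilution) + spike_gas_ppm * dilution
    return 100.0 * (measured_spiked_ppm - expected) / expected

# Example: 50 ppm native NOx, 500 ppm spike gas at 10 percent of total flow.
err = spike_recovery_error(50.0, 500.0, 0.5, 5.0, 92.0)
print(round(err, 1))          # percent error of the measured spiked value
print(abs(err) <= 10.0)       # passes the revised 10 percent criterion
```

A source with significant temporal variability would shift the native contribution between the spiked and unspiked readings, which is why the response cautions testers to consider process stability before choosing this alternative.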

62.
Comment:
40
CFR
136,
Appendix B, "Definition and Procedure for the Determination of the Method Detection Limit," Revision 1,
states
"
The
method
detection
limit
(
MDL)
is
defined
as
the
minimum
concentration
of
a
substance
that
can
be
measured
and
reported
with
99
percent
confidence
that
the
analyte
concentration
is
greater
than
zero
and
is
determined
from
analysis
of
a
sample
in
a
given
matrix
containing
the
analyte."
This
definition
could
be
extended
to
determining
the
precision
and
accuracy
at
or
near
the
analyte
concentration.
For
the
purposes
of
this
evaluation,
the
definition
could
be
adapted
from
"
method
detection
limit"
to
"
method
quantitation
limit"
(
MQL).
As
with
MDL,
MQL
could
be
defined
as
follows:
"
The
MQL
is
defined
as
the
minimum
difference
in
concentration
of
a
substance
that
can
be
measured
and
reported
with
99
percent
confidence
and
is
determined
from
analysis
of
a
sample
in
a
given
matrix
containing
the
analyte."
(
0012)

Response:
The
proposed
requirement
to
calculate
data
uncertainty
has
been
dropped.
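The 40 CFR Part 136, Appendix B definition quoted by the commenter is typically operationalized as the one-tailed 99 percent Student's t value (for n-1 degrees of freedom) times the standard deviation of replicate low-level measurements. A minimal sketch follows; the replicate readings are invented for illustration.

```python
# Sketch of the 40 CFR Part 136, Appendix B MDL calculation:
# MDL = t(n-1, 0.99) * s, where s is the sample standard deviation of
# n replicate measurements of a low-level spiked sample.
import statistics

# One-tailed 99 percent Student's t values keyed by degrees of freedom.
T_99 = {6: 3.143, 7: 2.998, 8: 2.896, 9: 2.821}

def method_detection_limit(replicates):
    n = len(replicates)
    s = statistics.stdev(replicates)          # sample standard deviation
    return T_99[n - 1] * s

# Seven invented replicate readings (ppm) of a low-level sample.
mdl = method_detection_limit([1.9, 2.1, 2.0, 1.8, 2.2, 2.0, 1.9])
print(round(mdl, 2))
```

The commenter's proposed "method quantitation limit" would apply the same statistical machinery to a difference in concentration rather than to the concentration itself.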

63.
Comment:
Equation
7E­
5
is
confused
about
bias
and
uncertainty.
If
bias
can
be
determined,
it
is
no
longer
an
uncertainty.
Uncertainty
indicates
a
range
of
random
error
around
a
mean
value.
If
there
is
an
uncertainty
associated
with
bias,
it
is
in
the
precision
of
the
measurement.
Note
that
the
accuracy
of
the
cylinder
gas
has
an
uncertainty,
but
since
the
measurement
of
bias
is
a
difference
between
direct
and
system
calibrations
using
the
same
gas
cylinder,
any
bias
in
the
gas
cylinder
value
cancels
out.
Artificially
inflating
the
uncertainty
by
combining
bias
and
uncertainty
together
is
a
poor
solution.
We
recommend
using
the
bias
correction
based
on
Equation
7E­
3
in
Method
7E.
Equations
7E­
5
and
7E­6
are
actually
forms
of
a
bias
correction
procedure,
not
an
expression
of
uncertainty.
If
you
substitute
Equation
7E­
3
into
Equation
7E­
5,
take
out
the
conversion
efficiency
term
and
rearrange
the
equation,
it
becomes
the
same
as
the
current
Method
6C
bias/
drift
correction
equation.
(
0011,
0022,
0029,
0032,
0039)

Response:
The
proposed
requirement
to
calculate
data
uncertainty
has
been
dropped
and
Equations
7E­
5
and
7E­
6
have
been
dropped.

64.
Comment:
Dynamic
spiking
should
be
required
as
an
additional
QA
check
when
the
circumstances
require
it.
We
know
that
at
low
NOx
concentrations,
there
can
be
2
problems
that
can
be
evaluated
by
dynamic
spiking.
(
1)
Bias
due
to
scrubbing
of
NO2
can
be
checked
by
spiking
NO2;
and
(
2)
ammonia
can
cause
NO2
converter
to
lose
efficiency
or
the
ammonia
may
be
converted
to
NO.
(
0016)

Response:
We
have
not
evaluated
the
full
capabilities
of
the
dynamic
spike
check
enough
to
make
it
a
requirement
in
special
circumstances.

3.6
Sampling
Traverse
Points
65.
Comment:
The
EPA
proposes
to
add
a
requirement
to
conduct
sampling
traverses
using
Method
1
traverse
points
unless
a
stratification
test
is
conducted.
This
requirement
will
slow
testing
greatly
and
increase
costs.
Performance
Specification
2
(
PS­
2)
recognizes
certain
instances
where
stratification
may
be
a
potential
problem.
Sections
8.1.3.1
and
8.1.3.2
of
PS­
2
provide
reasonable
guidance
for
selecting
points
for
a
3­
point
traverse.
The
EPA
should
use
the
sampling
guidance
in
PS­
2.

For
combustion
turbines
with
NOx
controls
(
steam,
water,
or
ammonia
injection),
significant
stratification
can
occur
if
the
injection
nozzles
flow
rates
are
not
balanced.
In
this
case,
if
the
results
from
the
3
traverse
points
do
not
agree
within
5
percent
of
the
mean,
a
full
traverse
should
be
conducted
using
Method
1
points.
(
0016)

Response:
We
agree
with
the
commenter
and
have
adopted
the
language
of
PS­
2
to
describe
the
selection
of
traverse
points.
Unless
stratification
is
suspected,
a
3­
point
traverse
test
is
conducted.
Where
stratification
is
suspected,
the
stratification
test
must
be
conducted
using
9
or
12
traverse
points
according
to
Table
1­
2
of
Method
1.
The
option
to
use
single­
point
sampling
is
allowed
when
a
stratification
test
shows
that
fewer
samples
are
sufficient.
The
noted
case
of
combustion
turbines
with
NOx
controls
may
be
the
exception
to
this
procedure.
In
all
cases,
we
have
noted
that
the
tester
is
responsible
for
collecting
representative
data.

66.
Comment:
If
stratification
is
less
than
5
percent
of
the
mean,
then
sampling
at
a
single
point
should
be
allowed.
If
stratification
is
less
than
10
percent,
three
points
at
16.7,
50,
and
83.3
percent
of
the
measurement
line
should
be
allowed.
(
0011)

Response:
These
options
are
allowed
in
the
final
methods.
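The point-selection logic allowed in the final methods can be sketched as follows. This is a simplified reading of the percent-of-mean criteria, not the regulatory text itself; consult the method for the governing rules.

```python
# Simplified sketch of the traverse-point selection logic adopted in the
# final methods (illustrative; the method text governs).

def sampling_strategy(point_concs):
    """Given concentrations measured at the stratification-test points,
    return the allowed sampling approach."""
    mean = sum(point_concs) / len(point_concs)
    worst_dev = max(abs(c - mean) for c in point_concs) / mean * 100.0
    if worst_dev <= 5.0:
        return "single point (point nearest the mean)"
    if worst_dev <= 10.0:
        return "3 points at 16.7, 50, and 83.3 percent of the measurement line"
    return "full traverse"

print(sampling_strategy([98.0, 100.0, 102.0]))   # 2 percent worst deviation
print(sampling_strategy([92.0, 100.0, 108.0]))   # 8 percent worst deviation
```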

67.
Comment:
Can
there
be
an
option
for
1­
point
or
3­
point
testing
if
the
source
meets
8/
2
or
2/
0.5
criteria
from
flow
disturbances
or
can
something
similar
to
PS­
2
be
used?
(
0031,
0034)

Response:
See
response
to
Comment
65.

68.
Comment:
The
proposed
requirements
to
use
Method
1
for
sample
selection
are
unnecessary
for
measurement
of
gaseous
exhaust
species
and
should
be
eliminated
from
the
proposed
revisions.
Flow
disturbances,
extreme
turbulence,
induced
draft
fans,
etc.
all
create
mixing
and
thus
homogeneous
gas
streams
and
are
very
good
places
to
obtain
representative
gas
concentration
measurements.
(
0032)

Response:
See
response
to
Comment
65.

69.
Comments:
For
the
stratification
test,
Part
75
has
specified
a
method
in
Appendix A, Section 6.5.6.1.
Since
the
objective
is
to
harmonize
the
methods,
it
would
be
easier
to
have
one
method
for
consistency.
(
0019,
0038)

Response:
See
response
to
Comment
65.
We
have
revised
the
proposed
stratification
requirement
to
make
it
similar
to
the
requirements
in
Appendix
A
of
Part
75.
70.
Comments:
What
is
the
justification
for
multiple
point
Method
1
sampling
for
gases
at
all
sources,
when
in
the
past
it
was
required
only
at
sources
with
the
potential
for
stratification,
such
as
scrubbers,
combined
gas
streams,
etc.?
Traversing
will
require
an
extra
person
to
do
it,
and
will
add
about
$
600/
day
to
the
testing
costs.
With
Relative
accuracy
test
audits
(
RATA)
requiring
3
points
and
the
other
instrumental
methods
calling
for
Method
1
traverses,
RATAs
and
compliance
tests
can
no
longer
be
done
concurrently,
unless
the
stratification
check
allows
only
3
points,
which
will
also
add
big
dollars.
We
suggest
either
going
with
a
3­
point
multi­
sample
probe
or
specifying
when
stratification
may
be
observed
in
a
stream.
(
0028,
0029,
0035,
0043,
0048)

Response:
See
response
to
Comment
65.

71.
Comment:
The
general
sampling
point
requirements
in
Section
8.1.2
are
confusing
and
incoherent.
(
0008)

Response:
Portions
of
Section
8.1.2
have
been
rewritten
to
add
clarity.

72.
Comments:
A
stratification
test
procedure
has
been
added
to
Section
8.1.3
and
EPA
should
not
attempt
to
briefly
summarize
something
this
complex
in
a
one­
half
paragraph
writeup
in
PS­
2
of
Appendix
B.
The
appropriate
number
of
sampling
points
will
vary
significantly
depending
on
the
test
location
and
source
conditions.
In
many
cases
temporal
changes
will
need
to
be
considered
and
may
require
one
of
several
testing
techniques.
During
the
period
required
to
perform
a
stratification
test
(
typically
one
hour
or
more),
stack
gas
pollutant
concentrations
can
easily
vary
by
more
than
10
percent,
especially
downstream
of
an
SO2
scrubber.
Most
importantly,
if
stratification
were
detected,
defaulting
to
a
3­
point,
across­
the­
diameter
traverse
would
frequently
not
insure
the
collection
of
a
representative
sample.
The
EPA
should
eliminate
this
discussion
and
make
the
source
owner
and
test
crew
responsible
for
collecting
representative
data,
as
is
currently
the
case.
(
0038)

Response:
We
realize
that
a
brief
discussion
of
determining
representative
testing
may
not
address
all
test
situations;
however,
we
believe
the
approach
given
in
the
final
rule
can
adequately
document
the
lack
of
stratification
at
most
sources.
We
agree
that
the
source
owner
and
test
crew
are
ultimately
responsible
for
collecting
representative
data.

73.
Comment:
I
agree
that
stratification
may
introduce
significant
errors
when
a
single­point
strategy
is
employed;
however,
the
point
selection
and/
or
stratification
procedures
should
be
made
less
cumbersome.
Traversing
as
described
in
Sections
8.1.1
and
8.1.2
will
add
significant
costs
to
most
test
programs.
The
stratification
procedure
described
in
8.1.3
adds
less
cost
(
though
still
significant),
and
allows
single
point
sampling
after
completing
a
thorough
stratification
determination
on
a
source.
Once
this
is
conducted
on
a
source,
however,
I
would
propose
an
abbreviated
3­
pt
stratification
check
to
verify
the
absence
of
stratification
for
subsequent
tests.
Another
alternative
which
should
be
included
is
the
"
short­
line"
point
selection
and
probe
(
3
points
at
0.4,
1.2,
and
2.0
meters
distance
from
the
stack
wall
as
described
in
PS­
2)
in
cases
where
multi­
point
sampling
is
required.
Note
that
16.7,
50,
and
83.3
percent
are
often
impractical
on
large
diameter
stacks.
Another
problem
is
that
many
sources
will
have
greater
variability
with
time
rather
than
with
sample
point
location.
In
these
cases,
the
short­
line
3­
point
sample
probe
should
be
sufficient.
(
0009,
0041,
0028,
0029,
0056)

Response:
See
response
to
Comment
65.
The
"
short
line"
point
selection
of
PS­
2
has
been
included
as
an
option
except
after
wet
scrubbers
or
in
streams
where
different
pollutant
concentrations
are
combined.

74.
Comment:
Please
clarify
which
Method
1
traverse
points
are
determined,
flow
or
particulate?
(
0004,
0009,
0028,
0030,
0038,
0039,
0043,
0059)

Response:
For
the
stratification
test,
nine
traverse
points
are
used
for
rectangular
stacks;
12
points
are
used
for
circular
stacks.
Method
1
only
applies
in
the
context
of
locating
the
test
points,
not
the
criteria
for
distances
from
flow
disturbances.

75.
Comment:
The
requirement
to
traverse
at
up
to
16
points
is
an
onerous
requirement
and
is
contrary
to
the
purpose
of
simplifying
the
methods
and
reducing
testing
costs.
Testing
at
the
three
points
noted
in
PS­
2
is
a
better
solution.
The
requirement
to
sample
in
accordance
with
Method
1
is
unnecessary
when
sampling
downstream
of
an
induced
draft
(
ID)
fan.
The
ID
fan
mixes
the
gas
stream
to
eliminate
stratification
and
pressurizes
the
gas
to
eliminate
the
possibility
of
in­
leakage.
Single­
point
sampling
should
be
allowed
for
all
locations
downstream
of
an
ID
fan.
(
0027,
0029,
0041)

Response:
See
response
to
Comment
65.

76.
Comment:
The
sampling
point
locations
should
be
deleted
from
the
test
method
because
of
numerous
conflicts
with
other
regulations
and
lack
of
legitimate
technical
criteria
for
the
prescribed
sample
points.
This
section
begins
with
a
statement,
"Unless otherwise specified in an applicable regulation or by the Administrator, use the traverse points listed in and located according to Method 1."
The
first
part
of
this
sentence
requires
that
the
tester
be
an
expert
in
air
pollution
regulations.
Some
Part
60
Subparts,
such
as
Subpart
D,
specify
the
sampling
location
and
sample
point
(
one
meter
from
the
stack
or
duct
wall)
while
other
subparts
do
not.
Certain
performance
specifications
in
Part
60
and
in
Part
75
specify
sample
point
traverses
and/
or
locations
for
the
conduct
of
relative
accuracy
tests,
while
others
do
not.
There
are
no
cases
that
require
sampling
12
or
more
points
to
determine
representative
gas
concentrations.
EPA
has
provided
no
evidence
to
support
this
proposition
which
is
potentially
very
costly
and
difficult.
Many
facilities
do
not
have
the sample platforms, access, and sample ports necessary
to
conduct
the
traverses
that
EPA
has
specified.

Furthermore,
the
basis
for
using
Method
1
for
gas
sampling
measurements
does
not
exist
and
is
contrary
to
basic
technical
judgment.
Method
1
has
a
procedure
for
selecting
traverse
points
for
particulate
matter
and
for
velocity
measurements.
There
is
no
procedure
within
Method
1
to
select
sampling
points
for
gas
concentration
measurements.
The
criteria
for
selecting
particulate
and
velocity
sample
points
and
locations
are
entirely
inappropriate
for
selecting
gas
sampling
locations.
While
flow
disturbances,
ID
fans,
and
extreme
turbulence
are
detrimental
to
velocity
measurements
and
isokinetic
sampling
for
particulate
matter,
they
are
actually
advantageous
for
creating
a
homogeneous
concentration,
and
thus
ideal
sampling
locations,
for
gas
measurements.
Further,
what
if
the
source
geometry
does
not
meet
Method
1
criteria?
Method
1
is
cumbersome
on
tall,
large
sources
and
pointless
on
small
sources
with
respect
to
gases.
Can
EPA
generate
a
guidance
document
on
when
to
be
concerned
with
stratified
streams?
Many
sampling
locations
used
strictly
for
instrumental
testing
do
not
meet
Method
1
criteria,
so
the
rule
should
be
clear
that
Method
1
only
applies
in
the
context
of
locating
the
test
points,
not
the
criteria
for
distances
from
flow
disturbances.
This
will
avoid
over­
interpretation
of
the
rules
and
avoid
litigation
issues
which
will
surely
arise.
What
is
the
course
of
action
when
Method
1
criteria
for
upstream/
downstream
distance
are
not
met?
Note
that
many
facilities
are
not
required
to
test
for
velocity
or
particulate,
meaning
that
the
Method
1
criteria
have
not
been
an
issue
under
the
current
instrumental
methods.
(
0032,
0038,
0039,
0043,
0048)

Response:
See
the
responses
to
Comments
65
and
73.

77.
Comment:
The
section
on
sample
point
selection
is
misleading.
It
says
to
use
the
points
specified
in
Method
1
or
conduct
a
stratification
check,
then
it
says
if
we
are
doing
a
RATA
to
use
the
procedures
in
the
applicable
performance
specification
(
which
means
3
points).
This
is
a
serious
problem,
since
there
are
many
installations
with
SCRs
that
are
completely
stratified,
and
3
points
will
not
provide
a
representative
sample.
Section
8.1.3
goes
on
to
describe
how
many
points
to
use
with
a
stratified
stack,
which
is
fine,
but
the
last
sentence
in
8.1.1
seems
to
contradict
these
criteria.
(
0047)

Response:
The
last
sentence
notes
that
the
revisions
being
made
do
not
supersede
the
current
sampling
requirements
in
the
CEMS
performance
specifications.
In
the
performance
specifications,
the
tester
is
given
the
minimum
requirement
of
sampling
at
three
points.
These
or
additional
points
must
be
selected
to
assure
the
acquisition
of
representative
samples
over
the
stack
or
duct
cross
section.

78.
Comment:
It
is
not
as
easy
as
you
imply
to
alleviate
Method
1
points
by
performing
a
stratification
test.
Because
most
gaseous
emissions
are
not
stable
but
have
continuous
concentration
drift,
the
best
stratification
test
is
performed
by
using
two
independent
sample
systems.
One
system
is
placed
at
a
set
sample
point
and
the
second
is
traversed
at
Method
1
points.
The
stationary
point
is
then
directly
compared
to
the
traversed
point.
This
procedure
can
take
a
full
day
to
complete
on
very
large
sources.
I
suggest
either going with a three­point multi­sample probe or specifying
when
stratification
may
be
observed
within
a
stream.
(
0048)

Response:
We
agree
that
the
stratification
test
described
using
two
independent
sampling
systems
is
a
good
way
to
determine
if
emissions
are
stratified,
and
this
procedure
may
be
used.

79.
Comment:
The
stratification
procedure
includes
no
provisions
for
accounting
for
temporal
variations
in
the
effluent
concentration
during
the
stratification
test.
Hence,
sources
that
cannot
operate
at
a
constant
effluent
concentration
cannot
be
demonstrated
to
be
non­
stratified.
The
procedure
should
be
modified
to
allow
use
of
a
dual
probe
approach
and
switching
between
the
traverse
and
stationary
sampling
probes
along
with
appropriate
calculations
that
separate
the
spatial
and
temporal
variations.
Such
procedures
have
been
widely
used
for
over
25
years.
For
the
described
test,
the
operation
of
the
source
must
be
constant
(
within
3
percent
by
definition),
precluding
its
use
at
any
source
with
temporal
variability.
Thus,
the
procedure
is
not
universal
and
many
sources
will
have
to
perform
multi­
point
sampling.
We
recommend
requiring
up
to
3
sample
points
or
increasing
the
allowable
variation
to
30
percent.
The
idea
of
reducing
sample
points
where
it
can
be
demonstrated
that
no
stratification
exists
has
a
great
deal
of
merit.
However,
since
flow
conditions
can
vary
somewhat
during
a
test,
it
does
not
seem
reasonable
to
reduce
the
traverse
points
below
the
3
required
in
PS­
2.
(
0032,
0027,
0041,
0059)

Response:
See
response
to
Comments
65
and
78.

80.
Comments:
Is
there
a
method
for
confirming
that
a
multi­
hole
probe
is
properly
designed
to
sample
from
each
hole
at
the
same
flow
rate?
If
the
method
is
going
to
include
using
such
probes,
and
have
criteria
for
sampling
rates,
then
a
procedure
sanctioned
by
EPA
to
meet
the
"
10
percent
specification
is
essential.
A
rake
probe
(
probe
with
3
holes)
is
described
for
multipoint
traverse.
How
is
10
percent
of
the
mean
flow
rate
determined?
Presently
this
type
of
probe
is
not
acceptable
for
CAMD
use,
mainly
because
there
is
a
belief
they
do
not
work.
This
type
of
probe
should
not
be
allowed.
Unless
such
a
procedure
is
used,
this
will
merely
create
controversy
and
reduce
the
data
quality
EPA
is
concerned
about.
There
is
no
reason
to
prohibit
use
of
a
multi­
point
probe
that
included
independently
controlled
sample
extraction
lines
that
simultaneously
draw
gas
at
the
same
rate,
but
please
specifically
prohibit
the
so­
called
rake
probe.
(
0003,
0018,
0025,
0032,
0059)

Response:
The
Agency
cannot
evaluate
all
potential
technologies
nor
prescribe
procedures
for
verifying
them.
We
added
flexibility
to
the
test
methods
by
allowing
options
at
the
tester's
discretion
and
ability
to
meet
stated
performance
criteria.
In
this
way,
we
can
be
less
prescriptive
while
maintaining
an
acceptable
level
of
performance.

81.
Comment:
We
submit
the
following
comments
on
Section
8.1.3:
(
1)
Setting
stratification
limits
implies
allowable
deviations
(
and
hence
uncertainty)
from
the
true
average.
What
will
EPA
do
with
this
uncertainty?
(
2)
According
to
this
section,
there
are
two
options:
(
a)
when
each
point
is
<
5.0
percent
of
mean
and
(
b)
when
each
point
is
<
10.0
percent
of
the
mean.
In
the
former,
a
point
nearest
the
mean
is
used.
Although
highly
unlikely,
it
is
possible
that
half
are
exactly
+
5
percent
and
the
other
half
are
exactly
­
5
percent.
Therefore,
the
maximum
possible
bias
is
"
5.0
percent.
It
is
more
likely
that
the
point
nearest
the
mean
is
1
percent
or
2
percent,
or
3
percent
off,
in
which
case,
respective
biases
would
be
introduced.
In
the
latter
option,
the
possible
bias
is
even
worse.
The
average
of
the
three
traverse
points
could
very
well
be
±
10.0
percent
off,
in
which
case
a
bias
of
±
10
percent
is
introduced.
(
c)
These
biases
can
be
limited
by
placing
an
appropriate
specification,
e.
g.,
"
1.0
percent
or
even
"
0.5
percent
(
some
examples
from
stratification
tests
can
be
investigated
to
determine
an
appropriate
figure).
Thus,
the
specification
would
say
a
single
point
that
most
closely
matches
or
is
within
"
1.0
percent
(
or
some
other
number)
of
the
mean.
If
a
single
point
does
not
meet
the
specification,
a
combination
of
points
may
be
used.
The
same
holds
true
for
the
latter
option.
The
average
of
the
three
points
should
be
within,
e.
g.,
"
1.0
percent
(
or
some
other
number)
of
the
mean.
There
should
be
higher
limits
(
e.
g.,
10,
15,
or
20
percent)
and
an
absolute
limit
(
e.
g.,
0.5
ppm),
whichever
is
less
stringent,
for
low
level
concentrations.
Five
percent
of
a
source
operating
at
1.8
ppm
is
0.09
ppm;
thus
a
source
ranging
from
1.7
to
1.9
ppm
with
a
deviation
of
0.10
ppm
from
the
mean
would
exceed
the
limits
(
stratified).
(
0032)
Response:
We
have
not
conducted
a
thorough
study
of
the
uncertainty
of
the
allowable
deviations
for
the
stratification
test.
In
this
rulemaking,
we
are
attempting
to
add
provisions
to
promote
representative
sampling
that
have
been
lacking
in
the
past.
The
criteria
we
added
have
been
in
common
use
in
numerous
other
testing
requirements.

82.
Comment:
The
stratification
test
should
not
be
determined
with
a
NOx
concentration
corrected
for
dilution,
which
would
mask
the
results.
Rather,
the
degree
of
stratification
should
be
determined
based
on
the
absolute,
uncorrected
concentration.
Dilution
corrections
are
in
conflict
with
Part
75
stratification
tests.
Also
there
is
no
standard
against
which
the
data
are
to
be
corrected.
(
0002,
0010,
0011,
0029,
0039)

Response:
We
have
dropped
the
requirement
that
the
stratification
test
be
made
with
diluent­
corrected
pollutant
concentrations.

83.
Comment:
Section
8.1.3
of
Method
7E
requires
that
NOx
concentrations
be
corrected
for
diluent.
The
detailed
procedure
is
wrong
since
it
will
only
be
comparing
NOx
variations
during
the
stratification
test.
For
example,
if
you
have
one
point
that
has
a
higher
oxygen
content,
you
would
adjust
the
NOx
concentration
to
compare
to
the
other
points,
thus
if
there
is
stratification
it
would
be
masked
by
correction
of
all
NOx
concentrations
to
the
same
diluent
concentration.
A
higher
oxygen
or
lower
carbon
dioxide
content
indicates
infiltration
of
ambient
air
or
unmixed
flue
gases.
Isn't
that
stratification?
If
the
diluent
concentration
for
all
points
is
less
than
or
equal
to
"
5
percent
of
the
mean
concentration,
then
a
single
point
may
be
sampled.
If
the
range
of
the
individual
traverse
point
diluent
concentrations
is
equal
to
or
less
than
"
10
percent
of
the
mean,
then
sample
three
points
at
16.7,
50.0
and
83.3
percent
of
stack
diameter.
(
0011)

Response:
See
response
to
Comment
82.

84.
Comment:
The
section
refers
to
determining
the
point
and
mean
NOx
concentrations
corrected
for
diluent,
which
may
not
be
applicable
in
all
cases.
We
propose
determining
the
point
and
mean
NOx
concentrations
in
units
consistent
with
the
purpose
of
the
test,
such
as
testing
an
emission
limit.
(
0044)

Response:
See
response
to
Comment
82.

85.
Comment:
The
wording
appears
to
limit
this
test
to
NOx
only.
If
the
test
is
meant
to
apply
to
other
pollutants,
consider
changing
NOx
to
pollutant.
(
0025)

Response:
Since
the
stratification
test
is
detailed
in
Method
7E,
the
narrative
addresses
the
pollutant
as
NOx.
We
have
noted
that
the
pollutant
of
interest
is
determined
in
the
stratification
test.

86.
Comment:
Once
a
stratification
test
has
been
passed
at
a
source,
there
should
be
no
requirement
to
perform
this
test
every
year,
assuming
no
major
changes
have
been
made
to
the
unit.
(
0028)
Response:
The
requirement
to
test
using
Method
1
sampling
points
in
all
cases
has
been
dropped.
See
response
to
Comment
65.
A
stratification
test
should
be
performed
if
stratification
is
suspected
at
the
test
location
or
a
3­
point
sample
shows
the
concentration
at
any
point
deviating
from
the
mean
by
more
than
10
percent.
The
test
must
be
conducted
to
assure
that
representative
sampling
is
performed.

87.
Comment:
The
stratification
requirements
in
Section
8.1.3
do
not
contain
sufficient
detail
for
application
to
actual
testing.
For
example,
for
multi­
load
testing
(
e.
g.,
NSPS
Subpart
GG),
at
what
operating
load
should
the
stratification
test
be
performed?
We
believe
that
only
one
load
level
should
be
checked
for
stratification.
If
the
load
level
is
an
issue,
EPA
should
address
that.
The
EPA
should
also
make
clear
that
if
the
"
5
or
"
l0
percent
criteria
under
the
stratification
test
are
not
met,
Section
8.1.1
of
the
proposed
Method
7E
should
be
used
to
determine
the
number
of
sampling
points.
However,
for
stacks
with
low
concentrations
(
e.
g.,
2.5
ppm
NOx
permit
limits),
some
type
of
alternative
stratification
criteria
should
be
allowed
(
e.
g.,
all
points
within
"
0.5
ppm
of
the
overall
average).
Meeting
a
5
percent
(
i.
e.,
0.13
ppm)
or
10
percent
(
i.
e.,
0.25
ppm)
criterion
is
unnecessarily
restrictive.

In
the
event
that
prior,
historical
data
shows
no
stratification,
and
no
changes
to
the
stack
design
or
process
operations/
controls
have
occurred
since
the
last
stack
test,
language
should
be
included
to
allow
the
stratification
test
to
be
waived.
Our
past
experience
with
numerous
reference
method
stack
tests
has
shown
gas
concentration
stratification
is
almost
never
encountered.

The
EPA
should
also
consider
exempting
small
stacks
(
e.
g.,
<
24
inches)
from
the
stratification
testing
requirement,
since
such
a
test
is
not
likely
to
be
necessary
or
useful
in
those
cases.
(
0038)

Response:
See
the
response
to
Comment
65.
The
special
testing
conditions
for
turbines
are
best
addressed
under
Subpart
GG
and
are
beyond
the
scope
of
the
revisions
we
are
making.
An
alternative
stratification
criterion
of
within
5.0
ppm
or
0.5
percent
diluent
of
the
average
has
been
added.
For
measurements
used
to
comply
with
low­
concentration
limits,
single
point
sampling
is
allowed
if
the
results
of
a
3­
point
stratification
test
are
within
0.5
ppm,
and
3­
point
sampling
is
allowed
if
all
points
are
within
1.0
ppm.
We
believe
3­
point
sampling
is
useful
for
small
stacks
and
is
not
overly
burdensome.
This
is
also
a
good
test
to
initially
show
whether
like­design
sources
experience
individual
leaks
or
other
phenomena
which
affect
emission
stratification.
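The alternative absolute criteria described in this response could be layered onto the percent-of-mean checks roughly as sketched below. The thresholds follow the response above; everything else is an assumption for illustration, and the method text governs.

```python
# Sketch of the low-concentration alternative stratification criteria noted
# above: single-point sampling if all 3-point results fall within 0.5 ppm of
# the average, 3-point sampling if within 1.0 ppm (illustrative only).

def low_conc_strategy(point_ppm):
    mean = sum(point_ppm) / len(point_ppm)
    worst = max(abs(p - mean) for p in point_ppm)
    if worst <= 0.5:
        return "single point"
    if worst <= 1.0:
        return "3 points"
    return "stratified: use more traverse points"

print(low_conc_strategy([2.3, 2.5, 2.6]))  # worst deviation well under 0.5 ppm
```

At a 2.5 ppm permit limit, an absolute 0.5 ppm band is far less restrictive than the 5 percent (0.13 ppm) criterion the commenter objected to.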

88.
Comment:
In
Method
20,
the
diluent
was
traversed
without
the
pollutant,
which
took
care
of
temporal
changes.
In
the
proposal
you
now
request
pollutant
and
diluent,
which
means
that
two
samples
should
be
taken
and
compared.
Otherwise
most
CO,
SO2,
and
NOx
samples
will
have
to
have
the
maximum
points
(
i.
e.
affects
costs).
(
0035)

Response:
See
response
to
Comments
65
and
82.
89.
Comment:
Since
these
methods
are
used
for
RATAs,
the
new
method
will
affect
cost/
time,
and
each
test
could
be
as
long
as
96
min,
not
21
as
currently
stated.
(
0035)

Response:
For
RATA
determinations,
the
sampling
points
and
sampling
times
in
the
applicable
performance
specifications
govern.

90.
Comment:
In
Section
8.1.1,
does
this
mean
that
when
we
conduct
RATAs
and
there
is
stratification
that
we
traverse
over
as
many
as
49
points
(
square
stack
with
7
ports)?
This
would
mean
that
at
a
measurement
system
response
time
of
1.5
minutes,
that
each
point
would
be
measured
for
3
minutes,
multiplied
by
49
points
equals
2
hours,
30
minutes
per
run.
Assuming
10
runs
for
a
normal
RATA
makes
for
an
awfully
long
test.
We
recommend
that
you
return
to
the
3­
point
test.
(
0004)

Response:
When
conducting
RATA's,
the
sampling
requirements
of
the
applicable
performance
specification
govern.

3.7
Sampling
Dilution
Systems
91.
Comment:
There
appears
to
be
a
grave
oversight
in
the
proposed
methods
for
the
case
of
dilution
systems,
which
have
been
approved
by
the
Emission
Measurement
Center
as
alternative
method
ALT­
007
(
Use
of
Dilution
Probes
with
Instrumental
Methods).
The
new
rules
should
be
clearly
modified
to
allow
dilution
systems
or
have
language
implying
dry­
extractive
systems
removed
as
not
to
limit
technology
choices
in
the
future.
Incorporating
dilution
systems
into
Method
7E
would
be
one
way
to
do
this,
and
would
advance
the
concept
of
a
technology­neutral
method
(
and
then
drop
ALT­
007).
One
of
the
biggest
issues
with
dilution
systems
is
requiring
calibration
standards
that
have
very
defined
molecular
weights
to
match
the
expected
effluent,
so
the
dilution
rate
doesn't
change
when
the
system
is
operating.
Dilution
systems
typically
use
a
system
calibration
instead
of
a
direct
calibration,
and
should
incorporate
a
discussion
of
calibration
drift
using
a
procedure
similar
to
bias
determination.
Although
it
no
longer
appears
on
the
EMC
website,
Guidance
Document
GD­
18
(
06­
18­
92)
indicated
that
dilution
sampling
systems
were
acceptable
for
use
with
EPA
Methods
6C,
7E,
20,
and
10.
The
document
discussed
the
virtues
and
special
requirements
of
dilution­
extractive
sampling
and
it
should
seem
reasonable
to
reword
the
new
methods
to
allow
this.
If
dilution
probes
are
allowed,
the
difference
between
molecular
weights
of
the
cal
gas
and
sample
gas
must
also
be
addressed,
since
this
affects
the
dilution
ratio
as
explained
in
AAn
Operator=
s
Guide
to
Eliminating
Bias
in
CEM
Systems,@
EPA/
430/
R­
94­
016,
November
1994,
pp
3­
5
to
3­
9.
(
009,
0016,
0022,
0033,
0034,
0038,
0056)

Response: The methods have been modified to clearly note that dilution systems are acceptable. We have included discussions of calibration gas needs relative to the sample gas molecular weight, calibration drift test variations, and other instructions pertinent to dilution systems that were a part of EMC Guidance Document GD-18. We have also included procedures and specifications pertaining to dilution-type systems that are consistent with Chapter 21 of the "Part 75 Emissions Monitoring Policy Manual."
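The molecular-weight effect the commenters raise can be sketched quantitatively. For a critical-orifice dilution probe, molar flow through the sonic orifice varies roughly as the inverse square root of the gas molecular weight, so a mismatch between the calibration gas and the effluent shifts the dilution ratio by about the square root of the molecular-weight ratio. This is a simplified, ideal-gas sketch of the effect discussed in the Operator's Guide; the example molecular weights are illustrative:

```python
import math

# Simplified model: molar flow through a sonic orifice varies as 1/sqrt(M),
# so switching from the calibration gas to a stack gas of different
# molecular weight changes the sample flow (and hence the dilution ratio).
def sample_flow_shift(mw_cal, mw_stack):
    """Fractional change in sample molar flow going from cal gas to stack gas."""
    return math.sqrt(mw_cal / mw_stack) - 1.0

# Example: calibrating with a dry N2-balance gas (~28 g/mol) but sampling a
# moist flue gas with an effective molecular weight of ~29.5 g/mol.
shift = sample_flow_shift(28.0, 29.5)
print(f"{shift:+.1%}")  # about -2.6%: readings biased low by roughly that much
```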
92. Comment: Please clarify the difference in procedures for direct-extractive and dilution-extractive systems. The current draft is focused on direct-extractive systems. (0059)

Response: See the response to Comment 91.

93. Comment: The test methods should include provisions for additional methods and analytical approaches that are acceptable in the appropriate test method, including the use of dilution-based sampling systems and extractive FTIR measurement systems. (0032)

Response: One of the purposes of the rulemaking was to allow for the use of acceptable alternative procedures in the methods. For example, Methods 6C and 7E were reworded to allow analyzers beyond those traditionally required. Discussions have been added to the methods to address the special requirements of dilution-based sampling systems.

94. Comment: The method does not address the use of a standard gas dilution system to generate low-level calibration gases from high-level gases. (0013)

Response: A gas dilution system meeting the requirements of Method 205 of Appendix M to 40 CFR 51 may be used to generate low-concentration calibration gases, except under Part 75 applications, where Administrator approval is required.
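The mass balance behind such a gas-dilution system is simple; Method 205 adds accuracy and verification requirements on top of it. A minimal sketch with illustrative flow values:

```python
# Blending a high-level cylinder gas with diluent: the delivered concentration
# follows from a simple mass balance (flows in consistent units, ideal mixing).
def diluted_concentration(c_cylinder, q_gas, q_diluent):
    """Concentration delivered by a gas dilution system."""
    return c_cylinder * q_gas / (q_gas + q_diluent)

# Example: a 500 ppm cylinder diluted 1:19 yields a 25 ppm low-level gas.
print(diluted_concentration(500.0, 1.0, 19.0))  # 25.0
```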

3.8 Moisture Removal Systems
95. Comment: The phrase "thermo-electric type condenser" is incorrect and should be "thermo-electric cooled condenser." Referring to specific types of moisture removal systems and operating temperatures is not technology-neutral; these references should be removed from the rule or clarified as required technology. Passing a bias test should be the real criterion. "Moisture removal system" should be changed to "sample conditioning system." We recommend that the dew point temperature at the outlet of the drier must be 60 °F (15 °C), as measured or corrected to, and continuously monitored. The language used is a direct quote from the ICAC document and insinuates favoritism towards chillers sold by ICAC members. (0009, 0013, 0022, 0023, 0032, 0039, 0054)

Response: The reference to "thermo-electric" type condensers has been dropped to make the item technology-neutral. "Conditioning equipment" was used instead of "moisture removal system." This equipment must keep the sample gas above the dew point temperature, either by dilution, by heating it prior to analysis (for a cool, dry measurement system), or through the analysis (for a hot, wet system).

96. Comment: The sampling system bias check must include the same water concentration (within 1 percent absolute) found in the sample. This should be rephrased as "found in the analyzed sample." Whenever specifications are given, you need a procedure for verifying that the moisture content is indeed within 1.0 percent absolute. Does this mean that a dilution probe must analyze hot/wet cal gas (impossible without compromising the Protocol analysis), or is EPA referring to the moisture concentration as seen by the analyzer? The testing firm will never know what the water content is in advance of the test. (0016, 0032, 0038)
Response: The proposed requirement that the calibration gas used in the sampling system bias check be within 1 percent absolute of the moisture found in the sample gas has been dropped.

97. Comment: The statement concerning not having to measure moisture content for wet-basis pollutant/diluent analyzers is false. One must measure moisture content in order to demonstrate that the system meets the 1.0 percent absolute moisture specification, even though the specification is meaningless and unnecessary in the first place. We interpret that for dilution systems at 100 to 1, we have made moisture 1 percent at most, thereby allowing us to calibrate from cylinders containing no moisture. How does a wet system bias check get the same moisture as a scrubber exhaust? (0009, 0011, 0028, 0032, 0035)

Response: See the response to Comment 96.

98. Comment: Section 6.1.4 of Method 7E requires some moisture generating system with the calibration gases and will add to the cost and complexity of the tests. (0032) Is anything out there to just dial up 25 percent moisture, mixed, of course, with the correct ppm of NOx? Get real. We are not aware of any cal gases with stack concentrations of water, because it is thermodynamically impossible to make them. This requirement appears to be impossible to meet as a practical matter. (0038, 0043)

Response: Analyzers that have water as a potential interference must document that water does not interfere with the analysis, through the testing of wet gases or by other means. Analyzers that are not affected by moisture are not required to make that evaluation.

99. Comment: As written, this section will require heated sample lines with all dilution systems, which should not be the case. Thousands of tests have been made with dilution ratios of 100:1 or higher that do not use heated lines. The only requirement is to maintain the dew point of the diluted sample below that provided by a condenser or dryer. (0009, 0059)

Response: See the response to Comment 105.

100. Comment: As written, the section precludes the use of permeation dryers, which are currently acceptable to EPA. One commenter mentions Nafion dryers in particular and presents a lengthy discussion of dryer technologies. The wording should be changed to include condensers, dilution, and permeation dryers, with suitable wording to indicate that water is removed or reduced, not condensed. The comment discusses several chiller/dryer designs and a University of California study evaluating them, and recommends technology-neutral language. The phrase "or similar device" is a direct quote from the ICAC document, so it still seems self-interested. Further, I would avoid using a permeation dryer in any context other than as a polishing moisture removal system after another condenser. Thermo-electric condensers are not the predominant type used; Teflon coiled condensers in an ice bath with peristaltic pumps for moisture removal are. (0009, 0011, 0023, 0038, 0039, 0054)

Response: See the response to Comment 105.
101. Comment: There are additional costs to monitor temperatures at the outlet of the drier. Is this necessary, since the gas conditioning system is effectively eliminating all moisture prior to the gas outlet? What accuracy does this lend to the method? Is monitoring and recording temperatures what is intended? Regulators in Idaho will want proof. Is there a way of monitoring the dew point in the exit gas stream without adding moisture to the gas stream (wet bulb temperature) or an expensive dew point probe? Even thermocouples will require calibrations. Do we need a data logger, or can the probe pusher record the values? If the regulation says "must be," our regulators want proof. This is definitely not simplifying our work. We need details on how this monitoring is to be done, and note that much of the present testing equipment is not capable of measuring these temperatures; it is not really necessary. (0028, 0038, 0043)

Response: Different techniques may be used to prevent condensation. Assurance that the sample is maintained above its dew point may be shown by monitoring the appropriate temperatures or parameters. The tester must verify that the system used does not result in sample loss due to condensation.

102. Comment: With countercurrent heat exchanger designs, it is difficult to measure the exit gas temperature, and it is unclear how to use them for this application. (0012)

Response: See the response to Comment 101. Where the technology does not allow gas temperature to be measured, alternative means may be used to demonstrate that the sample is maintained above its dew point.

103. Comment: Is a 41 °F dew point possible with any condenser? A 60 °F dew point is, but a 41 °F dew point is 20 percent relative humidity at 60 °F. Why is that necessary when the interference check is mandated? (0035)

Response: We are not specifying minimum condenser or dryer outlet temperatures because of the different techniques used to prevent condensation. The tester must verify that the system used does not result in sample loss due to condensation.

104. Comment: The most popular moisture removal system is a chiller-type system that cools the sample gas and removes the condensed moisture. Below is a list and description of moisture removal systems. I have personally used a Nafion dryer and prefer it to a condenser.

A Nafion dryer does not condense the water vapor. The water vapor is removed as a gas (vapor), thus greatly reducing the potential for removing water-soluble compounds like NO2 and SO2. A Nafion dryer has no need to monitor the drier outlet gas dew point.

Condensers function by cooling the gas stream until water (and other liquids) coalesce, then collecting the condensate and draining it away. Condensers are simple to operate and to understand. Unfortunately, they are very non-specific; not only do they remove whatever gases condense at lower temperatures, but also at least a portion of whatever gases dissolve in the condensate. Condenser systems are designed to minimize the contact of the gas stream with the condensate to limit this deficiency, but water-soluble gases are always lost to varying degrees, depending upon the solubility of the gas in question. Large amounts of gases such as sulfur dioxide are lost by condensers, and condensers are entirely inappropriate for drying gas streams containing hydrogen chloride or chlorine (unless their removal is desired).

Desiccant dryers function by binding water to an absorbent. The absorbent may be a solid (such as silica gel) or a liquid (such as sulfuric acid) that binds water to its chemical structure as water-of-hydration. Desiccants are very simple to understand and to operate. Unfortunately, like condensers, they are very non-specific and remove many compounds other than water. Unlike condensers, water cannot be removed from desiccants by simply draining it away. While in operation, desiccants become progressively more loaded with water and must periodically be regenerated by replacement of the desiccant or by driving off the water. Continuous-operation desiccant dryers use either a drastic change in surrounding pressure (pressure-swing heatless desiccant dryers) or a drastic change in surrounding temperature (temperature-swing desiccant dryers) to remove water from one chamber of desiccant while a second chamber is in use, and the chambers alternate operation and regeneration.

Permeation dryers function on a principle of selection on the basis of molecular size. Permeation dryers are a microporous material. When the gas is forced under pressure across the surface of the microporous material, large molecules tend to remain in the gas stream while small molecules tend to move through the microporous material and be removed. Permeation dryers are very simple to operate but are primarily suitable as air dryers. Nitrogen and oxygen are larger molecules than water, so air can be dried by this method. Permeation dryers are too non-specific to dry complex gas sample streams.

Nafion dryers function on a principle of selection on the basis of affinity for the sulfonic acid group. Although water passage through Nafion is described as permeation, Nafion dryers do not operate on the same principles as permeation dryers. Nafion is not a microporous material and does not separate compounds on the basis of their molecular size. For example, Nafion dryers can remove water from a hydrogen stream, even though the hydrogen molecule is much smaller than water. Pressure is not required to drive the process; the driving force is the partial pressure of water vapor. Unlike the competing methods, Nafion dryers are highly selective in the compounds they remove. The water moves through the membrane wall and evaporates into the surrounding air or gas in a process called pervaporation. This process is driven by the humidity gradient between the inside and the outside of the tubing. (0011)

Response: The commenter presented an interesting and descriptive summary of the major moisture removal systems. See the response to Comment 103.

3.9 Equipment Heating Specifications
105. Comments: Throughout the proposed rules, there are numerous references to heating, and it is likely that regulators will conclude that the probe, umbilical, or other equipment must be electrically heated. The language should state that the sample be maintained at a temperature above the dew point of the sample gas. In many cases, especially with respect to dilution systems, the necessary heat is provided by the process itself. The language in the rules should be changed to provide a technology-neutral approach and to show that the desired temperature may be reached from the process itself. With a dilution system, heating various components to 95 °C to prevent condensation is not necessary, and dilution systems should be exempt from these sample conditioning requirements. Why increase the temperature from 250 to 285 °F when any moisture will be steam above 212 °F? A system at 250 °F should meet all conditions. (0009, 0010, 0038, 0048)

Response: The language in the methods was revised to make them technology-neutral. The requirement was made to maintain the sample gas above the dew point of the stack gas (including all gas components, e.g., acid gas constituents) so that no loss of sample results. This may be done by heating, diluting, drying, desiccating, a combination thereof, or by other means. It was noted that heat may be provided by the process itself if heating is the means of preventing sample loss.
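For the common case where water is the controlling constituent, the dew point the sample must be kept above can be estimated from the stack moisture content. The sketch below uses the Magnus approximation for water vapor only; acid-gas dew points, which can dominate at SO2/SO3-laden sources, require separate correlations, and the 15 percent moisture figure is illustrative:

```python
import math

# Water dew point from moisture mole fraction via the Magnus approximation.
def water_dew_point_c(h2o_fraction, pressure_hpa=1013.25):
    """Approximate water dew point (deg C) at the given total pressure."""
    p_w = h2o_fraction * pressure_hpa  # water partial pressure, hPa
    gamma = math.log(p_w / 6.1094)     # Magnus constants: 6.1094, 17.625, 243.04
    return 243.04 * gamma / (17.625 - gamma)

# Example: a stack gas with 15% moisture has a water dew point near 54 deg C,
# so a sample line colder than that would condense water.
print(round(water_dew_point_c(0.15), 1))
```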

106. Comment: Temperatures should match the 248 degrees Fahrenheit temperature required in Methods 4 and 5. (0042)

Response: The temperature required to prevent condensation in Methods 4 and 5 is sufficient for testing in most normal gas streams. However, the dew point of a gas stream will increase as higher-boiling constituents increase in concentration. At sources high in SO2/SO3 emissions, the dew point is likely to be higher than 248 °F. Our new approach allows the tester flexibility in preventing condensation, whether by a source-specific heating temperature, an appropriate dilution factor, or other means.

107. Comment: Why are the temperature requirements being raised to 284 °F, or 77 °F above the dew point? The dew point requirement adds confusion and one more determination that needs to be made to sample properly. There are equipment problems above 135 °C, and the industry standard of 120 °C should be enough to prevent condensation. There are temperature conflicts between sections of the method and the QA/QC table. (0026, 0028, 0031, 0039)

Response: See the response to Comment 105.

108. Comment: The method needs a clearer and broader definition of dew point. It is important to define the total dew point and not be limited to the water dew point. ICAC recommends a more complete approach that requires maintaining the sample 25 °C above the highest dew point in the sample to include acid effects (H2SO4, H2S, HCl, etc.). (0054)

Response: See the response to Comment 105.

109. Comment: The method states 140 °C or 25 °C above the dew point, whichever is higher. It should say whichever is lower, since there is no concentration of moisture that will condense above 284 °F unless under extreme pressure. (0035, 0039)

Response: The language has been changed to allow the tester to choose which procedure or technology is used to prevent condensation. We only require that the sampling system maintain the sample gas above the dew point of the stack gas so that no loss of sample results. This may be done by heating, diluting, drying, desiccating, a combination thereof, or by other means.
110. Comment: This section rules out unheated in-stack filters if the stack temperature is not more than 25 °C above the dew point, and it makes our direct probe-to-condenser conditioner obsolete... you are prescribing the system rather than making it performance-based. To make three systems that meet your specifications will cost approximately $24,600 plus operating costs. (0043)

Response: See the response to Comment 109.

111. Comment: The Method 7E measurement system requirement that equipment be heated to 140 °C or 25 °C above the dew point is excessive; many sources will have dew points much lower than 140 °C. The requirement should mirror other methods with an acceptable operating range (i.e., Method 5 at 120 ± 14 °C), with an increased requirement (14 to 25 °C) above the dew point where appropriate. The current proposal will render many systems with self-limiting heaters unusable and will significantly increase test costs. Change the wording to "heated to at least 100 °C or 25 °C above the concentration dew point of the sample, whichever is lower," or return to EPA's original wording. (0039, 0056)

Response: See the response to Comment 105.

112. Comment: The section suggests that the probe and flow materials must be externally heated, but in some cases, with stack exhausts above 284 °F, this will not be necessary. (0038)

Response: See the response to Comment 105.

113. Comment: A heated sample line is not required for in-situ dilution probes and is therefore not an essential component. If the exhaust gases are very hot, it may even be necessary to cool the probe. The language should state that the conditions be maintained at a temperature above the dew point of the sample gas, which must include all of the gas components, such as acids, in lieu of water only. All of the new methods contain similar language related to the minimum operating temperature, which one commenter's calculations show to be 284 °F, in excess of 250 °F, a more realistic value. (0009, 0010, 0011, 0022, 0029)

Response: See the response to Comment 105.

3.10 Technology-Specific Analyzers
114. Comment: The performance criteria contained in the proposed method are not sufficient to justify the last sentence in this section: "Analyzers operating on other principles may also be used provided the performance criteria are met." For example, analyzers using electrochemical cells provide completely unreliable results when not operated in diffusion-limiting conditions, even though such analyzers could meet the criteria of the proposed method while operating outside of diffusion-limited conditions. The method would need to include special procedures such as those included in ASTM D 6522-00. However, these procedures would be completely inappropriate for chemiluminescent analyzers. No one set of performance specifications is appropriate for all analytical techniques. (0032) Industry has been generating data for many years with reference method instrumentation, so simply deleting terms such as chemiluminescence or NDUV without a clearly defined means of determining analytical uncertainties in new technologies is a recipe for disaster in the courts. Dropping the chemiluminescence requirement and allowing electrochemical cell analyzers to be used for NOx testing raises concerns about data quality, since these analyzers are more prone to ambient influences, stability problems, and operator error. We recommend that this concern be assessed prior to relaxing the NOx detection principle of Method 7E. (0023, 0037)

Response: It may be difficult to prescribe a set of performance specifications that appropriately evaluate all analytical techniques 100 percent of the time. However, we believe the interference, calibration error, and bias tests provide an adequate assessment of performance the majority of the time. We are making the methods as flexible and performance-based as possible and are not requiring specific tests that check specific analyzer peculiarities. The electrochemical analyzer has been shown to be capable of producing reliable results in an ETV verification study.

115. Comment: References to specific technologies should be removed. (0009, 0054)

Response: References to specific technologies have been removed.

116. Comment: The Agency is commended on the approach taken in recognizing the uncertainty in all monitoring. In particular, we agree with the language in Section 1.3 of Method 7E, which states that data of lesser quality may be accepted if the data user deems that the testing objectives are met. We also agree with moving to a more technology-neutral, performance-based approach, specifically by opening up Method 7E to non-chemiluminescent analyzers. (0022)

Response: Although EPA appreciates these supportive comments, the Agency notes that the "data quality assessment" section, including the requirement to calculate and report the uncertainty of the test data, was not retained in the final rule.

117. Comment: ICAC recommends substantially improving the proposal by (1) adding an analogous performance-based test method for CO emissions measurement, and (2) removing any references to specific types of instruments or sampling components or systems from the body of the proposal (i.e., making it technology-neutral). (0053)

Response: We agree and made these changes in the final rule.

118. Comment: The EPA should require any new technology to pass a Method 301 validation procedure or equivalent prior to its acceptance for reference testing. The Method 301 process could be used to identify potential interferences for a particular technology, and the testing to quantify them could be part of a guidance document. Without a clear-cut means of determining the analytical uncertainty between a new performance-based technology and the previous reference method, the quality and defensibility of the data will suffer greatly. (0023)

Response: We do not think a Method 301 validation is needed for the new technologies we are allowing in the instrumental test methods. We believe the prescribed performance tests will ensure that analytical uncertainty is minimized.
119. Comment: In Section 6.2, references to specific technologies should be removed from this section; they are more appropriate in the preamble, if anywhere. The goal of the proposed rules is to move to performance-based test methods, and including references to specific instruments, sampling components, and systems would undermine that goal. (0009)

Response: In the proposed rule, we attempted to incorporate performance-based criteria as our knowledge of various technologies allowed. Since we had not fully studied the potential technologies and the ability of the Method 6C performance tests to adequately evaluate the analytical uncertainties of these technologies, we believed it was appropriate to mention the technology options that have been successfully used for the SO2 analyzer. However, the final rule does allow the use of "other" technology options, provided the performance specifications can be met.

120. Comment: The rule should maintain a technology-neutral posture and should not refer to a specific technology. (0054)

Response: See the response to Comment 119.

3.11 Calibration Gases
121. Comment: The selection of span gases should be tied to the emission limit or equivalent limit and not to the measured concentrations. (0031)

Response: We have defined the calibration span gas in terms of the measured concentrations to maximize the measurement accuracy in the range of measurements.

122. Comment: This section does not list the allowed gas combinations, like those shown in Methods 3A and 6C. Note that NO in air is not recommended, since it would oxidize to NO2. (0038)

Response: Method 7E now notes that the calibration gas must be NO in nitrogen.

123. Comments: Pages 58840 and 58841, at the end of the 2nd paragraph, say that the cal gases now required in Methods 3A, 6C, and 7E are being proposed for Method 10, which implies the use of three gases (zero, mid, and span). Item 4 of the proposed changes, however, says three cal gases would be required for each test method (Method 10 now requires four gases). Section 7.1 of Method 7E (7.1.1, 7.1.2, and 7.1.3; and also 8.2.5?) states that span, mid, and low gases are used. To me this implies four gases in total (zero, low, mid, and span). Please clarify, and if four gases are used, recognize the increased cost effect of requiring this. Also define what constitutes a zero gas. (0004, 0014, 0020, 0027)

Response: Methods 3A, 6C, 7E, 10, and 20 require three calibration gases: low (which may include zero), mid, and high. The cited instances of ambiguity in the methods have been corrected.
124. Comment: Only five gas blending options are listed for calibration gases. Especially given the rigorous interference checks outlined elsewhere, this is hard to justify. The use of O2 in N2 in calibration gases is not allowed in Method 3A, yet this is the primary and most commonly used calibration gas for O2 analyzers. Blending CO2 or O2 with other gases shown not to interfere should be allowed (especially CO, NOx/CO2 in N2, O2/CO2 in N2, etc.), and the wording should be changed to allow flexibility. A number of common and viable options have been excluded. One comment lists four specific mixtures based on the compounds just mentioned. Does CO or NOx blending affect the feasibility of the interference check? In addition, we typically use CO, NO, and CO2 in N2 mixtures, and we use propane in some blends without interference. The Scott Specialty Gases, Air Products, Praxair, Matheson, and Messer catalogs offer additional commercially available protocol blends beyond those mentioned above. (0004, 0009, 0011, 0013, 0028, 0030, 0035, 0038, 0039, 0043, 0048, 0059)

Response: Blended calibration gases are allowed in the final rule, provided they meet the requirements of the Traceability Protocol and the additional gas components are shown not to interfere with the analysis.

125. Comments: Section 7.1 of Method 6C should allow additional blended gases, including mixtures with NO (provided no oxygen is present), CO, methane, propane, etc., provided that the gases meet Protocol standards. This also applies to Methods 3A, 7E, 10, and 20. I recommend removing this section altogether in all the methods. Many other comments recommended additional gas blends and are also summarized in the comments for each of the individual methods. (0039, 0059)

Response: See the response to Comment 124.

126. Comments: The proposed mid-level cal gas range of 20 to 70 percent is an improvement over the existing 40 to 60 percent. It allows more leeway and freedom to use a representative cal standard for those emissions that fall between 20 to 40 percent and 60 to 70 percent of the analyzer span. (0038)

Response: After further consideration, we have decided to retain the "40 to 60 percent of calibration span" requirement for the mid-level gas. The Agency believes that this ensures a better evaluation of the analyzer's linear response. If the mid-gas line were lowered to 20 percent of calibration span, it would be possible to do the analyzer calibration error test with low-, mid-, and high-level gases of 19, 21, and 80 percent of calibration span (since the low-level gas concentration can be up to 20 percent of span).
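The scenario in the response can be expressed as a quick validity check on a calibration gas set. The window percentages below follow the figures quoted in the responses (low up to 20 percent of span, mid 40 to 60 percent, high 80 to 100 percent) and are illustrative rather than a restatement of the rule text:

```python
# Check calibration gas concentrations against span-percentage windows.
def cal_gas_set_ok(span, low, mid, high):
    return (0.0 <= low <= 0.20 * span        # low-level gas (zero allowed)
            and 0.40 * span <= mid <= 0.60 * span
            and 0.80 * span <= high <= span)

# A 19/21/80 set would have passed a 20-70% mid window while leaving the
# middle of the range untested; the retained 40-60% window rejects it.
print(cal_gas_set_ok(100.0, 19.0, 21.0, 80.0))  # False
print(cal_gas_set_ok(100.0, 15.0, 50.0, 90.0))  # True
```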

127. Comment: Method 3A should explicitly allow the use of ambient air as (1) a high-range calibration gas for O2 analyzers and (2) a zero gas for non-dilution-based systems with CO2 analyzers, where appropriate for the measurement range of the system. (0032)

Response: Ambient air is generally unacceptable for use as a calibration gas in Method 3A. Prepurified air may be acceptable in some cases. However, without prior permission, only gases of traceability protocol quality are allowed as calibration gases.
128. Comment: It is not clear whether the cylinder gases used for Method 3A calibrations are to be prepared according to the EPA protocol. A lot of controversy and delay can be avoided if the gases used during a compliance demonstration are EPA protocol gases with a current certificate at the time of the test. (0059)

Response: See the response to Comment 127.

129. Comment: Why not allow the ability to mix CO and/or NOx into these blends? Does it affect the feasibility of the interference check? (0004)

Response: See the response to Comment 124.

130. Comment: Only five gas blending options are listed for calibration gases. Especially given the rigorous interference checks outlined elsewhere, this is hard to justify. Blending SO2 with other gases shown not to interfere should be allowed. CO could be one component of a gas mixture; a mixture of SO2/NOx/CO2 in N2, etc., should be allowed; and the method should be rewritten to allow this flexibility. A number of common and viable options have been excluded. In addition, we use propane blends without interference. See the Scott Specialty Gases, Air Products, Praxair, Matheson, and Messer catalogs for commercially available protocol blends other than those mentioned above. (0009, 0011, 0027, 0035, 0038, 0039, 0048, 0059)

Response: See the response to Comment 124.

131. Comment: It is not clear whether it is acceptable to prepare calibration gas mixtures from high-concentration EPA Protocol gases in accordance with Method 205. If so, will this be acceptable to both EPA's OAR and CAMD? The method does not address the use of a standard gas dilution system to generate low-level cal gases from high-level gases. (0013, 0059)

Response: Method 205 may be used to dilute calibration gases of traceability protocol quality. Method 205 may only be used under Part 75 of this chapter with Administrator approval.

132. Comment: Zero gas is not defined; only low, mid, and span gases are listed. Zero gas is required in the analyzer calibration error test and should be defined. (0035)

Response: The final rule requires three gas concentrations for the analyzer calibration error test, i.e., low, mid, and high. The rule makes clear that the tester may use a zero gas as the low-level calibration gas.

133. Comment: The sentence "The calibration gas certification (or recertification) must be complete and the test must be completed before the expiration date..." needs some editing. The first sentence says, "Must be certified according to the Protocol." "Must be certified" requires completion. Is the word "test" referring to the emission test? Perhaps the first sentence should be "Your calibration gas must be certified (or recertified) in accordance..." The second sentence should read "The calibration gas shall not be used after its expiration date." The statement, "The goal is to bracket the sample concentrations and have at least one calibration gas below and one above the measurements," should be deleted or clearly rewritten as a suggestion so as not to be mistaken for a requirement; it would be misinterpreted where emissions are highly variable (containing spikes) and/or where a calibration gas closely approximates the effluent concentration but does not necessarily fall above or below the exact concentration. (0032, 0038)

Response: The suggested calibration gas certification language has been added. The goal of bracketing the sample concentrations with calibration gases has been listed as a recommendation to follow whenever possible.

134.
Comments:
We
have
few
issues
with
the
analyzer
accuracy
or
ability
of
existing
reference
methods
to
perform.
The
real
place
for
improvement
is
with
uncertainty
of
certified
gases.
Based
on
cost,
the
proposed
changes
are
not
justified.
Is
a
1-yr
limit
on
gas
tag
values
reasonable?
Does
data
support
the
need
for
the
proposed
expiration
lengths
and
requirements
for
recertification?
Through
my
observations,
I
believe
that
an
improved
audit
program
for
the
gas
manufacturers
would
provide
better
results
than
the
proposed
changes
in
the
methods
since
we
rely
on
them
as
standards.
The
EPA
is
not
made
aware
of
these
problems,
but
we
frequently
observe
them
in
the
field
when
conducting
relative
accuracy
test
audits.
(
0039,
0042,
0043)

Response:
We
understand
your
concern
with
the
uncertainty
of
certified
gases.
We
are
currently
evaluating
ways
of
auditing manufacturers' gases.
Through
this
effort,
we
hope
to
obtain
sufficient
data
to
update
our
current
thinking
on
the
matter.
Until
then,
we
must
continue
to
rely
on
the
specifications
and
expiration
limits
set
forth
in
the
Traceability
Protocol.

135.
Comment:
The mid-level gas range is too wide to demonstrate linearity of the system. Non-linearity is typically worst near 50 percent of the calibration range, and this proposed change would allow using a gas that hides the non-linearity. Why change the current 40 to 60 percent to the proposed 20 to 70 percent? It appears to provide less accurate data. The width of the proposed range is excessive and is so wide that the calibration curve may not accurately reflect instrument response. The old 40 to 60 percent is reasonable and should remain the standard. (0016, 0028, 0032, 0041, 0059)

Response:
We
have
retained
the
40
to
60
percent
of
calibration
span
range
for
the
mid-level
calibration
gas.

136.
Comment:
Zero gas falls in the category of low-level gas. Is the gas manufacturer required to certify the zero gas according to the Protocol? If so, the Protocol should include a procedure for zero gas. (0032)

Response:
Where a zero gas is used for the low-level gas, it does not have to be certified by the traceability protocol.

137.
Comment:
Is the term low-level gas also a potential reference to zero gas? Can calibration gases still be generated by a Method 205 dilution system? Some test firms have invested heavily in dilution systems. (0030)

Response:
See
the
response
to
Comment 138.
3.12
NO2
Converter
Efficiency
Test
in
Method
7E
138.
Comment:
NO2
calibration
gases
for
the
converter
test
are
not
available
as
EPA
Traceability
Protocol
for
Assay
and
Certification
of
Gaseous
Standards.
Since
these
gases
are
not
available,
this
section
cannot
be
completed
to
meet
the
standards
in
Section
7.1.
NO2 gas exhibits unusual storability problems, and maintaining an NO2 calibration gas at its certified concentration may be difficult.
(
0008,
0010,
0038)

Response:
We
have
found
that
NO2
of
traceability
protocol
quality
is
available
commercially,
but
in
limited
concentrations
from
limited
sources.
We
also
concur
that
there
may
be
long-term
stability
problems
with
NO2
cylinder
gases.
Because
of
these
concerns,
we
have
retained
the
original
procedures
cited
in
Method
20
for
determining
converter
efficiency
in
addition
to
this
procedure
of
direct
evaluation
with
NO2.
The
NO2
approach
is
simple,
quick,
and
provides
a
direct
assessment
of
the
converter
efficiency
from
the
known
cylinder
tag
value.
However, although this approach is theoretically acceptable and straightforward, we have cautioned the user that state-of-the-art NO2 calibration gases may not be sufficiently stable, which may make it more difficult to pass the 90 percent conversion efficiency requirement.
In
cases
where
this
option
is
taken,
the
NO2
must
be
prepared
according
to
the
traceability
protocol
and
be
accurate
within
2
percent.

139.
Comment:
For
the
converter
efficiency
gases
in
Section
7.1.4,
NO2
calibration
gases
are
not
available
as
an
EPA
Traceability
Protocol
for
Assay
and
Certification
of
Gaseous
Standards.
Since
NO2
protocol
gases
are
not
available,
this
section
could
not
be
completed
per
the
standards
required
in
Section
7.1.
(
0042)

Response:
See
response
to
Comment
138.

140.
Comment:
The preamble, section II, item 6, says to perform the converter check before each test, but this conflicts with the Summary Table, which says after each test.
(
0025)

Response:
The
converter
check
must
be
passed
before
each
test;
this
has
been
clarified.

141.
Comment:
The
methodology
for
implementing
a
NOx
converter
correction
factor
based
on
the
converter
test
results
is
unclear.
While
requiring
this
test
is
preferred,
maintaining
the
current
methodology
for
correcting
for
system
bias
eliminates
the
need
for
any
additional
corrections.
(
0038)

Response:
Correcting
the
test
results
for
converter
efficiency
is
not
required
in
the
final
rule
but
may
be
done
at
the
discretion
of
the
tester.
We have provided an equation for this purpose, for use where desired when the alternative Tedlar bag converter check is performed.
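The kind of correction this response describes can be sketched as follows. The function name, the simple linear-mixing form, and the variable names are illustrative assumptions of this sketch, not the equation published in Method 7E.

```python
def correct_for_converter_efficiency(nox_measured_ppm, no2_fraction, efficiency):
    """Adjust a measured NOx concentration for an NO2-to-NO converter
    that is less than 100 percent efficient.

    Assumes the NO portion of the sample passes through unchanged while
    the NO2 portion is scaled by the converter efficiency (a simplification
    of this sketch, not the Method 7E equation).
    """
    if not 0.0 < efficiency <= 1.0:
        raise ValueError("efficiency must be in (0, 1]")
    # Fraction of the true NOx that the analyzer actually reports.
    recovered = (1.0 - no2_fraction) + no2_fraction * efficiency
    return nox_measured_ppm / recovered

# A 90 percent efficient converter on a 5 percent NO2 stream reads
# 99.5 ppm for a true 100 ppm; the correction restores the true value.
print(correct_for_converter_efficiency(99.5, 0.05, 0.90))
```

Because the correction is discretionary in the final rule, a tester applying it would document the converter efficiency used.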

142.
Comment:
What
happened
to
the
current
method
of
mixing
air
and
NO
to
generate
NO2
for
the
converter
check?
There
are
several
practical
problems
with
using
NO2
cylinders
in
the
proper
ranges
for
every
test.
(
0004,
0009)
Response:
The
current
method
of
mixing
air
and
NO
to
generate
NO2
for
the
converter
check
was
retained
as
an
allowable
option.

143.
Comment:
The
requirement
for
the
uncertainty
of
the
NO2
cylinder
concentration
is
missing,
and
should
be
listed
as
2
percent.
(
0013)

Response:
The
uncertainty
of
the
NO2
cylinder
concentration
must
be
within
2
percent,
and
this
has
been
noted.

144.
Comment:
This Method 7E section requires that the converter efficiency gas have a concentration within 50 percent of the measured concentration. In most cases, the tester has no means to determine the NO2 concentration in advance, so there is no viable means to comply with this specification. It is also impractical to carry several properly prepared NO2 cylinders. Some comments suggest that a simple daily check of the converter is the practical and reasonable approach. Some comments say this section conflicts with Section 8.2.3 or 8.2.4 and, furthermore, that it uses the term suggest rather than must. What does within 50 percent mean? In particular, clarify it versus the 50 to 150 percent suggested in Section 8.2.4. Since we do not know the NOx analyzer range at which we will be operating, it could be specified that the NO2 converter check be done at a specified percentage of the range on which the analyzer will be operated. Perhaps require using a gas concentration equal to or greater than the highest NO2 concentration that will be measured. One comment cites NIST stability problems with NO2 and suggests EPA's Alt-013 method for using 50 ppmv NO2 gas as a one-size-fits-all approach if it is necessary to retain use of NO2 gas at all. A converter efficiency test can be demonstrated with any NO2 gas, such as 20 ppm, relevant to a specific source category. (0004, 0011, 0012, 0013, 0014, 0016, 0022, 0027, 0029, 0028, 0031, 0031, 0032, 0034, 0035, 0038, 0039, 0047)

Response:
We
understand
the
difficulty
in
preparing
NO2
cylinder
gases
to
match
anticipated
emission
levels.
We
have
dropped
the
proposed
requirement
to
match
the
stack
NO2
concentration
within
50
percent
and
have
added
a
40
to
60
ppm
test
concentration
for
all
cases
as is allowed in our Alt-013 method.

145.
Comment:
If the NO2 concentration is low, as it usually is (<5 ppm), the ±50 percent of the average measured concentration will be very difficult to meet. Therefore, an alternative specification of ±10 ppm (or perhaps 20 ppm) should be provided. In addition, if the NO2 is significantly variable, the ±50 percent of the average measured concentration will be too restrictive. For these sources, there should be an alternative specification, e.g., >150 percent of average. In all cases, dilution of the NO2 standard should be allowed. (0032, 0030, 0031)

Response:
See
response
to
Comment
144.

146.
Comment:
Note
that
NO2
is
a
gas
that
must
be
handled
with
extreme
care
even
at
low
concentrations
because
of
exposure
effects.
Do
the
proposed
changes
produce
an
increase
in
data
quality
that
justifies
the
burdens
of
the
procedure?
Since
most
converters
have
a
definite
lifespan,
would
it
be
possible
to
evaluate
it
on
a
less
frequent
basis
based
on
operational
life
and
NOx
concentrations
in
the
effluent?
Is
it
acceptable
to
both
the
Office
of
Air
Quality
Planning
and
Standards
(
OAQPS)
and
the
Clean
Air
Markets
Division
(
CAMD)
to
produce
the
required
concentrations
of
NO2
from
protocol
cylinders
using
Method
205?
(
0059)

Response:
We
have
noted
in
the
safety
section
that
NO2
is
a
toxic
and
dangerous
gas.
We
are
allowing
this
procedure
at
the
discretion
of
the
tester
because
it
is
simple,
quick,
and
provides
an
accurate
assessment
of
the
converter
efficiency
from
the
known
cylinder
concentration.
We
believe
a
converter
test
before
each
source
test
is
a
better
way
to
assure
converter
efficiency
than
by
scheduling
evaluations
whenever
projected
life-spans
and
exposure
concentrations
suggest
the
need.
The
new
requirement
to
test
the
converter
at
40
to
60
ppm
should
eliminate
the
need
for
gas
dilution.
Gas
dilution
meeting
the
Method
205
criteria
is
acceptable
to
OAQPS,
but
for
part
75
applications,
Administrator
approval
from
CAMD
is
required.

147.
Comments:
Why
was
the
current
converter
efficiency
method
(
as
described
in
Method
20,
using
a
Tedlar
bag
with
NO
and
O2
mixture
for
30
minutes)
removed
from
the
rule?
We
recommend
allowing
the
current
method
as
well
as
the
NO2
calibration
gas
method.
(
0009,
0020,
0027,
0031)

Response:
The
converter
efficiency
method
currently
in
Method
7E
has
been
retained
as
an
additional
option.

148.
Comments:
The NO2 to NO converter check outlined in 40 CFR 86.123-78 says you can perform it prior to service and at least monthly thereafter, while the Summary Table of QA/QC says after every test. Please clarify, and note the increased cost if it is the latter. Please clarify what prior to the test means. It can be interpreted as each test, 1 day, 7 days, 30 days, or before each project. Before each test is excessive. NO2 is an unstable, expensive gas with a short shelf life, so before each test is excessive, and the check should instead be quarterly or bi-annually. Note that stainless steel converters have endless life, unlike the molybdenum converters, which may require more stringent checks. (0014, 0020, 0034, 0035, 0043, 0048, 0056)

Response:
The
converter
efficiency
must
be
verified
prior
to
each
source
test
whether
you
use
the
NO2
cylinder
gas
test,
the
NO
and
O2
in
the
Tedlar
bag
procedure
cited
in
Method
20,
or
the
procedure
in
40
CFR
86.123-78.
We
do
not
believe
this
is
excessive.
Other
less-frequent
alternative
procedures
may
be
used
with
the
approval
of
the
Administrator.

149.
Comment:
What
about
the
cases
where
the
amount
of
NO2
is
low
in
comparison
to
NO?
Perhaps
a
fixed
value
of
NO2
could
be
specified
based
on
impact.
In
some
cases,
the
principal
component
is
NO2
(
as
in
some
combustion
turbines
with
CO
catalyst);
90
percent
efficiency
means
that
the
measured
concentration
would
be
10
percent
low.
If
the
NO2
component
is
5
percent
of
the
total,
then
90
percent
efficiency
would
mean
that
the
measured
concentration
is
0.45
percent
low
(
0032).

Response:
The
allowance
currently
in
Method
7E
for
a
converter
exemption
in
cases
where
the
NO2
portion
of
NOx
is
less
than
5
percent
provides
opportunity
for
a
low
data
bias.
We
have
not
included
the
converter
exemption
allowance
in
Method
7E
to
improve
the
certainty
of
the
generated
data.
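The fractional-bias arithmetic in Comment 149 can be sketched in one line. The linear mixing model is an assumption of this sketch; under it, a 5 percent NO2 fraction with a 90 percent efficient converter gives roughly the half-percent low bias the commenter describes.

```python
def low_bias_fraction(no2_fraction, converter_efficiency):
    """Fractional low bias in measured NOx when only the NO2 portion
    of the sample is attenuated by an imperfect converter (simple
    linear mixing model; an assumption of this sketch)."""
    return no2_fraction * (1.0 - converter_efficiency)

# All-NO2 source with a 90 percent converter: 10 percent low.
print(round(low_bias_fraction(1.00, 0.90), 4))
# NO2 only 5 percent of total NOx: about 0.5 percent low.
print(round(low_bias_fraction(0.05, 0.90), 4))
```

The same model shows why dropping the converter exemption matters most for sources whose NO2 fraction is large.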
150.
Comments:
How
can
a
tester
determine
what
the
amount
of
NO2
is
in
the
sample
if
the
analyzer
does
not
differentiate
between
NO
and
NO2?
(
0009,
0031)

Response:
The
proposed
requirement
to
conduct
the
converter
test
at
an
NO2
concentration
within
50
percent
of
the
emission
NO2
concentration
has
been
dropped.
A
new
requirement
to
evaluate
the
converter
in
the
40 to 60 ppm
range
has
been
added.

151.
Comment:
NO2 cylinders are more unstable than the Tedlar bag technique. In my experience, NO2 cylinders are fairly accurate for up to 1 month and then continue to degrade; I have checked this on multiple analyzers, so it is not a converter issue.
The
comment
then
presents
a
modified
Method
20
bag
technique
to
determine
true
converter
efficiency,
because
the
author
assumes
the
lack
of
this
procedure
is
why
the
bag
technique
was
discarded.
(
0031,
0039)

Response:
See
response
to
Comment
138.
The
modifications
to
the
Method
20
bag
procedure
that
the
commenter
recommends
have
been
incorporated
into
Method
7E.

152.
Comment:
The
uncertainty
calculation
does
not
adjust
for
NO2
converter
efficiency.
Some
instruments
do,
and
that
is
what
it
should
say
here.
(
0035)

Response:
We
have
dropped
the
proposed
requirement
to
calculate
and
report
the
uncertainty
of
each
test
run.

153.
Comment:
Using
actual
NO2
cal
gas
to
determine
conversion
efficiency
removes
any
question
of
the
NO
to
NO2
conversion
as
per
the
old
Tedlar
bag
method;
we
think
this
is
a
good
choice.
We
suggest
retaining
the
stability
requirements
in
the
current
Method
20,
Section
5.6.1,
along
with
the
>
90
percent
conversion
efficiency
requirement
for
an
acceptable
system.
(
0041)

Response:
We
have
retained
the 90 percent converter efficiency and 30-minute stability requirements.

154.
Comment:
This
section
says
that
converter
efficiency
gas
must
have
a
NO2
concentration
within
50
percent
of
the
average.
Is
this
referring
to
the
measured
concentration
of
NOx
or
NO2?
To
be
consistent
with
Section
7.1.4,
this
should
be
NO2.
(
0038)

Response:
The
proposed
requirement
to
evaluate
the
converter
within
50
percent
of
the
NO2
concentration
in
the
source
has
been
dropped.
The
new
converter
test
requires
an
NO2
concentration
between
40
to
60
ppm
if
this
option
of
evaluating
the
converter
is
chosen.

155.
Comment:
For
the
converter
efficiency
gas,
it
is
impractical
to
have
an
NO2
calibration
gas
cylinder
that
is
within
50
percent
of
every
measured
NO2
concentration.
This
will
require
contractors
to
carry
many
NO2
calibration
gas
cylinders.
How
would
one
know
what
the
NO2
concentration
is
if
reported
emissions
are
for
NOx?
What
portion
is
NO
and
what
is
NO2?
A
daily
check
of
the
converter
with
a
cylinder
of
50
ppm
NO2
is
practical
and
reasonable.
The
converter
should
demonstrate
greater
than
90
percent
conversion
of
NO2
to
NO.
Additional
gases
will
significantly
affect
testing
costs.
Another
cost
may
be
associated
with
DOT
requirements
for
commercial
vehicles
and
hazardous
material
transport.
Any
time
additional
gases
can
be
eliminated
from
a
test
program,
this
should
be
considered.
(
0009,
0011,
0056)

Response:
We
agree
with
the
commenter
and
have
incorporated
the
recommendations.
The
proposed
converter
test
using
NO2
cylinder
gas
is
now
one
of
three
options
we
allow
for
the
efficiency
test.
The
original
Tedlar
bag
test
option
and
a
similar
procedure
cited
in
40
CFR
86.123
are
testing
options
as
well.

156.
Comment:
In
Section
10.1
of
Method
7E,
the
3rd
sentence
states
if
your
analyzer
measures
NO
and
NO2
separately,
then
you
must
use
both
NO
and
NO2
calibration
gases.
Please
clarify
and
give
examples
of
which
analyzers
would
be
covered
here...
in
my
opinion
it
would
be
most
of
them.
For
example,
in
my
opinion
a
chemiluminescence
analyzer
measures
NO
and
NO2
separately.
Does
this
mean
I
now
have
to
calibrate
with
three
NO
and
three
NO2
gases?
Most
NOx
analyzers
could
measure
both
NO
and
NO2
by
making
total
NOx
and
NO
measurements
separately.
Make
it
clear
that
analyzers
using
NOx
converters
do
not
fall
into
the
separate
measurement
category.
(
0014,
0038)

Response:
Clarity
has
been
added
to
Section
10.1
to
state
that
the
analyzer
must
be
calibrated
for
all
species
of
NOx
that
it
detects.
Since
the
chemiluminescence
analyzer
only
detects
NO,
it
only
has
to
be
calibrated
with
NO.
This
has
been
specifically
stated.

157.
Comment:
There
should
be
an
exemption for low-level NO2 emitters because determining the 90 percent converter efficiency is not reasonable or practical for such sources. (0032)

Response:
The
proposed
requirement
to
evaluate
the
converter
within
50
percent
of
the
NO2
concentration
in
the
source
has
been
dropped.
The
new
requirement
to
evaluate
the
converter
in
the
40
to
60
ppm
range
is
practical
for
low-level
NO2
emitters
and
no
exemption
is
needed.

158.
Comment:
For
the
converter
efficiency
in
Section
3.4,
what
happened
to
the
current
method
of
mixing
air
and
NO?
Does
this
mean
that
for
every
test
we
have
to
carry
a
cylinder
of
NO2
so
that
we
can
conduct
a
converter
efficiency
test
for
every
concentration
and
still
use
the
equations
for
upper
and
lower
uncertainty?
It
appears
that
the
use
of
an
NO2
calibration
standard
is
required.
NO2
gas
exhibits
unusual
storability
difficulties.
Maintaining
an
NO2
calibration
gas
at
its
certified
concentration
could
be
difficult.
(
0004,
0010)

Response:
See
response
to
Comments
138,
148,
and
150.

159.
Comment:
For
the
converter
efficiency
gas
in
Section
7.1.4,
how
do
we
know
what
the
measured
NO2
concentration
is?
(
0004)

Response:
See
response
to
Comment
155.
3.13
General
Comments
3.13.1
Supportive
Comments
160.
Comment:
(
1)
Bringing
consistency
to
the
methods
will
bring
clarity
and
remove
a
great
deal
of
confusion.
Standardizing
6C
or
7E
is
a
good
idea,
since
the
precision
and
repeatability
of
those
methods
have
been
demonstrated
by
an
EPA
study.
(
2)
Removal
of
the
drift
requirement
will
simplify
testing
and
should
be
acceptable
as
long
as
the
system
passes
initial
and
final
bias
tests.
(
3)
The
NO2
system
bias
and
NH3
interference
problems
found
at
low
concentrations
are
best
evaluated
using
dynamic
spiking.
(
4)
The
30-sec
minimum
response
time
requirement
was
always
difficult
to
meet
and
never
necessary.
(
0016)

Response: Of the revisions the commenter noted as a good idea, only the calibration drift requirement between samples has been retained, because of the need for it expressed by other commenters.

161.
Comment:
Overall,
the
proposed
revisions
are
positive
improvements
to
the
test
methods
and
are
expected
to
be
much
easier
to
follow
by
individuals
conducting
air
emission
stack
tests.
(
0024)

Response:
No
response
is
necessary.

162.
Comment:
We
compliment
EPA
on
their
very
credible
performance
in
addressing
a
complex
subject,
and
revising
it
into a set of well-harmonized, simplified, and updated test methods
that
allow
for
flexibility
and
innovation
in
the
emissions
measurement
industry.
We
compliment
the
EPA
staff
at
OAQPS/EMC
for
developing
a
well
thought
out
rule
that
encourages
a
system
of
testing
performance
for
instruments
and
sampling
systems.
(
0053)

Response:
No
response
is
necessary.

163.
Comment:
This
proposal
addresses
long-standing
inconsistencies
between
instrumental
test
methods.
The
revised
methods
provide
a
standard
approach
for
defining
a
confidence
range
around
the
method
result
which
will
aid
in
assessing
compliance.
The
previous
procedures
generated
a
single
value
with
no
explicit
estimate
of
uncertainty
which
created
a
level
of
certainty
that
is
not
warranted
by
the
technology
of
the
methods.
This
new
approach
is
particularly
useful
when
measuring
very
low
emission
levels,
such
as
NOx
emissions
from
gas
turbines
with
selective
catalytic
reduction
control
equipment.
These
are
very
positive
changes.
(
0060)

Response:
No
response
is
necessary.

164.
Comment:
It
is
obvious
that
the
editors
put
a
great
deal
of
time
and
effort
into
the
preparation
of
this
proposal,
and
they
should
be
commended
for
their
dedication.
(
0059)

Response:
No
response
necessary.
3.13.2
Adverse
Comments
165.
Comment:
The
proposed
changes
are
intended
to
harmonize,
simplify,
and
update
the
test
methods,
but
they
do
not
accomplish
this
goal.
The
revised
methods
are
conflicting and contain
numerous
errors,
misstatements,
and
inconsistencies.
Some of the requirements are so confusing and so lacking in proper explanation that we could not determine how to conduct them.
Other
requirements
appear
to
be
impossible
to
meet,
and
we
do
not
believe
we
could
perform
these
reference
method
tests
as
described
in
the
proposed
procedure.
Further,
I
do
not
see
how
the
proposed
changes
either
simplify
or
reduce
the
costs
of
testing.
The
proposed
rules
may
simplify
the
way
the
methods
are
written,
in
that
they
all
refer
to
7E,
but
they
increase
the
complexity
of
the
field
procedures
rather
than
simplifying
them.
The
proposed
rules
will
make
the
source
testing
profession
more
difficult,
with
the
result
being
less-experienced
people
generating
worse
data
than
is
presently
obtained.
(
0008,
0009,
0014,
0027,
0031,
0032,
0033,
0034,
0039,
0041,
0042,
0043,
0055)

Response:
We
disagree
that
the
updates
do
not
improve
the
methods
or
the
data
generated.
In
the
final
rule,
we
attempted
to
correct
any
errors
and
inconsistencies
that
were
brought
to
our
attention
through
the
public
comments
on
the
proposal.

166.
Comment:
Some
of
the
proposed
changes
are
anticipated
to
impact
the
length
and
cost
of
RATA
and/or
stratification
tests
and
result
in
additional
cal
gases
being
ordered
and
maintained
for
testing.
(
0042)

Response:
We
disagree
that
we
lengthened
and
increased
the
cost
of
RATA
because
we
specifically
allow
for
performance
testing
of
CEMS
(
i.
e.,
RATA)
using
procedures
in
the
appropriate
performance
specification
or
applicable
regulation.
See
Section
8.1.1.
Secondly,
we
added
flexibility
by
including
alternative
procedures
that
may
save
the
tester
time
when
applied
in
appropriate
cases.
Finally,
we
allow
Method
205
for
diluting
high
concentration
calibration
gases
of
traceability
protocol
quality.
Any
additional
costs
associated
with
purchasing
additional
calibration
gases
is
offset
by
the
increase
in
data
certainty.

167.
Comment:
Almost
all
third
party
testing
companies
are
small,
and
much
of
the
available
talent
resides
in
these
small
companies.
These
companies
cannot
afford
the
equipment
with
recorded
alarms
and
superfluous
flow
controllers
that
larger
companies
have
but
rely
on
knowledge,
experience,
and
finesse.
The
proposed
rules
have
many
direct
quotes
and
ideas
from
a
set
of
comments
presented
to
EPA
in
draft
form
by
an
industry
trade
group,
which
is
a
group
that
manufactures
air
pollution
control
equipment,
instrumentation,
stack
testing
equipment
and
CEMS.
While
this
group
is
an
important
technical
resource,
it
is
also
clear
that
they
only
stand
to
gain
from
this
legislation
and
have
nothing
to
lose.
Some
of
the
trade
group
comments
are
reasonable,
but
others
are
self-serving.
The
EPA
has
proposed
a
number
of
quality
assurance
requirements
which
are
unreasonable,
unnecessary,
and
more
stringent
on
the
tester
than
the
analyzer
and
calibration
gas
manufacturers.
(
0039)

Response:
The
EPA
received
input
from
a
number
of
stakeholders
including
industry
groups,
gas
and
instrument
manufacturers,
stack
testers,
and
regulators
prior
to
the
proposal.
Valuable
input
was
obtained,
and
we
tried
to
keep
the
method
revisions
neutral
and
not
favor
a
particular
group.
Acceptable
performance
criteria
for
tests
are
based
on
what
is
necessary
for
acceptable
results,
whereas
acceptable
performance
criteria
for
manufacturers
are
based
on
producing
a
quality
instrument.
We
continue
to
rely
on
testers'
knowledge
and
experience
to
select
and
use
appropriate
equipment
at
individual
facilities.

3.13.3
Other
Comments
168.
Comment:
We
support
the
goal
of
addressing
low-concentration
measurements.
However,
these
revisions
provide
little
to
enhance
those
measurements.
In
addition,
the
special
provisions
for
sampling
system
bias
performance
criteria
for
measuring
low
concentrations
of
exhaust
species
include
a
3-yr
sunset
clause.
EPA
should
eliminate
the
sunset
clause,
identify
issues
associated
with
low
concentration
measurement,
and
identify
additional
measures
to
address
those
issues.
(
0032)

Response:
The
Agency
has
tried,
within
its
resources,
to
identify
the
issues
associated
with
low-concentration
measurements.
We
have
consulted
with
instrument
manufacturers
and
cylinder
gas
producers
to
determine
the
reasonable
capabilities
of
current
measurement
technology.
There is still much work to be done to adapt current test methods to the demands of low-concentration measurements; however, we feel the method additions that were proposed are a step in the right direction. Note that we have dropped the proposed 3-year "sunset clause" on the alternative specifications.
However,
we
reserve
the
right
to
revise
them
as
more
data
become
available.

169.
Comment:
For
concentrations
less
than 10
ppm,
EPA
proposes
a
bias
specification
of
0.5
ppm,
which
makes
QA
specifications
practically
meaningless
for
many
low­
concentration
measurements.
Problems
with
low­
concentration
measurements
have
been
discussed
in
public
forums
for
at
least
10
years
now.
Some
investigative
work
has
been
done
by
groups
outside
of
EPA,
but
it
has
not
been
enough,
largely
due
to
funding
constraints.
EPA
should
conduct
a
study
focusing
on
low
concentration
CO
and
NOx
issues:
(
1)
Stability
and
accuracy
of
calibration
gases;
(
2)
achievable
calibration
error
and
bias
specifications;
(
3)
effects
of
ammonia
in
sample
gas;
(
4)
significance
of
NO2
sampling
system
bias;
and
(
5)
cross-interference
from
compounds
other
than
analytes.
(
0016)

Response:
The
proposed
alternative
bias
specification
of
0.5
ppm
has
been
retained,
but
we
have
dropped
the
proposed
three-year
sunset
provision.
The
Agency
does
not
currently
have
funds
to
perform
the
study
suggested
by
the
commenter.

170.
Comment:
The
test
methods
should
include
a
discussion
of
measurement
sensitivity,
minimum
detection
limit
(
MDL),
and
practical
quantification
limit
(
PQL).
These
important
issues
are
at
the
forefront
of
critical
problems
associated
with
emissions
testing.
EPA
should
address
these
topics
during
this
revision
to
the
test
methods
to
clarify
the
definition
of
MDL
and
PQL
and
provide
appropriate
procedures
to
determine
these
parameters
for
emissions
tests
with
results
that
challenge
instrument
sensitivity.
(
0032)
Response:
These
recommendations
are
a
good
idea
but
beyond
the
scope
of
this
rulemaking
since
we
have
not
fully
assessed
the
methods
relative
to
establishing
MDLs
and
PQLs.

171.
Comment:
The
EPA
proposes
to
revise
Section
1.2
of
the
methods
to
state
that
they
are
required
in
"
specific
New
Source
Performance
Standards,
Clean
Air
Marketing
Rules,
and
State
Implementation
Plans
and
Permits
where
measuring
[
the
applicable
parameter]
in
stationary
source
emissions
is
required."
This
statement
could
be
misinterpreted
to
say
that
the
methods
are
required
in
each
of
those
regulatory
programs
whenever
such
measurements
are
required.
That
is
clearly
not
the
case.
Each
method
applies
when
a
specific
permit
or
regulation
requires
its
use.
The
programs
EPA
cites
may
very
well
be
examples
of
programs
where
regulations
may
specify
(
or
allow)
the
use
of
these
methods,
but
they
do
not
necessarily
require
these
methods.
Other
methods,
including
the
non-instrumental
test
methods,
or
some
alternative
method,
may
be
allowed.
EPA
should
abandon
the
proposed
revision
to
the
applicability
sections
and
retain
the
existing
language,
which
states
that:
"This method is applicable to the determination of [the applicable parameter] in emissions from stationary sources only when specified within the regulations."
If
the
EPA
wants
to
name
specific
programs
where
use
of
these
methods
might
be
required
or
allowed,
they
should
list
them
as
examples
and
not
as
statements
of
applicability.
Applicability
of
particular
test
methods
even
within
Part
60
is
addressed
in
the
individual
NSPS
subparts,
not
in
the
test
methods
themselves.
(
0038)

Response:
Section
1.2
was
modified
to
mention
the
programs
as
examples
of
where
the
methods
may
be
required.

172.
Comment:
Since
major
national
emission
reduction
efforts
such
as
the
Acid
Rain
and
NOx
Budget
programs
rely
on
40
CFR
Part
75
emission
monitoring
accuracy
calculations,
we
request
that
EPA's
Emissions
Measurement
Center
consult
with
the
Clean
Air
Markets
Division
as
to
what
possible
effects
the
proposed
changes
could
have
on
calculated
accuracy
and
on
a
facility's
ability
to
pass
RATAs.
(
0016,
0025,
0038,
0042)

Response:
The
Clean
Air
Markets
Division
is
and
has
been
a
component
of
the
internal
review
team.

173.
Comments:
Method
25A
is
considered
an
instrumental
method
and
should
have
been
included
with
this
package
for
consistency.
Methods
3A,
6C,
and
7E
should
be
used
as
the
foundation,
and
Methods
10,
20,
and
25A
should
be
revised
to
match
this
format.
Most
state
agencies
currently
allow
Method
10
to
be
performed
by
the
procedures
of
Method
6C
as
long
as
a
4-pt
calibration
is
performed
(
zero,
low,
mid
and
high).
Similarly,
most
state
agencies
allow
or
specify
Method
20
to
be
performed
as
per
7E
while
following
sample
collection
procedures
of
Method
20.
For
25A...
it
is
a
matter
of
correcting
out
bias
according
to
6C
procedures.
The
hot/wet
nature
of
25A
should
be
preserved,
but
cal
gas
ranges,
QA
parameters,
etc.,
should
be
made
consistent
with
the
other
methods.
In
addition,
I
recommend
an
option
in
25A
for
methane
and/or
ethane
analysis
by
one
of
the
Method
18
techniques
for
subtraction
from
the
25A
result.
Some
of
the
newer
analyzers,
such
as
the
VIG,
TECO,
and
California
Analytical
non-methane
analyzers,
should
be
included
as
an
alternative
in
the
procedure.
Other
alternatives
would
be
bag
sampling
with
lab
analysis
or
on-site
GC
with
direct
interface.
See
original
comment
for
further
details
of
the
suggestion.
Once
a
final,
uniform
methodology
can
be
agreed
upon
and
approved,
the
7E
procedure
should
also
be
incorporated
by
reference
into
Method
25A.
(
0017,
0021,
0034,
0038,
0039,
0043)

Response:
We
agree
that
Method
25A
should
be
updated
to
incorporate
the
revisions
made
to
Method
7E.
We
plan
to
do
this
in
a
separate,
future
rulemaking.

174.
Comments:
How
will
the
proposed
changes
affect
Performance
Specifications
2,
3,
and
4?
(
0021)

Response:
The
changes
will
not
affect
the
procedures
and
requirements
in
the
applicable
performance
specifications.
The
sampling
points
prescribed
in
the
performance
specifications
will
govern
when
the
instrumental
methods
are
used
in
relative
accuracy
testing.

175.
Comment:
Maintenance
logs
for
all
equipment
should
be
required
to
be
maintained
and
available
in
the
field
for
inspection.
These
logs
should
have
entries
that
are
dated
and
initialed
and
include
records
of
maintenance,
parts
replacement,
problem
reports,
corrective
actions
taken
and
other
relevant
information.
(
0022,
0023)

Response:
This
is
a
good
suggestion
which
should
already
be
an
integral
part
of
a
tester's
standard
operating
procedure.
However,
to
keep
the
updates
as
minimal
and
simple
as
possible,
we
prefer that the keeping of maintenance logs be a part of the testing company's QA/QC and not a part of the methods.

176.
Comments:
The
preamble
states
that
the
manufacturer's
stability
test
(
MST)
is
required
for
all
instruments
used
routinely
for
low
concentrations,
but
the
statement
is
not
made
in
the
text
of
any
of
the
methods.
It
should
be
stated
clearly
in
the
Summary
Table
for
QA/QC
of
7E,
Section
9,
and
in
7E,
Section
16.2.
Further,
the
Summary
Table
indicates
that
the
MST
is
mandatory
for
all
analyzers,
but
the
preamble
only
mentions
low
emitters,
so
which
is
it?
(
0025)

Response:
It
is
now
stated
in
Method
7E
that
the
MST
is
required
for
all
instruments
used
routinely
for
low-concentration
measurements,
and
is
cited
in
the
other
methods.
The
summary
QA/QC
table
in
Section
9
of
Method
7E
was
corrected
to
require
it
only
in
the
case
of
routine
low-concentration
measurements.

177.
Comments:
There
is
a
discussion
of
type
certification
in
Method
6C,
but
none
for
7E.
Is
this
an
oversight?
(
0025)

Response:
The
manufacturer's
stability
test
is
offered
as
an
option
in
Method
3A,
6C,
7E,
10,
and
20
and
is
a
requirement
in
each
method
where
routine
low­
concentration
measurements
are
made.

178.
Comments:
The
commenter
suggests
that
we
remove
Section
12.7.5
"NOx Emission Rate Concentration"
from
Method
7E
and
include
it
in
Method
19.
Method
7E
can
then
reference
Method
19.
(
0034)
Response:
We
agree
with
the
commenter
and
removed
Section
12.7.5
from
Method
7E.

179.
Comments:
Why
are
Methods
3A,
6C,
and
10
abbreviated?
There
is
a
great
deal
of
confusion,
over-specification, and under-specification that occur with this cost-saving approach.
In
the
long
run,
it
will
be
cheaper
to
have
each
method
stand
alone
with
each
section
having
been
carefully
thought
out
for
that
method/
instrument.
The
interference
check
of
6C
is
a
prime
example.
(
0035)

Response:
We
agree
that
specifying
all
the
requirements
in
each
method
is
clearer
than
abbreviating
(i.e., cross-referencing).
However,
due
to
cost
constraints,
cross­
referencing
is
necessary.

180.
Comments:
Several
of
the
proposed
changes
will
impact
the
current
RATA
software
developed
around
the
existing
reference
methods.
Changes
will
be
costly
and
will
require
time
to
identify,
re­
design,
and
test.
Specific
areas
include
(
1)
elimination
of
the
drift
test;
(
2)
bias
based
on
emission
standard
rather
than
span;
(
3)
use
of
mid
or
high­
level
gas
for
bias
test;
(
4)
low­
level
gas
required
to
be
within
0.25
percent
of
upper
range.
(
0042)

Response:
The
final
rule:
(
1)
retains
the
drift
test
and
correction;
(
2)
retains
the
requirement
to
calculate
bias
as
a
percent
of
span;
(
3)
retains
the
requirement
to
use
the
mid­
or
high­
level
gas
for
bias
checks;
and
(
4)
specifies
that
the
low­
level
calibration
gas
must
be
less
than
20
percent
of
the
calibration
span,
except
where
the
low­
level
gas
is
a
zero
gas.
In
that
case,
the
specification
is
0.25
percent
of
the
calibration
span
or
0.5
ppm,
whichever
is
less
restrictive.

We
understand
that
some
of
the
updated
specifications
may
require
the
adjustment
of
testing
software
now
in
use.
We
believe
the
benefits
gained
from
using
harmonized
methods
outweigh
this
initial
inconvenience.
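The low-level gas rule in item (4) of this response can be sketched as a small check. This is an illustration only; the function name and structure are hypothetical and not part of the method text:

```python
def low_level_gas_limit(cal_span_ppm, is_zero_gas):
    """Illustrative acceptance limit for the low-level calibration gas.

    Non-zero low-level gas: must be less than 20 percent of the
    calibration span. Zero gas: 0.25 percent of the calibration span
    or 0.5 ppm, whichever is LESS restrictive (i.e., the larger value).
    """
    if is_zero_gas:
        return max(0.0025 * cal_span_ppm, 0.5)
    return 0.20 * cal_span_ppm

# For a 100 ppm calibration span: a non-zero low-level gas must be
# below 20 ppm, while a "zero" gas may read up to 0.5 ppm
# (less restrictive than 0.25 ppm).
```

For small spans, the fixed 0.5 ppm floor governs the zero-gas case; for large spans, the 0.25 percent term does.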

181.
Comment:
The
term
system
calibration
seems
wrong.
Don't
you
mean
system
bias?
We
don't
calibrate
a
system.
(
0048)

Response:
You
have
a
point.
We
have
added
definitions
for
"system calibration mode" and "direct calibration mode"
to
alleviate
confusion.
We
prefer
these
simplified
terms
over
the
current
wording
to
"introduce calibration gas at the calibration valve installed at the outlet of the sampling probe" and "introduce calibration gas to the measurement system at any point upstream of the gas analyzer."

182.
Comments:
The
methods
should
state
that,
in
the
event
the
tester's
data
acquisition
and
handling
system
malfunctions or otherwise is
not
capable
of
providing
certain
data,
hand
written
data
are
acceptable.
(
0038)

Response:
This
is
already
allowed
in
Section
6.1.9
of
Method
7E.

183.
Comment:
The
40
CFR
Part
60
Appendix
A
test
methods
that
were
revised
in
the
1997
proposal
should
be
updated
to
include
more
current
technology
while
not
making
the
tests
more
restrictive.
The
current
acceptance
criteria
prescribed
by
the
calibration
error,
calibration
drift,
bias
and/
or
interference
check
for
Methods
3A,
6C,
and
7E
should
not
be
changed
in
the
proposal.
Method
10
should
be
modified
to
require
the
same
performance
specifications
as
Methods
3A,
6C,
and
7E.
Method
25
should
be
modified
to
allow
direct
sample
interface
with
the
newer
methane/
nonmethane
analyzers
available
from
gas
analyzer
manufacturers.
Method
25
should
also
be
modified
to
have
fixed
performance
specifications
for
the
analyzer/
sample
interface
similar
to
Methods
3A,
6C,
7E,
and
10.
(
IV­
D­
03)

Response:
In
the
October
2003
reproposal,
the
methods
were
written
to
allow
more
current
technology
to
be
used.
The
acceptance
criteria
prescribed
in
the
1997
proposal
for
calibration
error,
calibration
drift,
and
bias
have
been
changed
to
conform
with
the
current
limits.
Method
10
is
being
modified
to
require
the
same
performance
specification
as
Methods
3A,
6C,
and
7E.
Revisions
to
Method
25
will
have
to
be
made
under
a
separate
rulemaking
since
it
is
not
a
continuous
instrumental
test
method.
Such
revisions
are
beyond
the
scope
of
this
rulemaking.

184.
Comment:
Regardless
of
the
data
recording
device,
it
should
be
mandatory
that
the
calibration
error,
converter
efficiency,
stratification,
and
bias
checks
results
be
recorded
along
with
the
sample
run
data
and
copies
of
all
data
be
included
in
the
report.
Having
a
hard
copy
of
all
these
values
will
enable
agency
personnel
to
perform
a
thorough
data
validation
of
the
values.
(
0011)

Response:
The
final
rule
requires
that
this
data
be
documented
and
included
in
the
report.

185.
Comment:
In
the
nomenclature,
specifically
Cdir
and
Cs,
both
state
"…reported by the gas analyzer."
To
be
consistent,
this
should
say
"…reported by the data recorder."
These
values
for
determining
emissions
are
taken
from
the
data
recorder.
Since
the
data
from
the
data
recorder
will
be
used
for
emission
calculations,
then
all
other
recorded
values
used
in
the
emission
calculations
should
be
consistent
and
come
from
the
data
recorder.
(
0011)

Response:
Cdir
and
Cs
are
revised
in
the
final
rule.

186.
Comment:
Lime
kilns
should
be
added
to
Table 1,
Entities
Potentially
Affected
by
this
Action
in
the
preamble.
(
0007)

Response:
Lime
kilns
are
added
to
the
final
rule
preamble.

187.
Comment:
Uniform
procedures
should
be
prescribed
for
the
determination
of
analyzer
span.
For
example,
reference
method
analyzer
span
could
be
selected
such
that
the
pollutant
gas
concentration
encountered
is
not
less
than
30
percent
of
the
analyzer
range.
This
guideline
is
currently
enforced
by
the
Pennsylvania
Department
of
Environmental
Protection.
Also,
if
Methods
6C,
7E,
and
10
pollutant
gas
concentrations
are
less
than
100
ppmv,
reference
method
analyzer
span
cannot
exceed
100
ppmv.
This
would
establish
an
upper
limit
in
which
pollutant
concentrations
are
non­
detect
(
for
example,
SO2
emissions
from
a
low
odor
recovery
furnace).
(
0021)

Response:
We
believe
that
our
revised
definition
of
span
and
the
requirement
for
the
majority
of
the
data
to
be
between
20
and
80
percent
of
span
address
the
commenter's
concerns.
188.
Comment:
EPA
should
consider
a
style
similar
to
25A,
which
evaluates
(
1)
the
entire
system
accuracy
(
analyzer
error
and
system
bias)
at
the
same
time;
(
2)
calibration
limits
of
5
percent
of
gas
value;
(
3)
span
level
at
1.5 to 2.5 times the expected emission concentration;
(
4)
four
calibration
gases
at
zero­,
low­,
mid­,
and
high­
level;
(
5)
bias/
drift
conducted
with
low
and
midlevel
gases.
(
0041)

Response:
The
requirement
to
perform
separate
analyzer
calibration
error
and
bias
checks
has
been
retained
(
except
for
dilution­
type
systems),
with
the
low­
level
gas
and
either
the
mid­
or
high­
level
gas
used
for
the
bias
checks.
The
methods
also
continue
to
require
the
use
of
three
calibration
gases
at
low­,
mid­,
and
high­
levels.
We
believe
our
revised
span
definition
provides
more
flexibility
than
the
approach
advocated
by
the
commenter.

189.
Comment:
For
all
methods,
Section
1.2
(
applicability)
should
clearly
state
that
the
methods
apply
to
periodic
monitoring
and
exclude
CEMS
installed
and
operated
according
to
Parts
60
and
75.
There
are
no
existing
CEMS
that
could
meet
all
of
the
proposed
requirements
in
the
methods.
(
0041)

Response:
Since
the
subject
methods
are
reference
methods,
the
requirements
would
not
apply
to
CEMS.

190.
Comment:
I
agree
that
the
proposal
would
advance
the
science
of
source
testing,
but
I
question
whether
the
benefits
to
public
health
are
sufficient
to
justify
the
increased
costs.
Data
quality
is
unimportant
if
it
doesn't
improve
public
health.
Please
don't
adopt
the
proposed
rules
without
further
revisions
and
opportunities
to
comment.
(
0043)

Response:
We
have
made
significant
revisions
to
the
methods
as
a
result
of
voluminous
comments
and
don't
believe
that
a
re­
proposal
is
necessary.

191.
Comment:
Determining
emission
rates
can
be
a
challenge
in
a
large
exhaust
stack.
An
approach
used
by
some
in
the
industry
is
to
determine
the
average
concentration
of
the
emissions,
then
calculate
the
emission
rate
based
on
the
fuel
flow
rate
and
the
calculated
emission
factor
(
or
alternatively,
the
emission
index).
This
approach
should
always
be
recommended
as
a
tool
for
assuring
data
quality
in
determining
mass
emission
rates.
(
0010)

Response:
This
approach
may
be
helpful
and
recommended
in
a
number
of
cases.
However,
we
are
limiting
the
method
revisions
to
the
determination
of
emissions
on
a
concentration
basis.

192.
Comment:
The
ASME
and
SAE
have
both
developed
procedures
for
measuring
emissions
from
gas
turbines.
SAE's
ARP­
1256A
is
a
common
tool
used
in
the
aircraft
engine
industry,
and
ASME's
B133.9­
1994
"
Measurement
of
Exhaust
Emissions
from
Stationary
Gas
Turbines"
is
also
an
excellent
reference
on
methods
for
measuring
gas
turbine
emissions.
These
methods
should
be
codified
as
alternate
measurement
procedures,
or
as
procedures
for
quality
assurance.
(
0010)
Response:
We
were
not
forwarded
copies
of
the
noted
methods
to
evaluate
their
requirements
relative
to
Method
7E.
Unless
these
methods
satisfy
the
requirements
of
Method
7E,
permission
to
use
them
must
be
granted
by
the
Agency
on
a
case­
by­
case
basis.

3.14
Specific
Comments
on
Method
3A
Section
1.0
Scope
and
Application
193.
Comment:
Method
3A
and
Method
3B
should
be
listed
as
appropriate
methods
to
determine
diluent
O2
or
CO2
measurements
necessary
to
convert
emissions
to
units
of
the
standard.
(
0032)

Response:
The
appropriate
method
for
determining
the
diluent
O2
and
CO2
measurements
necessary
to
convert
emissions
to
units
of
the
standard
is
better
specified
in
the
applicable
regulation
than
the
NOx
measurement
method.

Section
3.1
Definitions
194.
Comment:
Definitions
for
direct
calibration,
system
calibration,
range,
and
span
are
missing.
(
0013)

Response:
We
incorporated
definitions
for
these
terms
by
referring
to
Method
7E.

195.
Comment:
Delete
high­
level
gas,
mid­
level
gas,
and
low­
level
gas
(
they
are
not
found
in
Section
3.0)
from
the
list
of
definitions.
(
0013)

Response:
We
made
sure
that
the
Method
3A
definitions
refer
to
the
appropriate
Method
7E
section(s).

196.
Comment:
Change
"sampling system" to "measurement system".
(
0013)

Response:
We
agree
that
"sampling system" is redundant with "measurement system"
and
deleted
it
from
the
final
rule.

Section
4.0
Interferences
[
Reserved]

197.
Comment:
This
section
is
reserved;
yet
extensive
resources
are
required
to
conduct
interference
checks
of
O2
analyzers.
Potential
documented
interferences
should
be
discussed
in
this
section
and
methods
for
eliminating
or
minimizing
the
effects
should
be
included.
(
0032)

Response:
We
are
unaware
of
potential
interferences
other
than
those
currently
cited
in
Method
3A
and
determining
these
interferences
is
beyond
the
scope
of
this
rulemaking.

Section
6.1
Equipment
and
Supplies
198.
Comment:
This
section
adopts
numerous
equipment
requirements
from
Method
7E
that
are
entirely
inappropriate
for
measurement
of
O2
and
CO2
concentrations.
For
example,
the
proposed
revisions
to
Method
3A
reference
the
requirements
of
Method
7E
Section
6.1.3
which
explicitly
requires
that
sample
lines
be
fabricated
of
stainless
steel
or
Teflon.
Oxygen
and
CO2
are
non­
reactive
gases
present
at
percentage
levels
and
many
other
much
less
expensive
materials
are
perfectly
fine
for
use
in
sample
lines
and
are
commonly
used
in
practice.
Other
inappropriate
requirements
referenced
by
the
proposed
revisions
are
associated
with
the
requirements
to
heat
the
sampling
probe,
sample
transport
lines
and
other
components
of
the
sampling
system.
In
many
cases
these
provisions
are
not
necessary
when
only
diluent
concentrations
are
needed
to
be
measured
for
a
specific
test
program.
(
0032)

Response:
We
agree
and
have
added
"
as
applicable"
to
the
equipment
specifications
in
Section
6.1
of
Method
3A.

199.
Comment:
For
wet­
based
systems
(
i.
e.,
dilution
or
hot­
wet
extractive),
moisture
removal
systems
and
heated
sample
lines
are
neither
performed
nor
essential.
In­
stack
dilution
probes
may
not
need
a
heated
sample
line,
depending
on
the
dew
point
of
the
sample.
Dry­
based
extractive
systems
may
use
a
combined
sample
probe/
moisture
removal
system,
after
which
an
unheated
sample
line
may
be
used,
depending
on
the
dew
point.
(
0009,
0010,
0043,
0059)

Response:
We
agree.
See
response
to
Comment
198.

200.
Comment:
It
seems
like
a
modified
M5
box
with
hot
probe,
filter,
and
moisture
removal
system
hung
from
a
rail
could
work
for
a
traversing
gas
sampling
system,
which
would
not
be
allowable
with
your
proposed
heated
line
requirement.
The
heated
line
should
not
be
a
requirement.
(
0043)

Response:
A
modified
Method
5
box
with
hot
probe,
filter,
and
moisture
removal
system
is
allowed.
We
removed
reference
to
specifications
for
essential
components
in
Section
6.
This
would
facilitate
the
use
of
the
modified
Method
5
sampling,
transport,
and
conditioning
system
you
described.

201.
Comment:
In
addition
to
the
above,
if
only
O2
and
CO2
are
measured,
then
there
is
no
reason
why
you
even
need
to
prevent
condensation,
since
these
gases
are
not
water
soluble
and
would
not
be
affected
by
any
condensation
formed
upstream
of
the
moisture
removal
system.
(
0011,
0048,
0059)

Response:
See
response
to
Comment
198.

202.
Comment:
Add
"What do I need for the measurement system?"
before
the
first
sentence.
(
0013)

Response:
This
statement
was
added.
203.
Comment:
What
is
the
difference
between
flow
control/
gas
manifold
and
sample
gas
manifold?
If
the
method
requires
constant
sample
flow,
then
a
flow
control
should
definitely
be
there.
(
0043)

Response:
We
replaced
the
term
"flow control/gas manifold" with "calibration gas manifold"
to
distinguish
it
from
the
sampling
manifold.

204.
Comment:
We
suggest
that
you
call
6.1
"Sample Collection and Transport" and 6.2 "Analyzer and Data Recorder."
It
seems
the
"data recorder"
is
out
of
place
in
Section
6.1.
(
0043)

Response:
The
commenter
provides
a
good
suggestion;
however,
we
are
removing
reference
to
specifications
for
essential
components
in
Section
6
as
noted
in
previous
responses.

Section
7.1
Calibration
Gas
205.
Comment:
The
proposed
revisions
to
Method
3A
fail
to
explicitly
require
use
of
low­
level
O2
mixtures
rather
than
zero
gas
for
O2
analyzers
that
should
not
use
zero
gas
such
as
the
very
commonly
used
zirconium
oxide
analyzers.
Use
of
zero
gas
for
these
types
of
analyzers,
or
any
other
measurement
system
that
has
a
logarithmic
output
relative
to
concentration,
will
provide
a
false
and
misleading
response.
This
false
response
will
generally
indicate
stable
operation
even
if
the
analyzer
is
calibrated
or
operating
incorrectly.
Note
that
the
existing
Method
3A
specifies
the
use
of
a
low
range
gas,
having
an
oxygen
concentration
of
less
than
10
percent
of
the
measurement
range.
(
0032)

Response:
The
statement
in
old
Method
3A
says
that
a
non­
zero
gas
may
be
used
for
an
O2
analyzer
that
cannot
analyze
zero
gas.
But
it
does
not
specify
which
type(s)
of
analyzers
have
this
issue.
Therefore,
the
statement
has
not
been
included
in
the
final
rule.
It
is
the
responsibility
of
the
tester
to
know
which
types
of
analyzers
are
not
capable
of
analyzing
zero
gas.
The
final
rule
allows
the
low­
level
gas
concentration
to
be
up
to
20
percent
of
calibration
span,
which
gives
the
tester
the
necessary
flexibility
to
select
an
appropriate
non­
zero,
low­
level
gas
for
such
analyzers.

206.
Comment:
A
low­
level
cal
gas
has
been
added
and
this
does
not
simplify
or
reduce
testing
costs.
Along
with
new
definition
of
span,
the
upper
cal
range
has
been
significantly
reduced,
and
3
calibration
levels
of
zero,
mid,
and
span
are
adequate.
Remove
the
low­
level
requirement.
(
0041)

Response:
We
have
not
added
a
new
low­
level
gas
in
addition
to
the
low­,
mid­,
and
high­
levels
required.

207.
Comment:
It
is
not
clear
whether
it
is
acceptable
to
prepare
cal
gas
mixtures
from
high­
concentration
EPA
Protocol
gases
in
accordance
to
Method
205.
If
so,
will
this
be
acceptable
to
both
EPA's
OAR
and
CAMD?
(
0059)

Response:
Method
205
is
allowed,
except
for
Part
75
applications.
Administrative
approval
is
required
to
use
the
Method
for
Part
75
testing.
Section
7.2
Interference
Check
208.
Comment:
The
interference
checks
for
most
common
O2
and
CO2
analyzers
are
not
necessary
and
the
referenced
requirements
from
Method
7E
are
entirely
inappropriate
for
them.
Oxygen
and
CO2
analyzers
measure
percent
levels
(
i.
e.,
10,000
to
210,000
ppm)
so
the
interference
check
at
levels
of
10
to
50
ppm
as
specified
in
Method
7E
is
ludicrous.
The
choice
of
gases
to
inject
for
such
tests
is
(
a)
inconsistent
with
the
existing
Method
3A
and
Method
20,
and
(
b)
inconsistent
with
even
the
most
basic
understanding
of
the
analytical
principles
used
for
the
diluent
analyzers.
Virtually
all
CO2
analyzers
are
infrared
based
systems
and
the
interferences
can
easily
be
determined
through
basic
spectroscopic
considerations.
Virtually
all
of
the
O2
analyzers
that
are
used
for
such
measurements
are
based
on
diffusion
through
zirconium
oxide
(
ZrO2),
measurement
of
paramagnetic
dipole
moments,
and
electro­
catalytic
(
fuel
cell).
The
interferences
for
each
of
these
analytical
techniques
are
very
well
known
and
can
be
determined
from
standard
chemistry
and
physics
references,
basic
literature,
and
data
provided
by
manufacturers.
In
almost
every
case,
no
interference
test
is
needed
nor
can
the
cost
be
justified.
Where
such
interference
tests
are
needed
and
can
be
justified,
they
should
be
specific
to
the
analytical
method
used.
Interference
checks
should
be
the
responsibility
of
the
manufacturer
for
the
given
analytical
technique
(
0032,
0036).

Response:
We
have
revised
the
language
in
Section
8.3
to
state
that
the
O2
or
CO2
analyzer
must
be
documented
to
show
that
interference
effects
to
not
exceed
2.5
percent
of
the
calibration
span.
The
interference
test
in
Section
8.2.7
of
Method
7E
is
referenced
as
an
example
test
for
Method
3A
where
the
analyzer
is
evaluated
for
the
potential
interferences
encountered
at
applicable
emission
concentrations.

209.
Comment:
The
cited
Method
7E
table
reference
should
be
7E­
3,
not
7E­
1.
(
0013,
0035)

Response:
This
correction
has
been
made.

Section
8.1
Sampling
Site
and
Sampling
Points
210.
Comment:
This
section
refers
to
Section
8.1
of
Method
7E
for
sampling
points
and
stratification
check,
but
the
stratification
check
requires
correction
for
diluent.
Method
3A
measures
diluent.
What
is
the
stratification
procedure
if
Method
3A
is
used
without
any
other
instrumental
method?
(
0020)

Response:
The
sampling
point
and
stratification
check
in
Section
8.1
applies
to
Method
3A
in
general.
We
have
clarified
in
Method
7E
that
a
correction
for
diluent
is
not
required
for
any
of
the
stratification
tests.

Section
8.2
(
untitled
section)

211.
Comment:
There
is
no
CO2
or
O2
emission
standard
for
any
source.
Therefore,
the
interference
test
for
this
method
cannot
be
done
as
written.
(
0038)
Response:
The
proposed
interference
test
was
not
tied
to
the
applicable
emission
standard
but
was
expressed
as
a
percentage
of
the
analyzer
range.

Section
8.3
Sample
Collection
212.
Comment:
If
we
need
just
CO2
and
O2
for
molecular
weight
or
diluent
correction,
we
use
the
exhaust
of
the
M5
box
directly
to
the
analyzers.
The
proposed
equipment
requirements
would
not
allow
this
and
would
require
another
person,
or
about
$
600
per
day
for
this
simplest
of
tests.
Sections
6.1,
8.1,
and
8.3
of
Method
3A
should
allow
the
single­
point
grab
and
single­
point
integrated
sampling
procedures
of
Methods
3
and/
or
3B
as
well
as
the
multipoint
integrated
sampling
procedure.
This
allowance
might
be
covered
easier
under
the
alternative
procedures
section.
(
0039,
0043)

Response:
There
is
no
prohibition
against
using
Method
3
or
3B,
if
the
applicable
regulations
allow
these
methods
to
be
used.
However,
the
final
rule
allows
the
use
of
the
single-point
integrated
sampling
described
in
Method
3
as
an
alternative
to
the
Method
1
traverse
points,
when
Method
3A
is
used
only
for
stack
gas
molecular
weight
determinations.

Section
12.0
Calculations
and
Data
Analysis
213.
Comment:
For
clarity,
state
that
the
term
for
NO2
converter
efficiency
in
proposed
7E,
equations
7E­
5
and
6,
should
not
be
used
for
other
methods.
(
0039)

Response:
For
those
methods
that
refer
to
Method
7E,
the
tester
is
directed
to
"
follow
the
applicable
procedures
for
calculations
and
data
analysis
in
Section
12.0
of
Method
7E."
In
Section
12.0
of
Method
7E,
the
introductions
to
the
converter
efficiency
calculations
note
that
they
are
for
NOx
and
have
conditional
use
only
for
analyzers
that
convert
NO2
to
NO
before
measurement.
It
appears
self­
evident
that
these
converters
are
only
used
for
NOx;
we
do
not
believe
stating
this
in
Methods
3A,
6C,
10,
and
20
is
necessary.

214.
Comment:
Add
"as applicable"
to
the
end
of
the
sentence
(
that
references
the
procedures
and
calculations
in
Method
7E).

Response:
This
has
been
done.

Sections
13.1
and
13.2
215.
Comment:
The
CO2
and
O2
specifications
should
be
revised
to
be
absolute
values
rather
than
percentages.
Two
(
2)
percent
of
25
is
0.5
percent
CO2
or
O2.
Based
on
RATA
data
and
experience,
it
is
estimated
that
measurements
can
easily
be
made
to
within
0.2
percent
CO2
or
O2.
For
the
bias
test,
3
percent
of
25
is
0.75
percent
CO2
or
O2,
which
is
very
loose
and
can
introduce
a
large
amount
of
error
in
combustion
turbines
with
emission
limits
in
lb/
mm
Btu
or
at
15
percent
O2.
With
all
the
RATA
testing
performed
and
reported
for
market­
based
programs,
the
EPA
has
thousands
of
data
points
showing
how
well
diluent
monitors
are
operating
(
0032).
Response:
We
do
not
have
sufficient
data
to
warrant
changing
the
current
performance
requirements.

216.
Comment:
Numerous
commenters
noted
that
the
statement
that
bias
be
"less than ±0.25 percent of upper range"
is
extremely
stringent
and
difficult,
if
not
impossible,
to
meet.
One
commenter
rightly
noted
that
"…for the zero gas"
should
have
followed
the
phrase,
but
was
omitted.
(
0004,
0009,
0011,
0013,
0014,
0028,
0030,
0035,
0038,
0041,
0043,
0049)

Response:
The
sampling
system
bias
specification
of
"
5%
of
span
has
been
reinstated
for
all
calibration
gas
levels.

217.
Comment:
Since
there
is
no
emission
standard
for
O2
and
CO,
does
this
mean
that
I
can
pick
any
range
I
choose
as
long
as
I
enclose
the
effluent
concentration
within
the
20
percent
of
span
and
span
gas
bracket?
Since
3A
does
not
prescribe
a
range,
you
will
find
less
scrupulous
testers
using
the
maximum
range
of
100
percent
for
some
analyzers
which
is
likely
to
have
the
opposite
effect
from
what
EPA
is
trying
to
accomplish.
(
0004,
0039)

Response:
We
believe
that
the
revised
definition
of
span
addresses
the
commenter=
s
concerns.
If
a
high
range
(
e.
g.,
25
percent
O2)
is
selected,
the
final
rule
does
not
allow
the
tester
to
set
the
full­
scale
range
value
equal
to
the
span
and
use
it
for
calibration
error,
bias
and
drift
calculations,
unless
the
actual
emissions
are
between
20
and
80%
of
the
full­
scale.
If
the
O2
readings
are
low
(
say
3­
to­
5
percent
O2),
then
a
lower
span
(
a
fixed
percentage
of
the
full­
scale,
e.
g.
40
percent
of
the
range,
or
10
percent
O2)
would
be
required
to
meet
the
"20-to-80 percent"
guideline.
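The span-selection guideline in this response reduces to a simple numeric check. The sketch below is only an illustration under the response's own example numbers; the function name is hypothetical:

```python
def span_acceptable(measured_pct, span_pct):
    """Return True if the measured value falls within the 20-to-80
    percent-of-span guideline described in the response above."""
    return 0.20 * span_pct <= measured_pct <= 0.80 * span_pct

# A 4 percent O2 reading fails on a 25 percent span (4 < 0.20 * 25 = 5),
# but passes on a 10 percent span (2 <= 4 <= 8), so the tester must
# select the lower span.
```

This is why a tester cannot simply default to the analyzer's maximum range: low readings force the selection of a proportionally lower span.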

218.
Comment:
Does
the
final
paragraph
imply
that
pre­
and
post­
test
bias
checks
must
be
conducted
using
three
gases?
(
0004)

Response:
Section
13.2
does
not
imply
that
three
gases
are
used
for
the
bias
checks.
The
bias
test
checks
the
system
at
the
low
concentration
level
and
at
the
mid­
or
high­
level
concentration,
depending
on
which
level
is
closer
to
the
measured
concentration.
The
test
only
uses
two
gases.
The
low­
level
gas
may
be
a
zero
gas,
at
the
discretion
of
the
tester.
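The two-gas selection described in this response can be sketched as follows. This is a hypothetical illustration of the stated logic (the low-level gas plus whichever upscale gas is closer to the measured concentration), not text from the method:

```python
def bias_check_gases(low, mid, high, measured):
    """Pick the two gases used for the bias checks: the low-level gas,
    plus the mid- or high-level gas, whichever is closer to the
    measured concentration."""
    upscale = mid if abs(mid - measured) <= abs(high - measured) else high
    return low, upscale

# With low=0, mid=50, and high=90 ppm, a measured value of 80 ppm
# selects the high-level gas as the upscale bias gas.
```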

Section
16.1
Dynamic
Spiking
Procedure
and
Manufacturer's
Stability
Test
219.
Comment:
Since
O2
and
CO2
are
typically
not
subject
to
any
significant
bias
or
matrix
effects,
the
spiking
procedures
are
superfluous
and
should
be
removed.
I
know
from
experience
with
other
methods
that
some
regulators
will
end
up
requiring
an
alternative
in
addition
to
the
normal
criteria
just
because
it
is
there.
(
0039)

Response:
We
agree
and
have
dropped
the
proposed
alternative
dynamic
spiking
procedure
from
Method
3A.

3.15
Specific
Comments
on
Method
6C
General
comment
220.
Comment:
Based
on
the
August
1997
proposal,
the
commenter
disagrees
with
setting
an
80
percent
limit
on
concentration
of
interest.
By
not
defining
the
analyzer
span
with
respect
to
the
level
of
measurements
to
be
made
(
as
was
previously
done),
EPA
is
encouraging
the
use
of
non­
optimum
analyzer
ranges
(
spans).
Measurements
up
to
100
percent
of
the
concentration
of
interest
should
be
allowed.
By
restricting
the
measurements
to
80
percent,
EPA
is
forcing
the
use
of
artificially
high
calibration
gases.
Furthermore,
many
sources
are
tested
at
emission
levels
significantly
below
the
emission
standard
(
concentration
of
interest)
and
the
proposed
change
will
force
the
use
of
high­
level
calibration
gases
that
are
inappropriately
high.
(
0038)

Response:
We
believe
that
the
revised
definition
of
span
and
the
accompanying
guidelines
in
the
final
rule
address
the
commenter's
concerns.
The
revised
span
definition
and
guidelines
encourage
the
use
of
appropriate
calibration
gases,
with
concentrations
in
the
vicinity
of
the
actual
emission
levels.

221.
Comment:
In
the
first
proposal
in
August
27,
1997,
the
calibration
error
specification
was
changed
from
"
2
percent
of
the
span
for
the
zero,
mid­
range
and
high­
range
calibration
gases"
to
"
4
percent
of
the
concentration
corresponding
to
the
emission
standard
for
any
of
the
calibration
gases."
This
specification
may
be
tighter
or
looser
depending
on
the
measurements
being
made.
(
0038)

Response:
The
calibration
error
specification
given
in
the
August
1997
proposal
has
been
dropped.
The
2
percent
of
calibration
span
specification
for
analyzer
calibration
error
has
been
retained.

222.
Comment:
In
the
August
1997
proposal,
the
sampling
system
bias
test
specification
has
been
changed
from
"
5
percent
of
the
span
for
either
the
zero
or
upscale
calibration
gases"
to
"
10
percent
of
the
concentration
corresponding
to
the
emission
standard."
Disregarding
the
fact
that
the
equation
does
not
work,
the
change
could
be
a
tighter
or
looser
specification,
depending
on
the
relationship
of
the
measurements
being
made
to
the
high­
level
calibration
gas.
(
0038)

Response:
The
bias
specification
given
in
the
August
1997
proposal
has
been
dropped.
The
5
percent
of
calibration
span
specification
for
sampling
system
bias
has
been
retained.

223.
Comment:
There
are
no
instructions
for
how
to
determine
"
the
concentration
corresponding
to
the
emission
standard."
For
each
electric
utility
steam
generator,
this
SO2
or
NOx
concentration
(
in
ppm)
will
vary
depending
on
the
diluent
(
CO2
or
O2)
concentration.
It
will
also
change
with
time
as
the
boiler
responds
to
process
changes.
(
0038)

Response:
We
have
dropped
the
proposed
determination
of
performance
tests
based
on
the
applicable
emission
standard.
Section
1.0
Scope
and
Application
224.
Comment:
The
proposed
method
is
completely
devoid
of
any
relevant
discussion
of
sensitivity,
minimum
detection
limit,
and
practical
quantification
limit
and
should
be
revised
to
address
these
issues
explicitly.
These
issues
commonly
arise
in
determining
the
applicability
of
the
method
and
interpretation
of
the
results.
These
issues
should
be
addressed
by
the
method
developers
or
at
the
time
a
method
is
proposed
and
promulgated
rather
than
being
left
open
for
interpretation
by
others
years
later
(
0032).

Response:
We
believe
that
requiring
calibration
gases
in
the
proper
range
such
that
measured
emissions
will
be
within
20
to
80
percent
of
the
span
is
appropriate
in
lieu
of
specifying
minimum
detection
limits
and
practical
quantification
limits.
Instrument
sensitivity
normally
varies
according
to
principle
of
operation.
We
note
in
the
methods
that
sensitivity
is
generally
less
than
2
percent
of
the
calibration
span.

Section
1.3
Data
Quality
Objectives
225.
Comment:
Section
1.3
refers
to
Method
7E,
Section
1.3,
which
refers
to
Section
13,
which
refers
to
Section
1.3;
this
is
confusing.
(
0035)

Response:
The
commenter
is
correct.
However,
we
have
dropped
the
proposed
data
quality
assessment
procedures
and
requirements.

Section
3.2
Interference
Check
226.
Comment:
The
interference
check
is
spelled
out
in
Method
6,
but
abbreviated
in
Method
7E.
There
is
a
great
deal
of
confusion,
over­
specification
and
under­
specification
that
occurs
with
this
"
cost
savings"
way
of
writing
the
methods.
In
the
long
run,
it
would
be
cheaper
to
have
each
method
stand
alone
with
each
section
having
been
carefully
thought
out
for
that
method/
instrument.
(
0035)

Response:
We
understand
the
commenter's
concern.
We
are
not
able
to
publish
the
methods
as
stand-alone
methods
because
of
our
convention
in
method
writing.
However,
by
making
the
methods
consistent
as
we
are
in
these
revisions,
being
knowledgeable
of
Method
7E
makes
one
knowledgeable
of
the
requirements
of
the
other
methods
as
well.

Section
4.0
Interferences
(
Reserved)

227.
Comment:
This
section
is
reserved
yet
extensive
resources
are
required
to
conduct
interference
checks
of
SO2
analyzers.
Potential
documented
interferences
should
be
discussed
in
this
section
and
methods
for
eliminating
or
minimizing
the
effects
should
be
included.
The
discussions
should
include
sampling
system
interferences
such
as
the
presence
of
ammonia
which
can
lead
to
the
formation
of
ammonium
bisulfate
or
other
salts
or
absorption
of
SO2
within
the
condenser
system.
In
addition,
cogent
explanations
of
expected
analytical
interferences
for
infrared,
fluorescence,
and
differential
adsorption
analytical
techniques
should
be
included.
(
0032)
Response:
We
have
added
a
note
to
Section
4.0
directing
the
reader
to
refer
to
Section
4.1
of
Method
6
for
a
discussion
of
interferences.
Describing
other
potential
interferences
not
covered
in
Method
6
is
beyond
the
scope
of
this
rulemaking.

Section
6.0
Equipment
and
Supplies
228.
Comment:
The
referenced
equipment
specifications
for
Method
6C
in
Method
7E
will
not
prevent
reactions
of
SO2
and
ammonia
within
the
sampling
systems.
Where
ammonia
concentrations
are
significant
relative
to
SO2
concentrations,
use
of
condensers
to
remove
effluent
moisture
will
result
in
low­
biased
SO2
results.
Furthermore,
the
sampling
system
temperatures
in
Method
7E
are
not
sufficient
to
prevent
certain
gas
reactions
and
adsorption
on
surfaces
(
0032).

Response:
The
interference
effects
of
ammonia
on
SO2
are
discussed
in
Section
4.1
of
Method
6,
which
is
now
cited
in
Method
6C.

229.
Comment:
Figure
7E­
1
is
incomplete
with regard to
heated
sampling
lines.
(
0035)

Response:
We
have
revised
the
figure
to
note
the
areas
where
the
sampled
gas
should
be
above
its
dew
point.

Section
6.1
What
do
I
need
for
the
measurement
system?

230.
Comment:
For
wet­
based
systems
(
i.
e.,
dilution
or
hot­
wet
extractive),
moisture
removal
systems
and
heated
sample
lines
are
neither
performed
nor
essential.
In­
stack
dilution
probes
may
not
need
a
heated
sample
line
depending
on
the
dew
point
of
the
sample.
Dry­
based
extractive
systems
may
use
a
combined
sample
probe/
moisture
removal
system
after
which
an
unheated
sample
line
may
be
used
depending
on
the
dew
point.
(
0009,
0010,
0059)

Response:
We
agree
with
the
commenter
that
dilution
systems
should
be
addressed.
Performance­
based
provisions
are
added
to
the
final
rule
for
using
dilution
systems.

231. Comment: It seems that a modified Method 5 box with a hot probe, filter, and moisture removal system hung from a rail could work for a traversing gas sampling system, but this would not be allowable under your proposed heated-line requirement. The heated line should not be a requirement. (0043)

Response: The noted modified Method 5 setup is allowed under the provisions of the current methods. For clarification, we have replaced the specifications for essential components with performance-based criteria. This eliminates the restriction in the proposal noted by the commenter.

Section 6.2 SO2 Analyzer

232. Comment: The use of specific technologies should be removed from this section and moved to the preamble, if mentioned at all. The goal should be performance-based systems, and referring to specific instruments and components undermines that goal. (0009, 0054)
Response: We agree that our goal is performance-based systems. However, we don't believe that mentioning technologies that have been successfully used previously necessarily undermines this goal.

Section 6.3 What additional equipment do I need for the interference check?

233. Comment: Section 6.3 states that in cases where the emissions concentrations are less than 15 ppm, the alternative interference check in Section 16.1 "should be used", but Section 8.3 states "must be used". Please clarify. (0025, 0030, 0038)

Response: The current language has been clarified to note that the alternative interference check must be used when routine measurements below 15 ppm are made.

234. Comment: This section references an interference check sampling train in Figure 6C-2. For that check, are the interference gases introduced directly into the analyzer or through the sampling train? (0025)

Response: The proposed reference to Figure 6C-2 in Section 6.3 was in error. The reference should have been to Figure 6C-1, which is now the alternative interference check sampling train. In this alternative procedure, real-time emission samples are collected through the sampling train to determine interferences; interference gases in cylinders are not used. For the normal interference check using cylinder gases, the interference gases are introduced into the measurement system.

235. Comment: Figure 6C-1 illustrates an interference check sampling train. There is no Figure 6C-2, and the reference should be to 6C-1. Figure 6C-1 does not illustrate heated sample lines. Is there a procedure for the modified Method 6 other than the figure? (0035, 0043)

Response: See the response to Comment 234. The modified Method 6 interference sampling train shown in Figure 6C-1 collects a sample from the sample by-pass vent, which is located after the Method 6C moisture removal system as shown in Figure 7E-1. Therefore, a heated sample line is not necessary. Also, there is a discussion of the modified Method 6 procedure in Section 16.1.

236. Comment: The alternative interference check is detailed in Section 8.3 as opposed to 16.3. (0043)

Response: We moved our alternative interference check discussion to Section 16.1.

Section 7.2 Additional Cal Gas Requirements for Fluorescence Analyzer

237. Comment: Section 7.2 contains a requirement that is virtually impossible to meet as written for fluorescence-based analyzers because it requires that the O2 and CO2 concentration in the calibration gas be within 1 percent absolute O2 and CO2 of the concentration in the sample. This section needs to be re-worded so that it applies this requirement to the sample gas and calibration gas as delivered to the analyzer, so that dilution-based systems can be used to meet this requirement as has been done for many years. Dilution of the sample stream by 20 to 1 usually satisfies this specification. Nomographs provided by the gas vendor are not adequate to correct for quenching effects, and their use should not be allowed. We believe this requirement stems from a quenching problem that was fixed more than 20 years ago, and EPA should research this and obtain up-to-date information. Note that hundreds of fluorescence analyzers are in operation under the Acid Rain Program, and there have been no reports of quenching problems. (0009, 0032, 0033, 0038, 0059)

Response: These concerns for dilution systems have been addressed in the final rule. We have dropped the requirement that calibration gases be within 1 percent O2 or CO2 of the corresponding concentration in the sample and have dropped the reference to nomographs.

238. Comment: In Section 7.2, there is an attempt to deal with third-body quenching associated with fluorescence analyzers. Once again, the method assumes that full-strength extractive technology will be used, either wet or dry. Dilution-based sampling systems and extractive systems with diluters just prior to the SO2 analyzer limit the effects of third-body quenching by diluting the sample stream with SO2-free air. For a 50:1 diluted sample, the O2 concentration will be between 20.5 percent (no O2 in the pre-diluted sample) and 20.9 percent (20.9 percent O2 in the pre-diluted sample). Varying the amount of O2 or CO2 in the calibration gas will cause little change in the O2 concentration at the analyzer. Therefore, it is recommended that this section be rewritten to recognize that where a steady state of O2 is present at the analyzer, as is the case in dilution sampling systems, these restrictions do not apply.

Response: See the response to Comment 237.
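The commenter's arithmetic can be verified with a short sketch (the 50:1 ratio and the 20.9 percent ambient O2 figure come from the comment above; the function name and structure are ours, for illustration only):

```python
def diluted_o2(sample_o2_pct, dilution_ratio=50, air_o2_pct=20.9):
    """O2 percent at the analyzer after diluting 1 part sample gas
    with (dilution_ratio - 1) parts SO2-free ambient air."""
    parts_air = dilution_ratio - 1
    return (sample_o2_pct + parts_air * air_o2_pct) / dilution_ratio

# The two extremes cited in the comment for a 50:1 dilution:
print(round(diluted_o2(0.0), 1))   # no O2 in the raw sample -> 20.5
print(round(diluted_o2(20.9), 1))  # ambient-level O2 in the raw sample -> 20.9
```

This illustrates why the O2 level at a dilution-system analyzer stays nearly constant regardless of the stack gas composition.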

239. Comment: Is it really the gas vendor that knows the quenching effect of O2 or CO2 on our analyzers? It seems more likely that it would be the analyzer manufacturer. Do gas vendors account for quenching on their analyzers when they certify the gases? We recommend changing "gas vendor" to "gas analyzer vendor". How significant is the quenching factor in any analyzer, especially in the ambient-level analyzers used with dilution-extractive systems? Could nomographs be used with dilution-extractive systems? (0043, 0054, 0059)

Response: The quenching effects of O2 and CO2 on an analyzer should be addressed by the gas analyzer vendor. Any quenching effects would not likely occur at currently used dilution rates. See the response to Comment 237.

240. Comment: What is the reason for O2 and CO2 concentrations to be within 1 percent of actual stack gas? (0028)

Response: This 1 percent O2 and CO2 tolerance in Method 6C applies only to fluorescence analyzers and accounts for the gas quenching effects for this type of analyzer. This is an obsolete caution, and we have not included it in the final method.

Section 7.3 Interference Check

241. Comment: All of the requirements to conduct interference tests for analyzers based on infrared absorption, UV differential absorption, and fluorescence should be deleted, based on the thousands of previous tests that have already been conducted demonstrating the absence of interfering effects for properly designed systems. The necessary data to justify this decision are reported to the EPA each year. EPA should tabulate and publish a list of conditions under which historical interference tests have been reported to have failed. (0032)

Response: These instrumental methods are performance-based. Conducting the interference test confirms that the measurement system is properly designed. We are allowing the analyzer manufacturers to perform the initial interference test. We will gladly make available summary data listing conditions under which historical interference tests have failed if such data are provided to us.

242. Comment: Interference tests for analytical techniques not previously demonstrated should be designed to be specific to that analytical technique and based on information provided by the analyzer manufacturer. The interference gases specified in Method 7E are clearly not appropriate for SO2 analyzers. The alternate interference check based on a side-by-side comparison of the instrumental test method with the modified impinger method was a crude approach 20 years ago and is no better now. It involves comparing a more accurate and precise instrumental method with an inferior manual method. This presents the same problems as trying to compare 3 runs of Method 5 with optical particulate matter process monitors. (0032)

Response: Since we are allowing the analyzer manufacturers to perform the interference test on analyzers, this would cover techniques not previously demonstrated. The side-by-side interference test using a modified Method 6 train is now listed as the alternative interference procedure. The primary interference check follows the procedure listed in Method 7E, where the measurement system is evaluated against potential gas interferents. Only those interferents listed in Table 7E-3 that are applicable need be evaluated.

243. Comment: We disagree with requiring annual interference checks. Additionally, no provisions have been made for cases when low SO2 emissions are encountered, as was provided in EMTIC TID-12, dated April 14, 1992. These tests are neither simple nor cheap and should only have to be done once per source category per instrument type. (0034)

Response: The proposed requirement to conduct annual interference checks has been dropped. A manufacturer stability test is required for instruments that routinely measure low concentrations. For the interference check, the manufacturer may certify the measurement system using the appropriate interference gases for the intended applications.

244. Comment: The concentration options for the Method 6C interference check are very unclear. Section 7.3 of Method 6C calls for a comparison with Method 6 for concentrations greater than 15 ppm and refers to using the test gases in Table 7E-3 of Method 7E for concentrations less than 15 ppm. Section 7.3.1 of Method 6C adds confusion by apparently allowing the test gases in Table 7E-3 for the interference test at the discretion of the tester. (0035)
Response: In the proposal, the intent was to allow either interference check (comparison against modified Method 6 or using the test gas procedure in Method 7E) during tests at concentrations above 15 ppm. Tests routinely performed below this concentration must only use measurement systems that were evaluated by the procedure in Method 7E. This is the case in the final Method 6C, except that the primary interference check is the procedure in Method 7E and the comparison against modified Method 6 is the allowed alternative.

245. Comment: This section as well as Sections 8.3 and 13.2 cover the interference check. Too many interference checks are allowed for this method. Sections 7.3, 8.3, and 13.2 detail an obsolete (Method 6 comparison) interference check and should be deleted in favor of retaining Section 7.3.1, which describes the new interference test method. This would meet the stated intention to "remove obsolete specifications, harmonize similar requirements, and simplify to enhance the method's utility and reduce the cost of testing." A Section 16.3 should be added to Method 7E to incorporate an Annual Primary Interference Gas Recheck. (0041)

Response: We have attempted to add what we think is a superior interference check without completely eliminating the procedure that was acceptable in the past. By continuing to allow the traditional interference test against Method 6 as an option, we will not be placing an immediate burden on testers to recertify their equipment by the new interference check. We believe testers will eventually phase out the old interference check.

Section 7.3.1 Alternative Analyzer Interference Check

246. Comment: This section should clarify that for dilution systems, gases should be injected at the probe, not directly into the analyzer. This is so the analyzer is operating in the same manner as when it will be used in the field. (0009)

Response: The commenter is correct. Dilution sampling systems should be evaluated by introducing the interference gases in system calibration mode. This has been noted. The final rule also specifies that a system calibration error test must be done for a dilution-type system, in lieu of analyzer calibration error and bias checks. This accounts for the fact that the analyzer calibration error test (which requires direct injection) is not feasible for dilution systems.

247. Comment: This section is flawed. It should be possible to introduce the gases listed in Figure 7E-3 one at a time without SO2, since SO2 is likely to react with some of the listed gases (e.g., ammonia, water). (0011)

Response: The commenter makes a good point. We now note that the appropriate interference test gases that are potentially encountered during a test are used for the interference test.

248. Comment: The original interference check does not state at which level it is performed. The repeat is at 80 percent of the range, which is also unclear as to what range. (0035)

Response: We have added clarity to the description of the interference check. The level of test gas used in the interference test is shown in Table 7E-3. When the test gas is evaluated with the pollutant gas present, the pollutant concentration should be 80 to 100 ppm. If the test is to certify the analyzer for low measurements, the pollutant concentration should be less than 20 ppm. For multi-range instruments, the most sensitive scale that will be used for field testing must be used for the interference check.

249. Comment: Where is Figure 6C-8? (0035)

Response: The citation was in error and now reads "Figure 7E-4."

250. Comment: The interference check procedure should be in Section 8. Section 7 is for reagents/standards. (0035)

Response: We agree that the alternative interference check discussion is out of place in Section 7. It has been moved to Section 16.

251. Comment: No one can pass this check because it is incorrect. I think it should say to introduce SO2 with and without the gases listed in Figure 7E-3, one at a time. (0035)

Response: We agree, and the wording is revised to reflect this.

252. Comment: The alternative interference checks are expensive to perform and will cost more than $11,000 per year to implement. This test will make it hard for small companies to compete. (0043)

Response: We have dropped the yearly interference check requirement in favor of requiring additional rechecks only when a major analyzer component is changed. If you have many analyzers of the same make and model, you need test only one of them. Also, if the instrument manufacturer performs the test on your analyzer or on one of the same make and model, and provides documented test results, this is acceptable.

253. Comment: The procedure needs a more detailed explanation. Specifically, explain what is meant by with and without SO2. (0049)

Response: The procedure has been revised to add detail and clarity.

Section 8.1 Sampling Site and Sampling Points

254. Comment: Method 6C requires meeting a 5 percent criterion for single-point testing and a 10 percent criterion for 3-point traversing. How can this be done on a variable source? (0035)

Response: You may not be able to take advantage of the fewer-points option at sources that vary.
255. Comment: For the stratification test, this section requires the diluent-corrected SO2 concentration to vary by less than 5 or 10 percent to use fewer than 24 points. This will hardly ever happen on an SO2 source. Therefore, the cost of a test will increase by 2 to 4 times. Does this new method apply to a RATA? (0035)

Response: The stratification requirements in Method 7E do not apply to RATAs. Section 8.1.1 of Method 7E states that performance specification testing of CEMS should follow the sampling site procedures in the appropriate performance specification or applicable regulation. If the applicable regulation or performance specification requires stratification testing for a RATA application, then you must meet that requirement.
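As a rough illustration of how a stratification screen of this kind works (a sketch, not the method's text: the 5 and 10 percent thresholds are the ones discussed in the comments above, and the traverse readings are hypothetical):

```python
def max_deviation_pct(points):
    """Largest percent deviation of any preliminary traverse point
    from the mean of all points."""
    mean = sum(points) / len(points)
    return max(abs(p - mean) / mean * 100 for p in points)

# Hypothetical diluent-corrected SO2 readings (ppm) at traverse points:
readings = [98.0, 101.0, 100.0, 102.0, 99.0]
dev = max_deviation_pct(readings)
if dev <= 5:
    print("unstratified: reduced-point sampling may be allowed")
elif dev <= 10:
    print("minimally stratified: 3-point sampling may be allowed")
else:
    print("stratified: full traverse required")
```

For these readings the largest deviation is 2 percent of the mean, so the hypothetical source would pass the tighter screen.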

Section 8.2 Measurement System Performance Tests

256. Comment: This section refers to Method 7E, Section 8.2.5, among others. Section 8.2.5 refers to Table 7E-2, but there are two tables named 7E-2. I am assuming you mean the second one. (0035)

Response: The commenter is apparently confusing Figure 7E-2 and Table 7E-2.

Section 8.2.5 Initial Sampling System Bias Check

257. Comment: This section refers to other sections for the bias calculations and acceptance criteria. Everything should be together in the bias check section. (0035)

Response: For consistency, all of the acceptance criteria are found in Section 13 (Method Performance). We believe the bias calculation should remain in the calculations section (Section 12).

258. Comment: This section incorrectly refers to Section 12.5 for bias check calculations. Section 12.5 is the NOx converter efficiency equation, while 12.4 refers to bias checks. (0035)

Response: This correction has been made.

Section 8.3 Interference Check

259. Comment: There is text explaining how to perform the interference check. Do these instructions also belong in Method 7E, or is this specific to Method 6C? (0025)

Response: The instructions primarily describe the Method 6C interference test where Method 6C is compared to Method 6. These are specific to Method 6C only.

260. Comment: How long is a run by modified Method 6, or what total volume? (0035)
Response: The modified Method 6 sampling time per run shall be the same as for Method 6 plus twice the system response time. This inadvertent deletion in the proposal has been added to Method 6C.

261. Comment: There is no Figure 6C-2; the reference should be 6C-1. The figure makes no sense. There is no explanation of what it is or how it is supposed to be used. (0035, 0038)

Response: The reference to Figure 6C-2 has been corrected to Figure 6C-1. We have noted in the title that the figure depicts a modified Method 6 interference check sampling train.

262. Comment: Please define "source category" and "type of facility". (0043)

Response: The terms "source category" and "type of facility" have been dropped from the methods. A facility is now characterized by its potential to have interferences.

263. Comment: Has EPA found that SO2 analyzers are so bad at discriminating just SO2 that all of the interference check procedures are necessary? Has EPA determined that Method 6 is immune to these interferents as well? For all this effort, it may make more sense to abandon the analyzer and use the old Method 6. (0043) The interference check creates additional and unnecessary work. As an example, a Kraft mill might have 7 to 10 source categories on site. Why not just use Method 6? Interferences are specific to the principle of operation, not analyzer-specific, and one test per lifetime of the analyzer should be adequate, as it is now. (0039)

Response: We have reduced the frequency of the interference check to initially and after any major analyzer component is replaced.

264. Comment: Method 6C does not require the analysis of performance audit samples when conducting the interference check. Why not require this analysis? It would appear that this analysis would resolve any discrepancy in results. (0059)

Response: We have added the requirement that performance audits be analyzed with modified Method 6 in the interference check.

265. Comment: The EPA should clarify that the pollutant of interest should be in the test gas (such as wording to use gases with and without NOx). Also, the EPA and/or manufacturers should be able to establish interference effects on analyzers without the burden being on the testing community. (0039)

Response: We agree that the test gas wording was vague and have improved it. We allow initial interference certification by the manufacturer and have dropped the requirement to repeat the test yearly unless a major analyzer component is replaced.

266. Comment: The interference check as described is flawed and unreasonable. Gas analyzer manufacturers have done extensive research, development, and testing prior to manufacturing any analyzer. The gas analyzer manufacturers incorporate components such as optical filters and reduced-pressure reaction chambers to create an analyzer that performs to a specific need with minimal interference. The available NOx analyzers have already been rigorously evaluated under controlled conditions for their performance. It is unnecessary to repeat gas interference checks on analyzers that have been carefully researched, developed, and tested by the gas analyzer manufacturers. It is true that there are specific gases that can interfere with the NOx gas analyses. Carbon dioxide (CO2), oxygen (O2), water vapor (H2O), and ammonia (NH3) are the gases most likely to be encountered during flue gas testing that may interfere with the measurement of NOx.

Ammonia creates a unique problem. The main concern with ammonia is that stainless steel NO2 converters will change NH3 to NO, which will be detected in the reaction chamber of a chemiluminescence NOx analyzer. Molybdenum and carbon converters do not have this problem. Additionally, free NH3 has a tendency to combine with other compounds such as NO2, SO2, and HCl. A common problem with chemiluminescence analyzers is that free NH3 combines with NO2 in the reaction chamber to form crystals that will eventually blind the detector. Any NO2 existing in the sample stream may combine with NH3 before the moisture removal system, in the moisture removal system, and/or prior to the analyzer. If a source is known to have free NH3 in the gas stream, then every effort should be made to prevent ammonia salts from forming and to place an NH3 scrubber in the sample gas that is directed to the NOx analyzer. Ammonia scrubbers are available from commercial vendors.

If an NDIR analyzer is used to measure NOx, then the primary interferences are CO2 and H2O. As with any NDIR analyzer, CO2 and H2O will produce a false response, since CO2 and H2O have many absorption intensity peaks in the infrared spectrum. Optical filters can improve the discrimination ratio for both CO2 and H2O. Additionally, an efficient moisture removal system will minimize H2O interference. There should be established CO2 and H2O interference rejection ratios for NDIR NOx analyzers. Analyzer manufacturers should be required to present data to confirm their claimed interference rejection ratios. It is absurd to require an interference test on a yearly basis. Does the principle/theory change every year? (0011)

Response: We have reduced the frequency of the interference check to initially and after any major analyzer component is replaced. The instrument manufacturer may also conduct the initial interference check.

Section 8.4 Sample Collection

267. Comment: What happens to the test data when the sample flow rate has exceeded the 5 percent criterion during a run? (0035)

Response: In the final rule, this rigid flow rate requirement has been dropped. It has been relaxed to a guideline (suggestion) to keep the flow rate within 5 to 10 percent of the target rate.

268. Comment: Rather than specifying that flow be within 5 percent of that during the bias check, how about giving the option of proving the (in)sensitivity to flow rate? The same would apply to other instrumental methods. Additionally, how often do we need to record the flow rate? I assume some sort of documentation is needed. (0043)
Response: See the response to Comment 267.

Section 12.4 [Should be Section 12.0, which references Section 12.4 in Method 7E]

269. Comment: It says a bias test is the system response to the calibration gas divided by the cal gas value. If that is the case, why do we need a calibration error test as a correction? Should the equation be: (Cs - Cdir)/(either Cdir or Cv)? (0035)

Response: The analyzer calibration error test (direct injection) is needed to establish the accuracy of the analyzer and the linearity of its response across its measurement range. The bias check indicates the inaccuracy (over and above the analyzer calibration error) that is introduced by pulling gas through the entire measurement system. The pre- and post-run bias check data are used to "dial out" this inaccuracy in the concentration measurements. The equation referenced by the commenter was found to be incorrect. It has been corrected in the final rule.
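The "dial out" correction described above has the following general shape in the final Method 7E (shown here as a sketch only; the variable names are ours, the run values are hypothetical, and the exact equation and symbols should be verified against the published method):

```python
def bias_adjust(c_avg, c_zero, c_upscale_resp, c_upscale_cert):
    """Adjust a run-average concentration for sampling system bias and drift:
    subtract the average zero-gas system response, then scale by the ratio
    of the certified upscale gas value to the zero-corrected system
    response to that upscale gas."""
    return (c_avg - c_zero) * c_upscale_cert / (c_upscale_resp - c_zero)

# Hypothetical run: measured average 85.0 ppm, average zero response 1.0 ppm,
# system response of 99.0 ppm to a certified 100.0 ppm upscale gas.
print(round(bias_adjust(85.0, 1.0, 99.0, 100.0), 2))
```

The correction nudges the run average upward here because the system read the upscale gas slightly low, which is exactly the inaccuracy the bias check is meant to remove.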

Section 13.2 Interference Test

270. Comment: The need for an interference test of this type seems excessive. Interference is inherent to the design of the monitor, does not change over time, and does not need to be tested on a recurring basis. Why was 12 months chosen as the limit for proof of interference? Has data shown that response to interferents changes over time? (0009, 0043)

Response: We have reduced the frequency of the interference check to initially and after any major analyzer component is replaced. The instrument manufacturer may also conduct the initial interference check.

271. Comment: Has anyone actually done this interference test at a source? Where does the 7 percent come from? What is the modified Method 6? In what units are the results presented? (0035)

Response: The modified Method 6 interference test is not a new requirement but has been the required interference test for Method 6C in the past. We are not changing this procedure in the rulemaking.

Section 13.3 Alternative Interference Check

272. Comment: This section refers to Section 13.6 of Method 7E, which only gives the criterion for acceptance for normal testing of 2.5 percent. It should reference Section 7.3.1 of Method 6C. (0035)

Response: In the final rule, the acceptance criterion for the interference check described in Section 8.3 of Method 6C is presented in Section 13.2. For the alternative interference check, Section 13.3 of Method 6C cross-references the specification in Section 13.7 of Method 7E.

Section 13.1

273. Comment: If the standard is 2.0 ppmv, then a bias of 0.5 ppmv is acceptable? (0035)

Response: Yes.

3.16 Specific Comments on Method 7E

General comments:

274. Comment: We suggest that EPA produce an accompanying guidance document, such as the ones used for Part 75 monitoring or compliance assurance monitoring, for frequently asked questions and easier-to-update pieces than a full rule. We suggest the equipment in Section 6.1 be in the guidance document, with a new Section 6.1 that would be very general and technology-neutral. The table matrix in 7E-3 will become obsolete quickly, and a guidance document would be useful. (0022, 0023)

Response: A guidance document may be a good idea, and we will consider it for a future project as time and resources allow.

275. Comment: For sources that emit ammonia at detectable levels (>1 ppm), we suggest EPA require testers to add permeation tube ammonia scrubbers to their NOx sampling and conditioning system. (0038)

Response: In this rule, we are refraining from requiring specific interference-limiting technology but are allowing testers and manufacturers flexibility in their approach to handling interference problems.

276. Comment: The Method 7E procedures do not specify that the larger of the absolute values for each interferent (between introducing the interferent with and without the contaminant) be used to calculate the interference, as the other methods do. (0020)

Response: We now state that, in summing the interferences, you must use the larger of the absolute values obtained for the interferent tested with and without the pollutant present.
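A minimal sketch of that summing rule (the readings below are hypothetical, and the 2.5-percent-of-span acceptance figure is the one quoted in Comment 272 above):

```python
def total_interference_pct(responses, span):
    """Sum the interference responses, taking for each interferent the
    larger absolute response of the with-pollutant and without-pollutant
    runs, expressed as a percent of the calibration span."""
    total = sum(max(abs(with_p), abs(without_p))
                for with_p, without_p in responses)
    return total / span * 100

# Hypothetical (with-pollutant, without-pollutant) responses in ppm for
# three interference gases, measured on a 100 ppm calibration span:
pairs = [(0.4, -0.6), (0.3, 0.2), (-0.5, 0.1)]
pct = total_interference_pct(pairs, span=100.0)
print(round(pct, 2), "percent of span")  # compare against the acceptance criterion
```

Taking the larger absolute value for each pair (0.6, 0.3, and 0.5 ppm here) gives a conservative 1.4 percent of span for this hypothetical system.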

Section 1.1 Analytes

277. Comment: The parenthetical phrase is in the wrong location. It should be after the methods listing in Section 1.0. The question posed in Section 1.1 seems to be addressed by the table, but the table's placement makes it confusing. (0032)

Response: The parenthetical phrase has been moved to the end of Section 1.0, a more appropriate place.

278. Comment: Add a clarifying statement that the method is for NOx as NO2. (0007)
Response: This statement has been added.

Section 1.3 Data Quality Objectives (DQO)

279. Comment: The statement, "In applications where there is no emission limit (e.g., market-based programs)" is false and should be revised. Virtually all sources that are included in market-based programs are also subject to emission standards through other applicable regulations. Furthermore, it is impossible for the tester to distinguish between the purposes of specific test programs at such sources because the test results may serve to demonstrate conformance with multiple requirements. (0032)

Response: This comment has been addressed by our not basing the performance tests on the applicable emission standard.

280. Comment: The section states that the method is designed for determining compliance with Federal and State emission standards. The proposed revisions represent some major deviations from many of the commonly used aspects, and could result in data that are less comparable to other methods that now refer to Method 7E as the standard, such as CEM certification tests, CEM QA audits, emission inventory tests, and control device performance evaluations. (0012)

Response: A major intent of the method revisions is to improve the quality of the measured data. We believe the revisions we have made to the methods will enhance their standing as standards rather than diminish it.

281. Comment: The following sentence is poorly written: "However, we do not intend the method to penalize you for calibrating to measure accurately emissions well below the emission limit." (0043)

Response: We have rewritten this section.

Section 1.3.1 Data Quality Assessment

282. Comment: I do not understand how conducting a bias test prior to the test (optional) and post-test (mandatory) qualifies as a Data Quality Assessment, and this section does not tell the reader how to assess data quality, mostly because no objectives were set. Most performance-based systems involve dynamic spiking, certainly with CEMs. (0035)

Response: We have dropped the proposed requirement to calculate data uncertainty using the results of the bias test. As a result, the proposed discussion of data quality assessment using the bias test results in Section 1.3.1 has not been retained.

283. Comment: Two problems exist with the statement "if the measured average emissions are less than the emission limit but a small fraction of the data exceeded the analyzer range, the data user may elect to accept this data as adequate to show compliance with the emission limit." First, data that exceed the analytical range are unknown values and cannot be included in the averages, and second, subjective statements like "could" or "may" are ambiguous and confusing. (0011)

Response: We have dropped the proposed requirement to calculate and report the uncertainty of each test run.

Section 1.3.2 Data Quality Assessment for low emitters

284. Comment: The reference should be to Section 13.5 for the low-level bias and Section 13.8 for the alternative dynamic spike. (0013, 0035)

Response: These comments on the proposal do not apply to the revised section that now addresses low emitters.

285. Comment: Why is this criterion only interim? The NOx analyzer is the only instrument in the grouped methods that can meet the bias criteria for lower levels. The other methods won't be able to do it in 3 years or 10 years. New instruments are needed, not interim relief. (0035)

Response: We have dropped the statement giving interim relief for 3 years.

286. Comment: Modify the first sentence to "Yes, there are interim special sampling system bias performance criteria and allowances when using the alternative." (0054)

Response: The text of this section has been revised for greater clarity.

Section 1.3.3 How is the calibration designed when test units are covered by more than one emission limit?

287. Comment: This section states that the analysis should be based on the most stringent emission limit. For a source with an average low-level limit for normal operation and a short-term high-level average (i.e., a dual-range analyzer), which one is the most stringent? (0022)

Response:
For
a
source
that
has
an
average
low­
level
limit
for
normal
operation
and
a
short­
term
high­
level
limit,
the
analytical
calibration
must
be
designed
to
cover
both
limits.
For
example,
a
dual­
range
analyzer
could
be
used.

Section 3.0 Definitions

288. Comment: This section has no definition of the term uncertainty estimate. This term is used (and misused where bias is explicitly and implicitly stated to be uncertainty) frequently in this method and should be defined because the meaning is not apparent from the context. In addition, the definition for apparent bias is missing. (005, 0013, 0032)

Response: See the response to Comment 283.

289. Comment: We recommend you add the following definitions (0054):

a. Emission limit: Analyte concentration as written in the unit's Permit to Operate.

b. Applicable Emissions Standard Correction: Analyte concentration corrected to one of the standards listed in Table 12.1.1.

c. Run: Consecutive series of gas samples taken at a single point, 3-point, or a single traverse across the stack or duct in minutes. Number of required minutes listed in 13.3.

d. Test: Sum total of all runs in minutes.

Response: Definitions for "run" and "test" have been added to Method 7E. "Emission limit" and "applicable emissions standard correction" are not now needed in the final methods.

Section 3.1 Analyzer calibration error

290. Comment: The sentence should end with "divided by the certified calibration gas concentration." (0013)

Response: The commenter is correct for the proposed rule; however, we have revised the analyzer calibration error to be the difference between the two measurements divided by the calibration span.
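The revised definition in the response above reduces to simple arithmetic. A minimal sketch, assuming nothing beyond what the response states (the function name and example values are illustrative, not from the method):

```python
def analyzer_calibration_error(measured, certified, calibration_span):
    """Difference between the analyzer reading and the certified gas value,
    expressed as a percent of the calibration span (per the revised definition)."""
    return abs(measured - certified) / calibration_span * 100.0

# Example: a 48.5 ppm reading of a 50.0 ppm certified gas on a 100 ppm span
print(analyzer_calibration_error(48.5, 50.0, 100.0))  # 1.5
```

Note that dividing by the calibration span, rather than by the certified gas concentration as the commenter suggested, keeps the error metric on a common scale for all three calibration gases.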

Section 3.2.2 System calibration

291. Comment: The definition should state that the gas be introduced prior to the sampling filter. (0035)

Response: The system calibration means introducing the calibration gases into the measurement system at the probe and upstream of all sample conditioning components. The filter is considered a sample conditioning component. Even so, we have mentioned the filter in the definition as recommended.

292. Comment: Clarify if it is acceptable to both the EPA Office of Air Quality Planning and Standards (OAQPS) and the Compliance Assurance Monitoring Division (CAMD) to use Method 205 to obtain the required concentrations, provided the high-concentration gases conform to EPA Protocol. (0059)

Response: Method 205 is acceptable to OAQPS, but administrative approval from CAMD is required to use it in Part 75 applications. This has been explicitly stated in the method.

Section 3.5 Data recorder

293. Comment: In the definition for data recorder, the term "permanent" is interesting and could mean a line on paper or an electronic logger. This implies that data read to a spreadsheet is only permanent if it is write-protected. Please clarify. (0043)

Response: "Permanent" in this instance means documented.

Section 3.9 Range

294. Comment: This definition allows the range to be greater than 5 percent of the span-level concentration. Although there is no upper limit, it is somewhat restricted by the reduction in sensitivity. There is an incentive to use a high range limit since the calibration error for zero gas is set at 0.25 percent of the upper range limit. (0032)

Response: The methods now require that no run average exceed the calibration span concentration.

295. Comment: The definitions for range and span (here and in Section 3.12) and the definition for span-level gas in Section 7.1.1 seem to add unnecessary complication for determining analyzer measurement range, span value, and calibration gas levels. An approach similar to the straightforward fashion of Method 25A could be adopted if the objective is to improve the QA/QC aspects of the new methods. (0029)

Response: We have dropped the definitions of "range" and "span" since span as a single term is not used in the newly-adopted terminology and range is used only sparingly. To simplify the terminology, we have changed "span-level gas" to "high-level gas." The "range" generally refers to the interval between the minimum and manufacturer-recommended maximum concentration for the analyzer full-scale response. The "calibration span" is defined as a fixed percentage of the analyzer's full-scale range and is chosen such that the majority of the measured concentrations will be between 20 and 80 percent of this calibration span.

296. Comment: Most instruments have several ranges, so it would be more appropriate to define range in terms of the measurement scale selected on the analyzer, and say that range is the interval between the nominal minimum and maximum concentration on the selected analyzer measurement scale. The inclusion of language cited by the manufacturer could have the very undesirable effect of opening the door to an analyzer certification program similar to that used for ambient instruments. (0059)

Response: See the response to Comment 295.

297. Comment: It seems appropriate to replace the proposed span definition with one for span value, which would be the highest concentration of the calibration curve and a value no more than 20 percent above the highest calibration gas. (0029)

Response: We do not use "span value" because it is a different term that normally applies to the calibration of CEMS and is determined in the applicable regulations.

298. Comment: You should only recommend that the range be at least 5 percent greater than the concentration of span gas, not make it a requirement. If the bias result is off-scale, the test should not be valid, however. It is unlikely that a bias result would be higher than the span gas. This issue is not that important unless the value is off-scale, but some regulators like to pick on it. (0043)

Response: This proposed requirement has been dropped in the final rule.

Section 3.10 Response time

299. Comment: Why don't you say it must be greater than 92.5 percent for three readings, not varying by more than 0.5 percent of the cylinder value? (0035)

Response: We have revised some of the response time criteria to make them less conflicting. Now the response time is measured when the upscale gas has reached 95 percent of its certified value or 0.5 ppm (whichever is less restrictive) or when the low-level gas has decreased to within 5.0 percent or 0.5 ppm (whichever is less restrictive) of its certified value.
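The revised criteria in the response above can be sketched as two threshold checks. This is an illustration only: the function names are ours, and reading "whichever is less restrictive" as "whichever threshold is easier to reach" is our interpretation of the response:

```python
def upscale_complete(reading, certified):
    """Upscale response time ends when the reading reaches 95 percent of the
    certified value or comes within 0.5 ppm of it, whichever is less restrictive."""
    return reading >= min(0.95 * certified, certified - 0.5)

def downscale_complete(reading, low_certified):
    """Downscale response time ends when the reading falls to within 5.0 percent
    or 0.5 ppm of the low-level certified value, whichever is less restrictive."""
    return reading <= max(1.05 * low_certified, low_certified + 0.5)

print(upscale_complete(48.0, 50.0))  # True: 48 >= 47.5 (95 percent of 50)
print(downscale_complete(0.4, 0.0))  # True: within 0.5 ppm of a zero gas
```

The 0.5 ppm alternative matters mainly at low concentrations, where a fixed percentage would otherwise be an impractically tight target (e.g., 5 percent of a zero gas is zero).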

Section 3.11 Sampling system bias

300. Comment: It is more accurate to refer to the analyzer, as opposed to the analytical system. (0029)

Response: The revised system bias definition clarifies this.

301. Comment: There is a conflict between the sampling system bias definition and the calculation in Equation 7E-3. (0035)

Response: This has been corrected in the revised definition and calculation.

Section 3.12 Span

302. Comment: The definition of span has changed, and is now based on the calibration gas instead of the analyzer range. Use of a higher level calibration gas will change the span value, and this does not make sense. (0008)

Response: See the response to Comment 295.

303. Comment: Ambiguity exists between the definition of span as the concentration of the highest calibration gas in this section, maximum concentration considered potentially valid for a test in Section 7.1.1, and chosen such that all measurements are below the span in the QA/QC Summary Table. (0041)

Response: We now use "calibration span" in place of "span." See the response to Comment 295. The noted points of ambiguity do not conflict. The calibration span is the concentration of the highest calibration gas; this concentration sets the upper calibration limit for valid, quality-assured data, which means that it should be chosen such that no run averages exceed this concentration.

304. Comment: I'm not sure that the span in most cases will be higher than the concentration of the emission limiting standard. The relationship of the span to the emission limiting standard is often based on age of the standard in relation to the source. Span should always be selected such that the expected concentration is 25 to 30 percent of the span to ensure meaningful results. (0059)

Response: See the response to Comment 295.

Section 4.0 Interferences (Reserved)

305. Comment: This section is reserved, and this is entirely incongruous and inconsistent with the fact that onerous interference testing procedures are required initially and annually for each analyzer and for each source category. (0032)

Response: A brief discussion has been added to Section 4.0 to note that interferences may vary among instruments and that instrument-specific interferences must be evaluated through the interference test. The proposed requirement to conduct the interference test annually has been dropped.

Section 5.0 Safety

306. Comment: This section should be rewritten to reflect cylinder safety, noxious gas safety, CO asphyxiation, or reference to the Source Evaluation Society Safety Manual. Each method should be considered in this light. (0035)

Response: We have added precautions concerning cylinder and noxious gas safety to Section 5.0.

Section 6.0 Equipment and Supplies

307. Comment: If you are properly using equipment designed for this application, why would the performance criteria in this method only be met most of the time as opposed to all of the time? What are some examples when the performance criteria are not met when the correct procedures and properly designed equipment are used? Perhaps "will" should be "should" and "most of the time" should be deleted. (0032)

Response: The statement has been revised to read: "The performance criteria in this method should be met or exceeded if you are properly using equipment designed for this application."
308. Comment: A definition of what constitutes dry basis and wet basis is important. A requirement to continuously measure the outlet gas temperature at the coldest point in the heat exchanging vessel of the chiller type gas conditioners should be added. Language should be added that includes dilution extractive and permeation technologies to continuously measure and record the temperature at the coldest point in the heat exchanging vessel of the chiller. A number of driers provide continuous dew points substantially below the dew point temperatures called for in the proposed rule. Since dilution systems back calculate the concentration with a dilution ratio, the water fraction will be very significant since it too is multiplied by the dilution ratio. Hot wet instrumentation should also measure and record the water fraction at a similar frequency. Correction of the basis of measurement can then be made to a dry basis at standard conditions as required. At a minimum, the measured dew point or water fraction error can be accounted for in the uncertainty calculation. (0023, 0054)

Response: In lieu of listing minimum temperatures and temperature monitoring techniques for specific technologies, we are moving to performance-based specifications and are requiring that the sample gas be maintained above its dew point prior to analysis.

Section 6.1 What do I need for the measurement system?

309. Comment: This section refers to an example measurement system shown in Figure 7E-1, which is (a) in direct conflict with the requirements of 6.1.1.1, and (b) prejudicial to a specific design. The filter media is not included for the bias test. The caption to Figure 7E-1 should be example measurement system. Many other designs are used in practice and will provide equivalent or better performance. For example, a heated-head pump can be placed upstream of the moisture removal system or even at the probe outlet so that the entire system is under positive pressure and so that the moisture removal is enhanced. In addition, diagrams of other acceptable systems such as hot-wet and dilution sampling systems should be included. The figure indicates a sample filter outside the stack, upstream of the calibration gas 3-way valve. However, the filter can also be located at the probe tip in the stack, or downstream of the 3-way valve but immediately upstream of the moisture removal system. Further, it is very bad practice to have a throttling valve on the pump discharge, and a pump bypass valve is a much better idea. (0032, 0038)

Response: Figure 7E-1 has been listed as an example measurement system. To keep the method simple without limiting equipment options, the method notes that other systems such as hot-wet and dilution sampling systems are acceptable. The throttling valve on the pump discharge has been redrawn to show a pump bypass valve.

310. Comment: While Section 6.1 mentions that subsections 6.1.1 through 6.1.9 are provided as guidance, this point is not made very plain and I fear may be lost to most readers, especially for wording such as must and subject to approval. I suggest cleaning up the text and moving the bit about guidance to the first paragraph in the section. (0022)

Response: Section 6.1 has been revised to clearly note that the example equipment is listed as guidance.
311. Comment: I have never seen a design flow rate for a sampling system. They are all home made. Flow is determined by the desired response time and the height of the stack. Item (1) in Section 6.1 should read "5 percent of the initial flow rate." Is design flow rate defined? This is tightening the standard, and does it make any difference? (0035, 0043)

Response: Section 6.1 has been clarified to state that "Sample flow rate must be maintained within 10 percent of the flow rate at which the system response time was measured."

312. Comment: There are statements throughout that the probe, sample line, etc., should be heated to at least 140 °C (284 °F) or 25 °C (77 °F) above the concentration dew point of the sample, whichever is higher, to prevent condensation. We cite a California Energy Commission/University of California study entitled Quantification of Uncertainties in Continuous Monitoring Systems for Low NOx Emissions from Stationary Sources. The executive summary concluded that "There were no statistical differences observed between any of the sample line test conditions, regardless of material used or operating temperature. The only exception was that, in the presence of ammonia and water, NOx losses increased (for the lowest NOx input) in the stainless steel line operating at 175 °C." The test conditions included temperatures at 25, 107, and 175 °C with various line materials. The study showed that even at relatively low temperatures, little bias was evident and that the only condition that was statistically worse was where the line was heated too hot, and it was speculated that additional reactions were taking place. (0039)

Response: The language in the methods has been revised to make them technology-neutral. The requirement will be to maintain the sample gas above the dew point of the stack gas so that no loss of sample results. This may be done by heating, diluting, drying, desiccating, or a combination thereof, or by other means.

313. Comment: The comments from the Institute of Clean Air Companies (ICAC) are exactly the same wording as the new proposed rules, and can be interpreted that ICAC members want to sell some new sample lines to companies that already have adequate lines, and that is being endorsed in the rule. (0039)

Response: The methods were updated based on information we received from various sources. We incorporated any information we thought was helpful and an improvement, without adding any intended sales benefits to the commenters.

Section 6.1.1 Sample Probe (Stinger)

314. Comment: (Stinger) needs some explanatory note. Is Stinger another name for sampling probe? Stinger is not used elsewhere in this method and is not common terminology. (0032, 0035)

Response: The use of "stinger" as another name for the sample probe has been dropped.

315. Comment: Is Hastelloy an acceptable alternative to quartz at temperatures greater than 500 °F? (0028)

Response: Hastelloy is acceptable at temperatures greater than 500 °F.

316. Comment: This section refers to the concentration dew point, while Section 6.1.1.1 only mentions dew point. Do these terms mean different things? (0043)

Response: The two terms mean the same thing, and both will be referred to as "dew point."

Section 6.1.1.1 Particulate Filter

317. Comment: This section requires an in-stack or out-of-stack particulate filter. A particulate filter is not necessary when sampling certain sources, such as gas fired turbines, because there is no particulate in the sample stream. Are filters needed for gas turbines? (0032, 0035)

Response: The particulate filter requirement may be waived in applications where no significant particulate matter is expected (e.g., for emission testing of a combustion turbine firing natural gas).

318. Comment: The last sentence of this section requires that the probe filter media be included in the sampling system bias check and be non-reactive to the gas being sampled. The EPA should explain the basis for this requirement since we know of no solid phase reactions involving adsorption of NO or NO2. The requirement is appropriate for SO2 sampling systems but not O2 or CO2. If there is not a specific technical basis to support introducing the calibration gas upstream of the filter media for NOx measurements, then this requirement should be deleted. (0032)

Response: An inappropriately heated filter can trap NO2 on the wet surface. Alkaline moisture would enhance this removal of NO2. The same thing goes for CO2. Therefore, the bias check must include the probe filter.

319. Comment: The section says the filter media must be included in the bias test. Figure 7E-1 is therefore incorrect since the calibration gas injection location is downstream of the heated filter. (0026, 0028, 0035, 0054)

Response: Figure 7E-1 has been corrected by placing the calibration gas injection location upstream of the heated filter.

Section 6.1.2 Heated Sample Line

320. Comment: We suggest indicating the Teflon trademark throughout the document. (0054)

Response: We no longer use the trademark sign for Teflon or other products that are common and widely used.
Section 6.1.3 Sample Lines

321. Comment: This section implies that there is never a need to heat the sample line between the condenser and the analyzers, which is incorrect. The section should state that the temperature must be above the dew point set by the condenser, and in some cases ambient temperature will suffice. (0009, 0010)

Response: See the response to Comment 105.

322. Comment: Remove the reference to stainless steel and Teflon and change to a material that is non-reactive to the gas sampled. (0022)

Response: The reference to stainless steel has been dropped, and the reference to Teflon has been retained as an example material along with a statement that other materials that are nonreactive to the sampled gas are acceptable.

Section 6.1.6 Flow Control/Gas Manifold

323. Comment: This section is poorly written and thereby imposes unintended and unnecessary design requirements (i.e., a valve to block the sample gas is not needed for either straight tee or atmospheric vent system calibration designs). Similarly, there is no technical basis to require a back pressure regulator in many acceptable systems. (0032)

Response: New Section 6.2.6 (Calibration Gas Manifold) has been reworded to add clarity. The use of a back-pressure regulator will be listed as an option, not a requirement. Mention of a valve to block the sample gas flow when introducing calibration gas directly to the analyzer has been dropped.

324. Comment: The current description requires flooding the probe, which wastes expensive Protocol 1 gases. An additional method should be added to perform the sampling system bias check by introducing calibration gases at the calibration valve installed at the outlet of the sampling probe. (0013)

Response: The narrative in the new section has been revised to state that calibration gas is normally introduced at the outlet of the probe during the bias check.

325. Comment: Where is the sample flow measured, both through the system and to the instrument? (0035)

Response: Proposed Section 6.1.6 and final Sections 6.2.6 and 6.2.7 give general design guidance for calibration and sample gas manifolds. Sample flow measurement will depend on the specific measurement system design.
326. Comment: The proposed definition does not include dilution systems, and it should be noted that dilution systems use different calibration procedures than direct-extractive. Specifically, it is not feasible to introduce source-level gas directly to the analyzers. (0059)

Response: Section 6.1 states that alternative apparatus and procedures may be used where the sample is diluted prior to analysis.

Section 6.1.7 Sample Gas Manifold

327. Comment: Change "that does not react with NOx" to "that is non-reactive to the gas sampled" to make it more general. (0022)

Response: This change has been made.

Section 6.1.8.1 Dual Range Analyzers

328. Comment: "Manufacturers may certify a gas analyzer with a single large range which can be used with proper data recorders as two separate analyzers if the proper set of calibration gases and the interference tests meet the analyzer calibration error and sampling bias checks." This language is unnecessary and confusing. We suggest removing it, with the option of adding it to the preamble. The may certify language could have the very undesirable effect of opening the door to an analyzer certification program similar to that used for ambient instruments. This would stagnate the current advancements being made in the design of source testing instruments. (0054, 0059)

Response: This language has been deleted from the method.

Section 6.1.9 Data Recording

329. Comment: The resolution specified is 0.5 percent of span (the QA table suggests 1 percent of span), while the calibration error specification for the zero gas is 0.25 percent of the upper range limit. If the upper range limit is 5 percent higher than span, then the 0.25 percent level is below the recorder resolution. This is a direct conflict. (0032, 0009, 0030)

Response: The data recorder resolution in the QA table has been corrected to 0.5 percent. The calibration error specification for the low-level calibration gas (which may be a zero gas) is 2 percent of the calibration span or 0.5 ppm.
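The low-level specification in the response above amounts to a simple allowance check. A hypothetical sketch (the function name is ours, and treating "2 percent of the calibration span or 0.5 ppm" as whichever allowance is larger is our assumption, not stated in the response):

```python
def low_level_check_passes(measured, certified, calibration_span):
    """Pass if the low-level (possibly zero) gas reading is within 2 percent
    of the calibration span, or within 0.5 ppm of the certified value."""
    allowance = max(0.02 * calibration_span, 0.5)
    return abs(measured - certified) <= allowance

print(low_level_check_passes(0.3, 0.0, 10.0))  # True: 0.5 ppm governs a 10 ppm span
```

On a low span such as 10 ppm, 2 percent would be only 0.2 ppm, so the 0.5 ppm alternative keeps the check achievable.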

330. Comment: Regardless of the data recording device, it should be mandatory that the data during the run be recorded in hard copy form for agency personnel to review and validate. This also applies to the calibration error, converter efficiency, stratification, and bias tests. (0011)

Response: Section 1.0 notes that you must document your adherence to the specific requirements for equipment, supplies, sample collection and analysis, calculations, and data analysis.
331. Comment: Move the resolution specification to the DQO section in Section 1.3, since the resolution of the data recording device is directly tied to the DQO. (0022)

Response: We believe it is best to list the resolution specification in this section and not tie it directly to the DQO.

Section 7.1 Calibration Gas

332. Comment: Basing the low-level calibration gas concentration on the value of the high-level calibration gas is much more confusing than the existing way of using a percent of span. This added confusion will also lead to more mathematical errors, compromising the quality of stack test programs, and possibly invalidating results based on a trivial technicality. (0038)

Response: We have made the high-level calibration gas synonymous with the calibration span to ensure that all measured data are quality assured when the traditional concentration interval between the high-level calibration gas and the span is removed. We do not believe this move adds confusion or leads to mathematical errors but rather increases the defensibility of data collected over the entire measurement range.

Section 7.1.1 Span-level Gas

333. Comment: What is the tolerance for readings above the span gas concentration, such as 86 ppm when using an 85 ppm span gas? The current methods treat this as valid, but it appears there is no allowance for it in the proposed methods. What if an emission spike occurs? Common sense and measurement theory dictate that measurements beyond the high calibration gas can be valid. Whether data is potentially valid should be decided by QA in the field, and not subjectively after the fact by a regulatory report reviewer. (0028, 0038)

Response: Readings above the calibration span are allowed if the run average is within the calibration span. Run averages that exceed the calibration span due to spikes are normally invalidated. The data quality objectives of a test should be the determining factor for accepting data outside the calibration span.
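The validity rule in the response above turns on the run average, not on individual readings. A minimal illustration (the function name and example values are ours, not from the method):

```python
def run_average_within_span(readings, calibration_span):
    """A run is acceptable if its average does not exceed the calibration span,
    even when individual readings briefly exceed it."""
    return sum(readings) / len(readings) <= calibration_span

# A brief 86 ppm excursion against an 85 ppm calibration span is allowed
# because the run average (about 84.3 ppm) stays within the span.
print(run_average_within_span([86.0, 84.0, 83.0], 85.0))  # True
```

This answers the commenter's 86 ppm example directly: the single excursion does not invalidate the run unless it pulls the run average above the span.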

Section 7.1.3 Low-level Gas

334. Comment: Another level of calibration has been added, and this does not simplify the methods or reduce costs. We recommend removing all low-level calibration requirements for the analyzer error check and the sampling system bias check. (0041)

Response: Removing the low-level calibration requirements from the calibration error and system bias checks would reduce the current level of system verification, not remove anything added in the proposal.

335. Comment: Would the use of a 4-point calibration curve, such as that currently required by Method 20, instead of the 3-point curve used in other instrumental methods, improve the data quality? Did EPA consider the possibility of using a 4-point calibration curve similar to Method 20 in all methods with a zero gas and a low-level cal gas (e.g., say 10 to 30 percent of span)? What were the findings? (0059)

Response: We have not assessed the value of requiring four calibration gases instead of three in this rulemaking. We do not believe the data quality would improve sufficiently to justify the expense of requiring an additional calibration gas.

Section 8.0 Sample Collection, Preservation, Storage, and Transport

336. Comment: The EPA should clarify that all test options, deviations, etc., to the extent they are known in advance, be identified in the test protocol that is typically submitted prior to testing. (0038)

Response: The commenter makes a good recommendation which should be followed in all cases where a test protocol is used. However, in keeping with the commitment that test methods be performance-based, we hesitate to emphasize the listing of test deviations and rather rely on the performance tests to detect unacceptable variations.

337. Comment: Once response time has been determined, then one times rather than two times the response time should be appropriate to commence measurements. There should be a time limit on the pollutant measurement per point alongside the response time criteria (i.e., either 1 min/point or 2 times response time, whichever is less restrictive). (0030)

Response: After waiting for twice the response time and collecting the sample for the first traverse point, you may move to the next point and continue recording, omitting the requirement to wait for two times the system response time before recording data at the subsequent traverse points.

338. Comment: Under Part 75 regulations, our analyzers have a response time of 5 minutes. Under the proposed method, 2 minutes of sampling preceded by two times response time at 12 points would take more than 144 minutes, which cannot meet the Part 75 requirement of less than 2 hours for a stratification test. We recommend Part 60 revisions be deleted in favor of Part 75 requirements, and comparison on a ppm basis rather than within 5 percent of mean basis. (0033)

Response: See the response to Comment 337. With the 5-minute response time and the 12 points described, your sampling time should be about 34 minutes. We have added alternative concentration criteria for comparing the traverse point concentrations for determining stratification.
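The 34-minute figure above follows from the revised procedure in the response to Comment 337: twice the response time is waited only before the first traverse point, then each point is sampled (about 2 minutes per point, using the commenter's figures). A sketch of the arithmetic:

```python
response_time = 5      # minutes (the commenter's Part 75 analyzer)
points = 12            # traverse points in the stratification test
minutes_per_point = 2  # sampling time at each point

# Twice the response time is required only before the first point,
# not before every point as the commenter's 144-minute estimate assumed.
total_minutes = 2 * response_time + points * minutes_per_point
print(total_minutes)  # 34, well under the 2-hour Part 75 limit
```

The commenter's 144-minute estimate (12 points x 12 minutes each) assumed the double response-time wait repeats at every point, which the revised procedure no longer requires.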

339. Comment: Note that these methods are to take into account other rules and regulations. These tests are to be used for RATAs (21 minutes per run) or for compliance (60-180 minutes per run, length of a batch process, etc.), where the pre-established run length, not system response time, dictates the length of the test runs. (0038)

Response: We understand that the methods have sample run times dictated in underlying regulations. Our concern is for collecting representative samples from the source and not samples diluted by sampling line gas.

Section 8.1.3 Determination of Stratification

340. Comment: Note that stainless steel probes act as a catalyst for CO and NO2 at simple cycle gas turbines and will show stratification even when there is none. (0035)

Response: In this case, means of reducing the probe temperature (such as through water-cooling) should be considered to reduce the catalytic effects.

Section 8.2.1 Calibration gas verification

341. Comment: Does confirming that the documentation includes all information required by the Protocol mean confirming that the documentation is complete? If so, the first part of the second sentence is redundant. (0032)

Response: One must ensure that the documentation includes all of the information required by the protocol and that this information passed certification.

342. Comment: The EPA should recommend the tester have copies of calibration gas certification sheets while on-site to aid regulators and inspectors, in lieu of misreading torn cylinder labels. (0038)

Response: A recommendation to have calibration gas certification sheets on-site has been added to the methods to aid in on-site inspections.

Section 8.2.2 Measurement system preparation

343. Comment: Several commenters thought that restricting measured concentrations between 20 and 100 percent of span was too stringent or unreasonable for the following reasons:

a. The specification does not apply to situations where the span is set relative to a standard according to Section 1.3.1. (0012, 0013)

b. Instruments that meet the calibration error or linearity specifications should generate valid data within that range of calibration, and an acceptance of measurements within 5 to 95 or 10 to 90 percent of span is reasonable. (0012)

c. When testing new equipment, one cannot know in advance the stack concentrations. (0013)

d. Meeting this criterion is not always possible, since there may be dips in emissions below 20 percent.

e. The 20 percent limit could not be met for low-NOx units with permitted limits below 2 ppm and a Method 7E analyzer set at 10 ppm. (0038)

f. This requirement conflicts with Section 1.3.3 in terms of calibrating around the actual concentration instead of the allowable emission. Using the actual concentration is impractical, since it will require having numerous cylinders on hand. (0020)

g. What if emissions are non-detect, or the lower instrument range is 100 ppm but NOx readings are 10-15 ppm? What if the allowable is 250 ppmv, the analyzer span is 300 ppm, and emission readings are 5 ppmv? Would the tester be required to use a lower range when the only regulatory requirement is to demonstrate compliance? This is not clear throughout the rules. (0020, 0030, 0031)

Response: Section 8.2.2 has been revised to require that the tester define the calibration span such that the majority of measured emission concentrations fall between 20 and 80 percent of the calibration span. With the new definition of calibration span, we have dropped any reference to the emission limit being in the calibration range. We have noted that data generated anywhere in the calibration range, up to and including the calibration span, will be considered valid.

344.
Comment:
This
section
does
not
address
leak
checks,
which
are
needed
to
assure
sampling
system
integrity.
We
propose
performing
leak
checks
prior
to
the
first
system
bias
check
and
following
the
final
system
bias
check.
Acceptable
criteria
should
be
established
as
another
QA
measure.
(
0044)

Response:
Leaks
will
be
detected
in
the
system
bias
check;
a
separate
leak­
check
requirement
is
not
necessary.

Section 8.2.3 Analyzer Calibration Error Test

345. Comment: It should read "low-, mid-, and span-level gases." What is the zero gas? Its specification is missing. (0032)

Response: The final rule requires the calibration error test to be performed using low-, mid-, and high-level gases. The "zero gas" in the proposal was replaced by "low-level gas." The low-level gas may be a zero gas.

346. Comment: The sentence explaining the calibration error calculation should say that calibration error is "the difference between the measured concentration and the manufacturer certified gas concentration divided by the manufacturer certified gas concentration." The results are supposed to be expressed as a percent of the certified concentration of the calibration gas. (0013)

Response: The final rule language of Section 8.2.3 is different from that in the proposal. However, we have revised the proposed calculation of analyzer calibration error to be based on a percentage of the calibration span.
347. Comment: The intent seems to be to perform a full 3-point, hands-off calibration error test directly after the initial calibration, without calibrating through the entire sampling system. In what event are you allowed to perform hands-on calibration adjustments? We see nothing wrong with the calibration error procedure as it now is. (0038)

Response: The current calibration error procedure has been retained. The calibration error test is done in direct calibration mode and not through the entire sampling system. You may perform hands-on calibration adjustments any time before or after the calibration error test.

Section 8.2.4 NO2 to NO Conversion Efficiency Test

348. Comment: The word "principal" should be "principle" (in two places). (0032, 0013)

Response: These corrections have been made through revisions to this section.

Section 8.2.5 Initial Sampling System Bias Check

349. Comment: During all sample runs, calibration checks, interference checks, converter checks, and bias checks, data should be taken from the data recorder and not the analyzer display. (0011)

Response: The important thing is to fully document collected data and performance tests, as noted in Section 1.0 of the methods.

350. Comments: If a data recording system records data once per second, are only 3 seconds required to show stability? Conversely, at once-per-minute recordings, are 3 full minutes required? Is there a minimum time for the three consecutive readings? Data acquisition systems can take 3 recordings in less than 1 second. When SO2 bias checks are done, it may take up to 10 minutes to achieve a stable response while slowly inching towards the final value. In other cases, it may be less than 30 seconds. Section 6.4.1 of the current Method 6C requires waiting until a stable response is achieved, and this is also used in the proposed Method 7E. The EPA should leave the existing stable response criteria in place. (0012, 0016, 0031)

Response: The proposed language attempted to establish a cutoff point for final readings when establishing system response time. We have revised this language to read "Record the time it takes for the measured concentration to increase to a value that is within 95 percent or 0.5 ppm (whichever is less restrictive) of the certified gas concentration."

351. Comment: Commenters had the following comments about the stable value criteria:

a. Should the mean be at least within 3 percent of the certified value? For example, with a 30 ppm certified value, if the analyzer indicates 34 ppm (113 percent of the certified value), this is more than 97 percent and is still valid.

b. The Summary Table of QA/QC allows 5 percent for the sampling bias. We believe 3 percent to be too restrictive, since previously the regulation allowed 5 percent of span.

c. The specification seems to add a second bias test result specification of 97 percent of the gas value, which would be too restrictive. (0012, 0016, 0031, 0035, 0049)

Response: We have dropped the requirement for successive readings to agree within 0.5 percent of the certified gas tag value. Rather, determination of stability is left to the judgment of the tester. In the final rule, the response time is defined in the more traditional way, as the time for 95 percent of a step change in concentration to occur.

352. Comments: Half-way through the section, it states to calculate the sampling system bias and see Section 12.5. This should be Section 12.4. (0013, 0020, 0025, 0029, 0035)

Response: This section now cites Section 12.4.

353. Comment: Clarification is needed over which gases to use and when. Section 8.2.5 uses span or mid and zero gas, then references Section 13.5, where low and span are used. The Summary Table uses high and zero gases. Further, zero gas is not defined within the method. (0025, 0026, 0029, 0041)

Response: We have corrected the inconsistencies and added clarity to the gas specification for the system bias test. The low-level gas and the span or mid-level gas (referred to as the upscale gas), whichever most closely approximates the effluent concentration, must be used. We have dropped the use of "zero" gas and replaced it with "low-level" gas, although at the tester's discretion, the low-level gas may be a zero gas.

354. Comment: This section refers to zero gas, but, unfortunately, zero gas is undefined. We are still perplexed as to why the low-level gas is needed at all. Later, in Section 13.4, we find a specification of 0.25 percent of upper range limit (assumed to be full-scale range) for the analyzer calibration error when using zero gas, even though zero gas had not been mentioned until Section 8.2.3. It is also interesting to note that zero gas is not mentioned in Section 13.5, which is the bias check specification. This creates a serious internal rule conflict that is unnecessary, and is easy to fix by simply referencing all specifications to full-scale analyzer range. (0038)

Response: See the response to Comment 353. Other noted inconsistencies have been corrected.

355. Comment: The Summary Table of QA/QC and Sections 3.10 and 8.2.6 should be consistent regarding the response time. (0035)

Response: Consistency has been added where noted.

356. Comment: System bias checks must be performed with a wetted system. This test should be performed only after the sampling system has been sampling stack gas for a "seasoning" period (e.g., at least a 1-hour period). This allows all of the components to season to the stack conditions. (0054)

Response: We believe the preconditioning of the system per Section 8.2.2 and the duration of the sampling run are sufficient to "season" the equipment and achieve a wetted system.

357. Comment: This section states that it is necessary to first inject the span gas and then the zero gas during the bias test. However, the order of gas injection does not matter. Any order should be allowed because it may cut calibration time in half. Further, EPA should clarify that all system bias calibrations are to be performed hands-off. The EPA should also clarify that, for tests following one another closely, the post-test bias check for a given run may also serve as the pretest bias check for the next run. (0038)

Response: It has been noted that the order of injecting calibration gases for the post-run bias checks is not important. However, for the initial pre-test bias check, the upscale gas is injected first and the low-level (or zero) gas second, in order to determine the upscale and downscale response times. After that, the injection order is unimportant. Also, the system must be operated at the normal sampling rate, and no adjustments may be made other than those necessary to achieve proper calibration gas flow rates at the analyzer. We have explicitly stated that, for tests closely following one another (less than 2 hours apart), the post-test bias check for a previous run may serve as the pre-test bias check for the subsequent run.

Section 8.2.6 Measurement System Response Time

358. Comment: How is response time to be determined when taking into account temporal variations in stack concentrations? (0004)

Response: Response time is determined outside the stack environment using calibration gases. We do not believe that temporal variations in stack concentrations will affect response times previously determined this way.

359. Comment: Are we correct in our understanding that the response time is determined from the time at which the system display exceeds 92.15 percent (95 percent of 97 percent) of the certified gas value for 3 consecutive readings? (0012)

Response: See the response to Comment 351.

360. Comment: The reference to high-level gases should be changed to span-level gases. (0029)

Response: The final rule now uses the terms "low," "mid," and "high" for the three required calibration gas levels.

361. Comment: How is response time to be determined? If there is temporal variability in stack concentrations, then high-to-stack and zero-to-stack response times can't be determined. Can we go from high to zero and zero to high through the entire system? (0004)

Response: The system response time is determined during the system bias check, when low-level and upscale calibration gases are evaluated in system calibration mode. This involves evaluating the system response as you go from low-level to upscale gas or vice versa. Temporal variations in stack concentrations do not affect the test.

Section 8.3 Interference Check

362. Comments: Commenters had the following comments about the interference check:

a. Interference gases should be introduced as mixtures (blended gases) as much as is practical. For example, if ten gases are considered as interfering gases, introducing them separately would require a sensitivity of at least 0.05 ppm on the 0-20 ppm range; each gas can have an interference response of 0.05 ppm. The recorder resolution is specified at 0.5 percent of upper range limit. If this upper range limit is 20 ppm, then the resolution is 0.1 ppm, which is greater than the sensitivity required to measure the interference response of each gas. Shouldn't there be a procedure for determining the noise level? Otherwise, how would one distinguish interference from noise? To avoid this, it would be better to blend the interfering gases rather than running them separately and adding up noise level responses.

b. The interference specification is "2.5 percent of the upper range limit." This limit is a variable, as it is defined as anything greater than 5 percent of the span-level concentration. If, for example, the upper range limit is 1000 ppm, the interference limit is 25 ppm. If the analyzer is calibrated from 0 to 250 and the measured concentration is 100 ppm, then the interference is 25 percent of the measured concentration. EPA should reconsider the use of upper range limit as a point of reference.

c. According to the directions, all the gases in Table 7E-3 are to be used for the interference check, regardless of whether the gases are present in the source or not. With the problems of sensitivity, generating moisture contents, determining what the interfering gases are, frequency of checks, etc., EPA should consider going solely to the dynamic spiking procedure. (0032, 0027)

Response: The interference gases may be blended, where practical, to facilitate the interference check. The total interference must not exceed 2.50 percent of the calibration range, instead of the upper range limit. The calibration range for the interference test must be between 80 and 100 ppm for measurements greater than 20 ppm, and at an appropriate concentration below 20 ppm when emission concentrations less than 20 ppm are routinely measured. Alternative acceptance criteria have been added for checks at the lower concentration range. The results are acceptable if the interference does not exceed 0.5 ppmv for a calibration span of 5 to 10 ppmv, or 0.2 ppmv for a calibration span less than 5 ppmv.

The number of gases needed in a particular interference test will vary according to analyzer type. Each test gas listed in Table 7E-3 may not be appropriate for each analyzer type. The tester or manufacturer must evaluate the potential of the listed test gases, or other possible interferences, to affect the analyzer performance and plan the test accordingly.
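The tiered acceptance criteria in the response above reduce to a simple threshold comparison. The following minimal sketch illustrates that logic; the function and variable names are hypothetical and not part of the method, and the 2.5 percent criterion, which the response states relative to the calibration range, is applied to the span here for simplicity.

```python
def interference_acceptable(total_interference_ppmv, calibration_span_ppmv):
    """Check total interference against the tiered acceptance criteria.

    - calibration span below 5 ppmv: interference must not exceed 0.2 ppmv
    - calibration span of 5 to 10 ppmv: interference must not exceed 0.5 ppmv
    - otherwise: interference must not exceed 2.5 percent of the span
    """
    if calibration_span_ppmv < 5:
        limit = 0.2
    elif calibration_span_ppmv <= 10:
        limit = 0.5
    else:
        limit = 0.025 * calibration_span_ppmv
    return total_interference_ppmv <= limit
```

For example, a total interference of 2.0 ppmv passes at a 100 ppm span (limit 2.5 ppmv) but a 0.3 ppmv interference fails at a 4 ppmv span (limit 0.2 ppmv).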
363. Comment: This section refers to Section 16.3 for the primary gas interference recheck, but the requirements there are much looser. Why? (0035)

Response: The requirement to perform an annual primary gas interference recheck has been dropped. A repeat of the interference check is only required when a major analyzer component has been replaced.

364. Comment: The interference test gases (see Table 7E-3) can be introduced into the measurement system in system mode, separately or as mixtures. (0054)

Response: The interference check is primarily a test of the analyzer. The entire measurement system may be evaluated, but this is not required. The language in Section 8.2.7 has been revised to reflect this.

365. Comment: This section is confusing, incomplete, and probably unnecessary. At a minimum, it should be completely rewritten. This section is very poorly done and needs a complete technical justification. (0038)

Response: It would have been helpful if the commenter had made specific recommendations to improve the section. We have reworded some of the language in an attempt to add clarity.

Section 8.4 Sample Collection

366. Comment: The requirement to sample within 5 percent of the rate used during the sampling system bias check is unnecessarily restrictive. The sampling rate does not affect the quality of data unless it falls below the analyzer sampling rate. We suggest the requirement be removed or increased to 20 percent. (0029)

Response: This requirement has been dropped.

367. Comment: How is this sampling rate measured and recorded? Does it refer to the total sample or the sample of each instrument? (0035)

Response: The 5 percent criterion for the sampling rate has been dropped.

Section 8.5 Post-Run Sampling System Bias Check

368. Comments: There is a typo half-way through the section: "Method?s" should be "method's." (0013, 0025)

Response: This is corrected in revisions to the section.

369. Comment: Does the "repeat the sampling system bias check" refer back to Section 8.2.5, Initial Sample System Bias Check? (0031)

Response: Yes. The system bias check is performed before and after each run, following the procedures in Section 8.2.5.

370. Comment: This section refers to Table 7E-2. I assume it refers to the second Table 7E-2. (0035)

Response: There is only one Table 7E-2. There is a Figure 7E-2.

Section 8.6 Alternative Dynamic Spike Procedure

371. Comment: The proposed procedures for the analyte spike procedures are internally inconsistent in Method 7E. This section states that the tester can use the alternative dynamic spike check (ADSC) procedure instead of the direct calibration procedure and the pre- and post-run sampling system bias tests. In contrast, Section 13.8 allows the ADSC as an alternative to the interference test and the sampling system bias tests. Does the dynamic spike test substitute for the calibration error test or the interference check, or are these two different options? (0025, 0032)

Response: This inconsistency has been removed. The dynamic spiking procedure may be used in place of the system bias checks and the interference check.

372. Comment: The reference to Section 16.2 should be Section 16.1. (0013, 0025, 0035)

Response: This section has been correctly designated in the final rule.

Section 9. Summary Table of QA/QC

373. Comments: The last column of the table is for corrective action. This column provides no practical advice. Instead it contains very rudimentary, self-evident suggestions that would be obvious to even the most novice tester. If it was the intent of EPA to assist the tester, then this column should be revised to include practical suggestions from actual field experience. The table also contains a status indicator of "AA" which is unidentified. (0032, 0035)

Response: This column has been dropped from the table. "AA" has been identified in the footnote.

374. Comments: Analyzer resolution or sensitivity has an acceptance criterion of less than 2 percent of range, but this is not defined in Section 3, nor is there any performance criterion for it in Section 13. (0005, 0012)

Response: The criterion for acceptable analyzer resolution or sensitivity is a suggested QA/QC measure and not a required measure. Therefore, it is not defined in Section 3 nor listed in Section 13.

375. Comment: Data recorder design/data resolution has an acceptance criterion of less than 1 percent, but Section 6.1.9 indicates no larger than 0.5 percent. (0005, 0012)

Response: Data recording is now discussed in Section 6.2.9, where we do not list a required resolution. The QA/QC summary table recommends a data resolution ≤ 0.5 percent of full-scale range.

376. Comment: In the table, "Data quality assessment using sampling system bias data" has an acceptance criterion of < 1.5 ppmv, while Section 13.5 has ≤ 0.5 ppmv. The table reference for the system bias calculation in column 4 is wrong. What is "apparent bias," and what is the corrective action if apparent bias is > 5 percent? (0005, 0013, 0020, 0026, 0035)

Response: The table has been corrected to list ≤ 0.5 ppmv absolute difference as the alternative bias criterion. The table no longer references the equations section nor mentions "apparent bias," which was confusing.

377. Comment: The section to identify the data user is inappropriate, since it is common and efficient to conduct testing to satisfy more than one data user. (0012)

Response: The listing is only included as a suggestion to aid the tester in determining the data quality objectives.

378. Comment: Provisions for short-term excursions above span level should be made. (0012)

Response: In the table, we recommend that all 1-minute average measurements be within the calibration span, and we require that all run averages be less than or equal to the calibration span.

379. Comment: Under "Interference gas check," remove the words "upper" and "limit." The word "range" is defined and is all that should be used here. (0013)

Response: We now determine interference based on the calibration span. "Upper range limit" has been replaced with "calibration span."

380. Comment: Under "Sampling system bias," add "or mid-level gas" to the list, which is where most calibrations will be performed. Change the word "high-level" to "span-level" to match the definitions. (0013)

Response: The terminology used for the system bias check in the final rule is "low scale" and "upscale" gas. The table has been revised accordingly.

381. Comment: Under "NO2-NO conversion," the frequency should be before the initial run and not after every test. (0013)

Response: We have corrected it to read "before each test."
382. Comment: In the footnotes, S, M, and A do not need superscript numbers. The description for A is missing, and is probably "alternative." The abbreviation "AM-A" has not been defined. (0013, 0025)

Response: The superscript numbers have been removed. "A" has been defined as "alternative." No reference to "AM-A" was found in the proposal.

383. Comment: "Equipment type (condenser or permeation dryer)" is listed under "Moisture removal." However, elsewhere in the method, only the condenser type is mentioned. This is inconsistent, and since the permeation dryer was added to the QA/QC table, permeation dryers should be added to the method. Because of the considerable use of dilution in RATAs, dilution should be added as well. (0023, 0054)

Response: The "Equipment type" entry has been changed to "equipment efficiency," which does not list a moisture removal principle. Section 6.2.4 allows any of the noted moisture removal principles to be used.

384. Comment: The required frequency of the dynamic spike test is confusing in the second-to-last row, which states "before and after each test and in place of pre- and post-run sampling system bias tests and interference check." There should be no need for an interference check with dynamic spiking. (0025, 0035)

Response: The dynamic spiking procedure is used in place of, instead of in addition to, the interference check and the system bias check. We have dropped the acceptance criteria from the QA/QC table because they were unclear in this abbreviated form. The tester can refer to Section 13 for these criteria.

385. Comment: The "Multiple Sample Points" section refers to a rake probe, but this has been specifically forbidden by the Part 75 policy manual, and may also conflict with Part 60 regulations. Please clarify. How are multiple sample points simultaneously determined? Is a 24-hole rake acceptable? (0025, 0035)

Response: Multi-hole probes with verifiable constant flow through all holes, within 10 percent of the mean flow rate, are being allowed. This should not conflict with the Part 60 regulations. However, this option requires Administrative approval for Part 75 applications.

386. Comment: Sample Extraction does not list other metallic materials, such as Inconel, as options. Why not make all materials of construction inert to sample or judged by bias, not just the manifold? The probe material in Section 6.1.1 says glass, stainless steel, or equivalent. The table only lists SS as mandatory, although the text refers to Teflon also. The table should be changed to reflect this, since stainless steel is not practical for temporary sampling systems. (0031, 0035, 0039)

Response: Except for the calibration valve listing, the sample extraction material must be "inert to sample constituents."
387. Comment: Under "Sampling System Bias," the allowance of ≤ 0.50 ppmv absolute difference for the bias needs to be extended not only to sources with emission limits of ≤ 10 ppmv, but to any bias check conducted with cylinder gases ≤ 10 ppmv. Also, this criterion should not have a temporary time limit, but should be in effect until such time as the method is re-evaluated. (0031)

Response: The ≤ 0.50 ppmv criterion has been extended to apply in all bias cases. The temporary time limit that was proposed for using this specification has been dropped.

388. Comment: What are the acceptance criteria for Data Quality? (0035)

Response: The data quality categories dealt primarily with the uncertainty determination. The uncertainty determination and the data quality categories in the QA/QC table have been dropped from the final rule.

389. Comment: The "Analyzer response time" listed in the table is < 30 seconds. In Section 13.3, there is no minimum response time. The 30-second requirement will most likely be met when conducting a direct calibration at high concentrations. Low concentration analyzers and full-scale ranges can have longer response times (2 to 5 minutes). The 30-second requirement cannot be met if it is the system response time, due to sample line and conditioning system delay. (0035, 0038)

Response: The reference to < 30 seconds in the table has been deleted. We do not specify a minimum sample response time.

390. Comment: The "Filter temperature" is listed as > 95 °C, which is different from Section 6.1.1.1; it should be the same as the heated sample line. (0035, 0038)

Response: In lieu of specifying minimum temperatures, we now require that all system components (excluding sample conditioning components, if used) maintain the sample temperature above the moisture dew point.

391. Comment: Under "Analyzer range," this criterion should be referenced to the concentration to be tested, not the span gas value. (0038)

Response: We have dropped the use of "analyzer range" and now use "calibration span" as the upper limit of the calibration curve. The calibration span is chosen such that most of the measurements are 20 to 100 percent of the calibration span.

392. Comment: There are various references to "each run" (Column 5) for the probe material, sample line material, etc. It will not be necessary to inspect the material type after "each run" but rather for "each test."

Response: This correction has been made.
393. Comment: Under "Analyzer calibration error," it states that the performance specification for the zero gas is less than 0.25 percent of span. This specification is unusually stringent and is equal to the zero noise level on some instruments (e.g., for a NOx analyzer set on a 10 ppm range, this equates to 0.025 ppm, which is the zero noise level for a Teco NOx analyzer). This specification is inconsistent with the 2 percent criterion for the span gas calibrations. It is also inconsistent with the language in Section 13.4, where the words "upper range limit" are used. (0038)

Response: The 0.25 percent specification for a zero gas has been dropped.

394. Comment: "After every test" is used in Column 5, Row 16. For these methods in general, the term "test" should be defined. Is it just one unit or emission process? Or is it a series of units under the same test mobilization (e.g., testing five consecutive units at the same plant during the same week)? Could it even be two separate plants with the same type of unit as long as the exact same analyzer is used? The definition of "test" may have different meanings, depending on the QA/QC to be performed (e.g., the required frequency of a direct calibration error test versus the required frequency of a converter check, etc.). (0038)

Response: "Test" has been defined in Section 3.19 as the series of runs required by the applicable regulation.

395. Comment: Under "Moisture removal," the "< ±5 percent target compound removal" requirement should be removed, because there is no way to verify it. Passing the bias test does not verify this criterion. (0038)

Response: We agree with the commenter. This requirement has been removed.

396. Comment: In the "Suggested corrective action" column, the word "retest" should not be used because it can be misinterpreted to mean repeating all runs. We suggest that "repeat test run" be used instead. (0038)

Response: The "Suggested corrective action" column has been deleted from the table, as requested by other commenters.

Section 10.1 Initial Analyzer Calibration

397. Comment: A web site is given for the Traceability Protocol. There is no Traceability Protocol at this web site address. (0032)

Response: The commenter is correct. The Traceability Protocol has been removed from this website. We are not able at this time to supply another web source for the document.

398. Comments: The section says you must pass the bias test before you start measurements, but the test is based on the emission standard, which may be process-based (i.e., lbs/ft2). If so, then it is not possible to know if you pass until the test is over and the process rate is measured. We recommend adding an alternative specification, such as 2 percent of span value, in such a case. (0026)

Response: We have dropped the proposed determination of system bias based on the emission standard. Bias is now calculated based on the calibration span, which is chosen such that the majority of measurements are between 20 and 100 percent of this calibration span.

399. Comment: Note that low-level NO2 calibration gases may not be readily available as EPA Protocol standards. (0038)

Response: Alternative means of calibrating analyzers that measure NO and NO2 separately will be considered on a case-by-case basis.

Section 12.0 Calculations and Data Analysis

Section 12.1 Nomenclature

400. Comment: The terms Cdir and Cs both use the wording "reported by the gas analyzer." To be consistent, they should say "reported by the data recorder." (0011)

Response: The proposed descriptions of the terms have been revised in the final rule. We now define the terms relative to their application in either direct calibration mode or system calibration mode.

401. Comment: The following terms are missing from the nomenclature: ACE, Mmeas, Mnative, Madded, C, Cm, Cequiv, ENO2, CNO2, R, Y, Estd, U, and Qfuel. Where did Section 12.0 (Nomenclature) come from? Over half the equations contain terms that are not explained in the nomenclature. This needs a total re-do. The terms need to be adequately defined, with units. (0005, 0012, 0013, 0014, 0024, 0026, 0030, 0032, 0035, 0056)

Response: We apologize for these obvious omissions. Section 12.1 has been corrected to include all terms used in the equations and their units.

402. Comment: In the definition for M, NOx should not be subscript. Ch is listed as the calculated in-stack concentration but does not appear in the equations, while Section 12.1.1 uses Cequiv. Should Cequiv be the correct notation? The value of the conversion factor K and its units are undefined. Are the units of B assumed to be percent? For Cadj, was it the intention to make all O2 corrections to 15 percent absolute? A specific number should not be used, since not all regulatory programs use this value as the reference. Does Cm equal the measured analyte spike concentration (ppmv)? Add CS to equal the calculated analyte spike value (ppmv). (0009, 0013, 0020, 0025, 0026, 0029, 0035, 0041, 0049, 0054)

Response: See the response to Comment 413. The noted inconsistent and conflicting terminologies have been corrected in the final rule.

Section 12.1.1 Concentration equivalent of the emission standard

403. Comment: For the case when the standard is in mass per unit of electrical output, inclusion of the "Eff" term seems to be an error and should be deleted. (0029)

Response: Since the performance tests are no longer calculated relative to the emission standard, this section has been dropped.

404. Comment: The emission standard calculation is incorrect for ppmv at Y percent O2. Y should be in the numerator and percent O2 in the denominator. (0030)

Response: See the response to Comment 403.

405. Comment: The terms Cstd, Estd, and Emass in the table and in Equation 7E-16 should be replaced with M as defined in Section 12.1, to allow its use with pollutants other than NOx. I need more information to follow these equations; why not replace K with the appropriate ACV factor? (0025, 0029, 0035, 0041)

Response: See the response to Comment 403.

406. Comment: The table should be deleted or corrected. (1) No one has actually tried this table; (2) the first equation doesn't mean anything (see Method 20 for the correct equation); (3) the other equations don't mean much without fuel flow data and fuel composition data. (0035)

Response: See the response to Comment 403.

407. Comment: How is the O2 percent determined for the equations in the table? Is this based on previous stack test results, the O2 concentration during the test run, or the O2 concentration during a preliminary run? The term GCV/hr is impossible, because the units would be BTU/lb/hr. GCV is normally defined as the gross calorific value of a fuel, and its units vary by fuel type: gas, BTU/scf; oil and coal, BTU/lb. (0038)

Response: See the response to Comment 403.

Section 12.2 Analyzer Calibration Error Test

408. Comment: Equation 7E-1 appears to be incorrect. I believe it should read ACE = (Cdir - Cv)/Cv x 100. (0005, 0013, 0026, 0035, 0049)

Response: The analyzer calibration error equation has been corrected.
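For illustration, the commenters' corrected formula, and the span-based variant adopted per the response to Comment 346, can be sketched as follows (a minimal sketch; the function and variable names are hypothetical and not from the method):

```python
def ace_percent_of_certified(c_dir, c_v):
    # Commenters' formula: error relative to the certified gas value Cv.
    return (c_dir - c_v) / c_v * 100.0

def ace_percent_of_span(c_dir, c_v, calibration_span):
    # Final-rule approach per the response to Comment 346: the error is
    # expressed as a percentage of the calibration span.
    return abs(c_dir - c_v) / calibration_span * 100.0
```

For a measured response of 102 ppm against a 100 ppm certified gas, the first formula gives 2 percent of the certified value, while the second gives 1 percent of a 200 ppm calibration span.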

Section
12.3
Alternative
Dynamic
Spike
Recovery
409.
Comment:
The
terms
in
Equation
7E­
2
needs
units
and
definitions.
The
equation
needs
a
"
x
100."
(
0035)
Response:
The
dynamic
spiking
equation
has
been
corrected.

410.
Comment:
This
entire
section
needs
clarification,
especially
with
regard
to
how
this
test
is
performed.
(
0004)

Response:
Clarity
has
been
added
to
the
dynamic
spike
procedure
which
is
described
in
Section
16.1.

411.
Comment:
Was
it
the
intention
for
mean
Mass
of
NOx
to
be
used,
or
mass
flow
controlled
NOx,
expressed
in
ppm
by
volume?
(
0054)

Response:
The
intent
is
to
use
the
flow­
controlled
spike
gas
concentration
in
ppmv.
The
equation
terms
have
been
revised
to
reflect
this.

Section
12.4
Sampling
System
Bias
Check
412.
Comment:
Equation
7E­
3
should
be
divided
by
Ch
instead
of
Cv
if
there
is
an
emission
standard.
The
equation
is
correct
only
when
not
subject
to
an
emission
standard
as
per
Section
13.5.
(
0013,
0022,
0026,
0035,
0039)

Response:
The
system
bias
equation
has
been
corrected.

413.
Comment:
The
Oregon
DEQ
believes
in
correcting
raw
field
data
from
pre­
and
post­
test
calibration
results.
Knowing
the
range
of
uncertainty
is
helpful
in
determining
compliance,
but
is
not
an
acceptable
substitute
for
a
raw
data
correction.
Therefore,
we
propose
including
an
equation
to
correct
the
raw
field
data,
much
like
equation
6C­
1
in
existing
Method
6C.
(
0044)

Response:
We
have
retained
the
current
equation
for
correcting
the
raw data for
the
system
bias.

Section
12.5
NO2­
NO
Conversion
Efficiency
414.
Comment:
Equation
7E­
4
needs
to
be
multiplied
by
100
for
the
result
to
be
a
percent.
(
0013,
0035)

Response:
This
correction
has
been
made.
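The "multiply by 100" correction to Equation 7E-4 amounts to expressing the converter's measured-to-certified ratio as a percent. A minimal sketch of that arithmetic (names are illustrative assumptions, not the method's nomenclature):

```python
def no2_conversion_efficiency(measured_nox, certified_no2):
    """NO2-to-NO converter efficiency as a percent.

    measured_nox:  analyzer NOx response to the NO2 test gas (ppmv)
    certified_no2: certified NO2 concentration of the test gas (ppmv)
    The factor of 100 is the correction noted in the comment.
    """
    return measured_nox / certified_no2 * 100.0

# A converter reading 9.2 ppmv on a 10.0 ppmv NO2 gas is 92 percent efficient
print(round(no2_conversion_efficiency(9.2, 10.0), 1))  # 92.0
```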

Section
12.6
Uncertainty
Estimate
415.
Comment:
Most
of
the
equation
terms
are
not
defined,
especially
CNO2.
Is
it
simply
CNOx
­
Cno
for
analyzers
that
measure
NOx
and
NO
independently?
We
cannot
comment
on
the
equation
until
we
have
the
definitions.
(
0013,
0035)

Response:
The
proposed
requirement
to
calculate
data
uncertainty
has
been
dropped
and
Equation
7E­
5
has
been
dropped.
416.
Comment:
We
cannot
explain
Equations
7E­
5
and
6
to
testers
because
we
do
not
understand
them.
Are
they
estimated
high
and
low
values
or
the
deviations
of
these
from
the
reported
results?
We
request
that
the
uncertainties
be
in
the
bias
corrections
of
raw
data
that
we
use
as
results.
(
0044,
0049)

Response:
The
proposed
requirement
to
calculate
data
uncertainty
has
been
dropped
and
Equations
7E­
5
and
7E­
6
have
been
dropped.

417.
Comment:
These
equations
assume
the
concentration
of
NO2
is
known,
but
some
systems
do
not
separately
monitor
NO2.
Thus,
the
second
term
should
be
removed
from
the
equations
for
these
systems.
These
equations
seem
to
have
a
typo
related
to
the
definitions
for
NO2
conversion,
and
need
clarity
for
the
use
of
"
Eff"
vs.
"
E."
(
0029,
0039,
0041)

Response:
See
the
response
to
Comment
416.

418.
Comment:
Equations
7E­
5
and
6
include
values
from
the
NOx
converter
test
and
for
NO2
concentration,
neither
of
which
would
apply
to
3A,
6C,
or
10.
How
would
these
methods
determine
uncertainty?
(
0025)

Response:
The
proposed
requirement
to
calculate
data
uncertainty
has
been
dropped
and
Equations
7E­
5
and
7E­
6
have
been
dropped.

419.
Comment:
The
uncertainty
equation
(
Eq.
7E­
5)
is
confusing
and
we
suggest
it
be
rewritten.
The
entire
equation
should
be
replaced
with
uncertainty
techniques
developed
by
ISO
and
ASTM.
This
makes
no
sense.
We
note
that
the
uncertainty
estimates
were
proposed
because
many
commenters
objected
to
the
bias
correction
equation
in
the
first
proposal,
and
argued
it
was
too
complicated.
It
is
assumed
the
commenters
were
objecting
to
the
proposed
and
complicated
bias
correction
equation
and
not
the
existing
and
understood
bias
correction.
A
new
procedure
was
not
necessary,
and
EPA
should
have
retained
the
existing
bias
correction
procedure.
(
0022,
0023,
0035,
0038)

Response:
The
proposed
uncertainty
calculation
has
been
dropped
in
favor
of
the
current
bias
correction
of
data.

420.
Comment:
"
ADSC"
is
not
defined
until
Section
13.8;
it
should
be
defined
the
first
time
it
is
used
(
0032,
0029).

Response:
We
have
dropped
the
acronym
"
ADSC."
We
now
refer
to
the
procedure
as
the
dynamic
spike
procedure.

421.
Comment:
Should
there
be
an
acceptable
test
uncertainty
range
listed?
(
0054)

Response:
The
proposed
uncertainty
calculation
has
been
dropped.
Section
12.7
Miscellaneous
calculations
422.
Comment:
The
following
comments
address
the
miscellaneous
calculations
in
Section
12.7.

a.
In
Section
12.7.2,
if
you
want
to
do
what
it
says,
go
to
Section
12.7.5.2.
The
rest
is
not
needed
or
desired.
(
0035)

b.
Equation
7E­
8
is
missing.
(
0054)

c.
Section
12.7.2.3
refers
to
"
equations,"
but
only
one
equation
is
listed.
(
0025)

d.
Equation
7E­
10
does
not
calculate
NOx
adjusted
to
15
percent
O2.
Do
you
mean
CO2?
(
0035)

e.
Section
12.7.2.3
has
the
wrong
narrative.
(
0035)

f.
In
Section
12.7.3,
not
all
emissions
will
be
corrected
to
15
percent
oxygen.
Utility
boilers
are
still
corrected
to
3
percent
oxygen.
The
correction
should
not
be
only
to
15
percent
but
should
rather
say
the
"
desired
O2
divided
by
actual
O2"
to
apply
to
all
corrections.
(
0013,
0035)

g.
It
is
recommended
to
add
a
Section
12.7.5.4
and
include
an
equation
to
calculate
Emass
based
on
flow
rates
as
measured
by
EPA
Methods
1­
4.
(
0026)

h.
Section
12.7.4
is
not
the
way
we
currently
calculate
corrected
NOx
results.
The
run
average
raw
NOx
and
raw
O2
are
plugged
into
Equation
7E­
11.
This
equation
requires
that
you
calculate
corrected
NOx
at
each
traverse
point,
and
then
average
the
results.
(
0013)

i.
Equation
7E­
12
appears
to
have
an
additional
and
unnecessary
summation
term
in
the
equation.
This
cancels
out
the
1/
k
averaging
term
inside
the
brackets
and
predicts
emissions
at
the
number
of
points
times
the
actual
concentration
(
say
potentially
12
to
48
times
too
high).
(
0039)

j.
It
is
unnecessary
to
individually
average
each
point
allowing
for
response
time
in
between
each
point.
If
you
allow
the
appropriate
response
time
before
each
point
and
your
response
time
is
1­
min
or
less,
then
you
just
average
all
data
collected.
(
0039)

k.
Test
times
should
have
a
set
time
limit,
such
as
one
hour
as
stated
in
current
Method
20
or
Part
75,
or
other
applicable
regulation.
(
0039)

l.
In
Equations
7E­
13
through
15,
each
equation
omits
the
NOx
conversion
factor
from
ppm
to
lb/
scf.
(
0038)
m.
Equation
7E­
13
must
be
multiplied
by
the
conversion
factor
CV
in
Table
7E­
2
to
be
correct.
Alternately,
the
definition
of
Cd
within
the
nomenclature
of
Section
12.1
should
be
modified
to
include
units
of
ng/
sm3
and
lb/
scf.
(
0026)

n.
Equation
7E­
16
calculates
mass
emission
rate
based
on
fuel
usage,
but
there
are
no
requirements
listed
for
accuracy
of
the
fuel
flow
monitor.
We
only
allow
this
approach
with
Part
75
certified
fuel
flow
monitors,
or
some
other
monitor
sufficiently
accurate,
and
only
when
stack
gas
flow
measurements
are
not
possible.
(
0020)

Response:
We
have
dropped
the
various
calculations
for
correcting
emissions
data
to
diluent
levels
or
emission
rates
to
keep
the
method
as
brief
and
simple
as
possible.
These
deleted
calculations
may
be
accessed
in
Method
19
or
in
the
applicable
regulations.
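For readers following item f above, the generic diluent correction the commenter describes ("desired O2 divided by actual O2") is conventionally written with 20.9 percent (the O2 content of air) as the reference. A hedged sketch of that standard Part 60 form, with illustrative names:

```python
def correct_to_reference_o2(c_measured, o2_measured, o2_reference):
    """Adjust a measured concentration to a reference O2 level.

    Standard excess-air diluent correction used in Part 60 calculations:
        Cadj = Cmeas * (20.9 - O2ref) / (20.9 - O2meas)
    O2 values are in percent, dry basis; 20.9 is the O2 content of air.
    The reference level (15 percent for turbines, 3 percent for utility
    boilers, etc.) comes from the applicable regulation.
    """
    return c_measured * (20.9 - o2_reference) / (20.9 - o2_measured)

# 100 ppmv NOx measured at 10 percent O2, corrected to 15 percent O2
print(round(correct_to_reference_o2(100.0, 10.0, 15.0), 2))  # 54.13
```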

Section
13.0
Method
Performance
423.
Comment:
I
do
not
understand
these
procedures
or
where
they
came
from,
and
there
should
be
more
detail.
(
0004)

Response:
This
section
does
not
discuss
procedures
but
gives
the
acceptance
criteria
for
the
required
performance
tests.
We
have
revised
the
narrative
to
list
these
limits
as
concisely
as
possible.

424.
Comment:
Section
13
does
not
provide
any
performance
specifications
for
the
uncertainty
estimate
to
allow
a
range
for
determining
validity
or
rejection
of
the
run.
They
are
listed
in
the
Summary
Table,
though,
along
with
the
undefined
term
apparent
bias
and
an
undefined
corrective
action
when
the
apparent
bias
is
>
5
percent.
(
0005,
0013)

Response:
The
proposed
approach
for
calculating
data
uncertainty
has
been
dropped.
The
ambiguous
term
"
apparent
bias"
is
no
longer
used,
and
the
"
Suggested
corrective
action"
column
of
the
QA/
QC
table
has
been
dropped.

Section
13.1
Analytical
Range
425.
Comment:
Clarification
needs
to
be
added
on
whether
a
test
run
is
invalid
if
the
analytical
range
is
less
than
20
percent
or
greater
than
100
percent.
(
0005)

Response:
A
test
run
is
valid
if
the
concentration
is
below
20
percent
of
the
calibration
span
but
is
invalid
if
the
concentration
is
greater
than
100
percent
of
the
calibration
span.

426.
Comment:
Analytical
range
says
you
must
report
an
exceedance
but
gives
no
guidelines
on
how
to
assess
it.
It
refers
to
Section
1.3.1
which
says
the
State
may
use
any
data.
(
0035)

Response:
We
no
longer
have
an
Analytical
Range
section
in
Section
13.
However,
in
other
places
in
the
method,
we
explain
that
all
run
averages
that
are
within
the
instrument
calibration
(
between
zero
and
the
calibration
span)
are
acceptable.
427.
Comment:
The
suggestion
that
any
reading
above
the
calibrated
span
may
be
invalid
is
ridiculous
for
an
analyzer
that
had
exhibited
a
linear
3­
pt
cal.
All
previous
reference
methods
and
monitoring
regulations
have
correctly
allowed
for
at
least
25
percent
extrapolation
above
the
span.
This
is
another
example
of
EPA's
lack
of
understanding
how
monitoring
systems
perform
gas
concentration
measurements.
(
0038)

Response:
Measurements
taken
above
an
instrument's
calibration
range
are
not
quality
assured
and
may
be
suspect.
Enforcement
data
that
are
not
quality
assured
may
not
be
sustainable
in
a
court
of
law.
We
are
allowing
individual
1­
minute
data
points
to
exceed
the
calibration
span
as
long
as
the
run
average
is
within
the
calibration
span.
This
precludes
run
invalidation
due
to
minor,
short­
term
excursions
outside
the
instrument
calibration.
The
tester
should
make
every
effort
to
properly
design
a
test
such
that
the
majority
of
measurements
are
within
20
to
80
percent
of
the
calibration
span.
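The validity rule described in this response reduces to a simple check: individual 1-minute averages may exceed the calibration span, but the run average must not. A minimal sketch (function name is ours, for illustration only):

```python
def run_valid(minute_averages, calibration_span):
    """Run validity per the response above: short excursions above the
    calibration span are tolerated, but the run average must stay
    within the calibrated range.
    """
    run_avg = sum(minute_averages) / len(minute_averages)
    return run_avg <= calibration_span

# One minute at 105 ppmv exceeds a 100 ppmv span, yet the run is valid
print(run_valid([80.0, 90.0, 105.0, 85.0], 100.0))  # True
```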

428.
Comment:
Section
13.1
says
you
must
report
an
exceedance
but
it
gives
no
guidelines
on
how
to
assess
it.
It
refers
to
Section
1.3.1
which
says
the
State
may
use
any
data.
(
0035)

Response:
In
final
Method
7E,
1­
minute
average
pollutant
concentrations
may
exceed
the
calibration
span
as
long
as
the
run
average
does
not.
Runs
with
concentrations
above
the
calibration
span
are
not
valid.

Section
13.2
Sensitivity
429.
Comment:
This
section
references
Section
1.3.1
for
a
discussion
of
sensitivity.
However,
there
is
no
discussion
of
sensitivity
in
Section
1.3.1
nor
is
it
defined
in
Section
3
definitions.
(
0032).

Response:
The
final
method
addresses
sensitivity
in
Section
1.1.

Section
13.3
System
Response
and
Minimum
Sampling
Times
430.
Comment:
The
requirement
to
sample
at
each
measurement
point
for
a
minimum
of
two
times
the
response
time
after
first
purging
the
sampling
system
for
twice
the
system
response
time
a)
is
not
necessary
because
the
concentrations
at
each
point
are
approximately
the
same
in
almost
every
test,
and
b)
will
significantly
extend
the
duration
of
the
tests.
These
requirements
should
be
eliminated.
These
requirements
conflict
with
promulgated
requirements
within
certain
Part
60
source­
specific
Subparts,
Part
60
Appendix
B,
the
Part
75
performance
specifications,
and
other
regulations.
Together
with
the
proposed
sampling
point
traverses,
these
onerous
and
inappropriate
requirements
encourage
testers
to
revert
to
clearly
inferior
wet
chemistry
test
methods
to
reduce
costs
and
save
time
(
0032).

Response:
Purging
the
sampling
system
for
at
least
twice
the
system
response
time
ensures
that
sampling
lines
and
conditioning
system
are
free
of
diluted
sample.
We
believe
that
subsequent
sampling
at
each
test
point
for
a
minimum
of
twice
the
system
response
time
is
also
reasonable.
We
are
not
aware
of
specific
subparts
in
Part
60
that
conflict
with
these
requirements.
The
performance
specifications
in
Parts
60
and
75
apply
to
the
relative
accuracy
testing
of
CEMS.
In
these
cases,
the
sampling
point
requirements
in
these
performance
specifications
are
followed.

431.
Comment:
This
section
states
that
there
is
no
minimum
system
response
time
specified.
This
differs
from
the
Summary
Table
of
QA/
QC,
which
lists
criteria
as
less
than
30
seconds.
(
0005)

Response:
The
listing
in
the
QA/
QC
table
is
a
mistake.
There
is
no
minimum
system
response
time.

432.
Comment:
The
section
implies
that
the
purge
time
is
equal
to
2x
the
system
response
time.
However,
the
system
response
time
is
a
direct
function
of
the
amount
of
time
needed
to
purge
a
given
system,
meaning
that
the
purge
time
is
equal
to
the
system
response
time.
(
0038)

Response:
We
do
not
mean
to
imply
that
purge
time
is
two
times
the
system
response
time
by
definition,
but
that
the
time
chosen
to
purge
the
system
before
the
test
should
be
twice
the
system
response
time.

433.
Comment:
For
multi­
point
traverses,
are
the
data
accumulated
between
sample
points
(
i.
e.
due
to
purging/
system
response
time)
also
counted
in
the
final
run
average?
If
not,
then
this
further
adds
to
the
confusion
of
the
method,
and
is
beyond
the
data
handling
capability
of
most
data
acquisition
systems.
(
0038)

Response:
Normally,
the
data
accumulated
between
sample
points
is
not
counted
in
the
final
run
average,
but
there
is
flexibility
in
this.
The
requirements
we
are
adding
to
explain
representative
sampling
should
not
impose
significant
burdens
on
current
data
acquisition/
handling
systems
which
should
already
be
designed
to
handle
purge
times
and
multipoint
testing.
Many
of
these
systems
are
used
in
RATA
tests
which
impose
the
same
requirements.

Section
13.4
Analyzer
Calibration
Error
434.
Comment:
The
zero
gas
specification
is
not
reflected
in
Section
8.2.3
(
0032).
There
is
no
zero
gas
and
should
not
be
a
separate
requirement.
The
low­
level
gas
can
be
gas
with
no
pollutant
in
it.
(
0035)

Response:
The
commenter
is
correct.
Mention
of
zero
gas
in
this
section
has
been
removed.

435.
Comment:
By
revising
the
specification
from
2
percent
of
span
to
2
percent
of
the
manufacturer
certified
concentration,
EPA
has
made
the
method
more
stringent.
Does
EPA
have
documentation
that
supports
this
change?
Are
present­
day
instruments
capable
of
meeting
2
percent
of
the
low­
level
(
mid­
level?)
gas
concentration?
Gases
are
typically
only
good
to
1­
2
percent
themselves,
making
this
criterion
impossible
to
meet.
In
addition,
if
using
Method
205
dilution,
small
errors
will
make
it
difficult
to
meet.
Please
see
the
examples
and
explanations
presented
in
the
Calibration
Error
Worksheet
in
Attachment
2.
(
0011,
0026,
0027,
0032,
0056)

Response:
We
agree
with
the
points
presented
by
the
commenters
and
have
amended
the
analyzer
calibration
error
limit
to
2.0
percent
of
the
calibration
span.

436.
Comment:
Why
was
the
"
0.25
percent
of
analyzer
upper
range
limit
used
rather
than
"
0.25
percent
of
span?
This
may
be
too
tight
to
meet.
The
upper
range
limit
is
an
undefined
variable
(
anything
greater
than
5
percent
of
the
span
level
concentration).
If
EPA
is
trying
to
use
a
fixed
value,
then
the
calibration
error
should
also
be
based
on
the
emission
limit.
Note
that
calibration
gases
have
a
range
and
the
absolute
amount
can
vary
depending
whether
the
upper
or
lower
limits
of
the
range
are
selected.
Note
also
that
the
equivalent
ppm
concentration
varies
with
the
O2
concentration.
Perhaps
EPA
should
select
a
standard
O2
concentration
for
each
emission
limit
in
order
to
fix
the
level
(
0032,
0012).

Response:
The
0.25
percent
specification
and
mention
of
a
zero
gas
have
been
deleted
from
the
final
method.

437.
Comment:
The
error
limit
for
the
zero
gas
is
too
difficult,
and
is
less
than
the
0.5
percent
resolution
required
for
the
data
recorder.
I
don't
understand
the
use
of
0.25
percent
for
the
zero
gas
and
2.0
for
the
low
level,
which
can
be
zero.
(
0009,
0031,
0035)

Response:
See
the
response
to
Comment
436.

438.
Comment:
Remove
the
words
upper
and
limit.
The
word
range
is
defined
and
is
all
that
should
be
used
here.
(
0013)

Response:
See
the
response
to
Comment
436.

Section
13.5
Sampling
System
Bias
439.
Comment:
The
last
sentence
of
Section
13.5
and
13.8
states
that
the
provision
for
low­(
concentration)
standard
facilities
is
valid
only
for
tests
completed
within
3­
years
of
the
effective
date
of
these amendments'
promulgation.
We
applaud
the
offering
of
alternative
provisions,
but
why
are
they
only
valid
for
3
years?
The
EPA
should
explain
what
happens
after
the
three­
year
period
and
what
efforts
are
being
undertaken
to
develop
a
permanent
standard.
Operating
permits
based
on
such
provisions
may
be
impossible
to
meet
if
EPA
fails
to
take
appropriate
actions
within
the
three­
year
period.
We
believe
this
is
very
risky
to
both
EPA
and
industry,
since
it
suggests
that
in
3
years,
measurement
technology
will
have
improved
significantly
or
that
EPA
is
now
preparing
a
replacement
provision.
We
have
difficulty
believing
either
one.
(
0009,
0012,
0013,
0025,
0031,
0032,
0035,
0038)

Response:
We
have
removed
the
3­
year
limit
on
using
the
alternative
bias
and
dynamic
spike
recovery
limits
for
low­
standard
facilities.
440.
Comment:
The
term
"
emission
standard"
is
not
defined.
(
0012)

Response:
Emission
standards
are
not
used
in
the
calculations
in
the
final
rule.

441.
Comment:
There
is
a
conflict
between
thresholds
to
qualify
for
low­
emitter
treatment
in
(
1)
preamble
of
manufacturer
stability
test;
(
2)
the
sampling
system
bias
check
in
Section
13.5;
and
(
3)
the
interference
check
of
Table
7E­
3.
Is
this
intentional
and
how
were
they
determined?
Also
note
a
switch
in
the
term
low­
emitter
to
low­
standard.
(
0025)

Response:
The
noted
low­
level
thresholds
differ
because
they
came
from
different
sources.
We
did
not
adjust
these
concentrations
to
make
them
consistent
because
we
did
not
know
if
the
data
behind
each
threshold
would
support
such
a
change.

Section
13.6
Interference
Check
442.
Comment:
The
upper
range
limit
should
be
replaced
with
emission
limit.
If
the
upper
range
limit
is
twice
that
of
the
emission
limit,
then
the
amount
of
interference
response
should
be
5.0
percent.
The
use
of
the
upper
range
limit
should
be
investigated
further
before
promulgation.
Maybe
this
should
be
"
range"
or
"
upper
range"
instead
of
"
upper
range
limit?"
(
0013,
0032,
0036)

Response:
We
agree
that
"
upper
range
limit"
is
ambiguous
as
used.
This
has
been
replaced
with
"
calibration
span"
in
the
final
rule.

443.
Comment:
A
more
sensible
requirement
would
be
to
state
that
the
analyzer
must
meet
interference
rejection
ratios
for
certain
gases.
For
example,
30,000:
1,
for
every
30,000
ppm
of
CO2
the
analyzer
response
is
influenced
by
1
ppm
of
NOx.
(
0011)

Response:
Verified
interference
rejection
ratios
may
be
used
to
show
that
an
instrument
meets
the
requirements
of
the
interference
check.
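The rejection-ratio idea in the comment is easy to make concrete: the ratio fixes how much apparent analyte response a given interferent concentration produces. A short sketch using the commenter's 30,000:1 example (names are illustrative):

```python
def interference_response_ppm(interferent_ppm, rejection_ratio):
    """Apparent analyte response caused by an interferent gas.

    With a 30,000:1 rejection ratio, every 30,000 ppm of CO2 shifts the
    NOx reading by about 1 ppm, per the example in the comment.
    """
    return interferent_ppm / rejection_ratio

# 12 percent CO2 (120,000 ppm) against a 30,000:1 rejection ratio
print(interference_response_ppm(120000, 30000))  # 4.0 ppm apparent NOx
```

Comparing this apparent response against the interference-check limit is one way a verified rejection ratio could demonstrate compliance, as the response allows.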

Section
13.8
Alternative
Dynamic
Spike
Check
(
ADSC)

444.
Comment:
Why
is
the
ADSC
subject
to
the
manufacturer's
stability
test
while
it
is
not
required
for
all
analyzers
in
Section
6.1.8?
If
the
analyzer
is
unstable
it
will
not
be
able
to
pass
the
performance
specifications
for
the
before
and
after
spike
recoveries
(
0032).

Response:
We
have
dropped
the
proposed
requirement
that
an
analyzer
must
be
certified
through
the
manufacturer's
stability
test
before
the
interference
check
and
pre­
and
post­
run
system
bias
tests
are
waived
in
the
dynamic
spike
check.

445.
Comment:
The
0.2
ppm
provision
is
only
helpful
below
a
standard
of
2
ppm.
(
0035)

Response:
The
0.2
ppm
provision
takes
effect
at
spike
concentrations
at
or
below
2
ppm.
We
believe
this
offers
fair
relief
at
very
low
concentration
measurements.
446.
Comment:
Does
the
last
sentence
refer
to
the
ADSC,
the
previous
sentence,
or
none
of
the
above?
(
0035)

Response:
The
last
sentence
in
Section
13.8
referred
to
the
previous
sentence
which
discusses
the
0.20
ppmv
difference
in
the
calculated
and
measured
spike
values.

16.0
Alternative
Procedures
447.
Comment:
I
do
not
understand
these
procedures
or
where
they
came
from,
and
there
should
be
more
detail.
(
0004)

Response:
We
have
rewritten
the
dynamic
spike
check
procedures
to
make
them
more
understandable.

Section
16.1
Dynamic
Spiking
Procedure
448.
Comment:
The
procedure
does
not
specify
where
the
spike
gases
are
to
be
injected...
directly
at
the
analyzer
or
at
the
probe?
(
0025)

Response:
The
spike
gas
is
introduced
in
calibration
mode.
Method
7E
now
states
that.

449.
Comment:
Is
it
the
intention
to
add
spike
gas
different
from
the
native
gas
or
the
same
as
the
native
gas?
The
example
in
16.1.3
and
the
directions
in
16.1.2
and
Equation
7E­
2
portray
different
concepts.
(
0041)

Response:
The
spike
gas
is
the
same
as
the
native
gas.

450.
Comment:
Spiking
will
not
work
for
any
source
with
significant
variability,
but
may,
on
the
other
hand,
work
fine
for
duplicate
integrated
samples.
In
other
words,
spiking
is
not
a
good
idea
for
instrumental
methods,
but
will
work
with
manual
sampling.
There
is
no
mention
of
compounds
other
than
SF6
(
which
insinuates
that
SF6
is
the
only
way
to
go),
no
appropriate
QA/
QC
procedures
are
mentioned,
and
that
the
uncertainty
and
vagueness
of
this
procedure
is
vast.
(
0039)

Response:
We
note
in
Section
16.1
that
best
results
are
obtained
for
this
procedure
when
source
emissions
are
steady
and
not
varying.
Fluctuating
emissions
may
render
this
alternative
procedure
difficult
to
pass.
We
disagree
that
spiking
is
not
a
good
idea
for
instrumental
methods.

Section
16.1.1
(
untitled
section)

451.
Comment:
This
section
should
only
require
that
the
tester
follow
its
standard
written
analyte
spike
procedure.
As
written,
it
requires
that
one
must
certify
that
they
have
followed
a
written
procedure
and
have
demonstrated
ability
within
the
last
calendar
year
to
operate
the
spiking
system.
Who
determines
what
is
acceptable?
Do
we
apply
for
acceptance
in
every
state?
Specific
recovery
and
precision
criteria
are
included
for
a
30­
minute
demonstration
test
including
the
recovery
of
100
±
5
percent
of
the
mass
with
a
RSD
less
than
5
percent.
The
EPA
has
provided
no
basis
for
the
arbitrary
criteria
which
are
obviously
intended
to
limit
the
selection
of
analytical
techniques.
This
is
totally
inappropriate
for
inclusion
in
a
test
method.
The
analyte
spiking
method
is
self­
certifying.
If
the
performance
requirements
are
met,
then
the
individual
obviously
has
demonstrated
that
they
can
conduct
the
procedure.
Testers
do
not
need
to
certify
that
they
can
conduct
the
sampling
system
bias
tests
or
other
procedures
within
the
method
(
0032,
0035,
0039).

Response:
We
have
dropped
the
proposed
requirement
that
testers
certify
that they followed
a
written
procedure
and
their
proficiency
with
the
procedure
within
the
last
calendar
year.
The
method
now
only
requires
the
tester
to
follow
a
written
procedure.
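The demonstration criteria quoted in the comment (mean recovery of 100 ± 5 percent with a relative standard deviation under 5 percent) can be checked with a few lines. This is a sketch of the proposal's criteria as the comment describes them, using our own names; the final rule dropped the certification requirement itself.

```python
import statistics

def spike_recovery_ok(recoveries_pct, target=100.0, tol=5.0, max_rsd=5.0):
    """Check replicate spike recoveries against the proposed criteria.

    recoveries_pct: percent recoveries from replicate spikes.
    Passes if the mean is within target +/- tol and the relative
    standard deviation (100 * stdev / mean) is below max_rsd.
    """
    mean = statistics.mean(recoveries_pct)
    rsd = 100.0 * statistics.stdev(recoveries_pct) / mean
    return abs(mean - target) <= tol and rsd < max_rsd

# Three replicate spikes recovering 98, 101, and 103 percent pass
print(spike_recovery_ok([98.0, 101.0, 103.0]))  # True
```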

Section
16.1.2
Spiking
procedure
requirements
452.
Comment:
This
section
has
numerous
problems
and
should
be
completely
reworked.
It
states
that
the
analyte
spikes
must
be
added
at
two
(
concentration)
levels
so
that
the
levels
of
spike
added
are
1
to
2
times
the
native
mass
and
5
to
10
(
typo?)
times
the
native
mass.
One
does
not
add
mass
per
se.
The
spike
levels
are
read
in
terms
of
concentration.

Additionally,
this
section
states
that
the
pre­
test
spike
does
not
have
to
be
performed
if
one
can
provide
a
valid
certification
that
the
analyzer
has
been
shown
to
meet
the
manufacturers
stability
test
in
Section
16.2.
The
manufacturer's
stability
test
in
no
way
can
substitute
for
an
analyte
spike.
The
requirement
should
be
to
do
both
pre­
and
post
test
analyte
spikes.

Finally,
this
section
states
that
to
use
the
analyte
spike
option,
you
must
document
and
confirm
that
during
the
entire
test
you
operated
within
the
ambient
temperature
and
pressure
and
voltage
ranges
certified
by
the
manufacturer.
You
must
also
list
all
manufacturer
fault
and
alarm
codes
and
identify
any
that
were
activated
during
the
test.
This
makes
the
analyte
spike
completely
impractical.
How
are
such
data
to
be
recorded?
The
only
way
is
manually.
What
does
this
prove?
If
the
analyzer
cannot
pass
the
analyte
spike
criteria
there
is
a
problem.
Perhaps
it
is
with
pressure,
voltage
etc.,
but
failure
of
the
spike
recovery
criteria
is
the
proof.
If
the
spike
passes,
then
the
results
should
be
acceptable
even
if
the
analyzer
was
operated
hotter
or
colder
than
the
manufacturer's
certification.
Continuous
measurement,
recording
the
power
voltage,
etc.,
just
adds
more
work
and
expense
without
significant
benefit.
We
recommend
removing
this
section.
(
0016,
0032)

Response:
We
agree
with
the
commenters'
recommendations.
We
have
corrected
the
spiking
language
to
reflect
concentration
instead
of
mass.
We
have
dropped
the
option
to
skip
the
pre­
test
spike
if
the
manufacturer's
stability
test
is
performed.
We
have
also
dropped
the
proposed
requirement
to
document
and
confirm
the
test
operated
within
the
parameter
certified
by
the
manufacturer
along
with
the
listing
and
reporting
of
manufacturer
fault
and
alarm
codes.

453.
Comment:
"2 time" should be "2 times."
(
0032).

Response:
This
has
been
corrected.
454.
Comment:
The
section
states
that
if
an
analyzer
meets
the
MST,
the
pre­
test
spike
is
waived.
Since
the
Summary
Table
says
the
MST
is
mandatory,
it
seems
all
analyzers
could
waive
the
pre­
test
spike,
making
the
pre­
test
spike
language
unnecessary,
and
the
pre­
test
spike
recovery
requirement
of
100
±
5
percent
irrelevant.
(
0025)

Response:
See
the
response
to
Comment
504.
The
QA/
QC
summary
table
has
been
revised
accordingly.

455.
Comment:
EPA
proposes
to
require
an
analyzer
MST
as
per
Section
16.2
when
dynamic
spiking
is
used,
which
is
appropriate
for
a
new
analyzer
in
a
stable
environment,
but
is
practically
meaningless
for
stack
testing,
where
analyzers
are
continually
shipped
across
the
country
and
exposed
to
many
environments.
(
0016)

Response:
The
MST
was
not
meant
to
be
required
before
using
the
dynamic
spike
check
in
the
proposal,
but
was
optional.
It
is
still
optional;
however,
it
cannot
substitute
for
the
pre­
test
spike
as
allowed
in
the
proposal.

456.
Comment:
It
is
not
clear
how
the
passing
criteria
in
Sections
16.1.2
and
16.1.3
relate
to
the
performance
criteria
in
Section
13.8.
(
0025)

Response:
Sections
16.1.2
and
16.1.3
have
been
revised
to
remove
the
ambiguity.

457.
Comment:
Shouldn't
"
between
5
and
1
times"
be
"
between
5
and
10
times?
As
stated,
the
two
intervals
overlap.
The
EPA
could
not
have
intended
that
such
high
concentrations
of
spike
gas
are
needed
to
conduct
the
ADSC.
If
the
native
gas
concentration
doubles,
so
does
the
range
of
the
spike
gas.
(
0025,
0032,
0035,
0041)

Response:
It
should
have
read
"
between
5
and
10
times."
We
have
revised
the
two
spike
levels
used.
The
two
spikes
must
now
bracket
the
average
measured
pollutant
concentration.

Section
16.1.3
Example
spiking
procedure
using
a
tracer
gas
458.
Comment:
Equation
7E­
17
is
upside
down.
Dilution
implies
a
number
less
than
one.
A
dilution
factor
is
usually
expressed
as
10:
1
or
less
for
analyte
spikes
(
0032).

Response:
The
example
spiking
procedure
now
uses
different
equations
for
spike
gas
concentration
and
spike
recovery
than
were
used
in
the
proposal.
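The commenter's point that dilution implies a number less than one can be illustrated with the usual flow-ratio form of a spike dilution factor. This is a hedged sketch with illustrative names, not the proposal's Equation 7E-17 or the final rule's replacement:

```python
def spike_dilution_factor(q_spike, q_total):
    """Dilution factor for a dynamic spike, written so it is < 1.

    q_spike: spike gas flow rate
    q_total: total flow rate through the sampling system
    (same units for both). The factor is the fraction of the sampled
    stream that is spike gas.
    """
    return q_spike / q_total

# Spiking 0.5 L/min into a 5.0 L/min total sample flow
print(spike_dilution_factor(0.5, 5.0))  # 0.1
```

Multiplying the certified spike gas concentration by this factor gives the expected contribution of the spike to the measured concentration, which is the quantity a spike-recovery comparison needs.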

459.
Comment:
The
equations
for
the
spike
test
passing
criteria
for
low
emitters
are
the
only
equations
not
included
in
Section
12,
and
perhaps
Eq
7E­
17
and
18
could
be
moved
to
Section
12.
(
0025)

Response:
The
dynamic
spike
equations
have
been
revised
since
the
proposal
and
have
been
moved
to
Section
12.
460.
Comment:
How
do
we
measure
SF6
in
the
sampling
system?
Why
not
use
CO/
SO2
for
SO2
and
NOx/
SO2
or
some
other
easily
measured
compound?
See
Air­
Tech
SBIR
proposal.
(
0035)

Response:
The
proposed
spiking
system
using
a
tracer
gas
has
been
dropped.
The
final
rule
gives
an
example
spike
procedure
that
uses
the
measured
pollutant
as
the
spike
gas.
Other
procedures
that
meet
the
performance
requirements
are
also
acceptable.

Section
16.2
Manufacturer's
Stability
Test
461.
Comment:
EPA
has
provided
no
technical
basis
or
rationale
to
justify
anything
included
in
Section
16.2
or
Table
7E­
5
and
all
of
this
should
be
deleted
(
0032).

Response:
Under
the
Summary
of
the
preamble
to
the
proposed
notice,
we
noted
that
"
We
are
also
proposing
to
add provisions
for
sampling
at
low
concentrations."
We
have
added
the
provisions
of
the
MST
for
instruments
that
routinely
measure
emission
concentrations
less
than
15
ppmv.

Section
16.3
Annual
Primary
Interference
Gas
Recheck
462.
Comment:
Why
are
the
criteria
between
the
initial
and
annual
interference
tests
so
different?
Looser
criteria
will
lead
to
a
failure
of
the
annual
check.
(
0035)

Response:
The
proposed
annual
primary
interference
gas
recheck
has
been
dropped.

Tables
&
Figures
463.
Comment:
In
Table
7E­
1,
Cv
should
replace
A,
Cdir
should
replace
B,
the
equation
for
difference
should
be
Cdir­
Cv,
and
the
percent
difference
equation
should
be
(
Cdir­
Cv)/
Cv
*
100
(
see
Equation
7E­
1).
The
absolute
value
signs
are
not
needed.
The
equation
for
calculating
percent
difference
is
incorrect,
and
needs
to
be
divided
by
A
to
be
consistent
with
the
language
of
the
method.
Actual,
not
absolute
value
terms
should
be
used,
since
it
gives
an
indication
of
the
sign
of
the
error,
which
is
important.
(
0013,
0026,
0039)

Response:
The
commenters'
recommendations
are
technically
correct.
However,
because
we
are
adding
a
separate
calculation
to
the
final
rule
for
system
calibration
for
dilution
sampling
systems,
we
have
retained
the
proposed
general
terminology
which
precludes
our
having
to
list
both
sets
of
terminology.
The
absolute
value
signs
have
been
dropped.

464.
Comment:
What
is
the
required
balance
gas
in
Table
7E­
3,
nitrogen
or
zero
air?
In
Table
7E­
4,
what
is
the
total
difference
used
for?
Where
did
the
2.5
percent
come
from?
(
0013,
0035)

Response:
The
balance
gas
may
be
either
nitrogen,
zero
air,
or
some
other
appropriate
balance
gas.
465.
Comment:
Table
7E­
5
is
not
described
anywhere
else
in
the
method.
What
are
the
requirements?
(
0013)

Response:
Table
7E­
5
is
explained
in
the
cited
sections
of
40
CFR
53.
In
the
final
rule,
more
explanation
of
the
test
parameters
is
given
and
the
table
has
been
simplified.

466.
Comment:
Figure
7E­
2;
arrows
at
the
bottom
means
you
never
finish
a
test
run?
(
0035)

Response:
We
have
added
a
note
to
Figure
7E­
2
to
"
Continue
until
test
is
completed"
at
the
"
Proceed
to
next
run"
step.

3.17
Specific
Comments
on
Method
10
General
Comments
467.
Comment:
The
analyzer
calibration
error
test,
sampling
system
bias
test,
and
the
calibration
gases
now
required
in
Methods
3A,
6C,
and
7E
should
be
utilized
for
Method
10.
(
0021)

Response:
These
tests
and
calibration
gases
are
now
required
in
Method
10.

468.
Comment:
Most
NDIR
instruments
are
not
stable
below
5
ppm.
What
will
this
method
do
for
that?
(
0035)

Response:
We
have
removed
the
requirement
that
analyzers
be
strictly
NDIR.
Current
state­
of­
the­
art
infrared
analyzers
are
capable
of
part­
per­
billion
measurements.

469.
Comment:
For
those
sources
testing
for
CO
at
low
limits
(
e.
g.
<
10
ppm),
it
has
been
our
experience
that
using
a
set
of
calibration
gases
in
the
range
of
10
ppm
has
been
problematic.
This
is
due
to
analyzer
drift
and
stability,
as
well
as
calibration
gas
performance.
In
these
cases,
using
a
higher
span/
range
has
been
necessary,
and
was
done
to
improve
analyzer
performance,
not
to
make
the
performance
specification
easier
to
pass.
(
0038)

Response:
State­
of­
the­
art
infrared
analyzers
should
offer
a
stable
response
even
at
concentrations
below
10
ppm.

Section 2.0 Summary of Method

470. Comment: The vast majority of discrete CO analyzers currently sold and used are based on Gas Filter Correlation Infrared (GFCIR) spectroscopy. In other cases, CO measurements are made using extractive FTIR systems. These analytical techniques are the contemporary methods and should be included in the summary of the method. (0032)

Response: We have removed the reference to NDIR in the Summary of Method. This gives the tester freedom in choosing the analyzer type without precluding the use of acceptable technologies.

471. Comment: Reference to specific technology should be removed from the method. (0009, 0054)

Response: See the response to Comment 470.

Section 3.0 Definitions

472. Comment: Section 3.1 references Method 7E for the interference check, but Section 4.0 seems to give a new one. Do we do both? (0035)

Response: Any apparent contradiction between proposed Sections 3.1 and 4.0 has been clarified in the final rule. The interference check described in Method 7E is the only one required.

Section 4.0 Interferences

473. Comment: The statement beginning this section, "Any substance having a strong absorption of infrared energy will interfere to some extent," is blatantly wrong. It is incomprehensible how such a statement could be published in this decade by a group within EPA that also relies on FTIR measurement in other methods. (0032)

Response: Without being too technical: since the mode of detection with NDIR is absorption of infrared radiation by the sample, it logically follows that any extraneous substance in the sample that strongly absorbs infrared light can, and possibly will, interfere to some extent. The exact wording in the proposal may not have reflected this precisely, and we have tempered the statement in the final rule to say "substances having a strong absorption of infrared energy may interfere to some extent in some analyzers."

474. Comments: The use of ascarite to remove CO2 interferences was appropriate for 1970s-vintage NDIR analyzers but was outdated by the mid-1980s due to improvements in interference rejection in NDIR analyzers. The use of ascarite and silica gel interference traps for GFCIR or FTIR systems is entirely inappropriate, and it should be specified that they are not required. This section and the interference check procedures specified in Section 7.2 and elsewhere must be completely reworked and updated consistent with contemporary instrumentation and basic scientific principles. If the scrubber trap remains in the regulations, it should be clearly stated that it is only required for instruments that fail the interference check specifications. What about no traps at all when the CO emissions are well within the limitations? Carbon dioxide removal corrections are not necessary if system bias corrections are performed. (0034, 0038, 0043, 0048)

Response: The final rule states that instrumental correction may be used to compensate for interferences. We mention silica gel and ascarite traps only as one way to eliminate the interferences. The system bias check does not account for sample volume reduction due to CO2 absorption and is not a substitute for this correction.

475. Comment: Section 4.0 contains a table which presents analyzer ranges and interference ratios. The table shows the interference ratio as higher when the measuring device has a low range (0-100 ppm); actually, the interference ratio is less. The device range 1500-3000 ppm shows 3.5 percent H2O per 7 ppm CO, and the 0-100 range shows 3.5 percent H2O per 25 ppm. In the first case, for every 35,000 ppm of H2O the analyzer would show 7 ppm CO, a ratio of 5,000:1. In the latter case, for every 35,000 ppm of H2O the analyzer would show 25 ppm CO, a ratio of 1,400:1. Current CO analyzers using NDIR principles have much better interference ratios. Depending on the optical filter arrangement and type of detector (cross-flow modulation, Luft, gas filter correlation, dual cell, single cell), a minimum interference ratio of 10,000:1 for both H2O and CO2 can be expected. Most models of analyzers have interference ratios as high as 250,000:1 for CO2 and 50,000:1 for H2O. (0011)

Response: It appears the interference ratios proposed in the table were outdated. We have therefore dropped the interference ratio table from the final rule.
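
The ratio arithmetic quoted in Comment 475 can be checked directly. The short sketch below is illustrative only — it is not part of any test method, and the function name and structure are our own:

```python
def interference_ratio(interferent_ppm, indicated_co_ppm):
    """Ratio of interferent concentration to the false CO reading it causes (X:1)."""
    return interferent_ppm / indicated_co_ppm

# 3.5 percent H2O expressed in ppm
h2o_ppm = 0.035 * 1_000_000  # 35,000 ppm

# 1500-3000 ppm analyzer range: 3.5 percent H2O reads as 7 ppm CO
print(interference_ratio(h2o_ppm, 7))   # 5000.0, i.e., 5,000:1

# 0-100 ppm analyzer range: 3.5 percent H2O reads as 25 ppm CO
print(interference_ratio(h2o_ppm, 25))  # 1400.0, i.e., 1,400:1
```

Both figures match the commenter's calculation, confirming that the lower analyzer range corresponds to the poorer (smaller) interference ratio.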

476. Comments: Unlike the current version of Method 10, no equation or guidance is given to calculate the correction from use of ascarite traps or instrument correction. It is unclear whether the device range/interference ratio table is a correction factor table or simply an information table. (0006, 0038)

Response: The CO2 correction equation is included in the final method, but the table of example interference ratios is not. This table was included in the proposal for informational purposes but has been dropped from the final method because the information was shown to be outdated.

477. Comment: It should be stated that the interference test must be performed and tabulated as shown in Method 7E, Tables 7E-3 and 7E-4. (0054)

Response: This is stated in Section 8.3.

Section 6.0 Equipment and Supplies

478. Comment: Sampling and analytical procedures for integrated sampling are absent. No storage and transport criteria are listed. The only references to integrated sampling are in Section 6.0 and Figure 10-2. (0020)

Response: We have added a new Section 8.4.2 to discuss integrated sampling.
479. Comment: Figure 10-1 should be expanded to be consistent with Figure 7E-1, and should include the calibration gas injection location, sample pump location, sample gas manifold, bypass vent, etc. As it now is, it would be more appropriately labeled as the conditioning system for the continuous sampling train. (0026, 0035, 0043)

Response: Figure 10-1 has been dropped in favor of citing Figure 7E-1 of Method 7E.

480. Comment: Figures 10-1, 10-2, and 10-3 are not indicative of normal practice for CO measurement systems. The moisture removal systems are typically a chilled coil condenser type, not an impinger train containing silica gel and ascarite. A figure similar to Figure 7E-1 of Method 7E should be provided. (0011, 0038)

Response: We have removed Figures 10-1 and 10-3 but have retained Figure 10-2, since it pertains to integrated bag sampling. We have redesignated Figure 10-2 as Figure 10-1.

Section 6.1 What do I need for the measurement system?

481. Comment: As with other methods, the sampling system requirements adopted by reference to Method 7E should be modified to reflect the fact that CO is a non-reactive and non-soluble gas. (0032)

Response: Section 6.1 of Method 10 notes that the requirements in Method 7E to use stainless steel, Teflon, or non-reactive glass filters do not apply. It also notes that a heated sample line is not required to transport dry gases or for systems that measure the CO concentration on a dry basis.

482. Comment: The word "particulate" is misspelled. (0013)

Response: This has been corrected.

Section 6.2 CO Analyzer

483. Comment: The word "principal" is misspelled and should be deleted. (0013, 0032)

Response: This has been done.

Section 7.1 Calibration Gas

484. Comment: This section does not list the allowed calibration gas combinations, like those shown in Methods 3A and 6C. (0038)

Response: Section 7.1 refers to Section 7.1 of Method 7E, which states that blended gases meeting the Traceability Protocol are allowed if the additional gas components are shown not to interfere with the analysis. The tester has the flexibility to tailor the gas combinations as needed for the test.

Section 7.2 Interference Check

485. Comment: Gas analyzer manufacturers have done extensive research, development, and testing prior to manufacturing any analyzer. The manufacturers incorporate components such as optical filters, gas filter correlation, and other techniques to create an analyzer that performs to a specific need with minimal interference. The available CO analyzers have already been rigorously evaluated under controlled conditions for their performance. It is unnecessary to repeat gas interference checks on analyzers that have been carefully researched, developed, and tested by the manufacturers.

The two prime interfering flue gases for measuring CO with NDIR technology are CO2 and H2O. Water vapor can be easily removed with a moisture removal system. The dew point of a gas at 40 °F and 29.50 inches Hg calculates to an H2O concentration of 80 ppm. This is insignificant and would not even affect the CO measurements. Most moisture removal systems achieve a 40 °F dew point of the outlet sample gas. An interference ratio of 250,000:1 for CO2 is typical in most CO NDIR analyzers. It would require a concentration of 25 percent CO2 to display 1 ppm on the CO analyzer.

There really is no need to conduct interference checks on analyzers as long as the analyzer manufacturer has conducted and discloses the interference ratios in the analyzer operation manual. This method should require at a minimum that the interference ratios for CO2 and H2O be at least 10,000:1, as this will provide a good margin of safety. (0011)

Response: The interference test may be designed and conducted by the manufacturer. Manufacturer verification that potential interferences do not exceed 2.5 percent of the calibration span through the use of interference ratios is an acceptable fulfillment of the interference check requirements.
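
The 25 percent CO2 figure quoted in Comment 485 follows from the same interference-ratio arithmetic. This sketch is illustrative only, with names of our own choosing; it is not part of the method:

```python
def required_interferent_ppm(indicated_co_ppm, ratio):
    """Interferent concentration (ppm) needed to produce a given false CO
    reading on an analyzer with an X:1 interference ratio."""
    return indicated_co_ppm * ratio

# 250,000:1 CO2 ratio: CO2 needed to display 1 ppm CO
co2_ppm = required_interferent_ppm(1, 250_000)  # 250,000 ppm
print(co2_ppm / 10_000)  # 25.0, i.e., 25 percent CO2
```

This confirms the commenter's arithmetic: at a 250,000:1 ratio, only a CO2 concentration of 25 percent would register as 1 ppm CO.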

486. Comment: The prescribed interference tests are not appropriate procedures for contemporary GFCIR CO analyzers. The procedures and referenced test constituents are completely inappropriate for FTIR systems, which are in fact designed to measure some of the alleged interfering gases. EPA should completely rework this section to be consistent with common analytical techniques now in use. (0032)

Response: The test gases listed in Table 7E-3 of proposed Method 7E are now listed as example test gases. Section 7.2 now requires the use of the appropriate test gases listed in Table 7E-3 (i.e., potential interferents, as identified by the instrument manufacturer) to conduct the interference check.

487. Comment: There is no need to conduct interference checks on analyzers as long as the manufacturer has conducted and discloses the interference ratios. (0011)

Response: The use of interference ratios to show that interference effects are accounted for is acceptable.

488. Comment: The reference to Table 7E-5 is incorrect; it should refer to Table 7E-3. (0011, 0013)

Response: This has been corrected.
489. Comment: In the last line, the reference should be to Table 7E-4, not Figure 7E-8. (0013)

Response: This error has been corrected by abbreviating the section and referencing Section 7.2 of Method 7E.

490. Comment: One of the sentences is wrongly worded. It should be "CO with and without interfering gas." (0035)

Response: This has been remedied by abbreviating the section and referencing Section 7.2 of Method 7E.

Section 8.1 Sampling Site and Sampling Points

491. Comment: Carbon monoxide stratification tests using stainless steel at high temperatures can be biased. Two probes/instruments are needed to account for temporal changes. (0035)

Response: This approach to alleviating biases is acceptable.
492. Comment: This section does not cover the sampling procedure illustrated in Figure 10-2. (0039)

Response: A new Section 8.4.2 for integrated sampling has been added to the method.

Section 8.3 Sample Collection

493. Comment: This should refer to Section 8.4, not 8.1. (0013)

Response: This has been clarified in the final rule.

Section 16.0 Alternative Procedures

494. Comment: This section refers to the manufacturer's stability test and spiking. CO is not subject to scrubbing, and any interferences for CO are usually high instead of low, which spiking cannot identify. (0039)

Response: The manufacturer's stability test is a valuable indicator of instrument stability under ambient conditions and low-concentration measurements. We are allowing different technologies for the analysis of CO and believe some testers will desire to use the dynamic spike option.

Table 10-1

495. Comment: What is Table 10-1 and what is it used for? (0035)

Response: The interference table gave example interference ratios for specific interferences at different CO analyzer ranges. The table has been dropped from the final rule.

496. Comment: Table 10-1 should only be used for the integrated bag method. The units of cfm in the table are likely high for this method; perhaps it should read cfh or lpm. (0039)

Response: A caption has been added to Table 10-1 to note that it is for integrated sampling. The units have been changed to Lpm.

497. Comment: The field data listed in the table (i.e., rotameter reading) are not applicable to the instrumental analyzer method for CO. (0038)

Response: Table 10-1 is the data sheet for integrated bag sampling, not continuous sampling.

3.18 Specific Comments on Method 20

General Comments

498. Comment: For Method 20 traverses, would stratification tests now have to be performed at each load level, or would the low-load O2 traverse still be sufficient for determining sample points? Also, if the stratification test passes the 5 percent criterion, would one sample point be sufficient? (0028)

Response: Method 20 traverses may be performed at one load unless otherwise specified in an applicable regulation. When the Method 7E stratification test is used, if the 5 percent criterion is met, a single sample point is sufficient.

499. Comment: Incorporating Method 7E into Method 20 clarifies long-standing concerns and confusion with Method 20, and we are in agreement with this revision. Our only suggestion is to include a statement at the beginning of Method 20 that clarifies that the testing requirements can be met by using Method 7E. We appreciate the fact that Method 20 is virtually eliminated and now incorporates Method 7E by reference. The current Method 20 is significantly flawed and somewhat confusing when compared to the current Method 7E. (0033, 0038)

Response: Section 2.0 of Method 20 notes that NOx is measured using Method 7E.

500. Comment: Method 7E replaces Method 20 but does not allow for sulfur analysis of gas samples. Why? (0035)

Response: The reference to sulfur analysis in fuels has been retained. Sulfur analysis is also addressed in the applicable regulation (e.g., Subpart GG of Part 60).

501. Comment: The system bias and interference limits should be 5 percent and 2 percent, adding absolute limits for low-NOx measurements. (0038)

Response: The system bias and interference limits are referenced in Section 13 of Method 7E, and absolute limits have been added for low measurements.

502. Comment: The revised Method 7E only allows the use of chemiluminescence monitors, so the Method 20 discussion regarding interference check procedures for other types of monitors is unnecessary and should be deleted. (0038)

Response: Method 7E only lists chemiluminescence as an example mode of detection, not a required type of analyzer.

Section 1.2 Applicability

503. Comment: Section 1.2 appears to be a copy and paste of Section 1.2 from Method 6C. This section alludes to Method 6C when it should reference Method 20. (0009, 0025)

Response: You are correct. This section has been corrected.

Section 8.1 Sampling Site and Sampling Points

504. Comment: The stratification test proposed in Method 7E and applied to Method 20, which replaces the oxygen traverse in favor of the generalized procedure, while apparently logical and easier, will not work well for a number of existing simple cycle combustion turbines. Although less common than it used to be, there are still large numbers of simple cycle combustion turbines that have significant diluent stratification. Method 20 was originally designed to accommodate this kind of diluent stratification, which is still experienced with some simple cycle combustion turbines, by testing at the lowest points of oxygen.

We recommend changing the stratification test and point requirements to be consistent with the requirements of Part 75. For Method 20, as it applies to simple cycle combustion turbines, if the turbine does not meet the stratification criteria for single point sampling, then sample at the lowest 3 points of oxygen, or revert to a 48- or 49-point O2 traverse and the lowest 8 points of O2 for the formal tests. Leaving the Method 20 requirements intact for O2 traverses is not unreasonable and should be considered to get the best data. (0039)

Response: In Method 20, the stratification test now uses a 3- or 12-point traverse to determine diluent-corrected pollutant concentrations. The criteria in Method 7E are then used to determine the minimum number of test points. We believe this approach is as effective as the old requirement to use 48 or 49 points for the stratification test followed by sample collection at the 8 lowest points of diluent concentration. The current procedures are very similar to what is allowed under Part 75. The tester has the option to test at more points (per the old Method 20) if desired.

505. Comment: The proposed number of sampling points for this method differs greatly from the currently published version. This method is specific for tests on gas turbines. Due to the nature and configuration of exhaust gases from gas turbines, there is a very good chance of stratification. Many exhaust ducts of gas turbines do not meet the minimum criteria of Method 1 for duct diameters downstream or upstream from disturbances. It is necessary to provide direction for selecting sampling points at existing non-conforming units. I suggest that if the exhaust ducts do not meet the minimum requirements of Method 1, then the selection of sample points follow the criteria in existing Method 20. (0011)

Response: See the response to Comment 504.

506. Comment: To be consistent with other traverse point selection procedures, modify paragraph 8.1.2.1 to limit the maximum number of traverse points to 24 or 25. Requiring up to 48 or 49 points is technically unnecessary and burdensome. (0038)

Response: See the response to Comment 504.

Other Comments:

507. Comment: Figure 20-1 should include all the same components detailed in Section 6.0 of Method 7E. Specifically, this refers to the sample probe, particulate filter, heated sampling line, moisture removal system, sample pump, flow control, and gas manifold as per Figure 7E-1. The figure shown does not include either an in- or out-of-stack filter, and a figure similar to Figure 7E-1 should be provided. (0011, 0038)

Response: Proposed Figure 20-1 has been dropped, and Section 6.0 of Method 7E is referenced for equipment and supplies.
