Technical Support Document for the Assessment of Detection and Quantitation Concepts

Draft Document for Peer Review - Not Intended for Circulation
Do Not Cite, Quote, or Distribute

August 2, 2002
Acknowledgments

This document was prepared by Maria Gomez-Taylor, Henry D. Kahn, William A. Telliard, Khouane Ditthavong, and Charles E. White of the Engineering and Analysis Division in EPA's Office of Water. Harry McCarty and Lynn Riddick with DynCorp I&ET and Dale Rushneck with Interface, Inc. provided assistance under EPA Contract No. 68-C-01-091. Sidina Dedah and Kathleen Stralka with Science Applications International Corporation provided assistance under EPA Contract No. 68-C-99-233.

Questions or comments about general aspects of this assessment should be addressed to:

William A. Telliard
USEPA (4303T)
1200 Pennsylvania Avenue, NW
Washington, DC 20460
telliard.william@epa.gov

Questions or comments about statistical issues related to this assessment should be addressed to:

Henry Kahn
USEPA (4303T)
1200 Pennsylvania Avenue, NW
Washington, DC 20460
kahn.henry@epa.gov
Table of Contents

Acknowledgments
Table of Contents
List of Figures
Chapter 1  Introduction
  1.1  Background
  1.2  Clause 6 Settlement Agreement Requirements
    1.2.1  Clause 6a
    1.2.2  Clause 6b
    1.2.3  Clause 6d
    1.2.4  Clause 6e
    1.2.5  Clause 6f
  1.3  EPA's Approach to Conducting this Assessment
    1.3.1  Study Plan
    1.3.2  Material and Data Used in the Assessment
      1.3.2.1  EPA's ICP/MS Study
      1.3.2.2  EPA's Episode 6000 Study
      1.3.2.3  EPA's Episode 6184 Study (the "GC/MS Threshold Study")
      1.3.2.4  AAMA Metals Study of Methods 200.7 and 245.2
      1.3.2.5  Method 1638 Interlaboratory Validation Study
  1.4  Terminology Used in this Document
Chapter 2  Overview and History of Detection and Quantitation Limit Concepts
  2.1  Currie's Call for Standardization
  2.2  Development of the MDL and ML as Practical Embodiments of Currie's Proposal
    2.2.1  Method Detection Limit
    2.2.2  Minimum Level of Quantitation
  2.3  Concepts Advanced by Other Organizations
    2.3.1  EPA Concepts
    2.3.2  Industry-supported Concepts
    2.3.3  Concepts Advocated by the Laboratory Community and Voluntary Consensus Standards Bodies
Chapter 3  Issues Pertaining to Detection and Quantitation
  3.1  Analytical Chemistry Concepts Pertaining to Detection and Quantitation
    3.1.1  Blank versus Zero Concentration
    3.1.2  Lack of Instrument Response
    3.1.3  Matrix Effects
    3.1.4  Measurement Quality over the Life of a Method
  3.2  CWA Regulatory Issues Affecting Detection and Quantitation
    3.2.1  Detection and Quantitation Limit Applications Under CWA
      3.2.1.1  Method Development and Promulgation
      3.2.1.2  Method Performance Verification at a Laboratory
      3.2.1.3  National Pollutant Discharge Elimination System
      3.2.1.4  Non-Regulatory Studies and Monitoring
    3.2.2  Descriptive versus Prescriptive Uses of Lower Limits to Measurement
    3.2.3  Compliance Evaluation Thresholds
    3.2.4  Accepting the Procedures of Voluntary Consensus Standards Bodies
    3.2.5  National versus Local Standards for Measurement
    3.2.6  Cost and Implementation Issues
      3.2.6.1  Implementation of a Detection/Quantitation Limit Procedure by a Method Developer
      3.2.6.2  Implementation of a Detection/Quantitation Limit Procedure by a Laboratory
    3.2.7  Use of a Pair of Related Detection and Quantitation Procedures in All Clean Water Act Applications
  3.3  Statistical Issues
    3.3.1  Sources of Variability
    3.3.2  Censoring Measurement Results
    3.3.3  Outliers
    3.3.4  Criteria for the Selection and Appropriate Use of Statistical Models
      3.3.4.1  Short History of Modeling Measurement Results
      3.3.4.2  Criteria for Selecting Models
      3.3.4.3  Current Practices with Available Data
    3.3.5  Methodology for Parameter Estimation
    3.3.6  False Positives and False Negatives
    3.3.7  Statistical Prediction and Tolerance
      3.3.7.1  Prediction Intervals
      3.3.7.2  Tolerance Intervals
      Use of Tolerance and Prediction in Setting Detection and Quantitation Levels
    3.3.8  Design of Detection and Quantitation Studies
      3.3.8.1  Spike Concentrations and Modeling
      3.3.8.2  Probability Design
      3.3.8.3  Completeness
Chapter 4  Evaluation Criteria
  4.1  Criterion 1
  4.2  Criterion 2
  4.3  Criterion 3
  4.4  Criterion 4
  4.5  Criterion 5
  4.6  Criterion 6
Chapter 5  Assessment
  5.1  Detection Limit Concepts
    5.1.1  Evaluation of the MDL
      5.1.1.1  Description of the MDL Concept and Procedure
      5.1.1.2  Assessment of the MDL Against the Evaluation Criteria
    5.1.2  Evaluation of the ASTM International Interlaboratory Detection Estimate (IDE)
      5.1.2.1  Description of the IDE Concept and Procedure
      5.1.2.2  Assessment of the IDE Against the Evaluation Criteria
    5.1.3  Evaluation of the ACS Limit of Detection
      5.1.3.1  Description of the ACS LOD
      5.1.3.2  Assessment of the LOD Against the Evaluation Criteria
    5.1.4  Evaluation of the IUPAC/ISO Critical Value (CRV)
      5.1.4.1  Description of the ISO/IUPAC Critical Value (CRV) Concept and Procedure
      5.1.4.2  Assessment of the CRV Against the Evaluation Criteria
    5.1.5  Evaluation of the IUPAC/ISO Detection Limit
      5.1.5.1  Description of the IUPAC/ISO Detection Limit Procedure
      5.1.5.2  Assessment of the ISO/IUPAC MDV Against the Evaluation Criteria
  5.2  Quantitation Limit Concepts
    5.2.1  Assessment of the EPA Minimum Level of Quantitation (ML)
      5.2.1.1  Description of the ML Concept and Procedures
      5.2.1.2  Assessment of the ML Against the Evaluation Criteria
    5.2.2  Assessment of the IQE
      5.2.2.1  Description of the IQE Concept and Procedure
      5.2.2.2  Assessment of the IQE Against the Evaluation Criteria
    5.2.3  Assessment of the ACS Limit of Quantitation
      5.2.3.1  Description of the ACS LOQ Concept and Procedure
      5.2.3.2  Assessment of the ACS LOQ Against the Evaluation Criteria
    5.2.4  Assessment of the IUPAC/ISO Limit of Quantitation
      5.2.4.1  Description of the ISO/IUPAC LOQ Concept
      5.2.4.2  Assessment of the IUPAC/ISO LOQ Against the Evaluation Criteria
Chapter 6  Conclusions
References
Appendix A  Literature Search Regarding Detection and Quantitation Limit Concepts
  Introduction
  How the search was conducted
    On-line citation index search
    General on-line literature search
  How the results are presented
  Summary
Appendix B  (Appendix B is not yet complete; it will go here and will be sent to the reviewers shortly.)
Appendix C  (Appendix C is not yet complete; it will go here when it is.)
Appendix D  Draft Revised MDL Procedure
List of Figures

Figure 2-1
Figure 2-2
Figure 3-1
Figure 3-2
Figure 3-3
Figure 3-4
Chapter 1  Introduction

1.1  Background

On June 8, 1999 (64 FR 30417), EPA promulgated (i.e., published in a final rule) Method 1631B: Mercury in Water by Oxidation, Purge and Trap, and Cold Vapor Atomic Fluorescence Spectrometry for use in Clean Water Act programs. The method was developed specifically to measure mercury at ambient water quality criteria levels and includes a method detection limit (MDL) of 0.2 ng/L (ppt).

Following promulgation, a lawsuit was filed challenging EPA on the validity of the method. The basis of the challenge included several specific aspects of Method 1631 as well as the general procedures used to establish the MDL and minimum level of quantitation (ML) published in the method, although the promulgated MDL of 0.2 ng/L and ML of 0.5 ng/L were not affected by the settlement.

In order to settle the lawsuit, EPA entered into a settlement agreement with the Alliance of Automobile Manufacturers, Inc., the Chemical Manufacturers Association, and the Utility Water Act Group (collectively, the "Petitioners") and the American Forest and Paper Association (the "Intervenor") on October 19, 2000 (the "Settlement Agreement"). Under the terms of Clause 6 of the Settlement Agreement, EPA agreed to perform an assessment of detection and quantitation limit concepts. The complete text of Clause 6 is provided in Exhibit 1-1. A summary of Clause 6 is provided in Section 1.2, followed by a description of EPA's approach to the assessment, including the material and data evaluated (Section 1.3).

1.2  Clause 6 Settlement Agreement Requirements

Clause 6 of the Settlement Agreement is titled "Reassessment of Method Detection Limit and Minimum Level Procedures." The clause consists of five subclauses, 6a-6b and 6d-6f. (There is no subclause 6c.)

1.2.1  Clause 6a

Clause 6a broadly defines the scope of the assessment and provides a schedule for completing the initial phase. Specifically, Clause 6a requires EPA to:

°  Sign and forward to the Office of the Federal Register (OFR) a notice inviting public comment on a reassessment of existing EPA procedures for determining the detection and quantitation limits of contaminants in aqueous samples.
°  Forward the notice to the OFR on or before February 28, 2003.
°  Provide a period of at least 120 days for public comment on the notice.
°  At a minimum, include the MDL procedure published at 40 CFR part 136 and the ML procedure described in Section 17.8 of Method 1631B in the reassessment of detection and quantitation limits.

Clause 6a also provides EPA with the option of:

°  Inviting comment on one or more alternative procedures for determining and describing test sensitivity, and
°  Proposing modifications to the existing procedures.
1.2.2  Clause 6b

Clause 6b requires EPA to subject its reassessment to a formal peer review and describes requirements associated with this peer review. Specifically, Clause 6b requires EPA to:

°  Submit the reassessment of existing procedures (including any proposed modifications thereof) and any evaluation of alternatives for peer review. Peer reviewers are defined as experts in the field of analytical chemistry and the statistical aspects of analytical data interpretation.
°  Conduct the peer review in accordance with EPA's current peer review policies.
°  Prepare a charge to the peer review panel that requests the peer reviewers to consider:
   <  Criteria for selection and appropriate use of statistical models
   <  Methodology for parameter estimation
   <  Statistical tolerance and prediction
   <  Criteria for design of detection and quantification studies, including selection of concentration levels ("spiking levels")
   <  Interlaboratory variability, and
   <  Incorporation of elements of probability design.

1.2.3  Clause 6d

Clause 6d requires EPA to provide the Petitioners and Intervenor (the "litigants") with an opportunity for review of the Agency's assessment concurrent with the Clause 6b peer review.

1.2.4  Clause 6e

Clause 6e requires EPA to provide the litigants with:

°  An opportunity to meet periodically (i.e., every six months) to discuss the Agency's progress during development of the assessment,
°  A plan for performing the assessment on or before the second of these meetings, and
°  Copies of relevant documents, where appropriate, in advance of these meetings.
1.2.5  Clause 6f

Clause 6f establishes a schedule and requirements concerning final action on the assessment. Specifically:

°  On or before September 30, 2004, EPA is to sign and forward to the OFR a notice taking final action on the assessment, and
°  Coincident with publication of this notice, EPA is to provide the litigants with an opportunity to meet and discuss the implications of the final notice and/or the need for any subsequent EPA action in light of the final notice.
Exhibit 1-1. Full Text of Clause 6 of the Settlement Agreement
6. Reassessment of Method Detection Limit and Minimum Level Procedures

a. On or before February 28, 2003, EPA shall sign and forward to the Office of the Federal Register for prompt publication a notice inviting public comment on a reassessment of the existing Agency procedures for determination of sensitivity of analytic test methods for aqueous samples, specifically, EPA procedures for determining the detection limits and levels of quantitation of contaminants in aqueous samples, including, at a minimum, the "Definition and Procedure for Determination of the Method Detection Limit" published at 40 C.F.R. Part 136, Appendix B, as well as the "minimum level" procedure, which is described in section 17.8 of Method 1631B. The notice shall invite comment on EPA's evaluation of one or more alternative procedures for determining and describing test sensitivity. The notice also may propose modifications to the existing procedures. The notice shall invite public comment for a period of no less than one hundred twenty (120) days.

b. Prior to publishing the notice inviting public comment on EPA procedures for determining test sensitivity, EPA shall submit its reassessment of existing procedures (including any proposed modifications thereof) and its evaluation of alternatives for peer review by experts in the field of analytical chemistry and the statistical aspects of analytical data interpretation. In its charge to the peer review panel, EPA shall request that the peer review consider: criteria for selection and appropriate use of statistical models; methodology for parameter estimation; statistical tolerance and prediction; criteria for design of detection and quantification studies, including selection of concentration levels ("spiking levels"); interlaboratory variability; and incorporation of elements of probability design. EPA (or its authorized representative) shall conduct the peer review in accordance with EPA's current peer review policies in the January 1998 Science Policy Council Handbook (EPA 100-B-98-00), including any subsequently-developed EPA peer review documents that may revise or amend that Handbook.

[c. Note - there is no clause "6.c" in the settlement agreement]

d. During the peer review period, EPA shall also provide an opportunity for concurrent review and comment by the Petitioners and Intervenor.

e. In the development of the reassessment/assessment of alternatives, EPA shall provide the Petitioners and Intervenor with a periodic opportunity to meet (i.e., every six (6) months) on the Agency's progress. EPA shall prepare and present the Petitioners and Intervenor with the Agency's "plan" for conducting the reassessment/assessment of alternatives on or before the second such periodic meeting. Where appropriate, EPA shall provide the Petitioners and Intervenor with copies of relevant documents in advance of such meetings.

f. On or before September 30, 2004, EPA shall sign and forward to the Office of the Federal Register for prompt publication a notice taking final action on the notice described in subparagraph 6.a. Coincident with publication of the notice of final action, EPA shall provide Petitioners and Intervenor an opportunity to meet to discuss the implications of the final notice and/or the need for any subsequent EPA action in light of the final notice.

1.3  EPA's Approach to Conducting this Assessment

This draft document details the Agency's assessment of detection and quantitation limits. The assessment is being conducted in accordance with a plan described in Section 1.3.1 and is based, in part, on an assessment of the data described in Section 1.3.2.
1.3.1  Study Plan

EPA developed a technical approach for 1) conducting the assessment, and 2) complying with all applicable requirements of the Settlement Agreement. The approach was documented in a draft study plan that has since formed the general framework for the assessment described in this technical support document (TSD). EPA also conducted a literature search to identify and review issues and concepts that should be considered when developing the plan. A summary of this literature review is provided in Appendix A to this TSD.

Although the Settlement Agreement did not require EPA to seek formal peer review of its draft plan, the Agency chose to do so. The peer review was initiated in December 2001, conducted in accordance with EPA's current peer review policies, and performed by two statisticians and two chemists. EPA reviewed the comments and recommendations offered by these reviewers and, where appropriate, revised the plan to reflect their comments. EPA also reviewed, and where appropriate, revised the plan to reflect comments provided by the litigants following their concurrent review.

The study plan described roles and responsibilities for implementing the plan, provided a background discussion of detection and quantitation limit concepts (including the MDL and ML), and outlined a series of 11 events associated with the Agency's assessment of detection and quantitation limit concepts. The relationship between those planned events and this TSD is summarized in Exhibit 1-2.

1.3.2  Material and Data Used in the Assessment

To perform the assessment described in this document, EPA sought to collect documentation describing existing detection and quantitation limit concepts and procedures, along with data that could be used to evaluate these procedures.

Documentation concerning the existing concepts and procedures was obtained by performing a literature search, as described in Appendix A to this TSD, and, where necessary, by purchasing copies of concepts or procedures from the organizations that published them.

In performing this assessment, EPA hoped to identify a large body of data containing results generated at, below, and above the region of interest. EPA found, however, that few data sets existed that met these requirements. To date, EPA has been able to identify only five data sets that were of use in fully evaluating variability in the range of analytical detection and quantitation. Three of the five were developed by EPA for the express purpose of studying the relationship between measurement variation and concentration across a wide variety of measurement techniques and analytes. EPA refers to these data sets as "EPA's ICP/MS data," "the Episode 6000 study," and "the Episode 6184 study." In all three cases, replicate measurement results for each combination of analyte and measurement technique were produced by a single laboratory over a wide range and large number of concentrations. The fourth data set was developed by the American Automobile Manufacturers Association (AAMA) for the purpose of estimating one particular kind of quantitation value, called the Alternative Minimum Level (see Gibbons et al., 1997). In the AAMA study, replicate results were measured at a limited number of concentrations by multiple laboratories using EPA Method 245.2 (cold vapor atomic absorption, or CVAA) for mercury and EPA Method 200.7 (inductively coupled plasma/atomic emission spectroscopy, or ICP/AES) for twelve other metals. The fifth data set was jointly gathered by EPA and the Electric Power Research Institute (EPRI) to support an interlaboratory validation of EPA Method 1638.

The studies from which these five data sets were obtained are summarized in Sections 1.3.2.1 through 1.3.2.5 below. Additional information about these studies can be found in Appendix B to this TSD.
1.3.2.1  EPA's ICP/MS Study

The objective of this study was to characterize variability in EPA's draft Method 1638 for nine metals by inductively coupled plasma with mass spectroscopy (ICP/MS). The nine metals were silver, cadmium, copper, nickel, lead, antimony, selenium, thallium, and zinc. The ICP/MS instrument used in this study averages triplicate scans to produce a single measurement of each element at each concentration. Such averaging is typical of ICP/MS design and use.

In preparation for the study, the ICP/MS was calibrated using triplicate scans averaged to produce a single measurement at 100, 1,000, 5,000, 10,000, and 25,000 nanograms per liter (ng/L) for each element. Originally, the instrument was calibrated using unweighted least squares estimates under the assumption of linearity. Subsequently, the analytical results were adjusted with weighted least squares estimates, which are based on the assumption that variability increases with increasing analyte concentration.
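The difference between the two calibration fits can be illustrated with a short sketch. The intensity values below are hypothetical, not data from the EPA study, and the 1/concentration weighting is one common choice when the standard deviation of the response grows roughly in proportion to concentration:

```python
import numpy as np

# Hypothetical calibration standards (ng/L) and instrument intensities.
conc = np.array([100.0, 1_000.0, 5_000.0, 10_000.0, 25_000.0])
intensity = np.array([210.0, 2_050.0, 10_100.0, 20_400.0, 50_900.0])

# Unweighted fit: every standard influences the line equally, so the
# high-concentration points dominate the sum of squared residuals.
slope_u, intercept_u = np.polyfit(conc, intensity, 1)

# Weighted fit: weighting residuals by 1/conc restores equal relative
# influence to the low-concentration standards when variability
# increases with concentration.
slope_w, intercept_w = np.polyfit(conc, intensity, 1, w=1.0 / conc)

def to_conc(y, slope, intercept):
    """Convert a measured intensity back to concentration."""
    return (y - intercept) / slope

print(to_conc(250.0, slope_u, intercept_u))  # unweighted estimate
print(to_conc(250.0, slope_w, intercept_w))  # weighted estimate
```

The two fits diverge most near the bottom of the calibration range, which is precisely the region of interest for detection and quantitation work.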

Although the instrumentation has the capability to provide intensity results for each of the three scans at each concentration, averaging the three scans to produce a single measurement is the normal operating mode, and the average was used to produce the measurements in this study. Draft Method 1638 specifies the use of average response factors rather than least squares estimation of linear calibration curves, although it does allow for the use of such procedures.

All nine metals were spiked into water to produce solutions at concentrations of 0, 10, 20, 50, 100, 200, 500, 1,000, 2,000, 5,000, 10,000, and 25,000 nanograms per liter (ng/L). Each solution was subsequently divided into seven aliquots, and each aliquot was analyzed beginning with the blank (zero concentration), followed by analyses from the highest to the lowest concentration.

Results at multiple mass-to-charge ratios (m/z's) were reported for each metal, although draft Method 1638 specifies only one m/z for eight of the nine metals. For lead, m/z's 206, 207, and 208 are specified. This study of variation used only the data associated with the m/z's that are specified in draft Method 1638.

1.3.2.2
EPA's
Episode
6000
Study
The
Episode
6000
study
was
designed
to
characterize
the
variability
of
results
from
1/
10th
of
the
method
detection
limit
to
concentrations
into
the
normal
quantification
range
for
a
variety
of
measurement
techniques.
The
analytes
and
analytical
techniques
studied
were:

• Total suspended solids (TSS) by gravimetry
• Metals by graphite furnace atomic absorption spectroscopy (GFAA)
• Metals by inductively-coupled plasma atomic emission spectrometry (ICP/AES)
• Hardness by ethylene diamine tetraacetic acid (EDTA) titration
• Phosphorus by colorimetry
• Ammonia by ion-selective electrode
• Volatile organic compounds in water by purge-and-trap capillary column gas chromatography with a photoionization detector (GC/PID) and electrolytic conductivity detector (GC/ELCD) in series
• Volatile organic compounds by gas chromatography with a mass spectrometer (GC/MS)
• Available cyanide by flow-injection/ligand exchange/amperometric detection
• Metals by inductively-coupled plasma spectrometry with a mass spectrometer (ICP/MS)
A
total
of
five
laboratories
were
employed
for
these
analyses;
each
parameter
and
method
combination
was
tested
by
one
of
these
laboratories.
A
method
detection
limit
(
MDL)
study
was
conducted
for
each
combination
of
analyte
and
analytical
technique.

After
determining
the
MDL,
each
laboratory
analyzed
seven
replicates
at
a
series
of
concentrations
created
at
100,
50,
20,
10,
7.5,
5.0,
3.5,
2.0,
1.5,
1.0,
0.75,
0.50,
0.35,
0.20,
0.15,
and
0.10
times
the
MDL
determined
above.
In
a
few
instances
(
described
below),
laboratories
analyzed
more
than
seven
replicates.

A
variant
of
the
iterative
procedure
for
determining
the
method
detection
limit
(
40
CFR
part
136
Appendix
B)
was
used
for
organics.
Methods
for
organics
normally
list
many
(
15
to
100)
analytes,
and
the
response
for
each
analyte
is
different.
Therefore,
to
determine
an
MDL
for
each
analyte,
the
concentration
of
the
spike
must
be
inversely
proportional
to
the
response.
Making
a
spiking
solution
with
15
to
100
different
concentrations
is
cumbersome
and
error
prone.
A
more
straightforward
approach,
and
one
used
in
this
study
at
EPA's
suggestion,
was
to
run
seven
replicates
at
decreasing
concentrations
until
signal
extinction,
then
pick
off
the
concentration(
s)
appropriate
for
the
MDL.
In
some
cases
the
laboratories
picked
off
the
concentrations,
in
other cases,
EPA
did.
This
approach
was
generally
applied
for
organics
analysis; however,
laboratories
also
had
the
option
of
using
some
combination
of
the
monotonically
decreasing
concentrations
described
above
and
a
few
selected
concentrations
to
achieve
the
desired
spiking
levels.

Spiked
solutions
were
analyzed
in
order
from
the
highest
concentration
to
the
lowest.
This
approach
was
used
to
(
1)
minimize
carry-over
effects
and
(
2)
prevent
the
collection
of
some
costly,
but
less
than
informative,
data.

Carry-over
can
occur
when
a
high
concentration
sample
is
followed
by
a
low
concentration
sample.
Carry-over
is
usually
less
than
one
percent
but
can
be
a
few
percent
in
some
methods.
Therefore,
if
a
sample
containing
a
pollutant
at
100
times
the
MDL
is
followed
by
a
sample
containing
that
pollutant
at
only
0.1
times
the
MDL,
the
sample
at
0.1
times
the
MDL
could
be
compromised
by
carry-over.
Running
the
samples
in
decreasing
order
of
the
chosen
spike
levels
is
not
expected
to
compromise
successively
lower
measurements.

For
methods
that
do
not
produce
a
signal
for
a
blank,
the
signal
will
disappear
somewhere
below
the
MDL,
i.
e.,
a
zero
will
be
reported.
Laboratories
were
told
that
when
three
nondetects
(
out
of
seven
measurements)
were
reported,
it
was
not
necessary
to
move
to
the
next
lower
concentration,
because
it
was
not
reasonable
to
have
laboratories
measure
seven
zeros,
move
to
a
lower
level,
measure
seven
zeros,
etc.

1.3.2.3
EPA's
Episode
6184
Study
(
the
"
GC/
MS
Threshold
Study")

Data
from
this
study
also
were
collected
to
characterize
the
variability
of
results
from
concentrations
at,
and
sometimes
lower
than,
the
detection
limit
to
spike
concentrations
well
into
the
normal
quantification
range
of
the
method.
Specifically,
the
laboratory
generated
data
for
82
semivolatile
organic
compounds
by
EPA
Method
1625C
(
semivolatile
organic
compounds
by
GC/
MS).
MDLs
were
not
determined
for
these
compounds.
Instead,
a
series
of
samples
consisting
of
17
concentration
levels
were
prepared
and
analyzed
as
follows:
50.0,
20.0,
10.0,
7.50,
5.00,
3.50,
2.00,
1.50,
1.00,
0.75,
0.50,
0.35,
0.20,
0.15,
0.10,
0.075
and
0.050
ng/µL (or µg/mL).
Each
concentration
level
was
analyzed
in
triplicate
with
the
mass
spectrometer
thresholds
set
to
0,
and
again
in
triplicate
with
the
mass
spectrometer
thresholds
set
to
1000,
a
level
typical
of
that
used
in
routine
environmental
analyses.
As
with
the
Episode
6000
data
set,
samples
were
analyzed
in
order
from
the
highest
to
the
lowest
concentration,
and
measurements
were
made
as
often
as
possible
using
the
same
calibration.
1.3.2.4
AAMA
Metals
Study
of
Methods
200.7
and
245.2
The
American
Automobile
Manufacturers
Association
conducted
an
interlaboratory
study
of
EPA
Method
200.7
(
metals
by
ICP/
AES)
and
Method
245.2
(
mercury
by
CVAA).
Nine
laboratories
participated
in
the
study,
and
each
reported
data
for
the
following
13
metals:
aluminum,
arsenic,
cadmium,
chromium,
copper,
lead,
manganese,
mercury,
molybdenum,
nickel,
selenium,
silver
and
zinc.
Study
samples
were
analyzed
by
EPA
200.7
for
12
of
the
metals;
mercury
was
determined
by
EPA
245.2.

The
nine
laboratories
were
randomized
prior
to
the
start
of
the
study.
Five
matrix
types
(
including
reagent
water)
were
selected,
including
four
that
were
representative
of
the
automotive
industry.
Each
matrix
was
spiked
at
five
concentrations
in
a
predetermined
concentration
range.
Matrix
A
(
reagent
water)
was
analyzed
in
all
nine
laboratories,
and
three
laboratories
analyzed
each
of
the
other
four
matrices.
All
analyses
were
repeated
weekly
over
a
five
week
period.
As
a
result,
a
total
of
6825
observations
were
obtained,
which
includes
2925
observations
for
matrix
A
(
9
labs
*
13
metals
*
5
spike
concentrations
*
5
weeks)
and
975
(
3
labs
*
13
metals
*
5
spike
concentrations
*
5
weeks)
for
each
of
the
other
four
matrices
(
6825
=
2925
+
[
975
*
4]).
There
were
two
missing
values
for
chromium
in
matrix
A
from
laboratories
1
and
9.
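As a quick check on the totals quoted above:

```python
# Matrix A was analyzed by all nine laboratories; each of the other four
# matrices by three laboratories. Every combination covered 13 metals,
# 5 spike concentrations, and 5 weekly repetitions.
matrix_a = 9 * 13 * 5 * 5      # observations for matrix A
per_other = 3 * 13 * 5 * 5     # observations for each remaining matrix
total = matrix_a + 4 * per_other
```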

Starting
from
a
blank,
or
unspiked
sample,
all
target
analytes
were
spiked
at
four
concentrations
to
yield
a
total
of
five
concentrations
per
matrix.
Concentrations
ranged
from
0.01
to
10
µg/L
for
mercury
and
selenium
on
the
low
end,
and
from 2.0 to 1000 µg/L
for
mercury
and
selenium
on
the
high
end.
In
addition,
the
concentrations
were
matrix-dependent.
The
same
concentration
ranges
for
each
metal
by
matrix
combination
were
used
for
all
five
weeks
of
the
study.

1.3.2.5
Method
1638
Interlaboratory
Validation
Study
The
Method
1638
interlaboratory
validation
study
was
conducted
by
EPA
to
evaluate
performance
of
the
method
and
to
gather
data
that
would
allow
revision
of
existing
performance
specifications,
including
detection
and
quantitation
limits.
To
accommodate
stakeholder
interests
and
expand
the
scope
of
the
study,
the
Electric
Power
Research
Institute
funded
the
distribution
of
additional
samples
to
study
laboratories.

A
total
of
eight
laboratories
(
and
a
referee
laboratory)
participated
in
the
study.
The
study
was
designed
so
that
each
participating
laboratory
would
analyze
sample
pairs
of
each
matrix
of
interest
at
concentrations
that
would
span
the
analytical
range
of
the
method.
Each
laboratory
was
provided
with
11
sample
pairs
(
a
total
of
22
blind
samples).
These
included
1
filtered
effluent
pair,
1
unfiltered
effluent
pair,
4
filtered
freshwater
pairs,
and
5
spiked
reagent
water
pairs.
All
eight
laboratories
received
and
analyzed
the
same
sample
pairs
(
a
total
of
176
analyses).
To
measure
the
recovery
and
precision
of
the
analytical
system,
and
to
monitor
matrix
interferences,
the
laboratories
were
instructed
to
analyze
matrix
spike
and
matrix
spiked
duplicate
samples
on
specified
field
samples
in
each
filtered
and
unfiltered
matrix.
Laboratories
were
further
instructed
to
spike
aliquots
of
the
designated
MS/
MSD
samples
at
1-5
times
the
background
concentration
of
the
analytes
determined
by
analysis
of
an
unspiked
aliquot
of
the
sample.
The
laboratories
also
were
instructed
to
perform
all
other
QC
tests
described
in
the
method,
including
the
analysis
of
blanks.

1.4
Terminology
used
in
this
Document
We
use
the
term
"
quantitation"
in
this
document
because
of
its
common
usage
among
analytical
chemists,
even
though
we
recognize
that
the
term
"
quantification"
(
i.
e.,
the
act
of
quantifying)
is
more
appropriate.
Also,
when
referring
to
detection
and
quantitation,
we
use
the
words
"
approach"
or
"
concept"
to
refer,
generically,
to
the
procedures
used
to
establish
detection
and
quantitation
limits
or
the
theories
on
which
those
procedures
are
based.
We
use
the
word
"
limit"
rather
than
"
level"
to
indicate
that
the
detection
and quantitation concepts are directed at the lowest concentration or amount at which an analyte is determined to be present (detection) or may be measured (quantitation). In choosing the word 'limit' we do not mean to imply any sense of permanence. We recognize that measurement capabilities improve over time, and that detection or quantitation 'limits' established today may be superseded by future developments in analytical chemistry.

Exhibit 1-2. Relationship of TSD to Planned Approach for Assessment of Detection and Quantitation Limit Concepts

Event 1, Develop a detailed plan for responding to Clause 6: This event was completed in April 2002 when the draft plan was revised to reflect peer review and litigant comments.

Event 2, Identify and explore issues to be considered: The Settlement Agreement identified six specific issues that should be considered during the assessment of detection and quantitation limit concepts and subjected to formal peer review. During development of its technical approach, EPA identified a number of other issues that should be considered during assessment of detection and quantitation limits. EPA listed and described each of these issues in the study plan and noted that identification of issues is likely to be a dynamic process, in that as a suite of issues is identified and discussed, other issues may surface. Finally, EPA stated its intent to prepare an "issue paper" that fully explains and discusses each of the identified issues. Chapter 3 of this TSD serves the function of the "issue paper" described in the plan.

Event 3, Develop Criteria against which Concepts can be Evaluated: After fully considering all relevant issues, EPA planned to develop a suite of criteria that could be used to evaluate the suitability of various detection and quantitation procedures for use in CWA programs. Chapter 4 of this TSD provides and describes the criteria selected by EPA after its consideration of all pertinent issues.

Event 4, Evaluate Existing Procedures for Establishing Detection and Quantitation Levels: EPA planned to evaluate existing detection and quantitation limit concepts used or advanced 1) by voluntary consensus standards bodies (VCSBs), 2) in the published literature, and 3) by EPA. As per the terms of the Settlement Agreement, the MDL and ML were explicitly targeted for inclusion. EPA committed to evaluating concepts published by ASTM International and ISO and to consider concepts and procedures offered by other organizations such as the American Chemical Society (ACS) and the International Union of Pure and Applied Chemistry (IUPAC), as well as other concepts that have been adopted by EPA for use in other programs or that were identified during EPA's review of the published literature. Chapter 2 describes the existing concepts that EPA evaluated in this assessment. Where appropriate, these concepts also are discussed in the context of the issues that are identified and discussed in Chapter 3. Chapter 5 presents the results of EPA's assessment of each concept against the evaluation criteria established in Chapter 4.

Event 5, Development and Evaluation of Alternative Procedures: EPA planned to develop and evaluate alternative procedures only if the Agency's assessment of existing procedures suggested that modifications or alternatives to the existing procedures are needed. EPA noted that its primary objective in developing such alternatives (or modifications) would be to address deficiencies noted in Event 4 and improve the performance of the procedures that best meet the criteria established in Event 3.

Event 6, Peer Review of the Agency's Assessment: Initiated August 2002. EPA will summarize and formally respond to peer review comments received on the assessment.

Events 7-11, Activities to be taken following Peer Review: After considering peer review comments, EPA will finalize its strategy regarding the FR notice to be proposed per the terms of Settlement Agreement Clause 6a and take the actions necessary to ensure publication of that notice.
[Figure 2-1]
Chapter
2
Overview
and
History
of
Detection
and
Quantitation
Limit
Concepts
It
is
not
possible
to
measure
the
concentration
of
a
substance
in
water
all
the
way
down
to
zero.
As
an
analogy,
consider
the
following
example:
imagine
measuring
an
object
less than 1/16th of an inch in length with a ruler marked in 1/16th-inch
increments.
How
well
can
the
length
of
the
object
be
measured
using
only
the
ruler?
Similar
issues
arise
as
chemists
try
to
measure
ever
smaller
concentrations
of
substances
in
water.
In
response
to
the
challenges
associated
with
measuring
low
concentrations,
chemists
have
defined
numerical
values
that
provide
points
of
reference
for
reporting
and
using
measurement
results.
These
values
are
usually
referred
to
as
detection
and
quantitation
limits.
This
chapter
provides
an
overview
of
detection
and
quantitation
concepts
and
procedures
in
analytical
chemistry
and
their
use
in
Clean
Water
Act
applications.

2.1
Currie's
Call
for
Standardization
Since
1968,
most
of
the
literature
regarding
detection
and
quantitation
has
referenced
the
work
of
Dr.
Lloyd
Currie,
recently
retired
from
the
National Institute of Standards and Technology
(
NIST,
formerly
the
National
Bureau
of
Standards).
In
1968,
Currie
published
a
paper
in
which
he
reviewed
the
then
current
state
of
the
art,
presented
a
three-tiered
concept
for
detection
and
quantitation,
and
demonstrated
his
concept
with
operational
equations
for
a
single
laboratory.
In
his
paper,
Currie
reviewed
eight
existing
definitions
for
the
concept
of
detection,
and
reported
that
when
these
eight
operational
definitions
were
applied
to
the
same
data,
they
resulted
in
numerical
values
that
differed
by
nearly
three
orders
of
magnitude.
These
results
made
it
impossible
to
compare
the
detection
capabilities
of
measurement
methods
using
available
publications.
Dr.
Currie
proposed
standardizing
on
theoretical
definitions
that
he
called
the
critical
value,
the
detection
limit,
and
the
determination
limit.
(
In
1995,
writing
on
behalf
of
the International
Union
of
Pure
and
Applied
Chemistry
[
IUPAC],
Dr.
Currie
used
the
term
"
quantification
limit"
instead
of
his
original
term
"
determination
limit."
Substantial
agreement
with
the
International
Standards
Organization
[
ISO]
on
the
meaning
and
language
of
detection
and
quantification
was
achieved
later,
although
some
"
subtle
differences
in
perspective"
remain [Currie, 2000]).
His
purpose
for
these
definitions
was
to
create
a
system
in
which
the
standard
documentation
of
any
measurement
method
would
include
a
statement
of
capabilities
that
were
directly
comparable
to
any
other
method
for
measuring
the
same
substance.

Currie
used
terms
from
statistical
decision
theory
as
the
basis
for
his
three-tiered
system.
In
Currie
(
1968)
and
Currie
(
1995)
he
defined
the
critical
value
as
the
measured
value
at
which
there
is
a
small
chance
that
the
concentration
in
the
sample
is
zero.
Consequently,
any
measured
result
greater
than
or
equal
to
the
critical
value
is
considered
evidence
that
the
sample
contains
the
substance
of
interest.
Currie
was
careful
to
emphasize
that
the
decision
as
to
whether
the
substance
has
been
detected
is
made
by
comparing
the
measurement
result
to
the
critical
value.
Figure 2-1
shows
a
critical
value
selected
such
that
measurements
greater
than
the
critical
value
have
less
than
a
1%
chance
of
being
associated
with
a
sample
that
does
not
contain
the
substance
of
interest.
The
area
under
the
curve
to
the
right
of
the
critical
value
is
the
probability
that
a
measured
value
will
exceed
the
critical
value.
The
area
under
the
curve
to
the
left
of
the
critical
value
is
the
probability
(
much
greater)
of
observing
a
value
that
is
less
than
the
critical
value.
[Figure 2-2]
Currie
(
1968
and
1995)
used
the
term
detection
limit
to
refer
to
a
true
concentration
that
has
a
high
probability
of
generating
measured
values
greater
than
the
critical
value.
That
is,
measurements
on
samples
that
contain
concentrations
equal
to
the
detection
limit
have
a
high
probability
of
exceeding
the
critical
value
and
are,
therefore,
unlikely
to
result
in
a
decision
that
the
substance
is
not
detected
in
the
sample.
In
Currie's
concept,
the
critical
value
and
the
detection
limit
are
related
and
functionally
dependent,
but
it
is
clear
that
the
detection
decision
is
made
on
the
basis
of
comparing
sample
by
sample
measurements
to
the
critical
value.
While
Currie's
terminology
is
consistent
with
standard
statistical
decision
theory,
it
is
in
all
likelihood
responsible
for
a
great
deal
of
confusion
among
chemists
and
others
who
may
associate
the
term
`
limit'
with
some
sort
of
decision
point.
Currie
(
1995)
states:
"
The
single,
most
important
application
of
the
detection
limit
is
for
planning.
It
allows
one
to
judge
whether
the
CMP
[
Chemical
Measurement
Process]
under
consideration
is
adequate
for
the
detection
requirements."
Figure 2-2
shows
a
detection
limit
selected
such
that
99%
of
the
measurements
on
a
sample
containing
this
concentration
are
expected
to
be
above
the
critical
value.
The
bell-shaped
curve
centered
at
the
detection
limit
illustrates
how
likely
various
measurement
responses
are
when
the
concentration
of
the
substance
in
a
sample
is
equal
to
the
detection
limit.
That
is,
the
figure
shows
the
probability
density
of
values
measured
in
a
sample
with
a
true
concentration
equal
to
the
detection
limit.
The
area
under
the
curve
to
the
left
of
the
critical
value
is
equal
to
1%
of
the
total
area,
while
the
area
to
the
right
is
equal
to
99%.
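Under the constant-variance normal model implied by Figures 2-1 and 2-2, these two quantities can be sketched in a few lines of code. This is an illustration of Currie's definitions under assumed parameters (a known measurement standard deviation and 1% error rates on both sides), not a prescribed EPA procedure.

```python
from statistics import NormalDist

sigma = 1.0                        # assumed measurement standard deviation
z99 = NormalDist().inv_cdf(0.99)   # one-sided 99th percentile, about 2.326

critical_value = z99 * sigma       # decision point for "detected"
detection_limit = 2 * z99 * sigma  # true concentration detected 99% of the time

# A blank (true concentration zero) exceeds the critical value only 1% of
# the time; a sample at the detection limit exceeds it 99% of the time.
false_positive_rate = 1 - NormalDist(0.0, sigma).cdf(critical_value)
power = 1 - NormalDist(detection_limit, sigma).cdf(critical_value)
```

The factor of 2 arises only because the standard deviation is assumed constant between zero and the detection limit and the two error rates are set equal; relaxing either assumption changes the relationship.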

Currie
(
1968,
1995)
defined
the
determination
limit,
later
renamed
the
quantification
limit,
as
(
quoting
Currie,
1995)
"
performance
characteristics
that
mark
the
ability
of
a
CMP
to
adequately
`
quantify'
an
analyte."
Quantification
limits
"
serve
as
benchmarks
that
indicate
whether
the
CMP
can
adequately
meet
the
measurement
needs.
The
ability
to
quantify
is
generally
expressed
in
terms
of
the
signal
or
analyte
(
true)
value
that
will
produce
estimates
having
a
specified
relative
standard
deviation
(RSD), commonly 10%."
This
translates
into
a
quantification
limit
equal
to
a
multiplier
of
10
times
the
standard
deviation
(
a
measure
of
measurement
variability)
at
the
limit.
The
multiplier
of
10
(
equal
to
the
inverse
of
the
RSD)
is
arbitrary,
but
has
been
used
widely.
IUPAC
selected
10
as
a
"
default
value"
(
Currie,
1995),
implying
other
values
are
possible.
In
papers
published
in
1980
and
1983,
the
American
Chemical
Society's
Committee
on
Environmental
Analysis
also
recommended
the
use
of
a
multiplier
of
10
for
determining
quantitation
limits
(
see
McDougal,
et
al.
Analytical
Chemistry,
Vol.
53,
p.
2242,
1980
and
Keith,
et
al.,
Vol.
55,
pp.
2210-2218,
1983).
Measured
concentrations
greater
than
the
quantitation
limit
are
considered
to
be
reliable
by
chemists,
although
from
a
statistical
perspective,
any
measured
value
along
with
knowledge
of
the
precision
of
the
measurement,
is
useful.
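The 10% RSD criterion translates directly into the multiplier of 10, as a short worked example shows (the standard deviation here is an assumed value, not drawn from any study in this document):

```python
rsd_target = 0.10        # 10% relative standard deviation
k_q = 1 / rsd_target     # the default multiplier of 10
sigma_q = 0.5            # assumed standard deviation at the limit
quantification_limit = k_q * sigma_q

# At the quantification limit the RSD equals the target by construction.
rsd_at_limit = sigma_q / quantification_limit
```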

Currie's
goal
of
having
measurement
method
developers
publish
directly
comparable
descriptions
of
detection
and
quantitation
capability
remains
elusive
more
than
thirty
years
after
publication
of
his
first
paper
on
this
topic.
Even
if
Currie's
three-tiered
concept
were
used,
the
treatment
of
related
issues
causes
difficulty
in
comparing
methods.
Some
of
these
issues
include
between-laboratory
variation,
selection
of
appropriate
statistical
models,
design
of
the
detection
and
quantitation
capability
study,
and
statistical
prediction
and
tolerance.
These
and
other
issues
are
discussed
in
Chapter
3
of
this
document.
2.2
Development
of
the
MDL
and
ML
as
Practical
Embodiments
of
Currie's
Proposal
In
1981,
staff
at
EPA's
Environmental
Monitoring
and
Support
Laboratory
in
Cincinnati,
Ohio,
published
a
procedure
for
determining
what
they
referred
to
as
a
method
detection
limit
(
MDL).
The
MDL
functions
as
a
practical,
general
purpose
version
of
Currie's
critical
value.
The
MDL
was
subsequently
promulgated
for
use
in
CWA
programs
on
October
26,
1984
(
49
FR
43234)
at
40
CFR
136,
Appendix
B.
Prior
to
formal
development
of
the
MDL,
the
EPA
Office
of
Water
had
included
the
term
"
minimum
level"
(
ML)
or
"
minimum
level
of
quantitation"
in
some
methods
for
analysis
of
organic
pollutants.
These
methods
were
proposed
on
December
3,
1979
and
subsequently
promulgated
on
October
26,
1984,
along
with
the
MDL.
Additional
information
about
the
MDL
and
ML
is
provided
below
in
Sections
2.2.1
and
2.2.2.

2.2.1
Method
Detection
Limit
Conscious
of
the
definitions
provided
by
Currie
and
others,
Glaser
et
al.
(
1981)
stated
"
The
fundamental
difference
between
our
approach
to
detection
limit
and
former
efforts
is
the
emphasis
on
the
operational
characteristics
of
the
definition.
[
The]
MDL
is
considered
operationally
meaningful
only
when
the
method
is
truly
in
the
detection
mode,
i.
e.,
[
the]
analyte
(
the
substance
of
interest)
must
be
present."
Expanding
on
this
reasoning,
Glaser
et
al.
(
1981)
developed
MDL
estimates
for
methods
that
produce
a
result
of
zero
for
blanks,
such
as
EPA
Methods
624
and
625,
for
determination
of
organic
pollutants
by
gas
chromatography/
mass
spectrometry
(
GC/
MS).
(Blank
variability
exists,
whether
or
not
it
can
be
detected
by
measurement
processes.
Failure
to
detect
this
variability
may
be
attributed
to
insufficient
sensitivity
of
the
measurement
process
or,
as
is
the
case
with
some
measurement
processes,
thresholds
that
are
built
into
equipment
which
censor
measurements
below
certain
levels.)
Currie's
critical
value
is
dependent
on
the
ability
to
estimate
measurement
variability
of
zero
concentration
samples.
In
cases
where
the
substance
is
not
detected
in
direct
measurements
on
blanks,
an
alternative
approach
to
estimating
blank
variability
must
be
used.
Another
option
is
to
estimate
measurement
variability
at
concentrations
that
represent
the
lowest
possible
levels
where
a
signal
can
be
detected.
This
is
the
basic
approach
of
the
MDL
which
provides
a
general
purpose,
straightforward
operational
procedure
for
estimating
a
quantity
analogous
to
the
Currie
critical
value
when
measurement
processes
applied
to
blank
samples
do
not
produce
detectable
signals.
More
complex
statistical
procedures
for
estimating
blank
variability
are
possible
and
may
be
preferable
from
a
rigorous
statistical
perspective,
but
the
MDL
has
been
found
to
be
satisfactory
by
chemists
in
a
wide
range
of
applications.

In
1984,
the
MDL
became
a
regulatory
requirement
for
certain
wastewater
discharge
permits
authorized
under
the
Clean
Water
Act.
To
determine
the
MDL,
at
least
seven
replicate
samples
with
a
concentration
of
the
pollutant
of
interest
near
the
estimated
detection
capabilities
of
the
method
are
analyzed.
The
standard
deviation
among
the
replicate
measurements
is
determined
and
multiplied by the one-sided 99th-percentile Student's t-value for n-1 degrees of freedom
(
in
the
case
of
7
replicates,
the
multiplier
is
3.143,
which
is
the
value
for
6
degrees
of
freedom).
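The computation can be sketched as follows. The replicate values are hypothetical, and 3.143 is the one-sided 99th-percentile Student's t-value for 6 degrees of freedom cited above:

```python
from statistics import stdev

# Seven hypothetical replicate measurements (ng/L) of a low-level spike
replicates = [9.8, 10.4, 9.5, 10.9, 10.1, 9.7, 10.6]

T_99_6DF = 3.143   # Student's t, 99% one-sided, n - 1 = 6 degrees of freedom

mdl = T_99_6DF * stdev(replicates)   # stdev() uses the n - 1 denominator
```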

Both
the
MDL
concept
and
the
specific
definition
at
part
136
have
been
used
within
EPA
by
the
Office
of
Ground
Water
and
Drinking
Water
(
OGWDW),
the
Office
of
Solid
Waste
(
OSW),
the
Office
of
Emergency
and
Remedial
Response
(
OERR),
and
others.
The
MDL
also
has
been
used
outside
of
EPA
in
Standard
Methods
for
the
Examination
of
Water
and
Wastewater,
published
by
the
American
Public
Health
Association
(
APHA),
the
American
Water
Works
Association
(
AWWA),
and
the
Water
Environment
Federation
(
WEF),
in
methods
published
by
the
American
Society
for
Testing
and
Materials
(
ASTM),
and
elsewhere.
¹ The refined definition of the ML first appeared in EPA's 1994 draft National Guidance for the Permitting, Monitoring, and Enforcement of Water Quality-based Effluent Limitations Set Below Analytical Detection/Quantitation Levels. The draft guidance was never finalized. However, the refined definition of the ML has remained in use.

Despite
such
widespread
use,
some
members
of
the
regulated
industry
and
others
have
claimed
that
the
MDL
is
a
less
than
ideal
concept
for
detection.
Specifically,
critics
have
faulted
the
MDL
because:

• There are some inconsistencies between the definition and the procedure
• It does not account explicitly for false negatives
• It does not account for bias
• A prediction or tolerance interval adjustment is not provided
• It does not account for interlaboratory variability
These
issues
are
discussed
later
in
this
document.

2.2.2
Minimum
Level
of
Quantitation
The
minimum
level
of
quantitation
(
ML)
was
originally
proposed
on
December
5,
1979
(
44
FR
69463)
in
footnotes
to
Table
2
of
EPA
Method
624
and
to
Tables
4
and
5
of
EPA
Method
625.
The
ML
was
defined
as
the
"
level
at
which
the
entire
analytical
system
must
give
recognizable
mass
spectra
and
acceptable
calibration
points"
(
in
the
footnote
to
Table
2
in
Method
624)
and
as
the
"
level
at
which
the
entire
analytical
system
must
give
mass
spectral
confirmation"
(
in
the
footnotes
to
Tables
4
and
5
in
EPA
Method
625).

Between
1980
and
1984,
EPA
also
developed
Methods
1624
and
1625,
and
promulgated
these
methods
along
with
the
final
versions
of
EPA
Methods
624
and
625
on
October
26,
1984
(
49
FR
43234).
The
definitions
of
the
ML
in
the
promulgated
versions
of
EPA
Methods
1624
and
1625
were
the
"
level
at
which
the
analytical
system
shall
give
recognizable
mass
spectra
(
background
corrected)
and
acceptable
calibration
points"
(
in
footnote
2
to
Table
2
in
Method
1624)
and
as
the
"
level
at
which
the
entire
GC/
MS
system
must
give
recognizable
mass
spectra
(
background
corrected)
and
acceptable
calibration
points"
(
in
footnotes
2
to
Tables
3
and
4
in
Method
1625).

As
EPA
developed
additional
methods
over
the
next
decade,
the
definition
of
the
ML
was
generalized
to
"
the
lowest
level
at
which
the
entire
analytical
system
must
give
a
recognizable
signal
and
acceptable
calibration
point."
(
See,
e.
g.,
Section
24.2
of
EPA
Method
1613
at
40
CFR
136,
Appendix
A.)
In
generating
actual
numerical
values
for
MLs,
the
lowest
calibration
point
was
estimated
from
method
development
studies
and
included
in
the
methods,
although
a
specific
calculation
algorithm
was
not
used.
Laboratories
were
required
to
calibrate
their
analytical
systems
with
a
multi-point
calibration
(
i.
e.,
calibrate
using
different
concentrations
over
the
range
of
the
instrument)
that
included
a
standard
at
the
lowest
calibration
point
listed
in
the
method
(
i.
e.,
the
ML).

In
response
to
comments
on
the
ML
by
the
regulated
industry
and
some
state
regulatory
agencies,
EPA
refined
the
definition
of
the
ML
in
1994
as
10
times
the
standard
deviation
used
to
determine
the
MDL.¹
Because
the
MDL
is
commonly
determined
as
3.14
times
the
standard
deviation
of
seven
replicate
measurements,
the
ML
was
commonly
calculated
as
3.18
times
the
MDL.
(
The
figure
of
3.18
was
derived
by
dividing
10
by
3.14;
if
more
than
7
replicates
were
used
to
determine
the
MDL,
both
the
MDL
and
the
ML
multipliers
are
adjusted
accordingly
based
on
values
from
the
t-distribution.)
This
calculation
makes
the
ML
analogous
to
Currie's
quantification
limit
and
the
American
Chemical
Society's
limit
of
quantitation
(
LOQ),
which
is
defined
as
ten
times
the
standard
deviation
of
replicate
or
low
concentration
measurements
(
McDougal,
et
al.
(
1980)
and
Keith,
et
al.
(
1983)).

To
simplify
implementation
of
the
ML,
the
definition
was
expanded
to
state
that
the
calculated
ML
is
rounded
to
the
number
nearest
to
(
1,
2,
or
5)
times
10^n,
where
n
is
an
integer.
The
reason
for
this
simplification
is
that
calibration
of
an
analytical
system
at
some
exact
number
(
e.
g.,
6.27)
is
difficult
and
prone
to
error,
whereas
rounding
to
the
number
nearest
to
(
1,
2,
or
5)
x
10^n
provides
a
practicable
value.
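One plausible reading of the rounding rule can be sketched as a short function; the MDL value below is hypothetical, and the 3.18 multiplier is the one described in the preceding section:

```python
def round_ml(value):
    """Round a computed ML to the nearest (1, 2, or 5) x 10**n."""
    candidates = [m * 10.0**n for n in range(-6, 7) for m in (1, 2, 5)]
    return min(candidates, key=lambda c: abs(c - value))

mdl = 0.2                  # hypothetical MDL, in concentration units
ml = round_ml(3.18 * mdl)  # 0.636 rounds to the practicable value 0.5
```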
The
most
recent
definition
of
the
ML
is
"
the
lowest
level
at
which
the
entire
analytical
system
must
give
a
recognizable
signal
and
acceptable
calibration
point
for
the
analyte.
It
is
equivalent
to
the
concentration
of
the
lowest
calibration
standard,
assuming
that
all
method-specified
sample
weights,
volumes,
and
cleanup
procedures
have
been
employed.
The
ML
is
calculated
by
multiplying
the
MDL
by
3.18
and
rounding
the
result
to
the
number
nearest
to
(
1,
2,
or
5)
x
10^n,
where
n
is
an
integer,"
and
this
definition
was
contained
in
the
version
of
EPA
Method
1631
that
was
promulgated
on
June
8,
1999
(
64
FR
30417)
(
see
Section
17.8
of
EPA
Method
1631).

The
ML
will
generally
be
somewhat
lower
than
Currie's
quantification
limit
even
when
similar
sample
sizes
and
estimation
procedures
are
used.
This
is
because
the
standard
deviation
used
to
calculate
the
ML
will
generally
be
smaller
than
the
standard
deviation
at
the
lowest
concentration
at
which
the
relative
standard
deviation
is
10%.
This
is
due
to
the
fact
that,
in
almost
all
cases,
standard
deviation
is
nondecreasing
with
increasing
concentration,
and
generally
tends
to
increase
as
concentration
increases.

Although
the
ML
has
been
used
successfully
in
EPA
methods
for
more
than
20
years,
some
members
of
the
regulated
industry
and
others
have
claimed
that
the
ML
is
less
than
an
ideal
concept
for
quantitation
because
it:

• Does not allow for interlaboratory variability, and
• Is based on a multiple of the standard deviation rather than a fitted model
These
concerns
are
discussed
later
in
this
document.

2.3
Concepts
Advanced
by
Other
Organizations
To
expand
somewhat
on
Currie
(
1968),
standardizing
on
operational
definitions
of
detection
and
quantitation
would
benefit
society
by
making
it
easier
to
compare
and
select
measurement
methods
based
on
low-level
measurement
capability
and
requirements
in
particular
applications.
Unfortunately,
in
spite
of
agreement
on
general
principles
and
definitions
advanced
by
Currie
and
his
supporters,
consensus
on
procedures
that
would
result
in
comparable
detection
and
quantitation
estimates
has
been
elusive.
Sections
2.3.1-2.3.3,
which
are
by
no
means
an
exhaustive
list
of
the
various
concepts
advanced
to
date,
highlight
concepts
that
have
been
most
widely
advanced
for
environmental
applications.

2.3.1
EPA
Concepts
Over
the
years,
a
number
of
detection
and
quantitation
limit
concepts
have
been
developed,
suggested,
or
used
by
EPA
among
the
various
organizations
charged
with
responding
to
differing
program
mandates.
In
part,
this
situation
reflects
actual
differences
in
the
mandates,
and
in
part,
it
reflects
the
fact
that
no
concept
advanced
to
date
has
emerged
as
a
clear
'winner'
that
meets
all
needs
for
all
people.
Concepts
that
have
been
used
or
suggested
by
EPA
include
the:

°
MDL
and
ML
(
described
in
Sections
2.2.1
and
2.2.2)
°
Instrument
detection
limit
(
IDL)
°
Practical
quantitation
limit
(
PQL)
°
Contract
required
detection
limit
(
CRDL)
and
contract
required
quantitation
limit
(
CRQL)

Instrument
Detection
Limit:
EPA
methods
for
analysis
of
metals
have
historically
included
an
instrument
detection
limit,
or
IDL.
Functionally,
the
IDL
is
similar
to
the
MDL
except
that
the
IDL
includes
temporal
variability
(
it
is
determined
on
3
non­
consecutive
days)
and
does
not
include
all
sample
processing
steps
(
the
IDL
characterizes
the
detection
capabilities
of
the
instrument
as
opposed
to
the
method).
Because
IDLs
do
not
reflect
the
entire
measurement
process
and,
for
the
most
part,
have
been
used
only
for
measurement
of
metals,
EPA
did
not
consider
the
IDL
as
a
potential
alternate
to
the
MDL
when
conducting
the
assessment
described
in
this
TSD.

Practical
Quantitation
Limit:
The
practical
quantitation
limit,
or
PQL,
was
established
in
the
1980s
by
EPA's
drinking
water
program
as
the
lowest
concentration
at
which
reliable
measurements
can
be
made.
The
PQL
is
defined
as
the
lowest
concentration
of
an
analyte
that
can
be
reliably
measured
within
specified
limits
of
precision
and
accuracy
during
routine
laboratory
operation
conditions
(
52
FR
25699,
July
8,
1987).
The
PQL
is
a
means
of
integrating
information
on
the
performance
of
approved
analytical
methods
into
the
development
of
a
drinking
water
regulation.
The
PQL
incorporates
the
following:

°
Quantitation,
°
Precision
and
bias,
°
Normal
operations
of
a
laboratory,
and
°
The
fundamental
need
to
have
a
sufficient
number
of
laboratories
available
to
conduct
compliance
monitoring
analyses.

EPA
uses
two
main
approaches
to
determine
a
PQL
for
an
analyte
under
the
Safe
Drinking
Water
Act
(
SDWA).
One
approach
is
to
use
the
data
from
Water
Supply
(
WS)
studies
(
e.
g.,
laboratory
performance
evaluation
studies
conducted
by
the
Agency
as
part
of
the
certification
process
for
drinking
water
laboratories).
The
PQL
is
established
at
the
concentration
at
which
at
least
75%
of
the
laboratories
in
the
study,
or
the
subset
representing
EPA
Regional
laboratories
and
state
laboratories,
obtain
results
within
some
predetermined
percentage
of
the
true
value
of
the
test
samples
(
e.
g.,
±
30%).
This
approach
is
used
in
most
cases
when
sufficient
WS
data
are
available
to
calculate
a
PQL.
The
WS
data
approach
was
used
to
determine
the
PQLs
for
Phase
V
inorganic
chemicals
such
as
antimony,
beryllium,
cyanide,
nickel
and
thallium
(
July
17,
1992;
57
FR
31800),
as
well
as
many
other
contaminants
regulated
under
the
SDWA.
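The WS-data criterion can be made concrete with a minimal sketch. The laboratory results and spike concentrations below are entirely hypothetical, and an actual PQL determination involves additional regulatory considerations beyond this acceptance rule.

```python
def ws_pql(results_by_conc, tolerance=0.30, min_fraction=0.75):
    """Return the lowest spike concentration at which at least
    min_fraction of laboratory results fall within +/-tolerance
    of the true value, or None if no concentration qualifies."""
    passing = []
    for true_value, results in results_by_conc.items():
        ok = [r for r in results if abs(r - true_value) <= tolerance * true_value]
        if len(ok) / len(results) >= min_fraction:
            passing.append(true_value)
    return min(passing) if passing else None

# Hypothetical results (ug/L) from four laboratories at three spike levels
study = {
    1.0: [0.4, 1.9, 0.6, 1.5],     # 0 of 4 within +/-30%
    5.0: [4.2, 5.5, 6.8, 4.9],     # 3 of 4 within +/-30%
    10.0: [9.1, 10.8, 10.2, 9.7],  # 4 of 4 within +/-30%
}
print(ws_pql(study))  # 5.0
```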

In
the
absence
of
WS
data,
the
second
approach
that
EPA
uses
is
the
multiplier
method.
In
this
approach,
the
PQL
is
calculated
by
multiplying
the
EPA­
derived
MDL
by
a
factor
between
5
and
10.
The
exact
multiplier
varies
and
sometimes
depends
on
the
degree
of
concern
about
the
specific
contaminant
(
i.
e.,
based
on
a
human
health
risk
assessment
for
consumption
of
drinking
water).

Application
of
the
PQL
has
been
traditionally
limited
to
drinking
water.
Furthermore,
the
PQL
may
not
be
related
to
the
lowest
quantitation
limit
because
1)
the
PQL
is
associated
with
the
analyte
and
may
be
determined
irrespective
of
a
specific
analytical
method,
2)
the
performance
evaluation
(
PE)
samples
from
which
it
is
derived
contain
pollutant
concentrations
that
may
be
well
above
the
true
limit
of
quantitation,
3)
the
multiplier
used
to
calculate
a
PQL
when
PE
data
are
not
available
is
somewhat
dependent
on
concerns
about
risks
from
human
exposure
to
contaminants
in
drinking
water,
and
4) the resulting PQLs have been perceived as too high for regulatory purposes.
In
addition,
because
EPA
has
privatized
the
performance
evaluation
program
for
drinking
water
laboratory
certification,
it
is
not
yet
clear
that
appropriate
data
will
be
available
in
the
future.
Based
on
these
facts,
EPA
did
not
conduct
an
assessment
of
the
PQL
for
CWA
applications.

In
the
late
1980s,
EPA's
Office
of
Solid
Waste
(
OSW)
adopted
a
different
version
of
the
PQL
as
a
quantitation
limit.
No
procedure
for
establishing
the
limits
was
given;
instead
values
were
extrapolated
from
CLP
CRQLs
(
see
below).
Since
1994,
OSW
has
actively
removed
the
term
"
PQL"
from
its
revised
methods,
replacing
it
with
the
Estimated
Quantitation
Limit
(
EQL).
The
term
PQL
and
the
original
numerical
values
remain
in
a
few
older
OSW
guidance
documents.

Estimated
Quantitation
Limit:
EPA's
Office
of
Solid
Waste
has
defined
the
EQL
as
the
lowest
concentration
that
can
be
reliably
achieved
within
specified
limits
of
precision
and
accuracy
during
routine
laboratory
operating
conditions
(
see
SW­
846,
Chapter
1).
The
EQL
is
generally
5
to
10
times
the
40
CFR
136,
appendix
B
MDL
but
may
be
chosen
to
simplify
data
reporting.
For
many
analytes,
the
EQL
analyte
concentration
is
selected
as
the
lowest
non­
zero
standard
in
the
calibration.
Sample
EQLs
are
highly
matrix­
dependent.
As
noted
in
most
newer
SW­
846
methods,
the
EQLs
are
provided
for
guidance
and
may
not
always
be
achievable.
Because
the
EQL
is
not
rigorously
defined
and
is
guidance,
because
the
EQL
may
be
based
on
the
MDL,
and
because
the
EQL
can
be
the
lowest
calibration
point
and
would,
therefore,
overlap
the
ML,
EPA
did
not
consider
the
EQL
further
in
its
assessment
of
detection/
quantitation
concepts.

Contract­
Required
Detection
and
Quantitation
Limits:
EPA's
Superfund
program
has
adopted
the
use
of
contractually­
required
limits
that
are
based
on
consensus
among
analytical
chemists
about
levels
that
can
realistically
be
achieved
in
commercial
laboratories
using
a
contractually­
specified
method.
Laboratories
that
participate
in
the
Superfund
Contract
Laboratory
Program
(
CLP)
are
required
to
demonstrate
that
they
can
achieve
the
specified
CRDLs
and
CRQLs.
The
CRQLs
are
based
on
the
concentration
of
the
lowest
non­
zero
calibration
standard,
in
a
fashion
analogous
to
the
original
derivation
of
the
ML.
Because
few
CWA
applications
involve
the
use
of
EPA
contract
laboratories,
EPA
did
not
consider
the
CRDL
or
the
CRQL
as
viable
alternatives
to
the
MDL
and
ML
when
conducting
the
assessment
described
in
this
document.

2.3.2
Industry­
supported
Concepts
The
regulated
industry
has
demonstrated
an
interest
in
detection
limit
concepts
since
EPA
first
promulgated
the
MDL
and
ML
for
use
in
CWA
programs
in
1984
(
49
FR
43234).
As
part
of
that
rule,
EPA
promulgated
Methods
601
through
613,
624,
625,
1624,
and
1625
for
organic
compounds
at
40
CFR
136,
Appendix
A
and
EPA
Method
200.7
for
metals
by
inductively
coupled
plasma
spectrometry
(
ICP)
at
40
CFR
136,
Appendix
C.
EPA
also
promulgated
the
MDL
procedure
at
40
CFR
136,
Appendix
B.
The
Virginia
Electric
Power
Company
(
VEPCO)
brought
suit
against
EPA
for
use
of
the
MDL
in
the
promulgated
methods.
In
a
settlement,
EPA
agreed
that
the
MDL
would
be
applicable
to
the
600­
series
organic
methods
only;
i.
e.,
it
would
not
be
applicable
to
EPA
Method
200.7.
The
settlement
agreement
did
not
preclude
future
use
of
the
MDL
by
EPA
or
the
right
of
VEPCO
to
bring
suit
in
such
future
use.

After
the
VEPCO
settlement,
the
regulated
industry,
mainly
through
efforts
of
the
Electric
Power
Research
Institute
(
EPRI),
remained
involved
in
detection
and
quantitation
approaches
to
be
used
under
EPA's
CWA
programs.
The
first
concepts
that
industry
advanced
were
the
compliance
monitoring
detection
level
(
CMDL)
and
compliance
monitoring
quantitation
level
(
CMQL)
(
Maddalone,
et
al.,
1993).
The
CMDL/
CMQL
were
variants
of
EPA's
MDL/
ML
that
attempted
to
adjust
for
interlaboratory
variability.

The
regulated
community
continued
its
efforts
to
develop
alternate
detection
and
quantitation
concepts
with
development
of
the
alternate
minimum
level
(
AML)
in
the
mid­
1990s
(
Gibbons,
1997).
The
AML
is
based
on
statistical
modeling
of
standard
deviation
versus
concentration,
which
requires
large
amounts
of
data.

Most
recently,
the
regulated
industry
has
funded
development
of
the
interlaboratory
detection
estimate
(
IDE)
and
interlaboratory
quantitation
estimate
(
IQE).
The
IDE/
IQE
have
been
balloted
and
approved
by
ASTM's
Committee
D­
19
for
water
as
standard
practices
D­
6091
and
D­
6512,
respectively.
These
concepts
take
into
account
all
possible
sources
of
variability
to
arrive
at
detection
and
quantitation
limits
that
are
higher
on
average
than
the
limits
produced
by
other
concepts.
Because
the
industry
community
has
shifted
support
from
the
CMDL/
CMQL
and
the
AML
to
the
IDE
and
IQE,
and
because
EPA
is
not
aware
of
other
organizations
that
currently
advocate
the
earlier
proposals,
EPA
did
not
consider
industry
concepts
other
than
the
IDE/
IQE
in
its
assessment
of
possible
alternatives
to
the
MDL
and
ML.

As
with
all
other
concepts
advocated
to
date,
the
IDE
and
IQE
have
fallen
short
of
being
ideal
concepts
for
detection
and
quantitation
for
all
organizations
and
applications.
To
date,
EPA
is
not
aware
of
a
demonstrated
implementation
of
the
IDE
or
IQE
in
the
development
of
an
analytical
method.
Specific
concerns
that
have
been
raised
about
the
IDE
and
IQE
are
that:

°
They
contain
an
inappropriate
allowance
for
false
negatives,
°
The
IDE
and
IQE
are
based
on
prediction
and/
or
tolerance
intervals
that
are
unsuitable,
°
The
IQE
requires
a
large
amount
of
costly
data
in
order
to
be
able
to
model
variability
versus
concentration,
including
data
generated
in
multiple
laboratories,
and
°
The
complex
statistical
procedures
involved
in
calculating
an
IDE
and
IQE
are
an
unreasonable
burden
on
the
analytical
chemists
that
typically
develop,
modify,
and
use
methods.

2.3.3
Concepts
Advocated
by
the
Laboratory
Community
and
Voluntary
Consensus
Standards
Bodies
In
1980
(
McDougal
et
al.,
1980)
and
1983
(
Keith
et
al.,
1983),
the
American
Chemical
Society's
Committee
on
Environmental
Improvement
(
CEI)
advanced
concepts
for
the
Limit
of
Detection
(
LOD)
and
Limit
of
Quantitation
(
LOQ).
The
ACS
LOD
is
defined
as
the
lowest
concentration
level
that
can
be
determined
to
be
statistically
different
from
a
blank.
The
recommended
value
for
the
LOD
is
three
times
the
standard
deviation
of
replicate
measurements
of
a
blank
or
low­
level
sample.
The
LOD
is
roughly
equivalent
to
the
MDL
in
numerical
terms
and
conceptually
equivalent
to
Currie's
critical
value.

The
ACS
LOQ
is
defined
as
the
level
above
which
quantitative
results
may
be
obtained
with
a
specified
degree
of
confidence.
The
recommended
value
for
the
LOQ
is
10
times
the
standard
deviation
of
replicate
blank
or
low­
level
measurements.
Because
the
LOD
and
LOQ
are
still
used
by
the
analytical
community,
they
have
been
included
in
EPA's
reassessment
of
detection
and
quantitation
concepts.
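The ACS multipliers reduce to a few lines of code. The sketch below uses hypothetical replicate blank results and the sample standard deviation; it illustrates the rules of thumb only.

```python
import statistics

def acs_limits(replicates):
    """ACS CEI rules of thumb: LOD = 3 x s and LOQ = 10 x s, where s is
    the standard deviation of replicate blank or low-level results."""
    s = statistics.stdev(replicates)
    return 3 * s, 10 * s

# Seven hypothetical replicate blank measurements (ug/L)
blanks = [0.11, 0.09, 0.13, 0.10, 0.08, 0.12, 0.10]
lod, loq = acs_limits(blanks)
print(f"LOD = {lod:.3f} ug/L, LOQ = {loq:.3f} ug/L")
```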

In
the
mid­
1980s,
the
ACS
CEI
introduced
the
concept
of
the
Reliable
Detection
Limit
(
RDL)
and
the
Reliable
Quantitation
Limit
(
RQL).
The
RDL
and
RQL
were
attempts
at
simplification
of
the
LOD
and
LOQ.
Both
the
RDL
and
the
RQL
involved
applying
a
multiplier
to
the
standard
deviation
derived
from
replicate
measurements
of
a
low­
level
sample.
Neither
concept
received
acceptance
by
the
analytical
community.
Because
the
RDL
and
RQL
are
no
longer
being
advanced
by
ACS,
they
were
not
considered
for
evaluation
in
EPA's
assessment
of
detection
and
quantitation
limit
concepts.

In
1999
(
Currie,
1999a,
1999b),
IUPAC
and
ISO
reached
substantial
agreement
on
the
terminology
and
concepts
documented
by
Currie
(
1995),
although
"
subtle
differences
in
perspective"
of
the
organizations
remain
(Currie, 2000).
IUPAC
and
ISO
have
not,
to
date,
published
methods
that
include
limits
reflecting
these
standards.
Similarly,
although
ASTM
adopted
the
IDE
in
1997
and
the
IQE
in
2000,
ASTM
has
not
included
any
IDE
or
IQE
values
in
methods
approved
through
the
ASTM
ballot
process.
On
the
other
hand,
ISO
and
ASTM
have
published
methods
that
employ
the
MDL.
Because
IUPAC
and
ISO
have
approved
the
critical
value,
detection
limit,
and
quantification
limit,
and
because
ASTM
has
approved
through
ballot
the
IDE
and
IQE,
EPA
has
included
these
concepts
in
its
assessment
of
detection
and
quantitation
limit
concepts.
Chapter
3
Issues
Pertaining
to
Detection
and
Quantitation
As
part
of
the
Settlement
Agreement
concerning
the
Agency's
reassessment
of
detection
and
quantitation
limit
concepts,
the
Agency
agreed
to
consider
several
specific
issues
pertaining
to
these
concepts.
These
issues
included:

°
Criteria
for
selection
and
appropriate
use
of
statistical
models;
°
Methodology for parameter estimation;
° Statistical tolerance and prediction intervals;
° Criteria for design of detection and quantitation studies, including selection of concentration levels ("spiking levels");
°
Interlaboratory
variability,
and
°
Incorporation
of
elements
of
probability
design.

In
developing
its
plan
for
conducting
this
assessment,
EPA
identified
a
number
of
other
issues
that
should
be
considered.
These
issues
include:

°
Concepts
of
the
lower
limit
of
measurement
°
Concepts
in
relation
to
EPA
Office
of
Water
applications,
including
­
method
performance
verification
at
a
laboratory,
­
method
development
and
promulgation,
­
National
Pollutant
Discharge
Elimination
System
applications,
­
non­
regulatory
studies
and
monitoring,
­
descriptive
versus
prescriptive
uses
of
lower
limits
to
measurement,
and
­
use
of
a
pair
of
related
detection
and
quantitation
procedures
in
all
OW
applications
°
Censoring
of
measurement
results
°
Sources
of
Variability
(
including,
but
not
limited
to
interlaboratory
variability)
°
False
positives
and
false
negatives
°
Measurement
quality
over
the
life
of
a
method
°
Matrix
effects
°
Background
contamination
°
Outliers
°
Instrument
Non­
Response
°
Accepting
the
procedures
of
voluntary
consensus
standards
bodies
(
VCSBs)
°
National
versus
local
standards
for
measurement,
°
Ease
of
use
(
i.
e.,
ability
of
study
managers,
bench
chemists,
and
statisticians
to
do
what
is
required
by
a
detection
or
quantitation
limit
procedure)
°
Cost
to
implement
the
procedures
°
Laboratory­
specific
applications
Concepts
concerning
the
lower
levels
of
measurement
were
discussed
in
Chapter
2.
For
clarity
and
brevity,
EPA
has
organized
the
remaining
criteria
into
three
subsections
that
follow.
Section
3.1
discusses
the
issues
that
are
primarily
driven
by
analytical
chemistry
concerns,
Section
3.2
discusses
the
issues
that
are
primarily
driven
by
CWA
regulatory
requirements,
and
Section
3.3
discusses
issues
that
are
primarily
driven
by
statistical
concerns.
3.1
Analytical
Chemistry
Concepts
Pertaining
to
Detection
and
Quantitation
3.1.1
Blank
versus
Zero
Concentration
Analytical
chemists
rarely,
if
ever,
say
that
a
sample
contains
zero
concentration
of
a
substance
of
interest.
Even
when
the
sample
is
created
in
a
laboratory
for
the
purpose
of
containing
as
little
substance
of
interest
as
possible
(
a
blank),
analytical
chemists
recognize
the
possible
contribution
of
the
blank
to
the
final
measurement
result.
The
ability
of
a
laboratory
to
reduce
the
concentration
of
a
substance
in
the
blank
is
often
the
limiting
factor
in
attempts
to
make
measurements
at
ever
lower
levels.

A
classic
example
of
the
potential
problem
is
illustrated
by
the
seminal
works
of
Patterson
in
the
late
1960s
and
1970s
(
e.
g.,
Patterson
and
Settle,
1976).
Patterson
demonstrated
that
the
majority
of
concentrations
of
lead
reported
in
the
literature
for
such
diverse
matrices
as
urban
dust,
open
ocean
waters,
and
biological
tissues
were
in
error
by
several
orders
of
magnitude.
The
source
of
the
"
gross
positive
errors"
was
contamination
introduced
during
sample
collection,
handling,
and
analysis.
Interlaboratory
studies
of
the
day
designed
to
determine
consensus
values
for
reference
materials
were,
in
fact,
determining
the
consensus
values
for
background
contamination
across
laboratories.
Patterson
recognized
the
value
in
running
blank
samples
(
samples
thought
not
to
contain
the
substance
of
interest)
to
demonstrate
that
the
sample
collection,
handling,
and
analysis
processes
were
not
introducing
contamination.
Patterson
subsequently
developed
the
techniques
for
"
evaluating
and
controlling
the
extent
and
sources
of
industrial
lead
contamination
introduced
during
sample
collecting,
handling,
and
analysis"
that
form
the
basis
of
the
"
clean
techniques"
used
for
metals
analysis
today,
and
that
are
incorporated
in
EPA
Method
1631
among
others.

The
most
common
analytes
for
which
contamination
problems
are
encountered
in
environmental
measurements
are
metals,
primarily
zinc
because
of
its
ubiquity
in
the
environment.
On
the
other
hand,
it
is
rare
to
find
contamination
in
the
measurement
of
organic
compounds,
except
for
methylene
chloride,
acetone,
and
a
few
other
volatile
organic
compounds
used
as
solvents
in
analytical
laboratories.
Therefore,
for
determination
of
metals,
a
blank
is
usually
included
or
compensated
in
the
calibration
whereas,
for
organics,
except
for
the
solvents,
the
concentration
in
the
blank
is
assumed
to
be
zero
and
there
is
no
compensation
of
the
calibration.

Measurement
methods
designed
to
determine
substances
at
very
low
concentrations
may
include
requirements
for
the
preparation
and
analysis
of
a
variety
of
blanks
that
are
designed
to
identify
the
extent
and
the
sources
of
contamination.
Analysts
understand
that
"
blank"
does
not
mean
zero
concentration,
but
that
through
careful
control
and
evaluation,
it
is
possible
to make
measurements
for
which
the
blank
contribution
is
sufficiently
small
to
be
considered
negligible.

Useful
detection
and
quantitation
limit
concepts
must
address
the
potential
contribution
of
the
blank,
through
both
the
design
of
the
study
that
generates
the
detection
and
quantitation
limit
estimates
and
the
evaluation
of
the
study
results.

3.1.2
Lack
of
Instrument
Response
Instruments
do
not
always
produce
a
result
from
an
appropriately
prepared
sample.
Sometimes
this
is
attributable
to
uncontrollable
instrument
limitations,
sometimes
it
is
attributable
to
controllable
instrument
settings
(
thresholds)
established
by
the
manufacturer
or
the
laboratory,
and
sometimes
it
occurs
randomly.
As
an
example,
gas
chromatograph/
mass
spectrometer
(
GC/
MS)
instruments
often
contain
thresholds
below
which
no
instrument
signal
is
reported.
With
no
instrument
signal
reported,
no
measurement
result
can
be
reported,
and
the
instrument
will
report
zero
in
such
cases
to
indicate
the
lack
of
a
signal.
To
understand
the
concept
of
an
instrument
threshold
it
may
be
helpful
to
think
of
background
static
heard
on
a
citizen­
band
(
CB)
radio
or
a
walkie­
talkie.
The
static
is
present,
but
it
has
no
meaning.
Turning
the
"
squelch"
knob
to
the
point
at
which
the
static
is
filtered
out
also
may
make
it
impossible
to
hear
the
caller.
In
the
context
of
detection,
increasing
the
instrument
threshold
may
cause
the
instrument
to
miss
the
substance
of
interest
at
low
levels.

In
1997,
EPA
conducted
a study
of
82
semivolatile
(
acid
and
base/
neutral)
organic
compounds
measured
by
EPA
Method
1625
in
order
to
observe
the
performance
of
a
GC/
MS
instrument
both
with
and
without
thresholds.
In
the
study,
solutions
at
up
to
17
concentration
levels
were
analyzed
with
the
thresholds
on
and
with
the
thresholds
off.
Samples
were
analyzed
at
decreasing
concentrations,
including
a
blank,
with
triplicate
determinations
at
each
concentration.
For
measurement
results
obtained
with
the
thresholds
turned
on,
all
of
the
measurements
made
on
the
blank
were
reported
as
zero.
This
was
not
a
surprising
result,
given
the
purpose
of
the
instrument
thresholds.
For
measurements
obtained
without
thresholds,
27
of
230
measurements
on
the
blank
(
11%)
were
reported
as
0.000
ng/
L
and
no
measurement
results
were
reported
lower.
The
fact
of
instrument
non­
response
at
low
concentrations
has
both
direct
and
indirect
impacts
on
estimating
detection
and
quantitation
levels.

There
are
two
direct
impacts
of
non­
response
at
low
concentrations.
One
is
that
Currie's
critical
value
(
see
Chapter
2)
is
either
problematic
to
estimate
or
it
does
not
exist.
The
other
is
that
EPA's
method
detection
limit
exists
but
is
problematic
to
estimate.

If
the
assumptions
required
for
Currie's
critical
value
were
true,
then
approximately
50%
of
the
measurement
results
should
have
been
negative
and
their
average
would
have
been
approximately
zero.
Without
measurement
results
below
the
expected
average,
the
average
will
always
be
a
little
high
and
getting
a
good
estimate
will
be
difficult
or
impossible.
If
the
measurement
method
cuts
off
at
some
value
above
zero
then
Currie's
critical
value
and
detection
limit
do
not
technically
exist.
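The censoring effect described above can be demonstrated with a small simulation. The sketch assumes blank measurements normally distributed around a true mean of zero, analyzed by an instrument that reports zero in place of any negative result; the numbers are hypothetical.

```python
import random
import statistics

random.seed(1)  # reproducible hypothetical data
# 10,000 simulated blank measurements, true mean 0, standard deviation 1
raw = [random.gauss(0, 1) for _ in range(10_000)]
# Censoring: the instrument reports zero instead of any negative signal
censored = [max(x, 0.0) for x in raw]

print(round(statistics.mean(raw), 2))       # near 0.0
print(round(statistics.mean(censored), 2))  # near 0.4 -- biased high
```

With roughly half of the true signals cut off at zero, the average of the reported results sits well above the true mean, which is why a good estimate of the critical value is difficult or impossible to obtain under censoring.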

With
instrument
non­
response
at
low
concentrations,
the
estimation
of
EPA's
MDL
becomes
problematic.
In
order
to
meet
the
requirements
of
the
MDL
definition,
it
is
necessary
to
find
the
concentration
at
which
the
measurement
method
ceases
to
generate
measurement
results,
and
many
laboratories
have
run
repeat
measurements
in
order
to
find
this
concentration.
This
problem
manifested
itself
in
EPA's
variability
versus
concentration
(
Episode
6000)
studies.
The
40
CFR
136,
Appendix
B
procedure
suggests
iteration
until
the
measured
MDL
is
within
a
factor
of
5
of
the
spike
level.
For
the
Episode
6000
studies,
EPA
instructed
laboratories
to
use
a
factor
of
3
instead
of
5
in
an
attempt
to
more
narrowly
define
the
lowest
spike
level
at
which
measurements
could
be
made.
Several
laboratories
asked
for
relief
from
this
requirement
and
EPA
relented
after
learning
of
the
difficulties
in
attempting
to
achieve
the
factor
of
3.
A
conclusion
that
can
be
drawn
from
this
outcome
is
that
detection
limits
are
somewhat
variable
and
not
easy
to
define.
Further
details
are
in
the
results
of
the
studies
given
in
Chapter
4.
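One plausible reading of the "within a factor of 5" iteration criterion can be sketched as follows; the spike and MDL values shown are hypothetical, and the Appendix B text itself governs how the criterion is applied in practice.

```python
def spike_level_ok(spike, mdl_estimate, factor=5):
    """Check whether a spike level and an estimated MDL agree to within
    the given factor (the 40 CFR 136, Appendix B iteration criterion;
    EPA's Episode 6000 studies tightened the factor from 5 to 3)."""
    ratio = spike / mdl_estimate
    return 1 / factor <= ratio <= factor

print(spike_level_ok(1.0, 0.3))            # True: ratio 3.3 is within a factor of 5
print(spike_level_ok(1.0, 0.3, factor=3))  # False: re-spike at a lower level
```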

In
summary,
both
the
Currie
system
and
the
EPA
system
have
theoretical
problems
with
addressing
instrument
non­
response.
Any
operational
system
for
detection
or
quantitation
will
need
to
take
this
issue
into
account.

3.1.3
Matrix
Effects
"
Sample
matrix"
is
a
term
used
to
describe
all
of
the
substances,
other
than
the
pollutant(
s)
of
interest,
present
in
an
environmental
sample.
In
the
case
of
a
wastewater
sample,
this
would
include
the
water
itself,
as
well
as
any
other
dissolved
or
suspended
materials.
For
any
given
measurement,
some
of
the
substances
may
interfere
with
the
measurement,
while
others
may
be
substances
that
have
no
effect
on
the
measurement.
Interferences
in
the
sample
may
act
either
positively
(
i.
e.,
increasing
the
measured
results),
negatively
(
i.
e.,
decreasing
the
measured
results),
or
even
preventing
the
measurement
from
being
conducted.

"
Matrix
effect"
is
a
term
used
to
describe
a
situation
in
which
a
substance
or
combination
of
substances
in
the
sample
(
other
than
the
substance(
s)
of
interest)
influence
the
results
of
the
measurement
or
cause
the
results
to
fail
performance
criteria.
Positive
interferences
may
inflate
the
results
for
the
substance
or
make
it
difficult
to
distinguish
one
substance
from
another.
However,
unless
the
positive
bias
is
consistent
and
predictable,
the
measurement
result
may
be
unreliable.
Negative
interferences
may
suppress
the
results
for
the
substance
to
the
point
that
they
cannot
be
distinguished
from
background
instrument
noise.

For
these
reasons,
many
believe
that
detection
and
quantitation
limits
should
be
determined
in
"
real
world"
matrices
rather
than
in
reference
matrices
intended
to
simulate
method
performance
in
a
particular
matrix
type.
Problems
with
such
an
approach,
however,
are
that:

°
Many
real
world
matrices
contain
the
target
pollutant
at
levels
well
above
the
detection
or
quantitation
limit,
making
it
impossible
to
characterize
what
can
and
cannot
be
detected
at
low
levels.
Diluting
the
sample
to
dilute
the
target
pollutant
concentration
is
an
option.
However,
this
also
has
the
potential
to
dilute
any
interferences
that
might
be
present,
thereby
defeating
the
purpose
of
using
the
real­
world
matrix.
°
It
is
not
possible
to
anticipate
and
obtain
samples
of
every
possible
matrix
on
which
a
method
might
be
used
when
the
method
is
being
developed
and
detection/
quantitation
limits
are
being
established.
°
Use
of
a
reference
matrix
to
establish
detection
and
quantitation
limits
allows
the
results
to
be
reproduced
(
i.
e.,
confirmed)
by
an
independent
party;
such
confirmations
will
not
be possible
with
many
real
world
matrices
that
may
be
subject
to
seasonal,
diurnal,
or
other
types
of
variability.
°
The
cost
of
determining
detection
and
quantitation
limits
in
every
possible
matrix
would
be
prohibitive.

Given
these
difficulties,
EPA
believes
that
reference
matrices
should
be
used
to
establish
method
detection
and
quantitation
limits,
but
that
the
procedures
for
defining
these
limits
should
allow
for
evaluation
of
data
collected
in
particular
matrices
of
concern.
EPA
also
believes
that
such
matrix­
specific
determinations
should
only
be
used
when
all
efforts
to
resolve
the
matrix
interferences
have
been
exhausted.

In
some
cases,
finding
a
matrix
effect
indicates
that
the
analyst
should
select
a
more
appropriate
method.
For
example,
a
colorimetric
method
for
the
measurement
of
sulfide
may
be
a
poor
choice
for
the
analysis
of
a
sample
that
is
very
cloudy
or
darkly
colored.
In
other
cases,
characteristics
of
the
sample
such
as
its
pH
may
destroy
the
substance
of
interest,
effectively
preventing
analysis
for
that
substance.

Nearly
all
of
the
newer
analytical
methods
approved
at
40
CFR
136
describe
the
preparation
and
analysis
of
quality
control
samples
that
are
designed
to
indicate
the
presence
of
matrix
effects
(
e.
g.,
matrix
spike
and/
or
matrix
spike
duplicate
samples).
Many
of
these
methods
also
contain
techniques
for
addressing
matrix
effects.
Further,
EPA
has
developed
guidance
documents
that
amplify
the
discussions
in
those
methods
(
e.
g.,
Guidance
on
Evaluation,
Resolution,
and
Documentation
of
Analytical
Problems
Associated
with
Compliance
Monitoring,
June
1993,
EPA
821­
B­
93­
001).
For
the
determination
of
mercury
by
EPA
Method
1631
that
is
the
subject
of
the
Settlement
Agreement,
additional
guidance
on
resolving
matrix
interferences
to
achieve
specified
detection
and
quantitation
levels
is
provided
in
EPA's
Guidance
for
Implementation
and
Use
of
EPA
Method
1631
for
the
Determination
of
Low­
Level
Mercury
(
March
2001,
EPA
821­
R­
01­
023).
Following
the
techniques
in
the
methods
and
guidance
will
usually
reduce
adverse
effects
of
the
sample
matrix
on
detection/
quantitation
limits
and
measurement
results.
3.1.4
Measurement
Quality
over
the
Life
of
a
Method
We
have
all
heard
the
expression
"
Practice
makes
perfect."
It
applies
to
the
quality
of
measurements
made
with
a
given
method
over
time.
We
can
demonstrate
it
using
simple
techniques
like
laboratory
control
charts.
The
improvements
are
a
result
of
experience,
as
well
as
improvements
in
equipment
over
time.
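The control-chart technique mentioned above can be sketched briefly; the percent-recovery data below are hypothetical.

```python
import statistics

def control_limits(measurements):
    """Shewhart-style chart parameters from historical QC data: center
    line at the mean, control limits at +/-3 standard deviations."""
    m = statistics.mean(measurements)
    s = statistics.stdev(measurements)
    return m - 3 * s, m, m + 3 * s

# Hypothetical percent-recovery results for a spiked QC sample
recoveries = [98, 101, 97, 103, 99, 100, 102, 96, 100, 101]
lcl, center, ucl = control_limits(recoveries)
print(f"LCL = {lcl:.1f}, center = {center:.1f}, UCL = {ucl:.1f}")
```

As analysts gain experience or instruments improve, the spread of the charted points narrows and the recalculated limits tighten, which is one way a laboratory can document improving measurement quality over the life of a method.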

EPA
expects
changes
in
performance
when
new
staff
are
trained.
For
this
reason,
many
EPA
methods
specify
that
"
start
up
tests"
be
repeated
each
time
new
staff
arrive.
It
is
not
unusual
to
see
slight
increases
in
measurement
variability
as
new
staff
are
trained.
However,
when
new
staff
become
as
good
as
the
existing
staff,
control
charts
will
show
it.

As
with
most
other
areas
of
technology,
measurement
instruments
are
constantly
improving.
Instrument
companies
and
laboratories
are
increasing
data
processing
power,
speed
of
analysis,
and
the
reduction
of
chemical
or
electronic
"
noise."
Any
of
these
instrument
improvements
are
expected
to
improve
the
measurement
method
in
determining
the
concentrations
of
environmental
pollutants.
This
process
can
be
illustrated
for
a
variety
of
EPA
methods.
A
case
in
point
is
EPA
Method
1613
for
determination
of
polychlorinated
dibenzo­
p­
dioxins
and
polychlorinated
dibenzofurans.
Development
of
this
method
began
in
1988.
At
the
time,
high
resolution
mass
spectrometer
systems
that
were
commercially
available
were
able
to
achieve
a
detection
limit
of
approximately
4
pg/
L
and
an
ML
of
10
pg/
L.
By
the
time
that
EPA
proposed
the
method
in
1991,
the
Canadian
government
published
its
own
version
that
went
down
to
a
quantitation
limit
of 5
pg/
L.
By
the
time
EPA
officially
published
Method
1613
in
1997,
many
laboratories
performing
the
analysis
had
replaced
or
supplemented
their
old
instruments
with
newer
models.
As
a
result,
many
laboratories
performing
analyses
using
Method
1613
routinely
measure
sample
results
at
levels
10
times
lower
than
those
analyzed
routinely
only
10
years
earlier.

Given
that
measurement
capabilities
improve
over
time,
EPA
believes
that
a
detection
and
quantitation
limit
concept
should
be
supported
by
procedures
that
will
allow
individual
laboratories
and
other
organizations
to
affordably
characterize
such
improvements.

3.2
CWA
Regulatory
Issues
Affecting
Detection
and
Quantitation
Section
3.2.1
below
provides
a
brief
overview
and
a
discussion
of
Clean
Water
Act
activities
that
involve
chemical
measurements
and
are,
therefore,
directly
impacted
by
detection
and
quantitation
limit
concepts.
Specific
issues
that
must
be
considered
in
the
context
of
these
CWA
applications
and
EPA's
regulatory
obligations
are
discussed
in
Sections
3.2.2
­
3.2.6.

3.2.1
Detection
and
Quantitation
Limit
Applications
Under
CWA
The
Clean
Water
Act
directs
EPA,
states,
and
local
governments
to
conduct
a
variety
of
data
gathering,
permitting,
and
compliance
monitoring,
and
enforcement
activities.
Many
of
these
activities
depend
directly
on
environmental
measurements,
and
therefore,
are
affected
by
detection
and
quantitation
limit
concepts
as
discussed
in
the
subsections
that
follow.

3.2.1.1
Method
Development
and
Promulgation
Section
304(
h)
of
the
Clean
Water
Act
requires
EPA
to
promulgate
analytical
methods
that
can
be
used
to
support
data
gathering
requirements
under
the
Act.
These
methods
are
promulgated
at
40
CFR
part
136,
and
include
methods
developed
by
EPA
as
well
as
those
developed
by
other
organizations,
such
as
Standard
Methods,
AOAC­
International,
ASTM­
International,
the
U.
S.
Geological
Survey,
instrument
vendors,
and
others.
Upon
request
by
a
laboratory,
permittee,
instrument
manufacturer,
or
other
interested
party,
EPA
also
is
required
to
consider
alternate
testing
procedures
(
ATPs).
If
EPA
deems
these
ATPs
are
acceptable
for
nationwide
use,
they
too,
are
published
at
40
CFR
part
136.
A
primary
objective
in
promulgating
methods
developed
by
EPA
and
by
other
organizations
is
to
provide
the
regulatory
community,
permittees,
and
laboratories
with
as
many
options
as
possible
so
that
they
may
choose
the
method
that
yields
the
best
performance
at
the
lowest
cost
for
the
application.

In
recent
years,
EPA
has
developed
methods
for
promulgation
at
40
CFR
136
only
when
no
other
methods
are
available
that
meet
the
immediate
or
anticipated
regulatory
need.
The
National
Technology
Transfer
and
Advancement
Act
of
1995
(
NTTAA)
requires
government
agencies
to
use
methods
published
by
voluntary
consensus
standards
bodies
(
VCSBs),
such
as
Standard
Methods
and
ASTM­
International,
when
VCSB
methods
are
available.
EPA
accepts
that
these
methods
have
been
through
a
sufficient
level
of
testing,
peer
review,
and
scientific
acceptance
to
warrant
proposal
if
they
meet
EPA's
regulatory
needs.
When
an
individual
laboratory,
permittee,
or
other
organization
submits
a
request
for
approval
of
an
alternate
test
procedure,
however,
EPA
requires
that
the
procedure
be
subjected
to
a
level
of
testing
that
demonstrates
that
the
method
provides
sensitivity,
accuracy,
and
other
measures
of
performance
comparable
to
an
approved
method.

The
lack
of
widespread
consensus
on
detection
limits
has
obvious
impacts
on
EPA's
responsibility
to
promulgate
methods
under
CWA.
Nearly
all
organizations
that
develop
methods
use
a
different
concept,
and
many
organizations
have
changed
concepts
over
the
years.
The
result
is
that
a
number
of
different
concepts
for
detection
and
quantitation
are
embodied
in
the
methods
approved
at
40
CFR
part
136.
The
vast
majority
of
the
approved
methods
include
the
MDL,
which
as
noted
in
Section
2.2.1,
has
been
used
by
several
EPA
Offices,
Standard
Methods,
AOAC,
ASTM,
and
others.
Other
concepts
embodied
in
the
methods
at
40
CFR
part
136
include,
but
are
not
limited
to:
1)
a
method
"
range"
that
is
usually
not
defined,
but
is
often
interpreted
as
the lower end
of
the
range
in
which
pollutants
can
be
identified
or
quantified,
2)
an
"
instrument
detection
limit"
that
has
been
defined
by
a
variety
of
procedures,
but
is
intended
to
capture
instrument
sensitivity
only,
3)
an
"
estimated
detection
limit"
that
may
be
based
on
best
professional
judgement,
single
laboratory
data,
or
some
other
source
of
information,
4)
a
"
practical
quantitation
limit,"
that
has
typically
been
determined
according
to
one
of
the
scenarios
described
in
Section
2.3.1,
and
5)
"
sensitivity"
that
is
an
undefined
concept
similar
in
result
to
the
MDL.

The
most
obvious
solution
to
this
problem
would
be
for
the
Office
of
Water
to
force
all
methods
promulgated
at
40
CFR
136
to
contain
a single pair of concepts
for
detection
and
quantitation.
Unfortunately,
taking
such
action
would
confound
methods
promulgation.
Problems
with
this
solution
are
that:

°
To
date,
no
single
detection
and
quantitation
limit
concept
has
emerged
as
meeting
the
needs
of
all
organizations
for
all
applications
°
If
the
Office
of
Water
were
to
select
a
concept
that
differs
from
those
of
other
organizations,
those
organizations
would
be
required
to
conform
their
methods
to
accommodate
OW's
concept.
Doing
so
would
mean
that
these
organizations
would
have
to
invest
additional
laboratory
resources
to
develop
detection
and
quantitation
limits
that
conformed
to
OW
definitions.
°
If
outside
organizations
decided
against
conforming
their
concepts
to
that
of
OW,
fewer
methods
would
be
promulgated
at
40
CFR
part
136.
This
would
result
in
fewer
options
for
the
regulatory,
permittee,
and
laboratory
community.
°
If
EPA
selected
a
concept
that
has
burdensome
procedures
for
developing
detection
and
quantitation
limits,
it
would
have
the
effect
of
discouraging
development
of
innovative
technology
or
method
modifications.

Given
these
issues,
and
EPA's
desire
to
1)
encourage
the
development
of
improved
measurement
techniques,
and
2)
provide
the
stakeholder
community
with
a
variety
of
measurement
options
whenever
possible,
EPA
believes
it
would
be
impractical
to
force
standardization
on
a
single
detection
or
quantitation
limit
concept
on
method
developers
and
promulgate
only
those
methods
that
contain
this
concept.
We
also
believe,
however,
that
there
are
real
benefits
to
standardization,
and
that
1)
all
new
methods
developed
by
EPA
for
promulgation
at
40
CFR
part
136
should
reflect
such
standardization,
and
2)
EPA
should
strongly
encourage
outside
organizations
to
include
these
standardized
concepts
in
their
methods.

3.2.1.2
Method
Performance
Verification
at
a
Laboratory
Just
as
sensitivity
is
important
for
evaluating
measurement
method
performance,
it
is
important
to
verify
that
a
laboratory
using
a
method
can
achieve
acceptable
levels
of
sensitivity
for
making
measurements.
Such
demonstrations
can
take
many
forms
and
should
be
viewed
in
the
context
of
the
decision
to
be
made.
The
analytical
methods
published
at
40
CFR
136
are
designed
for
monitoring
compliance
with
Clean
Water
Act
permits.
Most
pollutants
in
permits
have
a
numeric
limit,
and
compliance
with
these
limits
is
determined
by
laboratory
analysis
of
samples
from
the
waste
stream
or
water
body
regulated
by
these
limits.
The
laboratory
that
conducts
such
analyses
must
be
able
to
demonstrate
that
its
detection
or
quantitation
levels
are
low
enough
to
assure
reliable
measurements.

Thus,
even
where
a
method
describes
the
sensitivity
measured
or
estimated
by
the
developer
or
the
organization
that
published
the
method,
some
means
is
needed
to
demonstrate
that
a
given
laboratory
can
achieve
sufficient
sensitivity
to
satisfy
the
regulatory
decision
(
e.
g.,
monitoring
compliance).

The
EPA
MDL
procedure
provides
a
means
of
verifying
laboratory
performance
and
has
long
been
used
in
this
fashion
by
EPA
and
various
other
Federal
and
State
agencies.
Other
procedures
may
be
employed,
including
analysis
of
reference
materials
containing
the
analytes
of
interest
at
concentrations
that
are
at
or
below
the
regulatory
limits
of
interest,
spiked
samples
that
are
similarly
prepared
(
e.
g.,
matrix
spikes),
or
performance
evaluation
(
PE)
samples
such
as
those
used
in
laboratory
accreditation
studies.
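For reference, the single-laboratory calculation at the heart of the MDL procedure can be sketched as follows; the replicate results below are illustrative, and the t-values are the one-sided 99th-percentile Student's t statistics that the procedure applies to the replicate standard deviation:

```python
from statistics import stdev

# One-sided 99th-percentile Student's t values for n - 1 degrees of
# freedom, indexed by the number of replicates n.
T_99 = {7: 3.143, 8: 2.998, 9: 2.896, 10: 2.821}

def mdl(replicates):
    """Method detection limit: t(n-1, 0.99) times the standard
    deviation of n replicate spiked-sample measurements."""
    n = len(replicates)
    if n not in T_99:
        raise ValueError("this sketch expects 7 to 10 replicates")
    return T_99[n] * stdev(replicates)

# Illustrative replicate results (mg/L) for a spike near the
# estimated detection limit.
results = [0.41, 0.36, 0.45, 0.40, 0.37, 0.43, 0.39]
print(round(mdl(results), 3))  # prints 0.1
```

Because the MDL depends only on the spread of the replicates, a laboratory that tightens its variability directly lowers the limit it can demonstrate.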

The
IDE
and
IQE
were
advanced
by
the
regulated
industry
and
subsequently
approved
by
ASTM-International
as
means
of
characterizing
the
performance
of
a
method
in
laboratories
that
participate
in
an
interlaboratory
study.
The
idea
in
developing
these
concepts
was
to
establish
detection
and
quantitation
limits
that
could
be
met
by
any
laboratory
that
participated
in
the
study.
An
advantage
of
this
approach
is
that
individual
laboratories
do
not
have
to
demonstrate
sensitivity.
However,
potential
disadvantages
also
exist.
For
example,
it
may
not
be
possible
to
develop
a
realistic
IDE
or
IQE
for
a
new
method
involving
a
highly
innovative
technique
because
there
may
not
be
a
sufficient
number
of
laboratories
practicing
the
technique
to
allow
development
of
an
IDE/
IQE.
Also,
establishing
detection
and
quantitation
limits
that
can
be
met
by
all
laboratories
that
practice
methods
that
are
in
widespread
use
can
potentially
lead
to
worst­
case
limits
that
are
significantly
higher
than
limits that many laboratories can affordably achieve.
This
is
somewhat
analogous
to
buying
a
car
at
list
price.
This
may
be
acceptable
for
those
who
choose
to
avoid
the
negotiation
hassle,
but
will
likely
be
unacceptable
for
others
who
prefer
to
get
the
best
deal
possible.
Developers
of
the
IDE/
IQE
have
recognized
that
an
analogous
concept
is
desirable
for
single­
laboratory
application
and
have
begun
work
on
a
within­
laboratory
detection
estimate
(
WDE),
to
be
followed
by
a
within­
laboratory
quantitation
estimate
(
WQE).
As
with
the
IDE/
IQE,
these
concepts
will
capture
all
sources
of
variability
such
as
temporal
variability,
and
will
include
a
prediction
or
tolerance
interval
(
or
both),
but
will
not
include
interlaboratory
variability.
EPA's
MDL
remains
appropriate
for
demonstrating
method
performance
at
an
individual
laboratory.

3.2.1.3
National
Pollutant
Discharge
Elimination
System
The
National
Pollutant
Discharge
Elimination
System
(
NPDES)
serves
as
the
primary
means
by
which
EPA,
States,
and
Tribes
control
point
source
releases
into
the
nation's
waters.
Under
this
system,
individual
facilities
are
issued
NPDES
permits
that
provide
limitations
on
the
type,
concentration,
and
volume
of
pollutants
that
may
be
legally
discharged.
Typically,
these
pollutant
controls
are
based
on
technology-based
standards.
If,
however,
these
technology­
based
controls
are
not
adequate
to
protect
the
water­
quality
standard
designated
for
the
facility's
receiving
water,
stricter
controls
are
warranted.
In
such
cases,
NPDES
permits
contain
water
quality­
based
controls.

Development
and
Implementation
of
Technology­
based
Controls
(
Effluent
Guidelines)

EPA
promulgates
national
effluent
limitations
guidelines
and
standards
under
the
authority
of
Clean
Water
Act
Sections
301,
304,
306,
307,
308,
and
501.
The
regulations
allow
the
discharge
of
pollutants
from
normal
industrial
processes
when
the
discharges
have
been
treated
using
various
levels
of
available
treatment
technologies
that
are
affordable.
Functionally,
these
industry­
specific
guidelines
establish
standards
for
the
quality
of
wastewater
discharges
to
waters
of
the
United
States.
They
are
generally
stated
in
the
form
of a concentration-based limit for selected substances that is not to be exceeded.
For
example,
the
maximum
oil
concentration
in
wastewater
separated
from
oil
pumped
out
of
an
offshore
well
and
discharged
on
any
single
day
shall
not
exceed
42
milligrams
per
liter
(
mg/
L).
This
form
is
called
a
numeric
effluent
guideline
limit
or
numeric
limit.

Development
and
Implementation
of
Water
Quality­
based
Controls
States
designate
water­
quality
standards
for
various
bodies
of
water
within
their
boundaries.
Each
standard
consists
of
a
designated
use,
criteria
to
determine
if
the
water
quality
supports
that
designated
use,
and
an
anti­
degradation
policy.
Possible
designated
uses
include
public
water
supply,
recreation,
and
propagation
of
fish
and
wildlife.
When
the
water­
quality
standard
is
not
met,
waste­
load
allocations
are
developed
to
indicate
the
maximum
amount
of
a
substance
that
can
be
discharged
to
a
particular
water
body
without
impairing
the
designated
use.
EPA
and
delegated
States
calculate
water
quality­
based
effluent
limits
based
on
the
waste­
load
allocation
and
the
variability
of
the
substance
in
the
wastewater
discharge.
The
concept
is
to
prohibit
discharge
of
a
substance
beyond
the
level
at
which
a
designated
use
would
be
impaired.
Similar
to
effluent
limitations
guidelines
and
standards,
permits
generally
specify
the
use
of
measurement
methods
promulgated
at
40
CFR
part
136
under
the
Clean
Water
Act
Section
304(
h).

The
variability
issues
considered
in
the
development
of
water
quality­
based
limits
are
similar
to
those
considered
in
the
development
of
numerical
effluent­
guidelines
limits
and
detection
levels.
All
of
these
concepts
use
mathematical
statistics
to
determine
levels
unlikely
to
be
exceeded
under
specified
conditions.
Calculated
water
quality­
based
limits
consider
all
of
the
components
of
variation
considered
in
the
development
of
detection
and
quantitation
limits,
plus
the
additional
sources
of
variability
considered
in
the
setting
of
effluent
guidelines
numeric
limits.
Those
additional
sources
of
variability
include
the
variability
of
the
substance
in
the
industrial
process
producing
the
wastewater,
variability
of
the
wastewater
treatment
process,
variability
of
the
process
for
collecting
samples,
and
a
host
of
other
issues.

A
special
case
occurs
when
the
water
quality­
based
effluent
limit
is
less
than
the
detection
limit
of
the
most
sensitive
analytical
method.
This
case
is
addressed
in
Section
3.2.3
below
on
Compliance
Evaluation
Thresholds.

Permit
Compliance
Monitoring
Under
Clean
Water
Act
Sections
318,
402,
and
405,
NPDES
permits
are
issued
to
owners
of
facilities
that
discharge
wastewater
to
waters
of
the
United
States
(
coastal
areas,
lakes,
rivers,
streams,
certain
wetlands,
etc.).
Specific
discharge
limits
are
established
either
for
individual
facilities
or
for
classes
of
facilities.
Individual
permits
are
established
in
industries
with
many
site­
specific
issues
that
determine
the
substances
discharged,
such
as
the
pharmaceutical
industry
in
which
the
specific
drugs
produced
could
influence
the
water
quality.
General
permits
are
issued
when
the
substances
discharged
are
not
strongly
related
to
the
site,
such
as
the
coastal
oil
and
gas
extraction
industry.
The
permit
limits
are
typically
established
using
technology­
based
effluent
guidelines,
unless
the
facility
is
discharging
into
a
water
body
that
does
not
meet
its
designated
use
or
that
will
not
meet
the
designated
use
if
a
technology­
based
limit
were permitted.
In
these
situations,
water
quality­
based
limits
are
used
in
the
permit.

Detection
plays
a
role
because
of
concerns
with
measurement
results
at
the
low
end
of
any
measurement
method.
As
discussed
in
the
introduction,
all
measurement
results
are
variable.
At
the
low
end
of
most
measurement
methods,
there
comes
a
point
at
which
a
particular
measurement
result
is
unacceptably
likely
(
a
policy
decision)
to
have
come
from
a
sample
in
which
the
substance
of
interest
is
absent
(
zero
concentration).
Such
a
measurement
result
would
be
below
the
critical
value
defined
by
Currie
(
1995)
and
in
common
usage
it
would
be
called
below
detection.
In
practice,
the
reporting
limit
may
be
set
equal
to
a
critical
value,
detection
limit,
or
quantitation
limit.
Assuming
that
the
reporting
limit
is
a
detection
limit
of
1
mg/
L
oil
and
grease,
the
measurement
result
would
be
reported
as
"
less
than
1
mg/
L
of
oil
and
grease."
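The reporting convention described above amounts to a simple censoring rule; a minimal sketch, assuming the hypothetical 1 mg/L reporting limit from the example:

```python
def report(measured_mg_per_l, reporting_limit_mg_per_l=1.0):
    """Report a raw result against a reporting limit: values below
    the limit are censored rather than reported as numbers."""
    if measured_mg_per_l < reporting_limit_mg_per_l:
        return f"less than {reporting_limit_mg_per_l} mg/L"
    return f"{measured_mg_per_l} mg/L"

print(report(0.4))  # censored: below the 1 mg/L reporting limit
print(report(2.5))  # reported as measured
```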

3.2.1.4
Non­
Regulatory
Studies
and
Monitoring
EPA
conducts
a
variety
of
non­
regulatory
studies
and
monitoring
activities
to
support
the
These
activities
range
from
long-term
surveys,
such
as
the
Great
Lakes
Water
Quality
Surveys
that
are
conducted
each
spring
and
summer
to
monitor
trends
in
water
quality
against
established
baselines,
to
short­
term
studies
that
are
used
to
establish
baselines,
model
pollutant
cycles,
and
guide
direction
for
future
study
and
policy.
Examples
of
such
studies
include
the
National
Study
of
Chemical
Residues
in
Fish
that
was
conducted
in
the
late
1980s
(
a
follow­
up
to
that
study
is
currently
underway),
and
the
Lake
Michigan
Mass
Balance
Study
conducted
in
the
early
1990s.

Assuming
that
EPA
is
actively
designing
a
study
or
regimen,
detection
and
quantitation
concepts
can
be
used
along
with
information
on
the
risks
associated
with
substances
(
pollutants)
and
the
cost
of
measurement
to
select
an
appropriate
measurement
method
before
sample
measurement
begins.
Accepting
all
positively
valued
measurement
results
and
selecting
a
measurement
method
with
a
detection
limit
lower
than
the
level
of
concern
for
a
substance
being
measured
would
provide
some
assurance
that
measurement
results
associated
with
that
concentration
would
be
positively
valued.
Selecting
a
measurement
method
with
a
quantitation
limit
lower
than
the
level
of
concern
for
a
substance
being
measured
would
generate
measurement
results
that
are
easier
to
explain.
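The selection logic just described can be sketched as a filter over candidate methods; the method names, limits, and costs below are hypothetical:

```python
# Hypothetical candidate methods:
# (name, detection limit, quantitation limit, cost per sample),
# with limits in ug/L and cost in dollars.
methods = [
    ("Method A", 0.5, 1.5, 50.0),
    ("Method B", 0.1, 0.4, 200.0),
    ("Method C", 0.02, 0.08, 900.0),
]

def select_method(level_of_concern, require_quantitation=True):
    """Cheapest method whose quantitation (or, if not required,
    detection) limit lies below the level of concern."""
    idx = 2 if require_quantitation else 1
    candidates = [m for m in methods if m[idx] < level_of_concern]
    return min(candidates, key=lambda m: m[3], default=None)

print(select_method(1.0))         # quantitation limit below 1.0 ug/L
print(select_method(1.0, False))  # detection limit below 1.0 ug/L
```

With these hypothetical values, requiring quantitation below the level of concern rules out the cheapest method, illustrating the cost trade-off noted above.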

3.2.2
Descriptive
versus
Prescriptive
Uses
of
Lower
Limits
to
Measurement
The
literature
on
detection
and
quantitation
generally
assumes
that
these
procedures
are
descriptive,
as
opposed
to
prescriptive.
In
other
words,
detection
and
quantitation
studies
are
described
as
characterizing
the
current
performance
of
a
laboratory
or
laboratories
using
a
method
to
measure
a
substance.
Two
possible
reasons
for
this
treatment
are:
(
1)
the
intended
audience
for
the
articles
is
laboratory
staff
and
measurement
methods
developers
who
wish
to
make
new
methods
useable
to
as
many
laboratories
as
possible
and
(
2)
the
author
may
have
an
institutional
reason
for
not
attempting
to
control
variability
and
thus
lower
detection
and
quantitation
limits.
On
the
other
hand,
the
technology­
based
and
water
quality­
based
effluent
limitations
programs
administered
by
EPA's
Office
of
Water
have
an
institutional
requirement
to
protect
human
health
and
the
environment.
In
order
to
provide
this
protection,
the
Agency
must
measure
pollutants
at
ever
lower
concentrations.
Establishing
stringent
standards
and
a
compliance
scheme
for
laboratories
is
one
way
to
more
rapidly
develop
the
ability
to
measure
at
these
concentrations.
A
prescriptive
strategy
concerning
detection
and
quantitation
limits
would
be
to:
°
Determine
the
detection
and
quantitation
limits
at
multiple
laboratories.
°
Establish a detection limit and a quantitation limit for the method based on some measure of the performance of these laboratories.
The
limit
could
be
established
as
the
limits
reported
by
the
mean
or
median
laboratory,
or
by
some
other
criterion,
such
as
the
pooled
value
of
the
limits
achieved
by
all
laboratories,
or
the
limit
that
is
met
by
a
certain
percentage
of
the
laboratories.
°
Use
the
established
detection
and
quantitation
limit
as
a
performance
standard
that
must
be
demonstrated
by
laboratories
that
practice
the
method.
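The second step above can be carried out in several ways; a minimal sketch of three of the criteria mentioned, using hypothetical single-laboratory detection limits (the root-mean-square pooling convention shown is one common choice, not the only one):

```python
from statistics import mean, median

# Hypothetical single-laboratory detection limits (ug/L) from a
# multi-laboratory study.
lab_limits = [0.8, 1.1, 0.9, 1.4, 1.0, 2.2, 1.2]

# Limit reported by the median laboratory.
median_limit = median(lab_limits)

# Pooled value of the limits achieved by all laboratories
# (root-mean-square pooling).
pooled_limit = mean(x ** 2 for x in lab_limits) ** 0.5

# Limit met by most of the laboratories (here 6 of the 7).
p90_limit = sorted(lab_limits)[int(0.9 * len(lab_limits)) - 1]

print(median_limit, round(pooled_limit, 2), p90_limit)
```

Each criterion yields a different performance standard from the same study data, which is why the choice among them is a policy decision as much as a statistical one.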

Such
an
approach
is
consistent
with
other
performance
standards
included in
EPA
methods,
such
as
standards
for
instrument
calibration,
recovery
of
spiked
reference
and
matrix
samples,
etc.

The
use
of
such
an
approach
would
ensure
that
prescriptive
detection
and
quantitation
limits
(
i.
e.,
performance
standards)
reflect
the
capabilities
of
multiple
laboratories,
rather
than
a
single
state­
of­
the­
art
research
laboratory.
Of
course,
it
is
possible
that
even
when
multiple
laboratories
are
used
to
establish
performance
standards
for
detection
and
quantitation,
some
laboratories
may
not
be
able
to
achieve
these
standards
using
their
current
operations.
However,
most
laboratories
facing
this
problem
will
be
able
to
achieve
these
standards
by
investing
in
staff
training,
improved
equipment,
a
stronger
quality
assurance
program,
or
higher
quality
maintenance
and
operations.

There
is,
of
course,
a
risk
that
some
members
of
the
laboratory
community
will
not
be
able
to
meet
the
standard,
either
because
they
are
not
willing
to
invest
the
resources
necessary
to
do
so,
or
for
other
reasons.
That
risk
needs
to
be
considered
when
using
a
prescriptive
approach
to
detection
and
quantitation
(
i.
e.,
establishing
limits
that
act
as
performance
standards).
Conversely,
the
risk
of
using
a
descriptive
approach
is
that
it
can
result
in
detection
and
quantitation
limits
that
reflect
a
broad
community
of
laboratories,
including
those
that
have
made
little, if any,
effort
to
control
variability
at
these
levels.
That
can
have
the
effect
of
raising
detection
and
quantitation
limits
to
a
level
that
is
higher
than
desired.

3.2.3
Compliance
Evaluation
Thresholds
A
situation
that
arises
frequently
in
addressing
water
quality­
based
limits
is
the
setting
of
the
permit
limit
below
the
detection/
quantitation
limit
of
the
most
sensitive,
approved
analytical
method.
This
subject
was
addressed
in
EPA's
draft
National
Guidance
for
the
Permitting,
Monitoring,
and
Enforcement
of
Water
Quality­
based
Effluent
Limitations
Set
Below
Analytical
Detection/
Quantitation
Levels
(
WQBEL
guidance).
The
WQBEL
guidance
suggested
use
of
the
minimum
level
of
quantitation
(
ML)
as
the
compliance
evaluation
threshold
(
CET)
when
the
water
quality­
based
effluent
limit
(
WQBEL)
is
below
the
detection/
quantitation
limit
of
the
most
sensitive,
approved
analytical
method.
In
comments
on
the
WQBEL
guidance,
the
regulated
industry
objected
to
the
CET,
claiming
that
it
did
not
include
interlaboratory
variability
and
other
sources
of
variability;
the
States
objected
to
the
CET,
claiming
that
it
would
not
allow
them
to
be
as
protective
as
if
the
detection
limit
were
used.

From
a
technical
standpoint,
a
one­
sided
limit
that
reduces
false
positives
only,
such
as
the
Currie
critical
level
or
EPA's
MDL,
is
the
most
appropriate
concept
for
producing
a
CET
for
the
situation
in
which
the
WQBEL
is
less
than
the detection limit of
the
most
sensitive
analytical
method
because
the
one­
sided
limit
allows
measurement
to
the
lowest
possible
level
while
protecting
a
discharger
from
the
risk
of
a
false
violation.
For
example,
consider
the
situation
in
which
2,3,7,8­
tetrachlorodibenzo­
p­
dioxin
(
dioxin)
is
to
be
evaluated
against
the
ambient
water
quality
criterion
of
13
parts­
per­
quintillion
(
ppqt).
The
most
sensitive
analytical
method
is
EPA
Method
1613
with
an
MDL
of
4
parts­
per­
quadrillion
(
ppq)
and
an
ML
of
10
ppq.
The
MDL
is
more
than
300
times
greater
than
the
ambient
criterion.
Therefore,
if
dioxin
is
detected
in
the
receiving
water
as
a
result
of
a
discharge
(
at
greater
than
an
MDL
of
4
ppq),
there
must
have
been
an
exceedance
of
the
ambient
criterion.
In
the
WQBEL
guidance,
EPA
suggested
use
of
the
ML
because
it
was
the
point
at
which
the
measurement
could
be
considered
reliable.
However,
from
a
purely
technical
standpoint,
the
MDL
is
the
most
appropriate.
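The arithmetic behind the "more than 300 times" comparison in this example can be checked directly (a quadrillion is 10^15 and a quintillion is 10^18, so 4 ppq equals 4,000 ppqt):

```python
PPQ = 1e-15   # parts per quadrillion, as a fraction
PPQT = 1e-18  # parts per quintillion, as a fraction

mdl_fraction = 4 * PPQ           # Method 1613 MDL for dioxin
criterion_fraction = 13 * PPQT   # ambient water quality criterion

ratio = mdl_fraction / criterion_fraction
print(round(ratio))  # prints 308: the MDL exceeds the criterion 300-fold
```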

3.2.4
Accepting
the
Procedures
of
Voluntary
Consensus
Standards
Bodies
In
February
1996,
Congress
enacted
Public
Law
104­
113,
the
National
Technology
Transfer
and
Advancement
Act
(
NTTAA).
This
act
directs
"
federal
agencies
to
focus
upon
increasing
their
use
of
(
voluntary
consensus)
standards
whenever
possible,
thus
reducing
federal
procurement
and
operating
costs."
The
Act
gives
Federal
agencies
discretion
to
use
other
standards
where
the
use
of
voluntary
consensus
standards
would
be
"
inconsistent
with
applicable
law
or
otherwise
impractical."

NTTAA
is
implemented
by
Federal
agencies
based
on
the
policies
described
in
Circular
A­
119
from
the
Office
of
Management
and
Budget
(
OMB).
The
current
version
of
this
OMB
circular
was
published
in
the
Federal
Register
on
February
19,
1998
(
63
FR
8546).

Neither
the
NTTAA
nor
Circular
A­
119
requires
that
agencies
replace
existing
government
standards
with
standards
from
a
voluntary
consensus
standard
body
(
VCSB).
In
other
words,
if
EPA
already
has
standards
in
place
for
detection
and
quantitation
concepts,
EPA
is
not
obligated
by
NTTAA
to
replace
these
with
VCSB
standards.

Circular
A­
119
also
discusses
the
effect
of
the
policy
on
the
regulatory
authorities
and
responsibilities
of
Federal
agencies.
The
circular
states
that:

"
This
policy
does
not
preempt
or
restrict
agencies'
authorities
and
responsibilities
to
make
regulatory
decisions
authorized
by
statute.
Such
regulatory
authorities
and
responsibilities
include
determining
the
level
of
acceptable
risk;
setting
the
level
of
protection;
and
balancing
risk,
cost,
and
availability
of
technology
in
establishing
regulatory
standards.
However,
to
determine
whether
established
regulatory
limits
or
targets
have
been
met,
agencies
should
use
voluntary
consensus
standards
for
test
methods,
sampling
procedures,
or
protocols."

Thus,
EPA
is
responsible
for
establishing
the
levels
of
risk
and
protection,
not
only
for
the
regulatory
limits
applied
to
discharges,
but
also
to
the
risks
of
decision
errors
(
e.
g.,
false
negatives
or
false
positives)
in
the
detection
and
quantitation
concepts
applicable
under
the
Clean
Water
Act.

NTTAA
recognizes
that
there
are
instances
in
which
use
of
VCSB
standards
may
not
be
appropriate.
The
Act
includes
the
provision
for
agencies
to
not
use
consensus
standards
where
they
are
"
inconsistent
with
law
or
otherwise
impractical."
The
stated
objective
of
the
Clean
Water
Act
is
"
to
restore
and
maintain
the
chemical,
physical,
and
biological
integrity
of
the
Nation's
waters."
As
a
result,
if
adopting
a
VCSB
standard
for
a
detection
or
quantitation
concept
either
would
lead
to
raising
the
NPDES
permit
limits
for
dischargers,
or
prohibiting
EPA
or
authorized
States
from
enforcing
those
limits
until
new
detection
and/
or
quantitation
limits
were
developed
for
every
method
at
40
CFR
136,
then
EPA
believes
that
such
an
approach
would
be
inconsistent
with
the
stated
objectives
of
the
Clean
Water
Act.

Finally,
Circular
A­
119
describes
two
types
of
technical
standards:
a
performance
standard
and
a
prescriptive
standard.
A
performance
standard
is
defined
as
"
a
standard
...
that
states
requirements
in
terms
of
required
results
with
criteria
for
verifying
compliance
but
without
stating
the
methods
for
achieving
required
results."
In
contrast,
a
prescriptive
standard
is
one
"
which
may
specify
design
requirements,
such
as
materials
to
be
used,
how
a
requirement
is
to
be
achieved,
or
how
an
item
is
to
be
fabricated
or
constructed."

Neither
NTTAA
nor
Circular
A­
119
directs
agencies
to
favor
performance
standards
over
prescriptive
standards,
or
vice
versa.
EPA
believes
that
the
current
MDL
standard
is
a
prescriptive
one,
in
that
it
specifies
both
the
design
of
the
MDL
study
and
how
the
requirement
to
establish
method
sensitivity
is to be
achieved.
There
is
some
obvious
flexibility
or
opportunity
for
judgement
in
employing
the
MDL
procedure,
and
much
of
the
historical
debate
over
the
utility
of
the
MDL
procedure
would
suggest
that
it
may
not
be
prescriptive
enough.
All
of
the
alternative
concepts
for
establishing
detection
and
quantitation
are
also
prescriptive
standards
rather
than
performance
standards.

One
option
that
EPA
may
consider
is
to
employ
a
performance­
based
approach
to
establishing
detection
and
quantitation
limits
in
which
method
developers,
laboratories,
and
others
would
be
free
to
use
any
one
of
a
variety
of
approaches
to
establishing
these
limits,
including
the
existing
MDL
procedure
or
a
VCSB
standard.
Thus,
establishing
method
sensitivity
could
be
considered
a
performance
standard
under
NTTAA
and
Circular
A­
119,
rather
than
a
prescriptive
standard.
The
fact
that
different
approaches
(
prescriptive
standards)
yield
different
answers
would
be
immaterial
if
EPA
evaluates
the
answers
relative
to
a
specific
decision.
That
evaluation
must
not
be
divorced
from
knowledge
of
the
decision
to
be
made
(
e.
g.,
the
regulatory
limit
for
a
given
pollutant).

3.2.5
National
versus
Local
Standards
for
Measurement
In
accordance
with
the
Settlement
Agreement,
EPA
is
re­
examining
the
concepts
of
detection
and
quantitation
used
with
methods
approved
for
use
at
40
CFR
part
136.
The
Clean
Water
Act
authorizes
States
and
local
governments
to
implement
permits,
with
the
requirement
that
they
be
at
least
as
protective
(
stringent)
as
the
national
standards
established
by
EPA.
Thus,
EPA
must
take
into
account
the
impact
of
any
revised
or
new
detection/
quantitation
limit
concepts
and
procedures
on
State
and
local
governments,
as
well
as
on
those
affected
by
State
and
local
requirements.
EPA
also
is
aware
that
some
States
have
implemented
approaches
to
detection
and
quantitation
that
are
either
specific
to
that
State,
result
in
lower
numerical
limits
in
discharge
permits,
or
both.
Given
the
ability
of
State
and
local
governments
to
use
more
stringent
approaches,
any
decision
by
EPA
with
regard
to
this
re­
evaluation
of
detection
and
quantitation
concepts
may
not
have
an
effect
on
those
States
and
local
governments.

3.2.6
Cost
and
Implementation
Issues
When
discussing
the
implementation
of
a
detection
or
quantitation
procedure,
we
will
make
distinctions
between
the
method
developer
and
method
user.
Usually,
method
developers
are
governmental
organizations
such
as
EPA,
NOAA,
USGS,
and
DOE,
or
voluntary
consensus
standards
bodies
(
VCSBs)
such
as
the
American
Public
Health
Association
(
APHA),
ASTM­
International,
AOAC­
International,
and
ISO/
IUPAC.
Method
developers
also
may
include
manufacturers
of
instruments
or
supplies
used
in
testing.
Method
users
are
the
laboratories
performing
tests
to
assess
and
assure
product
quality,
to
support
regulatory
compliance
monitoring,
or
to
support
scientific
studies.

Method
development
requires
a
more
diverse
set
of
skills
than
method
use,
in
that
a
broad
understanding
of
quality
systems,
statistics,
and
analytical
technologies
is
required.
Staff
working
for
the
method
developer
will
usually
include
the
project
manager,
measurement
analysts,
and
statisticians.
Method
use
requires
a
focus
on
obtaining
reliable
results
in
the
analysis
of
a
given
sample.
Staff
working
for
the
laboratory
include
the
manager
and
measurement
analysts.
3.2.6.1
Implementation
of
a
Detection/
Quantitation
Limit
Procedure
by
a
Method
Developer
The
basic
resources
available
to
the
method
developer
are
time,
money,
and
the
technical
skills
of
its
staff.
The
fundamental
decision
for
implementing
a
detection
or
quantitation
procedure
is
whether
that
procedure
is
intended
to
characterize
the
performance
of
the
method
at
a
well­
performing
laboratory
or
if
it
is
intended
to
characterize
the
performance
of
the
method
across
a
group
of
laboratories.
If
the
procedure
is
intended
to
characterize
the
performance
of
the
method
across
a
group
of
laboratories
it
is
further
necessary
to
decide
if
there
will
be
some
way
to
compare
the
performance
of
individual
laboratories
to
the
group
performance
standard.
There
are
serious
time,
cost,
and
skill
issues
with
each
of
these
decisions.
Ordering
these
decisions
from
the
least
resource
intensive
to
the
most,
they
are
characterizing
the
performance
of
the
method:
(
1)
at
a
well­
performing
laboratory,
(
2)
at
a
group
of
laboratories,
or
(
3)
at
a
group
of
laboratories
with
comparisons
of
individual
laboratories.
Other
costs
for
the
method
developer
could
include
planning,
data
management,
reference
laboratory
services,
and, if laboratories are not willing to volunteer for the study, the purchase of their services.

An
independent
decision
is
whether
to
assume
a
simple
model
for
measurement
variability
and
limit
the
number
of
test
concentrations,
iterate
assuming
a
simple
model,
or
to
design
a
study
of
the
relationship
between
measurement
variation
and
the
concentrations
of
the
substances
measured
by
the
method.
This
decision
will
greatly
influence
the
number
of
samples
measured
in
the
study.
If
the
laboratories
do
not
volunteer
for
the
study,
then
the
direct
cost
for
measuring
these
samples
or
blanks
ranges
from
a
few
dollars
per
sample
to
more
than
$
1,000
per
sample
for
measuring
dioxins.
Until
such
time
as
the
relationship
between
measurement
results
and
standardized
concentrations
becomes
well
known,
such
studies
will
require
the
active
participation
of
professional
statisticians
in
design,
implementation,
and
analysis.

3.2.6.2
Implementation
of
a
Detection/
Quantitation
Limit
Procedure
by
a
Laboratory
A
laboratory
may
implement
detection
or
quantitation
procedures
for
its
own
quality
control
purposes,
because
of
regulatory
requirements,
or
as
part
of
the
study
of
a
method
by
some
other
organization.
When
participating
in
the
study
of
another
organization,
the
laboratory
may
voluntarily
accept
some
cost
of
the
study
for
marketing
purposes,
professional
development,
or
to
benchmark
the
performance
of
the
laboratory.
Studies
designed
to
characterize
method
performance:
(
1)
at
a
well­
performing
laboratory
or
(
2)
at
a
group
of
laboratories
with
comparisons
of
individual
laboratories
may
be
used
to
benchmark
the
performance
of
a
participating
laboratory.
Studies
designed
to
characterize
method
performance
at
a
group
of
laboratories
are
unlikely
to
provide
useful
feedback
to
participating
laboratories.

3.2.7
Use of a Pair of Related Detection and Quantitation Procedures in All Clean Water Act Applications

In
Section
3.2.1,
we
discussed
several
different
applications
for
detection
and
quantitation
limits
under
the
Clean
Water
Act.
To
review,
these
applications
are:

°
Method
development
and
promulgation,
°
Method
performance
verification
at
a
laboratory
°
Technology­
based
effluent
guidelines
development
°
Water
quality­
based
effluent
limits
development
°
Permit
compliance
monitoring,
and
°
Non­
regulatory
studies
and
monitoring.
Although we could develop a separate detection and quantitation concept for each of these applications and attempt to define and evaluate each of these concepts in our re-examination of detection and quantitation concepts, the resulting matrix of concepts would cause confusion among regulators, permittees, and the laboratory community. Further, when proposed, each member of the matrix of concepts and applications would, individually, be subject to contention and second-guessing, and it is likely that the outcome would be nearly the same as if a single pair of concepts were selected. To avoid this confusion, it is desirable to use a single pair of related detection and quantitation procedures to meet needs where they exist in all Clean Water Act applications.

3.3 Statistical Issues

The goal of this section is to provide a brief explanation of the key statistical issues involved in the development of detection and quantitation limits.

3.3.1 Sources of Variability

Various known and unknown sources of variability will influence measurements made by a laboratory using a specific method. These sources may include random measurement error, differences in analysts, variations between different equipment manufacturers and models, variations in analytical standards, routine fluctuations in equipment performance, and variations in facility conditions (e.g., varying levels of background contributions).

There are a number of ways in which variability can be controlled. One is a strong quality assurance (QA) program that includes use of: 1) trained and qualified staff, 2) properly maintained equipment, 3) fresh and properly prepared and stored standards, 4) adherence to written standard operating procedures and methods for all sample handling, analysis, and data reduction/reporting activities, 5) ongoing monitoring of laboratory performance, and 6) quality control (QC) samples and QC acceptance criteria to ensure that the laboratory systems are in control. The EPA methods promulgated at 40 CFR part 136 require the use of qualified staff, appropriately cleaned and calibrated equipment, and properly prepared standards. Each method also provides detailed steps for performing all sample handling and analysis activities.

Even when prescribed EPA requirements are implemented, however, it is not possible to completely eliminate all variability within or between laboratories. The potential effects of sources of variability should be considered when establishing detection and quantitation limit concepts. Even with quality control and variability control procedures in place, it should be recognized that some laboratories may achieve lower detection and quantitation limits than others. Ultimately, some laboratories may not be capable of meeting low-level measurement requirements without some effort to improve operations.

3.3.2 Censoring Measurement Results

Measurement results are often reported as less than some detection, quantitation, or reporting limit (see Section 3.2.1.3, Permit Compliance Monitoring) without providing a single best estimate for the numeric result. For example, if a direct reading of the measurement results would indicate a concentration of 3 mg/L and the reporting limit for the substance is 5 mg/L, the laboratory may only report that the measurement result is less than 5 mg/L. Statisticians call this process of suppression of results less than a specified amount "censoring." Reasons for the practice of censoring relate directly to issues surrounding the development of detection and quantitation limits, i.e., the premise that measurement results below certain low levels may not be usable for certain purposes.
[Figure 3-1. Measured concentration versus spike concentration, ammonia as nitrogen by Method 350.3 (classicals), mg/L.]
In order to evaluate low-level variability, EPA conducted a comprehensive study of results from 1/10th the MDL to concentrations into the usual quantitation range. Ten different analytical techniques were evaluated in the study (see Appendix B, Characterizing Measurement Variability as a Function of Analyte Concentration for a Variety of Analytical Techniques). Data from this study indicate that measurement results may be generated at low concentrations that are quite variable in relation to the true concentration in the sample. While this observation has not been demonstrated with every substance measured, it is suggested by plots of data from most of the measurement techniques observed in the study.

An example is ammonia as nitrogen by Method 350.3. Plotting measurement results versus concentrations spiked into reagent water samples (Figure 3-1), we see the strong relationship between measurement results and spike concentrations that would be expected. However, it is difficult to see what is going on at low concentrations in a graphic that covers measurement results over several orders of magnitude. By plotting the log of the measurement results versus the log of the spike concentrations (Figure 3-2), we would expect to see an expansion of variability at low concentrations, a contraction of variability at high concentrations, and points mostly plotted along a 45-degree line, indicating that measurement results are approximately equal to the spike concentrations. However, we only see the 45-degree line at higher concentrations. Measurement results in the lowest order of magnitude appear to have reached a plateau below which they do not go. ASTM Committee E-1 has termed the model that describes the general pattern displayed by these data the "General Analytical Error Model."
[Figure 3-2. Log-10 of measured concentration versus log-10 of spike concentration, ammonia as nitrogen by Method 350.3 (classicals), mg/L.]
Despite such evidence that results at low concentrations can be quite variable, some data users, such as modelers, prefer to use the actual measurement results (even if they are negative values), rather than reporting limits, because they believe that censoring data at a detection limit also can introduce bias into the data set. If Currie's model is true, and the true value of a sample is zero, then the frequency distribution of observations would have a mean at zero, with values above or below that mean due to the inherent uncertainty of the analytical process. If all negative or low values were eliminated, the mean would have a positive bias. In other words, while negative or extremely low values may have no meaning in the real world, they may be of value to statisticians and modelers who are handling large volumes of data.
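The positive bias described above can be illustrated with a small simulation. This is a minimal sketch under stated assumptions: normally distributed measurement error around a true concentration of zero, and the common (but not universal) convention of substituting the reporting limit for censored results.

```python
import random
import statistics

def simulate_censoring_bias(n=10000, sigma=1.0, reporting_limit=1.645, seed=1):
    """Simulate measurements of a true-zero sample and compare the mean of
    all results with the mean after results below the reporting limit are
    replaced by the reporting limit (one common substitution convention)."""
    rng = random.Random(seed)
    results = [rng.gauss(0.0, sigma) for _ in range(n)]
    uncensored_mean = statistics.mean(results)
    substituted = [r if r >= reporting_limit else reporting_limit for r in results]
    censored_mean = statistics.mean(substituted)
    return uncensored_mean, censored_mean

u, c = simulate_censoring_bias()
```

The uncensored mean stays near the true value of zero, while the substituted mean is pulled well above zero, which is exactly the positive bias the text describes.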

Some programs, such as EPA's Superfund Contract Laboratory Program, require laboratories to report the measurement result obtained from the analysis in conjunction with a qualifier that the result is below a specified detection, quantitation, or reporting level. Going back to the example in the first paragraph, the laboratory would report both a measured value of 3 mg/L and a reporting limit of 5 mg/L. Under certain assumptions, measurement results below the specified level could then be used to calculate averages and extreme value estimates that would be superior to estimates calculated using censored data. The primary assumption is that measurement results are, on average (i.e., in expectation), approximately equal to the true concentration in the sample.

EPA believes that such an approach provides the greatest degree of flexibility for data users, but also believes that it should be used with care. First, data users who choose to use values reported below a detection or quantitation limit need to have a firm understanding of the limitations of those data. Second, and as noted in Section 3.2.1.3, Permit Compliance Monitoring, reporting data below a detection or quantitation limit can lead to misinterpretation.
[Figure 3-3. Measured concentration versus spike concentration, aluminum by Method 1620 (metals), µg/L.]
3.3.3 Outliers

Outliers are extreme or aberrant measurement values that, on inspection, are not consistent with a set of data. Outliers are generated by a number of causes, such as errors in following an analytical procedure or errors in recording numerical results, or they may be the result of extreme random variation in a properly operating process. For example, if a new measurement method is being tested but the laboratory fails to follow the procedure correctly with some samples, then the associated measurement results may stand out as outliers. A graphic example is shown in Figure 3-3, which shows measurement results for aluminum by Method 1620 versus concentration. At a spike concentration of 250 µg/L, one of the measured values is about 750 µg/L and stands out visually from the rest of the values. This may be considered an outlier.

A common process for identifying potential outliers is to apply one or more statistical procedures for identifying extremely large or extremely small measurement values. An example of such a procedure is ASTM Practice D-2777. Because extreme values are expected to occur, it is not necessarily appropriate to exclude them from measurement results used to develop detection or quantitation values. As recommended in the ASTM procedure, a review of the analyst's records associated with the measurement may establish whether the extreme value was caused by failure to follow the method or by some rare event associated with the method. If the method under study was not followed, then it is appropriate to exclude the measurement result from the detection or quantitation analysis. If the measurement result is a rare event associated with the method under study, then it may also be appropriate to exclude the measurement result from the results in the study.
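As a rough illustration of a statistical screen for extreme values (this is a generic standard-deviation screen, not ASTM Practice D-2777 itself, and the data are hypothetical replicates echoing the aluminum example above), an automated flag might look like the following. Flagged values should be reviewed against the analyst's records, not automatically discarded.

```python
import statistics

def flag_outliers(values, k=3.0):
    """Flag values more than k sample standard deviations from the mean.

    A generic screen for extreme values; flagged results are candidates
    for review, not automatic exclusion."""
    mean = statistics.mean(values)
    sd = statistics.stdev(values)
    return [v for v in values if abs(v - mean) > k * sd]

# Hypothetical replicates at a 250 ug/L spike with one aberrant result.
data = [240.0, 255.0, 248.0, 252.0, 246.0, 750.0, 251.0]
outliers = flag_outliers(data, k=2.0)
```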

In large studies of detection and quantitation, it may not be economically or technically feasible to review all extreme values to determine if they are outliers. In such cases, removing all extreme values as if they were outliers may be acceptable. We recommend that the study documentation clearly state that this is the case and that the percentage of data removed be stated. Removing large percentages of extreme values may cause variability estimates to be understated, indicate that there are systematic problems with following the method, or indicate that there are problems with the procedure for determining extreme values.

Influential early work in using ranking to help identify outlying laboratories in studies was conducted by Youden (Youden, W.J., and E.H. Steiner, Statistical Manual of the Association of Official Analytical Chemists).

3.3.4 Criteria for the Selection and Appropriate Use of Statistical Models

Detection and quantitation limits may be based on statistical models of the relationship between measurement variation and the concentration of a substance in the water. Results are produced by adding varying known amounts of the substance to the water ("spiking"), making replicate measurements at each concentration, and modeling the variability of the results as a function of concentration. This section summarizes the history of modeling variability versus concentration, considers criteria for selecting models, and discusses current practices with regard to available data.

3.3.4.1 Short History of Modeling Measurement Results

Over time, a number of different models have been used to estimate measurement variation. Currie (1968) modeled variation in radiochemical measurement methods using a procedure associated with counting large numbers of distinct objects, which are appropriately modeled with the Poisson distribution. However, he relied on large sample sizes and standard normal distributions to describe all other types of measurement methods. Hubaux and Vos (1970) developed a procedure based on an estimated calibration relationship using smaller sample sizes to estimate Currie's detection and quantitation limits. Again, measurement results were assumed to follow standard normal distributions, but it was also assumed that measurement variation was constant throughout the range of interest. Similarly, Glaser et al. (1981) suggested that measurement variation increases linearly with concentration, but they did not provide estimators under this theory because they believed that measurement variation is usually approximately constant in the range of detection. Glaser et al. (1981) did suggest that, when appropriate data were available, a linear regression analysis of the relationship over the analytical range be performed. Clayton et al. (1987) discuss transforming the measurement results (using logarithms or square root functions). Gibbons et al. (1991) suggest that measurement variability may be proportional to concentration. Rocke and Lorenzato (1995) propose a model motivated by physical characteristics of measurement processes in which measurement variability is approximately constant at low concentrations but changes in a continuous mathematical manner to increasing variability as concentration increases.

Figure 3-4 shows the fundamental analytical measurement models in linear and logarithmic domains. The models are applicable to nearly all analytical measurements; we will not deal with the exceptions because they represent a small percentage of cases. As can be seen from the top two graphs, response is linear as a function of concentration in both the linear and log domains. The middle two graphs and the bottom two graphs are those most pertinent to detection and quantitation.
[Figure 3-4. Fundamental analytical measurement models: response versus concentration, standard deviation (SD) versus concentration, and relative standard deviation (RSD) versus concentration, each shown in the linear domain and the log-log domain.]
3.3.4.1.1 Detection Limits Using Variability at Low Concentrations

The middle graphs show variability versus concentration and illustrate the model postulated by Rocke and Lorenzato. The flat (constant) portion of the graph in the linear domain is difficult to see because it occurs near the origin, but it can be seen easily in the log domain. Most detection concepts (e.g., Currie's critical level and detection limit; EPA's MDL; the ACS LOD) are constructed assuming the flat (constant) region of the variability versus concentration graph, although the graph is rarely displayed (a horizontal line would be singularly uninteresting). Detection concepts such as the critical level, detection limit, LOD, and MDL are constructed by multiplying the standard deviation in the flat region by some constant.
Contention and differences of opinion occur over how to arrive at an "appropriate" standard deviation and what to do with the standard deviation once you have it. Currie's critical level and EPA's MDL use a multiple of the standard deviation in a similar manner (a t-statistic adjusted for the number of replicates for Currie's critical level; 3.14 for 7 replicates in EPA's MDL); the IDE uses an additional upward adjustment based on a statistical tolerance limit calculation.
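The MDL-style construction described above can be sketched as follows. The replicate values are hypothetical; the multiplier 3.143 is the Student's t value for 6 degrees of freedom at the 99th percentile, corresponding to the factor of 3.14 for 7 replicates cited in the text.

```python
import statistics

# Student's t value for 6 degrees of freedom at the 99th percentile,
# used in EPA's MDL procedure for 7 replicates (3.14 in the text).
T_99_6DF = 3.143

def mdl_from_replicates(replicates, t_value=T_99_6DF):
    """Compute an MDL-style detection limit as t * s, where s is the
    sample standard deviation of low-level replicate measurements."""
    if len(replicates) < 2:
        raise ValueError("need at least two replicates")
    return t_value * statistics.stdev(replicates)

# Hypothetical replicate results (mg/L) spiked near the expected MDL.
replicates = [0.12, 0.10, 0.14, 0.11, 0.13, 0.09, 0.12]
mdl = mdl_from_replicates(replicates)
```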

3.3.4.1.2 Quantitation Limits Using Standard Deviation Multiples and Models of Standard Deviation versus Concentration and RSD versus Concentration

The limit of quantitation (LOQ) advanced by Currie and the American Chemical Society's Committee on Environmental Improvement, and EPA's minimum level of quantitation (ML), result from multiplication of the standard deviation by a factor of 10, again assuming a flat portion of the variability versus concentration graph. This factor of 10 is directed at achieving a relative standard deviation (RSD) of 10 percent. An advantage of this approach is that a quantitation limit is produced regardless of what the RSD turns out to be. For example, it is known that the determination of 2,4-dinitrophenol by EPA Method 625 produces highly variable results and that 10 percent RSD cannot be achieved for this compound. Multiplying the standard deviation of replicate measurements at low levels by a factor of 10 results in a quantitation limit considerably higher than quantitation limits for other compounds analyzed by Method 625. The RSD at this quantitation limit could be 30, 50, or 70 percent. Arbitrarily limiting the quantitation limit to some value (e.g., 30% as with the ASTM IQE) could prohibit the use of EPA Method 625 for determination of 2,4-dinitrophenol. If 2,4-dinitrophenol were present at high concentration in a discharge, it would not be reported. Although it could be argued that a more precise method should be used for determination of 2,4-dinitrophenol, determination of pollutants by a large suite of multiple methods would be quite costly with little meaningful benefit. Increasing precision (i.e., decreasing measurement error) would be critical only if the concentration at issue were near a compliance limit.
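A minimal sketch of the ML-style calculation and the achieved RSD, using hypothetical replicate data, shows why the fixed multiplier of 10 always produces a limit even when the 10 percent RSD target is not met:

```python
import statistics

def minimum_level(replicates):
    """EPA ML-style quantitation limit: 10 times the standard deviation of
    low-level replicate measurements (assumes the flat region of the
    variability-versus-concentration curve)."""
    return 10.0 * statistics.stdev(replicates)

def relative_sd_percent(replicates):
    """Relative standard deviation (percent) of the replicates."""
    return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)

# Hypothetical low-level replicates (mg/L) for a fairly variable analyte.
replicates = [0.12, 0.10, 0.14, 0.11, 0.13, 0.09, 0.12]
ml = minimum_level(replicates)
rsd = relative_sd_percent(replicates)
```

For these hypothetical data the multiplier still yields a usable limit even though the RSD at the spike level is above 10 percent.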

Another means of arriving at a limiting RSD is to graph RSD versus concentration, as shown in the bottom two graphs of Figure 3-4. This approach is used by the ASTM IQE. It has the advantage that a model is fit to data, rather than relying on a point estimate such as the Currie and ACS LOD or the EPA ML. However, it requires considerably more data than concepts based on point estimates. In addition, how a model is selected can play a major role in the outcome.

3.3.4.2 Criteria for Selecting Models

Both mathematical and graphical procedures have been proposed for selecting between models for measurement results versus spike concentrations.

Mathematical Criteria

While mathematical criteria are available for choosing between models of similar types, currently available mathematical criteria are not satisfactory for choosing between the wide variety of models considered for the relationship between measurement variation and spike concentration, based on EPA's studies of measurement variation versus spike concentration. More technically, mathematical criteria include (1) the simplest model to obtain statistical significance, (2) the model with the smallest estimated variability, and (3) the model with the smallest likelihood ratio. Given the wide variety of models considered for detection and quantitation, there are problems associated with each of these procedures. Data that obviously do not follow the model may produce statistically significant results, variability may be estimated with weights that make the various estimates incomparable, and the likelihood function may not be comparable between models.
Graphical Criteria

Graphical criteria may be susceptible to some subjectivity in their application, but they are currently the best available method for choosing between models. At the most basic level, the primary graphical criterion is that the form of the model be suggested by the available data. To consider the quality of the graphical analysis, it is useful to see if some small number of data points are overly influential in determining whether a model does or does not fit. Given the greater ability of the human eye to discern deviations from a straight line than from a curved line, a useful technique is to plot the data so that they will indicate a straight line if they follow the model of interest.

3.3.4.3 Current Practices with Available Data

EPA graphed variability versus concentration data with regard to how real data from measurement methods used under the Clean Water Act would conform to a number of different models. For details of how datasets were selected and how data were collected within the datasets, see Appendix B, Characterizing Measurement Variability as a Function of Analyte Concentration for a Variety of Analytical Techniques. Five sets of composite scatter plots for all combinations of analytical technique by analyte by study were produced. These sets include:

1. Measurement versus Spike Concentration
2. Log Measurement versus Log Spike Concentration
3. Observed Standard Deviation versus Spike Concentration
4. Log Standard Deviation versus Log Spike Concentration
5. Relative Standard Deviation (RSD) versus Log Spike Concentration

There are hundreds of scatter plots in each set, sorted by source, measurement technique, and study. The first set of scatter plots can be used to evaluate how well measurement results match the spiked concentration in the water. If the assumed straight-line model were true, then the relationship outlined by the plotted data would be approximately linear. These relationships are plotted using log-log plots so that small deviations away from the line can be easily visualized. All the graphs are contained in attachments to Appendix B, which are printed on compact disc.

The plot of observed standard deviations versus spike concentrations can be used to evaluate the reasonableness of the constant variation and/or linearly increasing variability models (Currie [1968], Hubaux and Vos [1970], Glaser et al. [1981]). If the constant variability model for standard deviation were true, there would be no apparent relationship between the standard deviation and spike concentration. If the straight-line model for standard deviation were true, plots are expected to indicate an approximately linear relationship. Analogously, the plot of standard deviation divided by spike concentration versus spike concentration is expected to show a straight-line relationship when variability is proportional to the spike concentration (Gibbons et al. 1991). The log-log plots of standard deviation versus spike concentration are expected to indicate if log or square root transformations may be appropriate (Clayton et al., 1987) or to display a hockey stick relationship when it is appropriate to use the model proposed by Rocke and Lorenzato (1995). With a Rocke and Lorenzato model, variability near zero will be approximately constant, but variability will increase more strongly with spike concentration as you move to higher concentrations.
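The hockey-stick behavior can be sketched with a simplified two-component standard deviation model. This is an assumption in the spirit of Rocke and Lorenzato (1995), not their exact published parameterization; the parameter values are hypothetical.

```python
import math

def hockey_stick_sd(concentration, sigma0=0.05, gamma=0.10):
    """Simplified two-component variance model: a constant component
    sigma0 dominates near zero concentration, and a proportional
    component gamma*c dominates at high concentration.  On a log-log
    plot of SD versus concentration this is flat at low concentrations
    and a 45-degree line at high concentrations (the 'hockey stick')."""
    return math.sqrt(sigma0 ** 2 + (gamma * concentration) ** 2)

low = hockey_stick_sd(0.001)   # essentially sigma0
high = hockey_stick_sd(100.0)  # essentially gamma * concentration
```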

The large number of plots makes it difficult to draw general conclusions. For the most part, conclusions must be considered on a case-by-case basis. One somewhat general observation is that measurement variability over low concentrations does not appear to fit a pattern and thus may be considered to be approximately constant in this range for a large number of analytic techniques.
3.3.5 Methodology for Parameter Estimation

Along with the various proposed concepts of detection and quantitation and models for measurement, a number of specific estimation procedures have been proposed. Maximum likelihood and least squares are two generally applicable statistical methods that can be used in estimating model parameters. There are advantages and disadvantages to both that must be weighed in particular cases. A standard statistical practice for evaluating the quality of an estimation procedure is to calculate the precision and bias, usually best understood by examining a plot of residuals from a fit to a function. All else being equal, the estimation procedure with the best precision and least bias is preferred. In some cases, precision and bias can be calculated based on the assumptions behind the estimation procedure. In other cases, it is either necessary or convenient to estimate precision and bias using simulations. From a general theoretical perspective, the maximum likelihood estimation methodology is preferable because it generates estimates that are generally best with regard to properties of precision and bias (especially for larger sample sizes) while also being approximately normally distributed. Unfortunately, maximum likelihood can sometimes be problematic because the method requires the solution of complex equations. Least squares estimation is generally more tractable and thus is more generally applicable, although the estimates that result may not be as desirable from a theoretical statistical perspective.

What can sometimes be overlooked in considering estimation for model fitting is that direct measurement of variation of the blank or a low-level concentration may be the most cost-effective and least difficult method to implement. The loss in statistical efficiency in comparison to more elaborate estimation and model fitting methodology would be offset by the relative ease and lower cost.
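As one concrete (and hypothetical) example of least squares estimation in this setting, a power model sd = k·c^b becomes a straight line after log transformation and can be fit with ordinary least squares; the data below are synthetic, constructed so the standard deviation is exactly proportional to concentration (b = 1).

```python
import math

def fit_power_model(concentrations, sds):
    """Ordinary least squares fit of log(sd) = a + b*log(c), i.e. the
    power model sd = exp(a) * c**b.  Returns the intercept and slope
    (a, b) of the fitted line in the log-log domain."""
    xs = [math.log(c) for c in concentrations]
    ys = [math.log(s) for s in sds]
    n = len(xs)
    xbar = sum(xs) / n
    ybar = sum(ys) / n
    b = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
         / sum((x - xbar) ** 2 for x in xs))
    a = ybar - b * xbar
    return a, b

# Synthetic data with sd exactly proportional to concentration.
concs = [0.1, 0.5, 1.0, 5.0, 10.0]
sds = [0.01, 0.05, 0.10, 0.50, 1.00]
a, b = fit_power_model(concs, sds)
```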

3.3.6 False Positives and False Negatives

In this section, we discuss the impact of detection, quantitation, and reporting levels on false positive measurement results and false negative measurement results. The definitions of false positives and false negatives are easily tied into the Currie (1995) concepts of critical value and detection limit. The definitions will also be explained in terms of reporting limits, along with common errors in their usage.

Assume that a critical value was selected such that higher measurements have less than a 1% chance of being associated with a sample that does not contain the substance of interest. When a measurement result associated with a sample that does not contain the substance of interest is higher than the critical value, it is treated as if the sample does contain the substance of interest. That is a false positive measurement result, or a "false positive."

When a measurement result associated with a sample that contains the substance of interest is lower than the critical value, it is treated as if the sample did not contain the substance of interest. That is a false negative measurement result, or a "false negative." Where a false positive is generated from a known concentration (zero) with a selected probability, false negatives are usually generated from unknown concentrations with probabilities that depend on both the true unknown concentration and the critical value or reporting level. When the unknown true concentration is less than the critical value, the probability of obtaining a false negative is greater than 50%. When the unknown true concentration is equal to the critical value, the probability of obtaining a false negative is approximately 50%. When the unknown true concentration is greater than the critical value, the probability of obtaining a false negative is less than 50%. Currie (1995) defined his detection limit to be a true concentration value such that true concentrations greater than or equal to the detection limit are likely to be associated with measured concentrations that are greater than the critical value. His recommended values for "likely" are 95% or 99%.
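These probabilities can be checked with a simple simulation. This is a sketch under stated assumptions: normally distributed measurement error with constant standard deviation; the value 2.326 is the 99th percentile of the standard normal, and placing the detection limit at twice the critical value follows from Currie's construction with 1% false positive and 1% false negative rates.

```python
import random

def fraction_above(true_conc, critical_value, sigma=1.0, n=20000, seed=2):
    """Simulate n normally distributed measurements of a sample with the
    given true concentration and return the fraction exceeding the
    critical value."""
    rng = random.Random(seed)
    above = sum(1 for _ in range(n)
                if rng.gauss(true_conc, sigma) > critical_value)
    return above / n

CRITICAL = 2.326  # ~99th percentile of N(0,1): ~1% false positives

false_positive_rate = fraction_above(0.0, CRITICAL)           # blank sample
fn_at_critical = 1 - fraction_above(CRITICAL, CRITICAL)       # ~50% FN
fn_at_detection = 1 - fraction_above(2 * CRITICAL, CRITICAL)  # ~1% FN
```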
The reporting limit functions much like a critical value. When a measurement result associated with a sample that contains the substance of interest is lower than either the critical value or the reporting limit, it is treated as if the sample did not contain the substance of interest. The primary difference is that a critical value is explicitly tied to the probability of generating a false positive, but the reporting limit generally is not. However, the probability of generating a false positive with a reporting limit can often be back-calculated. For example, if the reporting limit is set equal to the detection limit used in the earlier examples, the probability of a false positive becomes somewhat less than 2 in a million. If the reporting limit is set equal to the quantitation limit used in the earlier examples, then the probability of a false positive becomes too small to be easily calculated (one estimate has been less than 1 in 10^8).

A common error in many published discussions of false negatives in relation to detection and quantitation (such as the ASTM IDE) is the claim that using Currie's detection limit as a reporting limit or action level will somehow "control" false negatives. That claim is both false and counter-productive. To illustrate the problem with this error, consider the classic scenario in which the true concentration of the substance of interest in a sample is equal to the critical value. Include the simplifying assumptions that variability is approximately normal in distribution and approximately constant throughout the region of concern. The false positive rate (alpha level) is set to 0.01, or 1%, throughout the remainder of this section.

The first part of this illustration goes much the same as it has been written in a number of different papers. Set the reporting limit equal to the critical value. Given a large number of measurements on the sample, about half of the measurement results will be reported as measured above the reporting limit and about half will be reported as measured below the reporting limit. Measurement results reported below the reporting limit are treated as if there is no substance of interest in the sample; these are false negative measurements. Many authors assert that the 50% level of false negatives is not acceptable.

We will significantly change the second part of the usual discussion by: (a) explicitly including the concept of the reporting limit and (b) not changing the reference point (the true concentration in the sample). Set the reporting limit to Currie's detection limit. Given a large number of measurements on the sample, about one percent of the measurement results will be reported as measured above the reporting limit and ninety-nine percent will be reported as measured below the reporting limit. This makes the problem of false negatives worse.

Now, to see what Currie was getting at when he defined his detection limit, set the reporting limit equal to Currie's critical value and create a sample with a true concentration equal to Currie's detection limit. Given a large number of measurements on this sample, about ninety-nine percent of the measurement results will be reported as measured above the reporting limit and one percent will be reported as measured below the reporting limit. Knowledge of the lowest true concentration that will routinely produce acceptable measurement results can then be used to determine if the measurement method meets the needs of a study. A study concerned with a wastewater treatment technology not expected to be effective at concentrations below 10 mg/L may call for a relatively inexpensive measurement method capable of detecting at that level rather than a more expensive measurement method capable of measuring a hundred times lower.

To emphasize the previous point about how counterproductive it is to move the reporting limit higher based on concerns associated with false negatives, consider the following.
Wherever
the
reporting
limit
is
set,
a
large
number
of
measurements
on
a
sample
with
a
true
concentration
equal
to
the
reporting
limit
will
produce
false
negatives
about
fifty
percent
of
the
time.
This
is
true
if
the
reporting
limit
is
set
equal
to
the
critical
value,
the
detection
limit,
a
quantitation
limit,
or
any
other
positively
valued
limit
within
the
range
of
the
measurement
method.
As
the
reporting
limit
increases
in
value,
the
probability
increases
that
a
measurement
result
associated
with
any
fixed
true
concentration
will
be
a
false
negative.
Assessment of Detection and Quantitation Concepts
Draft Document for Peer Review - Do Not Circulate - August 2002

This means that concerns regarding false positive measurement results and false negative measurement results are directly competing goals.
As
long
as
the
only
tool
for
setting
requirements
for
false
positive
and
false
negative
measurement
results
is
the
reporting
limit,
setting
the
reporting
limit
higher
reduces
the
probability
of
a
false
positive
at
the
expense
of
increasing
the
probability
of
a
false
negative.

To
push
the
argument
to
the
extreme,
setting
the
limit
to
zero
guarantees
no
false
negatives
because
no
negatives
will
be
reported
(
assuming
the
analytical
system
will
not
produce
a
result
less
than
zero).
Therefore, adjusting the limit upward to account for false negatives can only exacerbate the false negative problem.
The
solution
to
the
dilemma
is
to
set
the
limit
to
some
acceptable
false
positive
rate
at
which
false
negative
results
will
be
minimized.
If there is a concern that a pollutant will not be reported because of the false negative rate tied to the false positive limit, repetitive testing over time will eventually detect a true (not false) positive.

3.3.7
Statistical
Prediction
and
Tolerance
When
we
define
a
critical
value,
detection
limit,
or
quantitation
limit,
different
descriptive
terminology
may
suggest
changes
in
the
numerical
value
of
the
limit.
We
will
use
a
critical
value
as
an
example,
but
the
questions
motivating
detection
and
quantitation
limits
can
be
phrased
in
similar
fashion.
Do
we
want
a
critical
value
that
tells
us
how
likely
it
is
that:

1.
A
measurement
result
was
produced
by
measuring
a
blank
sample,
2.
The
next
measurement
result
will
be
produced
by
measuring
a
blank
sample,
or
3.
The
next
[
pick
any
number]
of
measurement
results
will
be
produced
by
measuring
a
blank
sample?

The
statistical
procedures
for
finding
these
answers
are
called:

1.
Percentiles;
2.
Prediction
intervals;
and
3.
Tolerance
intervals.
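Under a normal model, the three kinds of answers can be sketched numerically. The blank results, confidence levels, and proportions below are invented for illustration, and the tolerance factor uses the standard noncentral-t construction, which is an assumption of this sketch rather than a procedure from the text:

```python
import numpy as np
from scipy import stats

blanks = np.array([0.12, 0.08, 0.15, 0.11, 0.09, 0.14, 0.10, 0.13])  # hypothetical blanks
n, xbar, s = len(blanks), blanks.mean(), blanks.std(ddof=1)

# 1. Percentile: the 99th percentile of the fitted blank distribution.
p99 = xbar + stats.norm.ppf(0.99) * s

# 2. Prediction limit: a bound on the NEXT single blank measurement.
pred = xbar + stats.t.ppf(0.99, df=n - 1) * s * np.sqrt(1 + 1 / n)

# 3. Tolerance limit: a bound on at least 99% of ALL future blanks,
#    held with 95% confidence (one-sided normal tolerance factor).
k = stats.nct.ppf(0.95, df=n - 1, nc=stats.norm.ppf(0.99) * np.sqrt(n)) / np.sqrt(n)
tol = xbar + k * s

print(f"percentile {p99:.3f}  prediction {pred:.3f}  tolerance {tol:.3f}")
```

As expected, the three limits increase in that order: describing the fitted distribution is the least demanding question, and guaranteeing coverage of many future results is the most demanding.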

A
more
basic
way
to
divide
these
procedures
is
between
describing
something
that
already
exists
(
percentiles)
and
describing
a
future
occurrence
(
prediction
and
tolerance
limits).
Percentiles
are
fairly
straightforward
to
interpret,
i.
e.,
they
specify
the
percentage
of
a
distribution
that
falls
below
a
given
percentile
value.
Prediction
and
tolerance
limits
are,
in
effect,
confidence
limits
on
percentiles
and
can
be
somewhat
more
difficult
to
understand
and
apply.
There
are
many
excellent
textbook
and
literature
references
that
present
the
theory
and
application
of
tolerance
and
prediction
limits
such
as
Hahn
and
Meeker,
Statistical
Intervals,
Wiley,
1991,
and
Pratt
and
Gibbons,
Concepts
of
Nonparametric Theory, Springer-Verlag, 1981.
Hahn
and
Meeker
describe
at
length
the
different
statistical
intervals
including
their
properties,
applications
and
methodology
for
constructing
the
intervals
and
situations
in
which
the
various
intervals
are
appropriate
for
use.
Hahn
and
Meeker
also
give
examples
of
the
sort
of
applications
that
are
suitable
for
each
type
of
interval
although
the
decision
to
use
a
particular
type
of
interval
in
a
given
application
is
not
determined
strictly
by
theoretical
considerations
but
is
also
a
matter
of
judgment.
Pratt
and
Gibbons
have
an
excellent
discussion
of
tolerance
intervals
that
is
general
in
application
due
to
the
nonparametric
perspective,
i.
e.,
no
distributional
assumptions
are
required
for
the
results
to
be
valid.

3.3.7.1
Prediction
Intervals
Prediction
intervals
are
used
to
specify
intervals
that
contain
the
results
of
future
samples
from
a
previously
sampled
population.
Prediction
intervals
are
not
estimators
of
parameters
such
as
means
or
percentiles.
For
example,
prediction
intervals
may
be
constructed
to
contain
future
sampling
results
expressed
as
a
mean
or
standard
deviation
of
a
future
sample
or
all
of
a
certain
number
of
individual
future
sampling
results.
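A minimal sketch of the single-future-observation case, using the usual normal-theory formula (the past results below are invented):

```python
import numpy as np
from scipy import stats

x = np.array([4.9, 5.3, 5.1, 4.8, 5.4, 5.0, 5.2])  # hypothetical past results
n, xbar, s = len(x), x.mean(), x.std(ddof=1)

# Two-sided 95% prediction interval for a single future measurement:
t = stats.t.ppf(0.975, df=n - 1)
half = t * s * np.sqrt(1 + 1 / n)
lo, hi = xbar - half, xbar + half
print(f"95% prediction interval: ({lo:.2f}, {hi:.2f})")

# For the MEAN of m future measurements, replace (1 + 1/n) with (1/m + 1/n),
# which gives a shorter interval:
m = 3
half_mean = t * s * np.sqrt(1 / m + 1 / n)
```

The second formula corresponds to the case mentioned above of predicting a summary statistic of a future sample rather than one individual result.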

3.3.7.2
Tolerance
Intervals
Tolerance
intervals
are
used
primarily
to
construct
intervals
that
are
intended
to
enclose
or
contain
a
specified
proportion
of
a
population.
In
statistical
terms,
tolerance
intervals
are
intervals
that
contain
a
specified
proportion
of
a
population
of
measured
values
with
a
given
statistical
confidence
level.
For
example,
we
say
that
a
proportion
P
of
a
population
is
contained
within
the
interval
(
L1,
L2)
with
(1 - α)100% confidence.
The
lower
and
upper
ends
of
the
interval,
L1
and
L2
,
respectively,
are
referred
to
as
tolerance
limits.
A
tolerance
interval
is
therefore
an
interval
of
random
length
that
is
determined
on
the
basis
of
having
a
specified
probability
of
1 - α
that
its
coverage
of
the
population
is
at
least
equal
to
a
specified
value
P.
The
quantity
1 - α
is
referred
to
as
the
confidence
level
for
the
interval
and
P
is
the
minimum
proportion
of
the
population
contained
in
the
interval.
Tolerance limits are not estimators of values such as a mean or a percentile, but rather values that are guaranteed, at some level of statistical confidence, to be either greater than or less than the desired value.
Pratt and Gibbons discuss this and other properties that affect the utility of tolerance limits and create difficulties in their interpretation and application.
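Pratt and Gibbons' nonparametric perspective can be made concrete: the coverage of the interval between the sample minimum and maximum follows a Beta(n-1, 2) distribution, with no assumption about the population's shape. This sketch (not a procedure from the text) asks how many measurements are needed for those two order statistics to serve as tolerance limits:

```python
from scipy import stats

def np_tolerance_confidence(n, P):
    """Confidence that the interval (sample min, sample max) contains at
    least a proportion P of the population; coverage ~ Beta(n-1, 2)."""
    return 1 - stats.beta.cdf(P, n - 1, 2)

# Smallest sample size giving 95% confidence that (min, max) covers 90%:
n = 2
while np_tolerance_confidence(n, 0.90) < 0.95:
    n += 1
print(n)   # → 46
```

Results like this illustrate the interplay, noted below, between sample size, the percentile covered, and the confidence level: high coverage at high confidence can demand far more data than a routine study provides.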

In
effect,
the
determination
of
what,
if
any,
interval
to
use
is
a
policy
decision.
The
choice
of
which
kind
of
interval
to
use
should
consider
how
easy
it
is
to
estimate
the
interval
you
want
under
the
conditions
that
exist.
As
Pratt and Gibbons
point
out,
the
interpretation
of
tolerance
intervals
(
and
analogously,
prediction
intervals)
can
be
problematic
especially
when
issues
of
sample
size
and
choice
of
confidence
level
come
into
play.
They
cite
examples
where
the
interplay
of
sample
size
and
high
percentile
and
confidence
level
make
tolerance
limits
useless.

Use
of
Tolerance
and
Prediction
in
Setting
Detection
and
Quantitation
Levels
Statistical intervals can be, and by a number of authors have been, adapted for use in setting detection and quantitation levels.
The
basic
approach
requires
a
functional
definition
of
detection
or
quantitation
that
includes
a
statistical
term
or
terms.
An
interval
could
then
be
constructed
about
the
statistical
term
which
could
be
used
to
assess
the
detection
or
quantitation
level
or
make
an
adjustment
to
a
calculated
value
that
would
result
in
the
detection
or
quantitation
level.
For
example,
most
detection
level
estimators
are
functionally
dependent
on
an
estimate
of the standard
deviation
of
measurement
error.
A
statistical
interval
could
be
constructed
about
the
standard
deviation
and
the
length
of
the
interval
could
be
used
to
assess
the
detection
level.
The
end
points
of
the
interval
could
be
used
as
the
basis
for
an
adjustment
(
upward
or
downward)
in
the
calculated
level.
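The standard-deviation example can be sketched as follows. The replicate values are invented, and the t-multiplier form of the detection level is one common functional definition assumed here for illustration:

```python
import numpy as np
from scipy import stats

reps = np.array([0.52, 0.44, 0.61, 0.49, 0.55, 0.47, 0.58])  # hypothetical replicates
n = len(reps)
s = reps.std(ddof=1)

# Detection level as a Student's t multiplier times the estimated
# standard deviation (one common functional definition):
t99 = stats.t.ppf(0.99, df=n - 1)
dl = t99 * s

# 95% confidence interval on the true standard deviation (chi-square based):
lo = s * np.sqrt((n - 1) / stats.chi2.ppf(0.975, n - 1))
hi = s * np.sqrt((n - 1) / stats.chi2.ppf(0.025, n - 1))

# The interval endpoints give a corresponding band on the detection level,
# which could support an upward or downward adjustment:
print(f"s = {s:.3f}, DL = {dl:.3f}, DL band = ({t99 * lo:.3f}, {t99 * hi:.3f})")
```

The width of the band reflects how precisely the standard deviation, and hence the detection level, has been estimated from the available replicates.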

However,
the
use
of
prediction
and/
or
tolerance
limits
in
setting
detection
and
quantitation
limits
is
not
an
absolute
requirement
and
should
be
evaluated
in
the
context
of
specific
applications
and
policy
considerations.
In
practice,
the
effect
of
adjustment
of
detection
and
quantitation
limits
by
use
of
prediction
and
tolerance
intervals
can
be
quite
large,
depending
on
the
amount
of
available
data
and
the
choices
of
percentiles
and
confidence
levels.

3.3.8
Design
of
Detection
and
Quantitation
Studies
The issues associated with the design of detection and quantitation studies include:

° how well a selection of spike concentrations can be used to differentiate between different models for the relationship between measurement results and spike concentrations;
° how the distance between spike concentrations can impact estimates of detection and quantitation limits;
° how to reduce the influence of uncontrollable factors in the measurement process (probability design);
° how complete to make the design factors in terms of the physical measurement process; and
° how flexible to make the design factors in terms of the physical measurement process.

3.3.8.1
Spike
Concentrations
and
Modeling
If a model under consideration cannot be described by the number of spike concentrations in the design, then it is not possible to tell whether the model is appropriate.
To
take
the
simplest
example,
it
is
not
possible
to
describe
the
slope
of
a
line
associated
with
linearly
increasing
variation
from
a
single
spike
concentration.
Two well-spaced spike concentrations would allow you to estimate a slope, but would provide no indication of the variability of the estimate.
Three well-spaced spike concentrations are the minimum requirement for estimating the linear relationship and the variability of that relationship.

Clayton et al. (1987) describe
the
relationship
between
the
spread
of
the
spike
concentrations,
the
number
of
spike
concentrations,
and
the
number
of
replicate
measurements
with
regard
to
estimated
variability
when
a
linear
model
is
used.
While
the
specific
equation
used
here
does
not
apply
to
all
models,
it
indicates
principles
that
do
apply.
Increasing
the
number
of
replicate
measurements,
increasing
the
number
of
spike
concentrations,
and
reducing
the
spread
of
the
spike
concentrations
are
all
expected
to
reduce
estimated
variability
along
with
the
associated
detection
and
quantitation
limits.
However,
one
of
the
components
of
variability
associated
with
detection
and
quantitation
is
that
associated
with
estimating
the
calibration
relationship.
To
account
for
this
source
of
variation,
it
may
be
appropriate
to
cover
the
entire
calibration
range.
On
the
other
hand,
many
replicates
at
a
high
concentration
may
improperly
weight
the
data
in
favor
of
high
detection
and
quantitation
estimates.
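The dependence of calibration precision on the design can be seen directly in the ordinary-least-squares slope variance, σ²/Σ(xᵢ − x̄)². The spike levels below are invented for illustration; this sketches only the slope-variance aspect of the design principles, not the full Clayton et al. treatment:

```python
import numpy as np

def slope_variance(x, sigma=1.0):
    """Variance of the OLS slope estimate for design points x."""
    x = np.asarray(x, float)
    return sigma**2 / np.sum((x - x.mean())**2)

narrow = [1, 2, 3]          # three closely spaced spike levels
wide   = [1, 5, 9]          # same number of levels, spread out
triple = [1, 2, 3] * 3      # the narrow design with triplicate spikes

for name, d in [("narrow", narrow), ("wide", wide), ("triplicate", triple)]:
    print(f"{name:10s} Var(slope) = {slope_variance(d):.4f}")
```

Replication and the placement of the spike levels both change Σ(xᵢ − x̄)², and with it the precision of every quantity, such as a detection or quantitation estimate, derived from the fitted calibration.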

3.3.8.2
Probability
Design
The
process
known
as
randomization
is
fundamental
to
the
design
and
proper
interpretation
of
experimental
studies.
This
involves
the
allocation
of
experimental
units
to
factors
and
treatments
under
study
according to
a
design
determined
by
probability.
Randomization
ensures
the
inferential
validity
of
the
results
by
avoiding
bias
and
systematic
errors
that
can
occur
in
studies
where
proper
randomization
is
not
used.
In
studies
of
measurement
methods,
randomization
should
be
used
in
the
process
of
creating
spike
concentration
solutions
and
the
ordering
of
analyses.
A
classic
text
is
Statistics
for
Experimenters
by
Box,
Hunter
and
Hunter,
Wiley,
1978.
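In a measurement study, randomizing the run order is the simplest application of this principle: it keeps instrument drift or carry-over from being confounded with spike concentration. A minimal sketch (the spike levels are invented):

```python
import random

spike_levels = [0.0, 0.5, 1.0, 2.0, 5.0]   # hypothetical concentrations, mg/L
replicates = 3
runs = [(lvl, r) for lvl in spike_levels for r in range(replicates)]

# Randomize the analysis order so that systematic effects over time are
# spread across all spike concentrations rather than aligned with them:
random.seed(1)              # fixed seed only to make the example reproducible
random.shuffle(runs)
for lvl, r in runs[:5]:
    print(f"analyze spike {lvl} mg/L (replicate {r + 1})")
```

The same idea applies to the order in which spiking solutions are prepared.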

3.3.8.3
Completeness
The
physical
measurement
process
can
be
studied
using
rough
approximations
or
it
can
be
studied
more
rigorously.
A
rough
approximation
could
physically
use
the
available
components
of
a
method
as
applied
to
convenient
samples.
A
more
rigorous
study
would
use
a
complete,
specific,
and
well­
defined
measurement
method
with
all
sample
processing
steps.
The
appropriate
level
of
study
will
probably
depend
on
the
purpose
of
the
study.

Measurement
procedures
(
methods)
may
be
more
or
less
strictly
designed.
Variability
in
what
is
allowed
in
the
procedures
may
add
to
variability
in
the
measurement
results.
To
the
extent
that
permutations
of
a
method's
procedures
are
not
expected
to
be
used
in
a
particular
detection
or
quantitation
study,
EPA
recommends
that
this
information
be
included
in
the
report
on
the
study
results.
While
there
may
be
physical
reasons
for
extrapolating
the
results
of
a
variability
study
on
one
set
of
procedures
to
permutations
of
those
procedures,
there
is
no
statistical
basis
for
making
such
an
extrapolation.
Statistical
theory
by
itself
is
only
able
to
describe
conditions
that
have
been
observed.
On
the
other
hand,
a
knowledge
of
the
underlying
physics
of
the
measurement
process
can
guide
the
completeness
of
the
modeling
process
when
statistical
procedures
fail.
For example, the Rocke and Lorenzato model, in either the linear or the log-log domain, is the best characterization of a physical measurement process. Therefore, this model can be forced through the data to produce a complete answer when statistical procedures fail to deduce the "correct" model.
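The Rocke and Lorenzato two-component model combines an additive error that dominates near zero with a proportional error that dominates at high concentration, giving (to first order) a response variance of σ_ε² + (βcσ_η)². The parameter values below are invented, and the formula is the small-σ_η approximation rather than the exact variance:

```python
import numpy as np

def rocke_lorenzato_sd(c, beta=1.0, sig_eps=0.05, sig_eta=0.08):
    """Approximate response SD under the two-component model:
    constant additive error near zero, proportional error at high levels."""
    return np.sqrt(sig_eps**2 + (beta * c * sig_eta)**2)

for c in [0.0, 0.1, 1.0, 10.0]:
    print(f"c = {c:5.1f}  sd = {rocke_lorenzato_sd(c):.4f}")
```

Near zero the standard deviation is essentially the additive term, while at high concentration it is essentially proportional to concentration, which is the physical behavior the model is meant to capture.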
2 Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993)

Chapter
4
Evaluation
Criteria
This
chapter
presents
criteria
developed
by
EPA
as
a
means
for
selecting
acceptable
detection
and
quantitation
limit
concepts
for
use
in
Clean
Water
Act
(
CWA)
programs.
These
criteria
reflect
EPA's
careful
consideration
of
the
issues
identified
and
discussed
in
Chapter
3.

4.1
Criterion
1
Criterion
1:
The
detection
and
quantitation
limit
concepts
must
be
scientifically
valid.

Scientific
validity
is
widely
accepted
but
loosely
defined.
For
the
purposes
of
this
evaluation,
a
detection/
quantitation
concept
or
methodology
will
be
considered
scientifically
valid
if
it
meets
the
following
conditions:

°
It
can
be
(
and
has
been)
tested
°
It
has
been
subjected
to
peer
review
and
publication
°
The
error
rate
associated
with
the
concept
or
methodology
is
either
known
or
can
be
estimated
°
Standards
exist
and
can
be
maintained
to
control
its
operation
(
i.
e.,
it
is
supported
by
well­
defined
procedures
for
use)
°
It
has
attracted
(
i.
e.,
achieved)
widespread
acceptance
within
a
relevant
scientific
community.

While
EPA
acknowledges
that
other
measures
could
be
established
to
demonstrate
scientific
validity,
EPA
has
adopted
the
conditions
cited
because
they
reflect
those
established
by
the
U.
S.
Supreme
Court
as
considerations
pertaining
to
assessments
of
scientific
validity
when
considering
the
admissibility
of
expert
scientific
testimony.
2
EPA
believes
that
considerations
established
by
the
Court
as
necessary
to
demonstrate
the
scientific
validity
of
an
expert's
reasoning
or
methodology
are
equally
valid
for
demonstrating
the
scientific
validity
of
a
detection/
quantitation
concept.

4.2
Criterion
2
Criterion
2:
The
concepts
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

As
discussed
in
Chapter
3
of
this
TSD,
the
detection
and
quantitation
limit(
s)
for
an
analyte
in
an
analytical
method
can
be
established
from
a
single­
laboratory
study,
multiple
single­
laboratory
studies,
or
an
interlaboratory
study.
Historical
methods
developed
by
EPA
under
the
Clean
Water
Act
programs,
and
nearly
all
methods
developed
by
EPA
under
the
Safe
Drinking
Water
Act
programs,
were
developed
by
EPA's
research
laboratory
in
Cincinnati,
Ohio.
In
the
course
of
method
development,
this
single
laboratory
established
detection
and
quantitation
limits.
In
many
instances,
these
detection
and
quantitation
limits
were
unrealistic,
in
that
they
were
unachievable
in
many
non­
research
laboratories.
However,
with
time,
laboratory,
method,
and
analytical
instrumentation
improved,
making
detection
and
quantitation
limits
more
easily
achievable
in
nearly
all
laboratories.
Therefore,
the
difficulty
created
was
in
initial
application
of
the
research
methods.
In
recent
years,
EPA's
Office
of
Science
and
Technology
has
used
single­
laboratory
studies
to
develop
an
initial
estimate
of
detection
and
quantitation
limit
for
a
new
or
modified
method,
and
has
verified
this
limit
in
interlaboratory
studies
or
by
conducting
additional
single
laboratory
studies
in
other
laboratories.

Voluntary
consensus
standards
bodies
(
VCSBs)
such
as
the
American
Society
for
Testing
and
Materials
(
ASTM)
have
historically
used
interlaboratory
studies
to
establish
method
performance.
Over
the
past
5
to
10
years,
ASTM
has
been
developing
interlaboratory
and
single­
laboratory
concepts
for
detection
and
quantitation.
Whereas
the
single­
laboratory
studies
at
EPA's
research
laboratory
in
Cincinnati
produce
the
lowest
detection
and
quantitation
limits,
concepts
such
as
those
published
by
ASTM
gather
all
sources
of
variability
to
produce
the
highest
detection
and
quantitation
limits.
A
realistic
expectation
of
method
and
laboratory
performance
likely
lies
somewhere
in
between.

As
noted
in
Section
3.2.2
of
this
TSD,
laboratory
and
method
performance
can
be
affected
by
the
use
of
performance
criteria
that
serve
as
prediction
or
tolerance
levels.
Examples
of
such
criteria
include
measures
to
demonstrate
that
a
laboratory
is
producing
accurate
results
at
a
concentration
of
interest
(
i.
e.,
analysis
of
reference
standards
or
spiked
samples),
measures
to
demonstrate
that
results
are
not
biased
by
contamination
(
i.
e.,
analysis
of
blanks),
and
measures
to
demonstrate
that
the
laboratory
can
achieve
the
sensitivity
required
to
reliably
detect
pollutants
at
low
concentrations
(
i.
e.,
at
the
detection
limit).
It
is
likely
that
laboratory
performance
will
be
better
(
and
variability
will
be
lower)
when
laboratories
are
required
to
meet
specified
performance
criteria
in
order
to
report
results.

A
further
consideration
concerning
routine
variability
is
the
means
for
rejection
of
outliers.
Mathematicians
and
statisticians
are
very
reluctant
to
remove
any
data;
chemists
performing
routine
analyses
know
that
outliers
occur.
Therefore,
if
a
statistical
procedure
is
used
to
establish
a
detection
or
quantitation
limit,
the
issue
of
outliers
must
be
addressed,
and
some
means
of
resolving
outlier
issues
must
be
included.
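One widely used formal procedure for the outlier question is Grubbs' test for a single suspect value, sketched here with invented data; this is offered as an illustration of the kind of resolution mechanism meant above, not as the document's prescribed procedure:

```python
import numpy as np
from scipy import stats

def grubbs_statistic(x):
    """G = max |x_i - mean| / s, the Grubbs single-outlier statistic."""
    x = np.asarray(x, float)
    return np.max(np.abs(x - x.mean())) / x.std(ddof=1)

def grubbs_critical(n, alpha=0.05):
    """Two-sided critical value for Grubbs' test at significance alpha."""
    t = stats.t.ppf(1 - alpha / (2 * n), n - 2)
    return (n - 1) / np.sqrt(n) * np.sqrt(t**2 / (n - 2 + t**2))

data = [4.9, 5.1, 5.0, 5.2, 4.8, 7.9]        # 7.9 is the suspect value
G = grubbs_statistic(data)
print(G > grubbs_critical(len(data)))        # True: 7.9 is flagged
```

A documented rule of this kind gives laboratories and statisticians a common, defensible basis for removing (or retaining) a suspect result before the detection or quantitation limit is computed.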

In
examining
each
concept
against
this
criterion,
EPA
will
evaluate
if
the
concept
can
be
used
to
provide
a
realistic
expectation
of
laboratory
performance.
As
part
of
this
assessment,
EPA
will
examine
the
sources
of
variability
captured
by
the
concept,
and
the
degree
to
which
the
statistics
that
underlie
the
concept
realistically
reflect
these
sources
of
variability.

4.3
Criterion
3
Criterion
3:
The
concepts
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

Any
concept
or
procedure
should
be
simple,
complete,
and
cost­
effective
to
implement.
The
laboratories
that
can
be
expected
to
use
detection/
quantitation
procedures
will
range
from
large
laboratories
and
laboratory
chains
with
a
wide
range
of
technical
capability
to
"
mom
and
pop"
laboratories
operated
by
one
or
a
few
people
with
a
limited
set
of
statistical
skills.
If a procedure is complicated, it will be error-prone in its use.
Similarly,
if
a
procedure
requires
investment
of
extensive
resources
that
cannot
be
billed
to
the
client,
laboratories
will
have
a
disincentive
to
use
the
procedure.
Therefore,
if
the
Agency
wishes
to
encourage
the
development
and
use
of
innovative
techniques
that
improve
measurement
performance
or
lower
measurement
cost,
the
Agency
must
consider
practicality
and
affordability
as
significant,
if
not
co­
equal,
considerations
to
scientific
validity.

After
evaluating
each
of
the
issues
discussed
in
Chapter
3
of
this
document,
EPA
has
concluded
that
successful
implementation
of
CWA
programs
depends
on
the
ability
of
laboratories
to
easily
and
affordably
1)
demonstrate
that
a
method
works
in
a
particular
matrix
at
levels
of
concern,
2)
characterize
improvements
in
measurement
capabilities
in
terms
of
measurement
sensitivity,
and
3)
characterize
the
sensitivity
of
new
methods.

A
matrix
effect
is
an
interference
in
a
measurement
caused
by
substances
or
materials
in
the
sample
other
than
the
analyte
of
interest
and
that
are
not
removed
using
the
procedures
in
the
method
or
other
commonly
applied
procedures.
In
the
context
of
detection
and
quantitation,
matrix
effects
manifest
themselves
by
precluding
measurements
at
levels
as
low
as
could
be
measured
were
the
interference
not
present.
From
a
practical
perspective,
it
is
not
possible
to
test
the
sensitivity
of
new
methods
in
every
possible
matrix
in
which
it
may
be
used.
At
a
minimum,
it
is
unlikely
that
EPA
or
any
other
organization
could
possibly
identify
and
obtain
samples
of
every
matrix
to
which
the
method
might
be
applied,
and
even
if
such
a
feat
were
possible,
the
cost
and
logistics
of
doing
so,
would
be
prohibitive.
Therefore,
EPA
prefers
to
identify
a
concept
that
allows
for
characterization
of
measurement
sensitivity
in
representative
matrices
and
is
supported
by
a
simple, cost-effective procedure that would allow individual laboratories to evaluate, on an as-needed basis,
the
effects
of
specific
matrices
on
measurement
sensitivity.

The
reality
of
environmental
analysis
is
that
measurement
capabilities
improve
over
time.
This
is
attributable
to
a
variety
of
factors,
including
(
1)
increased
staff
experience
with
a
given
technique,
(
2)
technological
upgrades
or
improvements
in
the
instrumentation
used
for
analysis,
and
(
3)
development
of
new
instrumentation
or
techniques
that
improves
sensitivity,
precision,
or
bias.
In
each
case,
the
improvements
may
not
be
observed
across
the
entire
laboratory
community.
In
the
case
of
increased
staff
experience,
for
example,
it
is
obvious
that
a
laboratory
that
specializes
in
one
type
of
analysis,
such
as
low
level
mercury
measurements,
will
develop
greater
experience
than
a
laboratory
that
rarely
performs
this
measurement.
Likewise,
it
is
easy
to
see
how
one
or
a
few
laboratories
that
concentrate
their
business
on
a
particular
type
of
analysis
might
be
willing
to
invest
significant
resources
in
new
or
upgraded
equipment
to
improve
performance
whereas
laboratories
that
rarely
perform
such
analyses
would
not
find
such
upgrades
to
be
cost-effective.

Improvements
in
measurement
capability,
including
the
development
of
new
methods,
may
create
a
dynamic
decision­
making
process,
in
that
measurements
at
lower
levels
may
allow
EPA
and
States
to
identify
previously
undetected
pollutants
that
are
impairing
the
water
body.
Such
situations
offer
a
means
for
monitoring
and
controlling
(
i.
e.,
regulating)
the
discharge
of
previously
unregulated
but
harmful
pollutants.
It
is
in
the
best
interest
of
the
environment
for
EPA
to
encourage
the
development
and
use
of
improved
environmental
analysis
procedures
and
equipment.

In
evaluating
this
criterion,
EPA
will
favor
affordable
and
easy­
to­
use
procedures
that
allow
analysts
in
a
single
laboratory
to
1)
determine
matrix­
specific
variations
based
on
realistic
data
and
2)
demonstrate
lower
detection
and
quantitation
limits
associated
with
improvements
in
their
measurement
capabilities.
Procedures
for
establishing
the
sensitivity
of
new
methods
or
improved
measurement
capabilities
must
be
practical
enough
to
encourage
such
development.
These
procedures
should
specify
the
spiking
level
at
which
measurements
are
to
be
made
and
the
corrective
action
to
be
taken
if
the
resulting
detection
or
quantitation
limit
is
inconsistent
with
the
data
from
which
it
is
derived.

4.4
Criterion
4
Criterion
4:
The
detection
level
concept
should
identify
the
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.
Any
detection
limit
concept
should
be
capable
of
providing
regulators,
the
regulated,
and
data
users
with
confidence
that
a
pollutant
reported
as
being
present
really
is
present.
Historically,
nearly
every
detection
limit
concept
has
set
the
criterion
for
detection
at
99
percent
confidence;
i.
e.,
the
lowest
level
at
which
a
pollutant
will
be
detected
with
a
probability
of
99
percent.
This
criterion
results
in
the
probability
of
a
false
positive;
i.
e.,
that
a
pollutant
will
be
stated
as
being
present
when
it
actually
is
not
(
a
Type
I
error),
of
one
percent;
and
the
probability
of
a
false
negative;
i.
e.,
that
a
pollutant
will
be
stated
as
being
not
present
when
it
actually
is
(
a
Type
II
error),
of
50
percent.
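Assuming a normal measurement model with known σ (an illustration, not the document's procedure; σ here is arbitrary), the 1-percent and 50-percent figures follow directly:

```python
from scipy import stats

sigma = 1.0
z99 = stats.norm.ppf(0.99)
Lc = z99 * sigma              # critical value used as the reporting threshold
Ld = 2 * z99 * sigma          # detection limit: level detected 99% of the time

# Type I error: a blank (true concentration 0) exceeds the threshold.
p_false_pos = 1 - stats.norm.cdf(Lc, loc=0.0, scale=sigma)

# Type II error at the detection limit: a sample at Ld falls below Lc.
p_false_neg_at_Ld = stats.norm.cdf(Lc, loc=Ld, scale=sigma)

# Type II error for a sample whose true concentration equals Lc itself:
p_false_neg_at_Lc = stats.norm.cdf(Lc, loc=Lc, scale=sigma)

print(p_false_pos, p_false_neg_at_Ld, p_false_neg_at_Lc)  # ≈ 0.01, 0.01, 0.50
```

The 50-percent Type II error arises only when the same value serves as both the detection criterion and the true concentration of interest; at the detection limit itself, the false negative rate matches the 1-percent false positive rate.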

A
common
error
in
some
detection
limit
concepts,
such
as
the
IUPAC
detection
limit
and
the
ASTM
interlaboratory
detection
estimate
(
IDE)
is
the
claim
that
a
detection
limit
can
somehow
be
used
to
"
control"
false
negatives.
That
claim
is
untrue
in
the
context
in
which
detection
limit
is
used
as
a
reporting
level,
as
it
is
in
reporting
data
to
the
Agency.
If
you
want
to
find
a
procedure
with
the
lowest
possible
probability
of
a
false
negative,
then
select
the
critical
value
or
reporting
limit
based
on
what
you
can
accept
for
false
positives
and
that
limit
will
have
the
lowest
probability
of
a
false
negative
associated
with
your
selected
false
positive
rate.

In
evaluating
this
criterion,
EPA
will
favor
procedures
that
reflect
routine
analytical
conditions
in
a
well­
operated
laboratory.
For
example,
the
procedure
must
be
capable
of
arriving
at
a
detection
limit
when
the
substance
of
interest
is
not
found
in
a
blank
and/
or
when
instrument
thresholds
are
adjusted
for
routine
operation.

4.5
Criterion
5
Criterion
5:
The
quantitation
limit
concept
should
identify
a
concentration
at
which
the
reliability
of
the
measured
results
is
consistent
with
the
capabilities
of
the
method
when
a
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

Measurement
capabilities
among
laboratories
vary
depending
on
a
number
of
factors,
including,
but
not
limited
to,
instrumentation,
training,
and
experience.
Similarly,
measurement
capabilities
among
different
analytical
methods
vary
depending
on
a
number
of
factors,
including
the
techniques
and
instrumentation
employed
and
the
clarity
of
the
method
itself.

Historical
approaches
to
recognizing
laboratory
capabilities
in
establishing
detection
and
quantitation
limits
have
varied
between
two
extremes
of
establishing
the
limit
in
a
state-of-the-art
research
laboratory
to
reflect
the
lowest
possible
limit
that
can
be
achieved,
and
establishing
the
limit
based
on
statistical
prediction
intervals
calculated
from
a
large
number
of
laboratories
with
varying
levels
of
experience,
instrumentation
and
competence.
Generally,
use
of
the
former
has
been
employed
to
serve
as
a
goal
or
performance
standard
to
be
met
by
other
laboratories,
whereas
use
of
the
latter
treats
the
limit,
not
as
a
performance
standard
that
needs
to
be
met
by
each
laboratory,
but
rather
as
a
characterization
of
the
future
performance
of
the
entire
universe
of
laboratory
capabilities
at
the
time
of
method
development.

Historical
approaches
to
recognizing
method
capabilities
also
have
varied
between
those
that
allow
the
error
(
relative
standard
deviation,
or
RSD)
among
low
level
measurements
to
vary,
depending
on
the
capabilities
of
the
method
and
those
that
fix
this
error
(
RSD)
at
a
specific
level.

EPA
will
evaluate
various
concepts
against
this
criterion
by
examining
the
ease
of
adjustment
of
the
RSD
or
other
performance
measure
in
the
context
of
the
measurement
capability
of
the
laboratory
or
the
need
to
adjust
the
measurement
error
to
allow
for
environmental
decisions.
In
evaluating
the
concepts,
EPA
will
give
preference
to
those
concepts
that
strike
a
reasonable
balance
between
using
state-of-the-art
laboratories
and
a
highly
varied
community
of
laboratories
to
establish
quantitation
limits.

4.6
Criterion
6
Criterion
6:
Detection
and
quantitation
concepts
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act,
and
should
support
state
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

The
Clean
Water
Act
requires
EPA
to
conduct,
implement,
and
oversee
a
variety
of
data
gathering
programs.
As
noted
in
Section
3.2
of
this
TSD,
these
programs
include,
but
are
not
limited
to:

°
Survey
programs
to
establish
baselines
and
monitor
changes
in
ambient
water
quality;
°
Screening
studies
to
identify
emerging
concerns
and
establish
the
need
for
more
in­
depth
assessment;
°
Effluent
guideline
studies
to
establish
technology­
based
standards
for
the
control
of
pollutants
in
wastewater
discharges;
°
Toxicity
and
environmental
assessment
studies
to
establish
water
quality­
based
standards
for
the
control
of
pollutants
in
wastewater,
and
°
Risk
assessment
studies
designed
to
characterize
and
evaluate
human
health
and
environmental
risks
associated
with
various
water
body
uses.

In
addition,
EPA
needs
to
apply
detection
to
permitting,
compliance
monitoring,
and
other
uses
of
the
40
CFR
part
136
methods.
These
applications
include:

° Permitting;
° Ambient and effluent compliance monitoring under NPDES and the pretreatment program;
° Ambient and effluent compliance monitoring under State and local programs;
° Quality control in analytical laboratories; and
° Method promulgation.

In
theory,
EPA
could
evaluate
each
of
these
applications
independently
and
identify
a
detection
and
quantitation
limit
concept
that
is
best
suited
to
each
application.
Such
an
approach
could
potentially
result
in
the
need
for
up
to
10
different
detection
and/
or
quantitation
limit
concepts.
EPA
believes
that
such
an
approach
would
increase
confusion,
increase
record
keeping
burdens,
and
increase
laboratory
testing
burdens.
For
these
reasons,
EPA
believes
it
is
desirable
to
adopt
a
single
pair
of
related
detection
and
quantitation
procedures
that
can
be
used
to
address
all
Clean
Water
Act
applications.

EPA
also
believes
that
1)
it
is
unrealistic
to
expect
other
organizations,
such
as
the
U.
S.
Geological
Survey,
the
Food
and
Drug
Administration,
ASTM­
International,
AOAC,
etc.,
to
adopt
and
standardize
on
the
concept
selected
by
EPA
for
its
use
in
CWA
programs,
and
2)
it
is
desirable
to
allow
use
of
concepts
and
methods
developed
by
these
and
other
organizations
to
be
used
in
CWA
programs.
The
inclusion
of
such
concepts
and
methods
provides
the
stakeholder
community
with
increased
measurement
options
that
may
help
reduce
measurement
costs
or
improve
measurement
performance
for
specific
situations.
This
approach
is
consistent
with
EPA's
movement
to
a
performance­
based
measurement
system
(
PBMS)
and
with
the
intent
of
the
National
Technology
Transfer
and
Advancement
Act
(
NTTAA).
So,
although
EPA
prefers
to
identify
and
adopt
a
single
pair
of
detection
and
quantitation
limit
concepts
that
can
meet
CWA
needs,
EPA
also
believes
that
any
concept
should
be
acceptable
for
use
if
it
meets
all
of
the
criteria
established
above
and
fulfills
the
needs
of
the
specific
CWA
application
in
which
it
should
be
used.
The
Clean
Water
Act
authorizes
State
or
local
governments
to
implement
specific
aspects
of
the
Act,
with
the
proviso
that
they
do
so
in
a
way
that
is
at
least
as
protective
(
i.
e.,
stringent)
as
the
national
standards
put
forth
by
EPA.
Therefore,
this
criterion
is
intended
to
ensure
that
any
detection
and
quantitation
limit
concept
adopted
by
the
Federal
Government
is
sufficiently
clear
and
defined
that
it
allows
for
comparison
with
concepts
adopted
by
State
or
local
governments.
It
is
important
to
note
that
this
criterion
does
not
establish
the
need
for
a
concept
or
procedure
that
is
less
stringent
than
those
already
in
use
by
State
or
local
governments.

Finally,
it
is
important
to
differentiate
between
detection
and
quantitation
limit
concepts
and
compliance
evaluation
thresholds.
Detection
and
quantitation
limit
concepts
pertain
to
measurement
process
thresholds.
More
specifically,
a
detection
limit
describes
the
lowest
concentration
at
which
it is
to
reliably
determine
that
a
substance
is
present,
and
a
quantitation
limit
describes
the
lowest
concentration
at
which
it
is
possible
to
reliably
quantify
the
amount
of
a
substance
that
is
present.
In
contrast,
compliance
evaluation
thresholds
are
used
to
support
wastewater
discharge
limits
established
in
National
Pollutant
Discharge
Elimination
System
(
NPDES)
or
pretreatment
program
permits.
Such
limits
are
usually
expressed
as
either
a
maximum
concentration
of
pollutant
allowed
in
the
discharge
or
a
maximum
mass
of
pollutant
allowed
to
be
discharged
in
a
specific
time
period.

Ideally,
analytical
methods
are
available
to
allow
for
detection
and
quantitation
of
pollutants
at
concentrations
that
are
lower
than
the
discharge
levels
needed
to
protect
or
restore
the
quality
of
the
receiving
water.
When
such
measurement
capability
does
not
exist,
permitting
authorities
must
decide
how
to
incorporate
detection
and
quantitation
limits
into
the
discharge
permit.
Historically,
EPA
has
recommended
that
in
such
cases,
the
permitting
authority
include
the
water
quality­
based
limit
in
the
permit,
but
establish
the
compliance
evaluation
threshold
at
the
quantitation
limit
of
the
most
sensitive
available
method.
However,
as
with
other
aspects
of
the
Clean
Water
Act,
State
and
local
governments
may
adopt
permitting
and
compliance
evaluation
approaches
that
are
at
least
as
stringent
as
those
put
forth
by
EPA,
and
some
States
have
preferred
to
use
the
detection
limit
as
the
compliance
evaluation
threshold.

This
criterion
will
be
evaluated
by
studying
1)
the
applicability
of
various
detection/
quantitation
concepts
to
the
variety
of
data
gathering
decisions
that
must
be
made
under
the
CWA,
including
those
that
do
and
those
that
do
not
involve
compliance
monitoring,
and
2)
the
ability
of
the
concepts
to
support
state
and
local
obligations
for
implementing the CWA.
Chapter
5
Assessment
This
chapter
summarizes
EPA's
assessment
of
various
detection
and
quantitation
limit
concepts
against
the
evaluation
criteria
established
in
Chapter
4.
Assessments
of
detection
limit
concepts
are
presented
in
Section
5.1
and
include
an
assessment
of:

°
the
EPA
method
detection
limit
(
MDL;
Section
5.1.1),
°
the
ASTM­
International
interlaboratory
detection
estimate
(
IDE;
Section
5.1.2),
°
the
American
Chemical
Society
(
ACS)
limit
of
detection
(
LOD;
Section
5.1.3),
°
the
International
Standards
Organization/
International
Union
of
Pure
and
Applied
Chemistry
(
ISO/
IUPAC)
critical
value
(
CRV;
Section
5.1.4),
and
°
the
ISO/
IUPAC
minimum
detectable
value
(
MDV;
Section
5.1.5).

Assessments
of
quantitation
limit
concepts
are
presented
in
Section
5.2
and
include
an
assessment
of:

°
the
EPA
minimum
level
of
quantitation
(
ML;
Section
5.2.1),
°
the
ASTM­
International
interlaboratory
quantitation
estimate
(
IQE;
Section
5.2.2),
°
the
ACS
limit
of
quantitation
(
LOQ;
Section
5.2.3),
and
°
the
ISO/
IUPAC
LOQ
(
section
5.2.4).

A
brief
summary
of
the
evaluation
is
presented
in
Tables
5­
1
(
detection
limit
concepts)
and
5­
2
(
quantitation
limit
concepts).

EPA
limited
the
assessment
to
detection
and
quantitation
limit
concepts
advanced
by
ASTM-International,
by
ACS,
by
ISO/
IUPAC,
and
by
EPA,
for
use
in
EPA's
Clean
Water
Act
(
CWA)
programs,
because
these
concepts
are
the
most
widely
published
and
pertinent.

5.1
Detection
Limit
Concepts
Sections
5.1.1
through
5.1.5
describe
EPA's
assessment
of
five
detection
limit
concepts.
Each
discussion
is
divided
into
two
major
subsections.
The
first
subsection
describes
the
concept
and,
where
applicable,
the
procedure
that
supports
the
concept,
and
the
second
subsection
details
EPA's
assessment
of
the
concept
based
on
the
five
criteria
established
in
Chapter
4
for
evaluating
detection
limit
concepts.
(
Six
criteria
are
given
in
Chapter
4;
four
of
these
pertain
to
both
detection
and
quantitation
limit
concepts,
one
pertains
only
to
detection
limit
concepts,
and
one
pertains
only
to
quantitation
limit
concepts.)

5.1.1
Evaluation
of
the
MDL
Section
5.1.1.1
provides
an
overview
of
the
MDL
concept
and
the
procedures
used
to
implement
the
concept.
Section
5.1.1.2
describes
EPA's
assessment
of
the
MDL
against
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.1.1
Description
of
the
MDL
Concept
and
Procedure
As
promulgated
at
40
CFR
Part
136,
Appendix
B,
the
MDL
is
defined
as:

"
the
minimum
concentration
of
a
substance
that
can
be
measured
and
reported
with
99%
confidence
that
the
analyte
concentration
is
greater
than
zero
and
is
determined
from
analysis
of
a
sample
in
a
given
matrix
containing
the
analyte."
A
six­
step
procedure
is
given
in
Appendix
B,
with
an
optional
seventh
step
to
verify
the
reasonableness
of
the
MDL
determined
in
the
first
six
steps.
The
procedure
is
intended
for
use
by
experienced
analytical
chemists.
A
brief
summary
of
the
MDL
procedure
is
as
follows:

1.
The
analyst
makes
an
estimate
of
the
detection
limit
based
on
one
of
four
options:
the
instrument
signal
to
noise
ratio;
three
times
the
standard
deviation
of
replicate
blank
measurements;
a
noted
break
in
the
slope
of
an
instrument
calibration
curve;
or
known
instrument
limitations.

2.
The
analyst
prepares
a
volume
of
reagent
water
that
is
as
free
of
the
target
analyte
as
possible
(
if
the
MDL
is
to
be
determined
in
reagent
water).

3.
The
analyst
prepares
a
sufficient
volume
of
spiked
reagent
water
(
or
of
an
alternate
matrix)
to
yield
seven
replicate
aliquots
that
have
a
concentration
of
the
target
analyte
that
is
at
least
equal
to
or
in
the
same
concentration
range
as
the
estimated
detection
limit
(
it
is
recommended
that
the
concentration
of
the
replicate
aliquots
be
between
1
and
5
times
the
estimated
detection
limit).

4.
All
of
the
replicate
aliquots
are
processed
through
the
entire
analytical
method.

5. The variance (S²) and standard deviation (S) of the replicate measurements are calculated as follows:

S² = [ΣXᵢ² − (ΣXᵢ)²/n] / (n − 1)

S = √(S²)

where Xᵢ, i = 1 to n, are the analytical results in the final method reporting units obtained from the n sample aliquots, and Σ refers to the sum of the X values from i = 1 to n.

6.
The
MDL
is
then
determined
by
multiplying
the
standard
deviation
(
S)
by
the
Student's
t­
statistic
at
a
99%
percentile
for
n­
1
degrees
of
freedom.
If
seven
replicates
are
used,
the
Student's
t­
value
is
3.143.
This
information
is
used
to
calculate
the
MDL
as
follows:

MDL = t(n−1, 1−α = 0.99) × S

where:

MDL = the method detection limit,
t(n−1, 1−α = 0.99) = the Student's t-value appropriate for a 99% confidence level and a standard deviation estimate with n−1 degrees of freedom, and
S
=
standard
deviation
of
the
replicate
analyses.

A
95%
confidence
interval
for
the
determined
MDL
may
be
calculated
from
percentiles
of
the
chi-squared over degrees of freedom distribution (χ²/df).

7.
The
optional
iterative
procedure
to
verify
the
reasonableness
of
the
MDL
involves
spiking
the
matrix
at
the
MDL
that
was
determined
in
Step
6,
and
analyzing
another
seven
replicates
spiked
at
this
level.
The
F­
ratio
of
the
variances (S²)
is
determined
and
compared
with
the
F­
ratio
found
in
the
table,
which
is
3.05.
If S²A/S²B > 3.05,
the
analyst
is
instructed
to
respike
at
the
most
recently
calculated
MDL
and
process
the
samples
through
the
procedure
starting
with
Step
4.
If S²A/S²B ≤ 3.05,
then
the
pooled
standard
deviation
is
determined.
The
pooled
standard
deviation
is
then
used
to
calculate
the
final
MDL
as
follows:

MDL = 2.681 × S_pooled

where S_pooled is the pooled standard deviation and 2.681 is equal to t(12, 1−α = 0.99).

The
95%
confidence
limits
around
the
final
MDL
may
be
determined
using
the
Chi­
squared
distribution.
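The computational steps above can be sketched in a few lines of Python. This is an illustrative sketch only, not part of the promulgated procedure: it assumes seven replicates per batch (so t(6, 0.99) = 3.143, t(12, 0.99) = 2.681, and the tabulated F-ratio is 3.05), and all data shown are hypothetical.

```python
import statistics

T_6_DF = 3.143     # Student's t, 99th percentile, 6 degrees of freedom
T_12_DF = 2.681    # Student's t, 99th percentile, 12 degrees of freedom
F_CRITICAL = 3.05  # tabulated F-ratio for two sets of seven replicates

def initial_estimate(blanks):
    """Step 1, one of the four options: three times the standard
    deviation of replicate blank measurements."""
    return 3 * statistics.stdev(blanks)

def mdl(replicates):
    """Steps 5-6: MDL = t(n-1, 1-alpha=0.99) * S, for seven replicates."""
    if len(replicates) != 7:
        raise ValueError("this sketch assumes exactly seven replicates")
    return T_6_DF * statistics.stdev(replicates)  # sample stdev, n-1 df

def confidence_interval(mdl_value):
    """Approximate 95% interval from chi-squared/df percentiles for
    6 degrees of freedom: sqrt(6/14.449) ~ 0.64, sqrt(6/1.237) ~ 2.20."""
    return 0.64 * mdl_value, 2.20 * mdl_value

def verify(batch_a, batch_b):
    """Optional Step 7: if the F-ratio of the two batch variances is
    within 3.05, return the final MDL computed from the pooled standard
    deviation; otherwise None (respike and repeat from Step 4)."""
    var_a = statistics.variance(batch_a)
    var_b = statistics.variance(batch_b)
    if max(var_a, var_b) / min(var_a, var_b) > F_CRITICAL:
        return None
    s_pooled = ((6 * var_a + 6 * var_b) / 12) ** 0.5
    return T_12_DF * s_pooled

# Hypothetical replicate results (ug/L), for illustration only
first = [1.12, 0.93, 1.07, 1.18, 0.89, 1.04, 0.99]
second = [1.05, 0.98, 1.10, 1.15, 0.92, 1.01, 0.96]
final_mdl = verify(first, second)
```

Because `statistics.stdev` and `statistics.variance` use the n − 1 denominator, the sketch matches the degrees of freedom assumed by the tabulated t-values.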

The
MDL
procedure
given
at
40
CFR
136,
Appendix
B
is
described
as
being
applicable
1)
to
a
wide
variety
of
sample
types,
ranging
from
reagent
water
containing
the
analyte
of
interest
to
wastewater
containing
the
analyte
of
interest,
and
2) to a broad variety of physical and chemical measurements. To accomplish this, the procedure was made device- or instrument-independent.

5.1.1.2
Assessment
of
the
MDL
Against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
MDL
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.1.2.1
Criterion
1:
The
detection
limit
concept
must
be
scientifically
valid.

For
the
purposes
of
evaluating
scientific
validity,
EPA
is
using
the
conditions
established
by
the
Supreme
Court
in
Daubert
v.
Merrell
Dow
Pharmaceuticals
(
see
Chapter
4,
Criterion
1).

Condition
1:
It
can
be
(
and
has
been)
tested.
The
MDL
procedure
meets
this
condition.
The
MDL
has
been
used
experimentally
since
1980
and
in
a
regulatory
context
since
1984.
The
MDL
procedure
is
the
most
widely
used
and,
therefore,
the
most
widely
tested
detection
limit
procedure
in
the
history
of detection concepts.

Critics
of
the
MDL
have
noted
that
the
detection
limit
produced
with
the
MDL
procedure
can
vary
depending
on
the
spike
levels
used.
This
would
suggest,
on
the
surface,
that
the
MDL
procedure
can
be
used
to
obtain
results
that
do
not
support
the
MDL
concept.
This
is
a
misinterpretation
of
the
MDL
based
on
the
mistaken
assumption
that
spike
levels
may
be
arbitrarily
selected.
In
fact,
step
1)
of
the
MDL
procedure
specifies
a
number
of
criteria,
based
on
chemical
analytical
considerations,
that
must
be
met
in
selecting
the
spike
levels
(
see
Section
5.1.1.1,
Step
1).

In
preparation
for
the
assessment
of
detection
and
quantitation
concepts,
EPA
exhaustively
tested
the
MDL
procedure
with
10
different
techniques,
at
decreasing
spike
concentrations,
to
evaluate
this
concern
and
determine
how
well
the
MDL
procedure
characterized
the
region
of
interest.
Results
of
the
study
suggest
that,
although
the
calculated
MDL
could
vary
depending
on
the
spike
level
used,
the
procedure
was
capable
of
reasonably
estimating
a
detection
limit
when
the
full
iterative
procedure
was
employed.

Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
The
MDL
meets
this
condition.
Prior
to
promulgation
by
EPA,
the
MDL
concept
and
supporting
procedure
were published by Glaser et al. in a peer-reviewed journal
(
Glaser
et
al.,
1981).

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
It
is
possible
to
estimate
error
rates
associated
with
the
MDL
procedure.
It
is
also
possible
to
calculate
confidence
intervals
about
estimated
MDLs
that
are
expressions
of
uncertainty
in
the
estimates.
Clarification
is
in
order
because
the
promulgated
MDL
definition
may
be
somewhat
confusing.
In
particular,
the
definition
is
confusing
with
regard
to
whether
the
MDL
is
a
true
concentration
or
a
value
estimated
from
measured
data.
EPA
believes
that
the
concept
of
MDL
can
be
clarified
by
slightly
revising
the
definition
as
follows:

"
The
method
detection
limit
(
MDL)
is
the
measured
concentration
at
which
there
is
99%
confidence
that
a
given
analyte
is
present
in
a
given
sample
matrix.
The
MDL
is
estimated
from
replicate
analyses
of
a
matrix
containing
the
analyte."

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
MDL
concept
is
supported
by
a
clearly
defined,
published
procedure
to
control
its
operation.
The
procedure
gives
the
steps
to
be
followed
and
instructs
the
analyst
to
use
the
entire
measurement
process.
Hundreds,
if
not
thousands,
of
laboratories
have
successfully
implemented
the
MDL
procedure
since
its
promulgation
in
1984.
EPA
has
found
that
when
laboratories
are
required
to
perform
MDL
studies
as
part
of
an
interlaboratory
study,
the
results
reported
by
the
laboratories
are
generally
consistent
(
i.
e.,
within
the
expected
variability).
EPA
has
observed
similar
consistency
in
use
of
the
MDL
by
laboratories
required
to
perform
the
procedure
to
demonstrate
proficiency
with
a
method.
Therefore,
the
MDL
meets
this
condition.

That
said,
however,
EPA
believes
that
additional
guidance
can
be
provided
to
clarify
certain
aspects
of
the
MDL
procedure,
particularly
with
respect
to
handling
of
outliers,
the
optional
reasonableness
step,
and
multi­
analyte
test
methods.
The
MDL
procedure
contains
no
discussion
of
outliers.
It
may
be
helpful
to
clarify
that
1)
results
should
be
discarded
only
if
the
results
are
associated
with
a
known
error
that
occurred
during
analysis
(
e.
g.,
the
replicate
was
spiked
twice)
or
through
a
statistically
accepted
analysis
of
outliers,
and
2)
that
laboratories
should
not
run
more
than
seven
replicates
and
simply
pick
the
best
of
the
seven
results.
The
optional
step
involves
iterative
testing
to
verify
that
the
determined
MDL
is
reasonable;
EPA
has
observed
that
few
organizations
bother
to
perform
this
step.
EPA
also
has
observed
that
when
a
method
involves
a
large
number
of
analytes,
it
can
be
difficult
to
get
all
analytes
to
pass
the
iterative
test
in
the
same
run.
Additional
guidance
on
this
issue
may
be
needed
if
the
iterative
test
is
to
become
a
required
component
of
the
MDL,
as
suggested
above
in
the
discussion
of
Condition
3.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
The
MDL
meets
this
condition.
Within
EPA,
the
MDL
has
been
used
by
the
Office
of
Research
and
Development,
Office
of
Science
and
Technology,
Office
of
Ground
Water
and
Drinking
Water,
Office
of
Solid
Waste,
Office
of
Emergency
and
Remedial
Response,
and
other
offices.
The
MDL
also
has
been
used
outside
of
EPA
in
methods
published
by
ASTM­
International,
in
Standard
Methods
for
the
Examination
of
Water
and
Wastewater,
published
by
the
American
Public
Health
Association
(
APHA),
the
American
Water
Works
Association
(
AWWA),
and
the
Water
Environment
Federation
(
WEF),
and
elsewhere.
Although
the
MDL
has
been
criticized
by
some,
EPA
believes
that
it
is
the
most
widely
used
concept
of
detection
within
the
environmental
chemistry
community.
Many
States
incorporate
the
MDL
into
NPDES
permits,
for
example,
and
laboratories
often
advertise
MDLs
in
their
sales
literature.
5.1.1.2.2
Criterion
2:
The
concepts
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
MDL
procedure
is
designed
to
demonstrate
laboratory
performance
with
a
given
method,
and
can
be
applied
to
a
broad
variety
of
physical
and
chemical
methods.
To
accomplish
this,
the
procedure
was
made
device­
or
instrument­
independent.
The
procedure
also
recognizes
the
importance
of
analyst
experience,
and
explicitly
directs
the
analyst
to
employ
all
sample
processing
and
computation
steps
given
in
the
analytical
method
when
determining
the
MDL.
(
All
of
these
aspects
are
addressed
in
the
MDL
procedure
published
at
40
CFR
136,
Appendix
B).

When
the
MDL
procedure
is
followed
as
intended,
i.
e.,
the
MDL
is
determined
by
an
experienced
analyst
on
each
device
or
instrument
used
for
a
given
method,
the
demonstrated
MDL
will
include
routine
variability
associated
with
the
laboratory
and
the
method.

EPA
recognizes,
however,
that
one
laboratory
may
obtain
detection
limits
that
are
lower
or
higher
than
those
in
another
laboratory.
If
the
MDL
is
being
determined
during
method
development,
it
is
important
to
determine
the
MDL
at
more
than
one
laboratory
to
ensure
the
MDL
published
in
the
method
reflects
demonstrated
expectations
of
method
performance
in
a
community
of
laboratories.
EPA
does
not
believe
that
this
community
should
be
so
broad
as
to
include
the
entire
universe
of
possible
laboratories
that
might
desire
to
practice
the
method.
Rather,
EPA
believes
this
community
should
include
well­
operated
laboratories
that
are
experienced
with
the
techniques
used
in
the
method
and
that
have
some
familiarity
with
the
method.

In
recent
years,
EPA's
Office
of
Science
and
Technology
has
used
single­
laboratory
studies
to
develop
an
initial
estimate
of
the
MDL
for
a
new
or
modified
method,
and
has
verified
these
limits
in
interlaboratory
studies
or
by
conducting
additional
single­
laboratory
studies
in
other
laboratories.
For
example,
when
EPA
initially
drafted
Method
1631
for
measurement
of
mercury,
EPA
estimated
the
MDL
to
be
0.05
ng/
L
based
on
results
produced
by
a
contract
research
laboratory.
Additional
single­
laboratory
MDL
studies
conducted
in
other
laboratories
suggested
that
the
MDL
should
be
raised
to
0.2
ng/
L
to
better
reflect
existing
capabilities
of
the
measurement
community.
During
EPA's
interlaboratory
study,
each
laboratory
was
asked
to
conduct
an
MDL
study.
Every
laboratory
in
the
interlaboratory
study
met
the
MDL
of
0.2
ng/
L,
the
value
published
in
the
promulgated
version
of
Method
1631.

EPA
believes
that
1)
the
MDL
procedure
does
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability,
and
2)
if
the
MDL
procedure
is
being
employed
for
method
development
purposes,
it
should
be
performed
in
multiple
laboratories
to
ensure
that
it
adequately
demonstrates
expectations
in
a
community
of
qualified
laboratories.

5.1.1.2.3
Criterion
3:
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

The
MDL
is
designed
for
use
by
a
single
laboratory.
The
promulgated
version
of
the
MDL
procedure
can
be
performed
with
as
few
as
seven
analyses.
If
the
method
is
to
be
performed
in
a
matrix
other
than
reagent
water,
additional
analyses
may
be
needed
to
determine
the
MDL
in
that
matrix.

Use
of
the
optional
iterative
procedure
would
increase
the
number
of
analyses
by
seven
each
time
the
procedure
is
implemented.
If
the
procedure
is
implemented
two
times
in
reagent
water,
a
total
of
14
analyses
are
required.
If
the
procedure
is
implemented
two
times
in
an
alternate
matrix,
EPA
estimates
that
17­
20
analyses
may
be
required.
In
any
of
these
scenarios,
the
entire
MDL
determination
can
be
performed
in
a
single
analytical
batch
(
most
EPA
methods
specify
batch
sizes
of
20
samples).
Based
on
this,
EPA
believes
that
the
MDL
is
among
the
most
affordable
of
the
procedures
that
have
been
suggested
for
determining
detection
limits.
In
terms
of
cost,
the
only
concept
that
compares
favorably
with
the
MDL
is
the
instrument
detection
limit
(
IDL).
Although
most
versions
of
the
IDL
compare
favorably
in
terms
of
the
number
of
samples
analyzed,
the
requirement
to
perform
the
test
on
three
non­
consecutive
days
has
the
potential
to
disrupt
routine
laboratory
operations
on
three
days
instead
of
one.
In
addition,
the
IDL
does
not
include
sample
preparation
steps
and,
therefore,
does
not
completely
characterize
a
method.

5.1.1.2.4
Criterion
4:
The
detection
limit
concept
should
result
in
a
measured
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

EPA
believes
the
MDL
meets
this
condition
and
refers
the
reader
to
the
discussion
of
this
subject
under
Section
5.1.1.2.1,
Condition
3.

5.1.1.2.5
Criterion
5:
The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act
and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

The
MDL
meets
this
criterion.
The
MDL
has
been
successfully
applied
to
a
variety
of
decisions
under
the
CWA
since
1984.
In
addition,
many
States
and
others
have
adopted
the
MDL
in
their
own
programs.

5.1.2
Evaluation
of
the
ASTM­
International
Interlaboratory
Detection
Estimate
(
IDE)

The
interlaboratory
detection
estimate
(
IDE)
was
developed
by
ASTM
with
support
from
members
of
the
regulated
industry
in
an
attempt
to
provide
a
scientifically
sound,
comprehensive
detection
limit
procedure
that
addresses
the
concerns
of
the
regulated
industry,
of
statisticians,
and
of
analysts
involved
in
ASTM
Committee
D
19
on
water.

A
brief
summary
of
the
procedure
is
given
in
Section
5.1.2.1
and
Section
5.1.2.2
presents
EPA's
assessment
of
the
IDE
against
the
five
criteria
established
for
evaluating
detection
limit
concepts.

5.1.2.1
Description
of
the
IDE
Concept
and
Procedure
ASTM
Designation
D
6091
is
the
Standard
Practice
for
99
%/
95
%
Interlaboratory
Detection
Estimate
(
IDE)
for
Analytical
Methods
with
Negligible
Calibration
Error.
As
stated
in
the
practice:

"
The
IDE
is
computed
to
be
the
lowest
concentration
at
which
there
is
90
%
confidence
that
a
single
measurement
from
a
laboratory
selected
from
the
population
of
qualified
laboratories
represented
in
an
interlaboratory
study
will
have
a
true
detection
probability
of
at
least
95
%
and
a
true
nondetection
probability
of
at
least
99
%
(
when
measuring
a
blank
sample)."
The
IDE
is
determined
and
verified
using
a
procedure
containing
5
major
steps
with
approximately
53
substeps
and
conditions.
The
full
text
of
the
IDE
procedure
is
available
from
ASTM­
International.
The
five
major
steps
and
their
functions
are
given
in
Section
6
of
the
IDE
procedure
and
are
as
follows:

1.
Overview
of
the
procedure.

2.
IDE
Study
Plan,
Design,
and
Protocol
­
in
this
section,
the
task
manager
(
study
supervisor)
chooses
the
analyte,
matrix,
and
analytical
method.
Details
are
given
for
range
finding;
the
concentrations
to
be
used
in
the
study;
the
study
protocol
(
ASTM
Practice
D
2777
is
suggested);
the
allowable
sources
of
variation;
and
the
number
of
laboratories,
analysts,
and
days,
over
which
the
study
will
be
conducted.

3.
Conduct
the
IDE
Study,
Screen
the
Data,
and
Choose
a
Model
­
after
the
study
data
are
collected
and
screened
according
to
ASTM
Practice
D
2777,
interlaboratory
standard
deviation
(
ILSD)
versus
concentration
data
are
tabulated
and
one
of
three
models
is
fit
to
the
data.
The
first
attempt
is
at
fitting
a
constant
model.
If
the
attempt
fails,
a
straight­
line
model
is
attempted.
If
the
straight­
line
model
fails,
an
exponential
model
is
fitted.
After
fitting,
the
model
is
evaluated
for
reasonableness
and
lack
of
fit.
If
the
model
fails,
the
study
supervisor
determines
if
a
subset
of
the
data
should
be
analyzed
or
if
more
data
are
needed.

4.
Compute
the
IDE
­
the
IDE
is
computed
using
the
ILSD
model
selected
in
Step
3
to
estimate
the
interlaboratory
standard
deviation
at
a
true
concentration
of
zero
and
at
the
IDE,
using
a
mean
recovery
model
to
transform
measured
and
true
concentrations.
The
IDE
is
computed
as
a
one­
sided
90
%
confidence
upper
statistical
tolerance
limit
or
interval.

5.
Nontrivial
Amount
of
Censored
Data
­
this
section
addresses
the
effect
of
"
non­
detects"
or
"
less-thans." Suggestions
are
given
to
see
if
uncensored
data
can
be
obtained
from
the
laboratories
or
if
the
study
needs
to
be
augmented
with
additional
data.
Suggestions
are
given
for
fitting
a
model
to
data
that
contain
less
than
10
%
non­
detects
or
less­
thans
to
produce
an
IDE.
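To illustrate the kind of model fitting Step 3 calls for, the sketch below fits the straight-line model sd(c) = g + h·c relating interlaboratory standard deviation (ILSD) to true concentration, using ordinary least squares. This is a simplified illustration of one candidate model, not the full D 6091 practice; the function name and data are hypothetical.

```python
def fit_straight_line_ilsd(concentrations, ilsd):
    """Ordinary least-squares fit of the straight-line ILSD model
    sd(c) = g + h * c. Returns the intercept g and slope h."""
    n = len(concentrations)
    mean_c = sum(concentrations) / n
    mean_s = sum(ilsd) / n
    # Normal-equation sums for a simple linear regression
    sxx = sum((c - mean_c) ** 2 for c in concentrations)
    sxy = sum((c - mean_c) * (s - mean_s)
              for c, s in zip(concentrations, ilsd))
    h = sxy / sxx
    g = mean_s - h * mean_c
    return g, h

# Hypothetical ILSD-versus-concentration data, for illustration only
g, h = fit_straight_line_ilsd([1.0, 2.0, 4.0, 8.0],
                              [0.52, 0.71, 1.12, 1.90])
```

In the actual practice, this fit would be one of three attempts (constant, straight-line, exponential), each followed by an evaluation of reasonableness and lack of fit.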

5.1.2.2
Assessment
of
the
IDE
Against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
IDE
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.2.2.1
Criterion
1:
The
detection
limit
concept
must
be
scientifically
valid
(
validity comprises five conditions)

Condition
1:
It
can
be
(
and
has
been)
tested.
EPA
is
not
aware
of
any
organization,
including
ASTM,
that
has
conducted
a
study
to
test
the
procedure
as
written
(
i.
e.,
designed
and
implemented
an
interlaboratory
study
that
involves
estimating
an
initial
IDE
(
IDE0)
and
multilaboratory
analyses
of
multiple
concentrations
of
each
matrix
of
interest
surrounding
IDE0).
Developers
of
the
concept
performed
limited
testing
of
the
concept
on
1)
simulated
datasets
and
2)
real
world
data
sets
generated
for
other
purposes.
These
real
world
data
sets,
however,
are
of
limited
value
for
testing
the
IDE
because
the
concentration ranges
associated
with
the
data
are
above
the
low
level
region
of
interest.
As
part
of
this
reassessment,
EPA
tested
a
variant
of
the
IDE
procedure
on
single­
laboratory
datasets
designed
for
characterization
of
an
analytical
method
in
the
region
of
detection.
Despite
the
lack
of
comprehensive
testing,
EPA
believes
that
the
procedure
can
be
tested,
and
therefore
meets
part
of
this
condition.
Specifically,
the
IDE
meets
the
condition
that
it
can
be
tested,
but
it
only
partially
meets
the
condition
that
it
has
been
tested.
Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
Although
the
IDE
has
not
been
published
in
the
peer­
reviewed
scientific
literature,
the
IDE
has
undergone
extensive
review
and
ballot
by
members
of
ASTM
Committee
D
19,
many
of
whom
are
qualified
peer
reviewers.
Therefore,
although
the
IDE
does
not
meet
this
condition
in
the
sense
of
formal
peer
review
and
publication,
EPA
believes
it
does
meet
the
intent
of
this
condition
(
i.
e.,
submission
to
scrutiny
of
the
scientific
community).

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
In
theory,
expert
statisticians
could
estimate
the
error
rate
of
the
IDE.
However,
the
IDE
procedure
is
extremely
complex
from
an
analytical
chemistry
and
statistical
perspective.
As
a
result,
it
is
unlikely
that
the
error
rate
could
be
estimated
by
the
typical
users
of
the
analytical
method
to
which
it
is
applied,
or
even
by
the
typical
developers
of
analytical
methods.
Moreover,
EPA
found
the
model
selection
procedure
to
be
highly
subjective,
a
situation
likely
to
yield
different
IDEs
from
the
same
dataset,
depending
on
the
staff
involved
in
performing
the
calculations.
In
practice,
such
conditions
make
it
impossible
to
estimate
the
actual
error
associated
with
the
IDE.
Therefore,
the
IDE
fails
this
condition.

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
IDE
concept
and
procedure
is
supported
by
a
published
procedure
(
standard)
to
control
its
operation.
The
procedure
gives
the
steps
to
be
followed
in
determining
the
IDE
and
instructs
the
study
supervisor
how
to
gather
the
data
and
compute
an
IDE.

However,
there
are
several
"
gray
areas"
in
the
published
procedure.
The
most
significant
gray
area
is
in
the
description
of
model
selection.
The
procedure
provides
insufficient
guidance
on
use
of
residual
plots
to
evaluate
and
select
models
and,
as
a
result,
selection
of
the
model
may
be
very
subjective,
especially
if
the
number
of
concentrations
is
low.
The
discussion
of
what
model
to
use
after
rejecting
the
exponential
and
linear
model
is
also
very
vague.
The
Rocke
and
Lorenzato
(
hybrid)
model
is
mentioned,
as
well
as
models
with
more
than
one
coefficient.
Much
of
the
data
evaluated
by
EPA
have
tended
to
suggest
the
exponential
model,
based
on
the
statistical
tests
discussed,
but
those
data
have
almost
always
shown
residual
"
patterns"
when
using
this
model,
which
would
then
lead
to
consideration
of
other
models.
In
addition,
fitting
the
constant
model
is
never
discussed
in
detail.
Most
likely,
this
is
simply
done
by
calculating
a
mean
(
weighted
if
necessary)
of
the
variances
from
the
different
concentrations;
however,
this
is
never
explicitly
stated.

Another
concern
with
the
standard
is
that
it
gives
procedures
that
are
inconsistent
with
procedures
given
in
the
IQE
standard,
even
though
the
two
concepts
should
be
consistent
for
a
given
analyte
with
a
given
method.
For
example,
the
exponential
model
figures
prominently
in
the
IDE
procedure,
where
it
is
one
of
the
three
main
models
discussed.
The
Rocke
and
Lorenzato
model
is
not
discussed
in
the
IDE
procedure,
but
it
figures
prominently
in
the
IQE
procedure.
In
theory,
a
single
model
should
support
the
definition
of
both
the
detection
and
quantitation
limits
for
a
given
analyte
by
a
given
method.
As
another
example,
the
IDE
procedure
includes
a
multiplier
to
account
for
bias
in
estimating
the
true
standard
deviation
with
the
sample
standard
deviation,
but
the
IQE
does
not.

Finally,
the
procedure
contains
statistical
errors
that,
if
followed
as
written,
could
produce
inaccurate
IDE
values.
For
example,
Table
1
of
the
procedure
contains
"
Computations
to
Estimate
Straight­
Line
Model
Coefficients
by
Means
of
Least Squares - Ordinary and Weighted,"
but
the
weighted
least
squares
formulas
given
in
the
table
are
incorrect.
The
formulas
for
the
weighted
means
of
the
spike
values
and
results
given
in
Table
1
of
D6091
would
only
be
appropriate
if
the
weighting
were
done
based
on
the
number
of
replicates
per
spike
level,
rather
than
on
the
estimated
variance
calculated
using
the
chosen
standard
deviation
model.

In
conclusion,
EPA
believes
that
although
the
IDE
is
supported
by
a
published
procedure,
that
procedure
will
not
control
its
operation
because
of
the
degree
of
subjectivity
involved in implementing
the
procedures,
errors
in
the
procedure,
and
internal
inconsistencies
with
its
IQE
counterpart.
Therefore,
the
IDE
fails
this
condition.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
The
IDE
fails
this
condition
because
it
is
only
familiar
to,
and
has
been
accepted
by,
a
very
narrow
segment
of
the
scientific
community.
Although
the
IDE
has
been
approved
by
ASTM
for
more
than
5
years,
EPA
is
not
aware
of
an
IDE
that
has
been
published
in
the
open
literature
or
in
an
analytical
method,
including
an
ASTM
method.

5.1.2.2.2
Criterion
2:
The
detection
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
IDE
procedure
is
designed
to
reflect
expectations
of
interlaboratory
performance,
including
routine
variability.
The
procedure
contains
extensive
instructions
for
dealing
with
unusual
conditions,
including
sources
of
variability.
However,
EPA
studies
of
a
single­
laboratory
variant
of
the
procedure
suggested
that
the
procedure
may
not
always
work
as
intended.
For
example,
model
selection
based
upon
hypothesis
tests
(
as
described
in
D6091,
Section
6.3.3.2)
almost
always
indicated
that
the
exponential
model
should
be
used
even
when
the
data
seemed
to
show
constant
or
approximately
linear
error,
while
examination
of
residual
plots
indicated
"
systematic
behavior"
(
i.
e.,
non­
random
deviations
from
the
model)
for
the
exponential
and
linear
models.
Another
concern
with
the
IDE
procedure
is
that
use
of
the
nonmandatory
appendices
in
ASTM
D
6512
to
determine
the
fit
of
a
model
may
produce
results
that
differ
from
those
that
would
be
obtained
by
using
the
default
procedures
for
testing
model
fit
that
are
built
into
off-the-shelf
statistical
software.
Such
observations,
along
with
the
concerns
described
in
Section
5.1.2.2.1,
condition
4,
lead
EPA
to
believe
that,
while
the
IDE
concept
addresses
demonstrated
expectations
of
laboratory
and
method
performance,
the
IDE
procedure
does
not
adequately
do
so.
Therefore,
the
IDE
only
partially
meets
this
criterion.

5.1.2.2.3
Criterion
3:
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

The
IDE
procedure
is
designed
for
use
by
an
ASTM
study
supervisor
or
task
manager
and
not
as
a
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.
EPA
is
aware
that
ASTM
Committee
D
19
is
developing
a
Within-laboratory
Detection
Estimate
(
WDE),
but
the
WDE
is
presently
only
in
the
formative
stages.
The
WDE
may
meet
this
criterion,
but
the
IDE
does
not.

Regarding
cost,
the
IDE
procedure
would
be
the
most
costly
of
the
procedures
that
EPA
has
evaluated
because
of the time it would take to understand and implement the procedure, and the requirements for:
(
1)
estimation
of
IDE0,
(
2)
interlaboratory
data,
(
3)
extensive
statistical
intervention
in
determining
the
correct
model,
and
(
4)
possible
reanalyses
if
the
resulting
IDE
does
not
meet
the
criteria
in
the
procedure.

5.1.2.2.4
Criterion
4:
The
detection
limit
concept
should
result
in
a
measured
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

The
IDE
definition
states
that
the
IDE
provides
a
95%
probability
of
detection.
However,
setting
aside
the
use
of
a
95%
figure
and
the
fact
that
the
IDE
is
an
interlaboratory
concept
that
cannot
be
applied
in
a
single
laboratory,
the
IDE
procedure
does
not
satisfy
this
criterion
because
it
includes
an
allowance
for
false
negatives
and
a
tolerance
interval
that
inappropriately
increase
the
detection
limit.
For
a
discussion
of
this
issue,
see
sections
3.3.6
(
false
positives
and
false
negatives)
and
3.3.7
(
prediction
and
tolerance
intervals)
in
Chapter
3
of
this
document.
When
the
allowance
for
false
negatives
and
the
prediction
and
tolerance
intervals
are
taken
into
account,
the
resulting
detection
limit
(
IDE)
is
raised
to
the
point
at
which
the
probability
of
a
false
positive
is
less
than
0.00000001
(10⁻⁸).
This
false
positive
rate
is
excessive
and
would
yield
numerical
values
of
little
practical
value.
The
IDE
therefore
fails
this
criterion.

5.1.2.2.5
Criterion
5:
The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act
and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

EPA's
comparison
of
detection
limits
produced
by
various
detection
limit
concepts
shows
that
the
median
IDE
is
considerably
higher
than
ACS,
ISO/
IUPAC,
and
EPA
detection
limits.
Although
the
IDE
could
be
applied
to
some
decisions
to
be
made
under
CWA,
it
may
not
support
decisions
when
pollutant
levels
need
to
be
protective
of
human
health
and
the
environment
because
the
IDE
may
be
considerably
higher
than
these
levels.
At
best,
the
IDE
only
partially
meets
this
criterion.

5.1.3
Evaluation
of
the
ACS
Limit
of
Detection
The
limit
of
detection
(
LOD)
was
developed
by
the
Committee
on
Environmental
Improvement
(
CEI)
of
the
American
Chemical
Society
(
ACS).
ACS
is
a
professional
society
for
chemists
and
other
scientists
and
the
publisher
of
a
number
of
scientific
journals.
It
is
not
a
voluntary
consensus
standards
body
(
VCSB),
nor
does
it
develop
or
publish
analytical
methods.
In
1978,
the
ACS/
CEI
began
addressing
concerns
about
the
lack
of
useful
standards
for
interlaboratory
comparisons.
In
1980,
the
Committee
published
its
"
Guidelines
for
Data
Acquisition
and
Data
Quality
Evaluation
in
Environmental
Chemistry"
(
Analytical
Chemistry,
52, 2242–2249),
which
included
the
concepts
of
the
LOD
and
the
limit
of
quantitation
(
LOQ).

5.1.3.1
Description
of
the
ACS
LOD
The
1980
"
Guidelines"
define
the
LOD
as:

"...
the
lowest
concentration
of
an
analyte
that
the
analytical
process
can
reliably
detect.
...
The
LOD
in
most
instrumental
methods
is
based
on
the
relationship
between
the
gross
analyte
signal
St,
the
field
blank
Sb,
and
the
variability
in
the
field
blank
σb."

and construct the formal relation using the equation:

St − Sb = Kd σb

where Kd is a constant.
ACS
recommended
a
minimal
value
of
3
for
Kd.
Thus,
the
LOD
is
3σb
above
the
gross
blank
signal,
Sb.
In
the
1980
publication,
the
ACS
stated
that
at
Kd
=
3,
there
is
a
7%
risk
of
false
negatives
and
false
positives.
Given
that
the
LOD
is
3σb
above
the
blank,
however,
EPA
believes
that
the
risk
of
false
positives
is
somewhat
less
than
1%.
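Under a normality assumption, the ACS relation can be sketched numerically: with Kd = 3, the detection threshold sits three blank standard deviations above the mean blank signal. The function name and blank values below are hypothetical illustrations, not part of the ACS guidelines:

```python
import statistics

def acs_lod(blank_signals, kd=3.0):
    """Sketch of the ACS relation St - Sb = Kd * sigma_b: returns the
    gross signal level (Sb + Kd * sigma_b) at which a measurement
    would be declared detected."""
    sb = statistics.mean(blank_signals)        # mean blank signal, Sb
    sigma_b = statistics.stdev(blank_signals)  # blank variability, sigma_b
    return sb + kd * sigma_b

# Hypothetical replicate field-blank responses (instrument units)
blanks = [0.8, 1.1, 0.9, 1.2, 1.0, 0.7, 1.3]
print(round(acs_lod(blanks), 3))  # prints 1.648
```

Note that this sketch, like the concept itself, becomes degenerate when all blank responses are zero, which is the testing difficulty discussed in Section 5.1.3.2.1.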

In
1983,
the
ACS
Committee
published
"
Principles
of
Environmental
Analysis"
(
Analytical
Chemistry,
55, 2210–2218).
That
publication
occurred
after
the
1981
paper
on
the
Method
Detection
Limit
(
MDL),
and
ACS/
CEI
stated
that
the
LOD
is
numerically
equivalent
to
the
MDL
as
Sb
approaches
zero.
However,
neither
the
1980
nor
1983
ACS
publications
provide
a
specific
procedure
for
estimating
the
LOD,
nor
do
they
provide
a
minimum
number
of
observations
needed
to
estimate
the
gross
blank
signal
or
the
variability
term
σb.
5.1.3.2
Assessment
of
the
LOD
Against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
LOD
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.3.2.1
Criterion
1:
The
detection
limit
concept
must
be
scientifically
valid
(
validity
is
comprised
of
five
conditions)

Condition
1:
It
can
be
(
and
has
been)
tested.
Testing
of
the
ACS
LOD
is
hampered
by
(1)
the
lack
of
a
supporting
procedure
for
establishing
an
LOD,
and
(2) its
conceptual
dependence
on
the
variability
associated
with
measuring
blanks.
For
example,
there
is
no
procedure
to
govern
the
minimum
number
of
analyses
needed
to
characterize
the
variability
of
a
blank
sample.
Because
many
environmental
chemistry
techniques
yield
a
zero,
or
possibly
even
negative,
value
when
a
blank
sample
is
analyzed,
and
because
the
LOD
concept
is
based
on
the
standard
deviation
of
these
results,
directly
testing
the
LOD
in
such
techniques
will
yield
a
zero
or
negative
value.
One
solution
for
testing
is
to
rely
on
ACS'
1983
statement
that
the
LOD
is
conceptually
equivalent
to
the
MDL
as
the
blank
signal
approaches
zero,
and
employ
the
MDL
procedure
as
a
means
for
indirectly
testing
the
LOD
concept.
EPA
believes
that
use
of
the
MDL
procedure
is
a
viable
means
for
testing
the
concept;
therefore,
the
LOD
meets
this
condition.

Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
The
LOD
definition
was
published
in
the
peer­
reviewed
journal
Analytical
Chemistry
in
1980
and
1983.
Therefore,
the
LOD
meets
this
condition.

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
The
error
rates
can
be
estimated,
so
the
LOD
meets
this
condition.
The
error
rate
for
both
false
positives
and
false
negatives
is
stated
to
be
7
%
in
the
1980
Analytical
Chemistry
article.
However,
EPA
believes
that,
because
the
LOD
is
stated
to
be
3
times
the
standard
deviation
of
replicate
measurements
of
a
blank,
the
false
positive
rate
is
overstated
and
is
actually
somewhat
less
than
1%, whereas
the
false
negative
rate
depends
on
the
true
concentration
in
the
sample.
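EPA's estimate that the false positive risk at 3σ is somewhat less than 1% can be checked directly: for normally distributed blank results, the one-sided tail probability beyond three standard deviations is about 0.13%. A minimal check using only the standard library (the function name is illustrative):

```python
import math

def one_sided_tail(k):
    """P(Z > k) for a standard normal variable Z, via erfc."""
    return 0.5 * math.erfc(k / math.sqrt(2))

# Probability that a blank measurement exceeds the mean blank by
# 3 standard deviations, assuming normally distributed blanks.
print(f"{one_sided_tail(3.0):.5f}")  # prints 0.00135
```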

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
LOD
lacks
a
clearly
defined
procedure
for
estimating
the
important
terms
required
to
derive
it.
Although
it
may
be
possible
to
derive
LOD
values
from
data
used
to
derive
EPA
MDL
values,
there
is
no
procedure
giving
explicit
instructions
on
the
use
of
replicate
blanks,
replicate
spiked
samples,
or
a
minimum
recommendation
for
the
number
of
replicates.
Therefore,
the
LOD
fails
this
condition.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
Because
ACS
does
not
develop
and
publish
analytical
methods,
it
is
difficult
to
determine
the
degree
of
acceptance
of
the
LOD.
EPA
has
not
specifically
investigated
the
numbers
of
papers
published
in
ACS
journals
that
include
LOD
values,
and
EPA's
literature
search
for
detection
and
quantitation
concepts
did
not
uncover
a
large
number
of
citations
that
promote
the
LOD
in
particular.
However,
ACS
LOD
values
have
appeared
in
the
technical
literature.
Given
that
ACS
is
a
relevant
scientific
community,
and
that
use
of
the
LOD
has
appeared
in
the
technical
literature,
EPA
believes
the
LOD
meets
this
condition.

5.1.3.2.2
Criterion
2:
The
detection
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
LOD
concept
is
designed
to
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability,
so
it
appears
to
meet
this
criterion.
Unfortunately,
ACS
has
not
published
a
procedure
to
implement
the
concept.
In
other
words,
the
LOD
addresses
demonstrated
expectations
of
laboratory
and
method
performance
in
theory,
but
in
practice,
provides
no
direct
means
for
performing
these
demonstrations.
Therefore,
EPA
believes
the
ACS
LOD
only
partially
meets
this
criterion.

5.1.3.2.3
Criterion
3:
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

The
ACS
LOD
concept
is
not
supported
by
a
clearly
defined
procedure
for
establishing
the
LOD.
Therefore,
it
fails
this
criterion.

5.1.3.2.4
Criterion
4:
The
detection
limit
concept
should
result
in
a
measured
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

The
1983
publication
associated
the
LOD
with
the
"
99%
confidence
level
when
the
difference
(St − Sb) > 3σ."
Therefore,
the
LOD
satisfies
this
criterion.

5.1.3.2.5
Criterion
5:
The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act
and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

In
the
absence
of
a
procedure
for
determining
LOD
values,
the
ACS
LOD
fails
to
meet
this
criterion
because
it
cannot
be
used
in
a
regulatory
context.
The
LOD
passes
only
if
it
is
assumed
to
be
functionally
equivalent
to
the
MDL
(
i.e.,
the
MDL
procedure
is
used
to
establish
an
LOD).

5.1.4
Evaluation
of
the
IUPAC/
ISO
Critical
Value
(
CRV)

The
critical
value
(
CRV)
was
developed
by
the
International
Union
of
Pure
and
Applied
Chemistry
(
IUPAC)
and
the
International
Organization
for
Standardization
(
ISO).
IUPAC
and
ISO
are
professional
societies
for
chemists
and
other
scientists.
ISO
develops
and
publishes
analytical
methods
through
its
Task
Groups.
In
1995,
Lloyd
Currie
of
the
National
Institute
for
Standards
and
Technology
(
NIST;
formerly
the
National
Bureau
of
Standards)
published
a
signature
discussion
of
IUPAC
concepts
for
detection
and
quantitation
(
Pure
and
Appl.
Chem.
67:10, 1699–1722).
Although
refined
during
the
intervening
years
(
see
Currie,
L.
A.,
J.
Radiochem.
and Nuclear Chem. 245:1, 145–156, 2000),
the
CRV
concept
remains
basically
as
described
in
1995.

5.1.4.1
Description
of
the
ISO/
IUPAC
Critical
Value
(
CRV)
Concept
and
Procedure
The
1995
article
states
that
the
critical
value
(
Lc)
is:

"...
the
minimum
significant
value
of
an
estimated
net
signal
or
concentration,
applied
as
a
discriminator
against
background
noise.
This
corresponds
to
a
1-sided
significance
test."

For
a
normal
distribution
with
known
variance,
Lc
reduces
to:

Lc = z(1−α) σ0

where α is the false positive error rate, recommended at 5% (α = 0.05), and σ0 is the standard deviation at zero concentration.

If σ0 is estimated by s0 (replicate measurements of a blank), z(1−α) is replaced by the Student's t-value. For 7 replicates (6 degrees of freedom), the Student's t-value is 1.943 at α = 0.05.
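The t-value substitution described above can be sketched as a short calculation: the critical value is the one-sided 95th-percentile t-value for n − 1 degrees of freedom times the blank standard deviation. The blank results below are hypothetical; the t-values are standard one-sided 95th percentiles from published tables:

```python
import statistics

# One-sided 95th-percentile Student's t-values (alpha = 0.05),
# taken from standard tables, indexed by degrees of freedom.
T_95 = {4: 2.132, 5: 2.015, 6: 1.943, 7: 1.895, 8: 1.860}

def critical_value(blank_results):
    """Lc = t(1 - alpha, n - 1) * s0, with s0 estimated from n blanks."""
    n = len(blank_results)
    s0 = statistics.stdev(blank_results)  # estimate of sigma_0
    return T_95[n - 1] * s0

# Hypothetical replicate blank results (concentration units)
blanks = [0.02, -0.01, 0.00, 0.03, 0.01, -0.02, 0.04]
print(round(critical_value(blanks), 3))  # prints 0.042
```

As the text notes, if every blank gives an identical (e.g., zero) response, s0 is zero and no meaningful critical value can be computed.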

5.1.4.2
Assessment
of
the
CRV
Against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
CRV
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.4.2.1
Criterion
1:
The
detection
limit
concept
must
be
scientifically
valid.
(
Validity
is
comprised
of
five
conditions.)

Condition
1:
It
can
be
(
and
has
been)
tested.
The
lack
of
a
supporting
procedure
for
establishing
the
CRV,
coupled
with
its conceptual dependence on the variability of blank measurements, makes
testing
of
the
concept
difficult.
For
example,
if
blank
measurements
fail
to
produce
a
response,
it
is
impossible
to
calculate
a
CRV
because
the
standard
deviation
of
zero
is
zero.
One
solution
for
testing
the
concept
is
to
assume
that
the
CRV
is
functionally
equivalent
to
the
MDL
as
the
blank
signal
approaches
zero,
and
use
a
slightly
modified
version
of
the
MDL
procedure
to
test
the
CRV
concept.
The
slight
modification
involves
selecting
a
Student's
t­
value
based
on α = 0.05 instead of α = 0.01,
for
n­
1
degrees
of
freedom.
EPA
believes
this
is
a
reasonable
assumption,
and
therefore,
that
the
MDL
procedure
is
a
viable
means
for
testing
the
CRV
concept.
Therefore,
the
CRV
meets
this
condition.

Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
The
IUPAC/
ISO
definitions
meet
this
criterion.
Moreover,
it
is
likely
that
these
definitions
have
received
greater
peer
review
than
any
of
the
other
concepts.

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
The
error
rate
is
specified
by α, suggested at 0.05 (5%).
Therefore,
the
CRV
meets
this
condition.

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
CRV
is
defined
in
the
various
publications
by
Currie.
However,
EPA's
search
of
the
ISO
web
site
found
no
standard
for
control
of
the
concept.
Therefore,
the
CRV
fails
this
condition.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
Because
IUPAC
and
ISO
are
international
bodies,
it
is
difficult
to
determine
the
degree
of
acceptance
of
the
CRV
in
the
U.
S.
and
the
world
community.
EPA
has
not
specifically
investigated
the
number
of
papers
in
published
journals
that
include
CRV
values,
but
EPA's
literature
search
for
detection
and
quantitation
concepts
did
not
uncover
a
large
number
of
citations
that
promote
the
CRV
in
particular.
Therefore,
it
is
difficult
to
determine
if
the
CRV
meets
this
condition.

5.1.4.2.2
Criterion
2:
The
detection
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
CRV
concept
is
designed
to
account
for
the
variability
of
measurements
of
the
blank
in
the
context
of
a
"
chemical
measurement
process"
(
method).
Unfortunately,
neither
ISO,
IUPAC,
nor
Currie
have
published
a
procedure
to
implement
the
concept.
As
a
result,
the
CRV
addresses
demonstrated
expectations
of
laboratory
and
method
performance
in
theory,
but
in
practice,
provides
no
direct
means
for
performing
these
demonstrations.
Therefore,
EPA
believes
the
CRV
partially
meets
this
criterion.
5.1.4.2.3
Criterion
3:
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

The
CRV
concept
is
not
supported
by
a
clearly
defined
procedure
for
establishing
a
CRV.
Therefore,
the
CRV
fails
this
criterion.

5.1.4.2.4
Criterion
4:
The
detection
limit
concept
should
result
in
a
measured
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

Although the CRV suggests α = 0.05, resulting in 1−α of 0.95, or a 95% probability of detection, the concept allows other probabilities to be specified.
Therefore,
the
CRV
satisfies
this
criterion.

5.1.4.2.5
Criterion
5:
The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act
and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

In
the
absence
of
a
procedure
for
establishing
CRVs,
the
CRV
concept
fails
to
meet
this
criterion
because
it
cannot
be
used
in
a
regulatory
context.
The
CRV
passes
only
if
it
is
assumed
to
be
functionally
equivalent
to
an
MDL
determined
with
alpha
set
at
0.05
instead
of
0.01
(
i.
e.,
if
the
MDL
procedure,
with α = 0.05,
is
used
to
establish
a
CRV).

5.1.5
Evaluation
of
the
IUPAC/
ISO
Detection
Limit
The
detection
limit
or
minimum
detectable
value
(
MDV)
was
developed
by
IUPAC/
ISO
and
published
in
the
same
papers
as
the
CRV
(
Section
5.1.4).

5.1.5.1
Description
of
the
IUPAC/
ISO
Detection
Limit
Procedure
The
1995
publications
define
the
minimum
detectable
value
(
detection
limit)
as
follows:

"
The
Minimum
Detectable
Value
(
MDV)
...
[
is]
...
the
net
signal
(
or
concentration)
of
that
value
(
LD)
for
which
the
false
negative
error
is
β, given LC (or α)."
(
see
the
CRV
for
LC)

For
a
normal
distribution
with
known
variance,
LD
reduces
to:

LD = z(1−β) σD

where β is the false negative error rate, recommended at 5% (β = 0.05), and σD is the standard deviation at the detection limit.
Later
publications
refer
to
the
minimum
detectable
value
as
the
Detection
Limit.
To
avoid
confusion
in
terminology
and
to
help
distinguish
the
ISO/
IUPAC
concept
from
the
MDL,
LOD,
and
CRV,
EPA
will
refer
to
the
ISO/
IUPAC
detection
limit
as
the
Minimum
Detectable
Value,
abbreviated
as
MDV.
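In Currie's formulation, the detection decision is made at LC, and the MDV (LD) is the true concentration at which the false negative rate for that decision threshold is β; for a known, constant σ this places LD a further z(1−β)σ above LC, roughly 3.29σ when α = β = 0.05. A stdlib-only sketch under that constant-σ assumption (the inverse-normal helper is an illustration, not a published procedure):

```python
import math

def z_quantile(p):
    """Inverse standard normal CDF by bisection (stdlib only)."""
    lo, hi = -10.0, 10.0
    for _ in range(100):
        mid = (lo + hi) / 2
        # Standard normal CDF: P(Z <= mid)
        if 0.5 * math.erfc(-mid / math.sqrt(2)) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def minimum_detectable_value(lc, sigma, beta=0.05):
    """True concentration with false negative rate beta, given the
    detection decision threshold lc and known, constant sigma."""
    return lc + z_quantile(1 - beta) * sigma

sigma = 0.1
lc = z_quantile(0.95) * sigma            # critical value at alpha = 0.05
ld = minimum_detectable_value(lc, sigma)
print(round(ld / sigma, 2))              # prints 3.29
```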
5.1.5.2
Assessment
of
the
ISO/
IUPAC
MDV
Against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
ISO/
IUPAC
MDV
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
detection
limit
concepts.

5.1.5.2.1
Criterion
1:
The
detection
limit
concept
must
be
scientifically
valid.
(
Validity
is
comprised
of
five
conditions.)

Condition
1:
It
can
be
(
and
has
been)
tested.
The
lack
of
a
supporting
procedure
for
establishing
the
MDV
makes
testing
of
the
concept
difficult.
However,
EPA
believes
that
the
MDV
can
be
tested
using
data
similar
to
those
used
to
generate
MDL
values.
Therefore,
the
MDV
meets
this
condition.

Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
The
IUPAC/
ISO
definitions
meet
this
condition;
moreover,
it
is
likely
that
this
definition
has
received
greater
peer
review
than
any
of
the
other
concepts.

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
The
error
rates
are
specified
by α and β,
both
suggested
at
0.05
(
5
%),
so
the
error
rate
is
known.

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
MDV
is
defined
in
the
various
publications
by
Currie.
However,
EPA's
search
of
the
ISO
web
site
found
no
standard
for
control
of
the
concept.
Therefore,
the
MDV
fails
this
condition.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
Because
IUPAC
and
ISO
are
international
bodies,
it
is
difficult
to
determine
the
degree
of
acceptance
of
the
MDV
in
the
U.
S.
and
the
world
community.
EPA
has
not
specifically
investigated
the
numbers
of
papers
in
published
journals
that
include
MDV
values,
but
EPA's
literature
search
for
detection
and
quantitation
concepts
did
not
uncover
a
large
number
of
citations
that
promote
the
MDV
in
particular.
Therefore,
it
is
difficult
to
determine
if
the
MDV meets this condition.

5.1.5.2.2
Criterion
2:
The
detection
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
MDV
concept
is
designed
to
account
for
the
variability
of
measurements
of
the
blank
in
the
context
of
a
"
chemical
measurement
process"
in
the
sense
that
it
is
used
in
concert
with
a
critical
value
that
is
based
on
blank
measurement
variability.
The
MDV
is
the
true
concentration
that
is
used
in
the
planning
of
method
evaluation
and
development.
The
actual
detection
decision
is
made
at
the
critical
value
(
CRV),
which
is
determined
from
measured
values.
The
concept
of
a
true
concentration
MDV
and
its
associated
allowance
for
false
negatives
is
of
little
practical
value
in
making
the
actual
detection
decision.
Therefore,
the
MDV
fails
this
criterion.
The
allowance
for
false
negatives
in
a
regulatory
context
is
discussed
in
greater
detail
in
Chapter
3.

5.1.5.2.3
Criterion
3:
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance.

The
MDV
concept
is
not
supported
by
a
clearly
defined
procedure
for
establishing
MDV
values.
Therefore,
the
MDV
fails
this
criterion.
5.1.5.2.4
Criterion
4:
The
detection
limit
concept
should
result
in
a
measured
concentration
at
which
there
is
99%
confidence
that
a
substance
will
be
detected
when
the
analytical
method
is
performed
by
experienced
staff
in
a
well­
operated
laboratory.

The
allowance
for
false
negatives
raises
the
probability
of
detection
to
a
value
estimated
to
be
greater
than
99.999999
%
(
probability
of
a
false
positive
less
than
10⁻⁸).
This
false
positive
rate
is
excessive
and
would
yield
"
detection
limits"
that
are
of
little
practical
value.
Therefore,
the
MDV
fails
this
criterion.

5.1.5.2.5
Criterion
5:
The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act
and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government.

In
the
absence
of
a
procedure
for
establishing
MDV
values,
the
MDV
concept
fails
to
meet
this
criterion
because
it
cannot
be
used
in
a
regulatory
context.

5.2
Quantitation
Limit
Concepts
Sections
5.2.1
through
5.2.4
describe
EPA's
assessment
of
four
quantitation
limit
concepts.
Each
discussion
is
divided
into
two
major
subsections.
The
first
subsection
describes
the
concept
and,
where
applicable,
the
procedure
that
supports
the
concept,
and
the
second
subsection
details
EPA's
assessment
of
the
concept
based
on
the
five
criteria
established
in
Chapter
4
for
evaluating
quantitation
limit
concepts.
(
Six
criteria
are
given
in
Chapter
4;
four
of
these
pertain
to
both
detection
and
quantitation
limit
concepts,
one
pertains
only
to
detection
limit
concepts,
and
one
pertains
only
to
quantitation
limit
concepts.)

5.2.1
Assessment
of
the
EPA
Minimum
Level
of
Quantitation
(
ML)

Section
5.2.1.1
provides
an
overview
of
the
ML
concept
and
the
procedures
used
to
implement
the
concept.
Section
5.2.1.2
contains
EPA's
assessment
of
the
ML
against
the
five
evaluation
criteria
that
concern
quantitation
limit
concepts.

5.2.1.1
Description
of
the
ML
Concept
and
Procedures
The
present
definition
of
the
ML
includes
a
statement
of
the
concept
and
the
procedures
used
to
establish
the
ML.
This
definition
states
that
the
ML
is:

"
the
lowest
level
at
which
the
entire
analytical
system
must
give
a
recognizable
signal
and
acceptable
calibration
point
for
the
analyte.
It
is
equivalent
to
the
concentration
of
the
lowest
calibration
standard,
assuming
that
all
method­
specified
sample
weights,
volumes,
and
clean
up
procedures
have
been
employed.
The
ML
is
calculated
by
multiplying
the
MDL
by
3.18
and
rounding
the
results
to
the
number
nearest
to
(
1,
2,
or
5)
× 10^n,
where
n
is
an
integer.
"

The
ML
is
designed
to
provide
a
practical
embodiment
of
the
quantification
level
proposed
by
Currie
and
adopted
by
IUPAC.
It
is
functionally
analogous
to
the
American
Chemical
Society's
Limit
of
Quantitation
(
LOQ).
The
LOQ
is
discussed
in
Section
5.2.3
of
this
chapter.
Chapter
2
(
Section
2.2.2)
describes
the
ML
concept
in
detail.
The
first
part
of
the
ML
definition
(
i.e.,
the
lowest
level
at
which
the
system
gives
a
recognizable
signal
and
acceptable
calibration
point
for
the
analyte)
ties
the
quantification
limit
to
the
capabilities
of
the
measurement
system.
The
second
part
of
the
ML
definition
provides
a
procedural
means
for
establishing
the
ML.

The
procedural
component
of
the
definition
is
designed
to
yield
an
ML
value
that
equals
approximately
10
times
the
standard
deviation
of
replicate
analyses
used
to
determine
the
MDL.
(
The
exact
value
corresponding
to
10
times
the
standard
deviation
is
rounded
to
avoid
error
that
would
arise
from
preparation
of
calibration
standards
at
exact,
unrounded
concentrations.)
The
procedure
given
in
the
above
definition
assumes
that
exactly
seven
replicates
are
used
to
determine
the
MDL.
EPA
has
observed,
however,
that
laboratories
occasionally
perform
MDL
studies
with
more
than
the
required
minimum
of
seven
replicates.
When
this
is
done,
the
Student's t-value
used
to
calculate
the
MDL
should
be
adjusted
accordingly.
Therefore,
EPA
believes
that
the
portion
of
the ML
definition
that
addresses
calculation
of
the
ML
should
be
revised.
An
improved
definition
is
as
follows:

"
The
ML
is
the
lowest
level
at
which
the
entire
analytical
system
must
give
a
recognizable
signal
and
acceptable
calibration
point
for
the
analyte.
It
is
equivalent
to
the
concentration
of
the
lowest
calibration
standard,
assuming
that
all
method­
specified
sample
weights,
volumes,
and
clean
up
procedures
have
been
employed.
The
ML
is
calculated
by
identifying
the
standard
deviation
used
to
calculate
the
MDL,
multiplying
this
standard
deviation
by
10,
and
rounding the resulting number to the nearest (1, 2, or 5) × 10^n,
where
n
is
an
integer.
If
the
MDL
is
determined
with
seven
replicates,
the
ML
is
calculated
by
multiplying
the
MDL
by
3.18
and
rounding
the
result
to
the
number
nearest
to
(
1,
2,
or
5)
× 10^n,
where
n
is
an
integer."
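The revised calculation can be sketched as code: the standard deviation from the MDL study is multiplied by 10, then rounded to the nearest number of the form (1, 2, or 5) × 10^n. (The 3.18 factor for seven replicates follows from 10 divided by the one-sided 99th-percentile Student's t-value of 3.143 for 6 degrees of freedom.) The function names and replicate values below are hypothetical illustrations:

```python
import statistics

def round_1_2_5(x):
    """Round x to the nearest number of the form (1, 2, or 5) * 10**n."""
    candidates = [m * 10.0 ** n for n in range(-6, 7) for m in (1, 2, 5)]
    return min(candidates, key=lambda c: abs(c - x))

def minimum_level(replicates):
    """ML = 10 * s, rounded to the nearest (1, 2, or 5) * 10**n."""
    return round_1_2_5(10 * statistics.stdev(replicates))

# Hypothetical replicate spiked-sample results from an MDL study (ug/L)
spikes = [0.41, 0.38, 0.45, 0.40, 0.43, 0.37, 0.44]
print(minimum_level(spikes))  # prints 0.2
```

For these values, 10·s is about 0.30, which rounds down to 0.2; the same result is obtained by multiplying the seven-replicate MDL (3.143·s) by 3.18.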

5.2.1.2
Assessment
of
the
ML
against
the
Evaluation
Criteria
The
following
five
subsections
discuss
the
ML
concept
and
procedure
in
the
context
of
the
five
evaluation
criteria
that
concern
quantitation
limit
concepts.

5.2.1.2.1
Criterion
1:
The
quantitation
limit
concept
must
be
scientifically
valid.
(
Validity
is
comprised
of
five
conditions.)

For
the
purposes
of
evaluating
scientific
validity,
EPA
is
using
the
conditions
established
by
the
Supreme
Court
in
Daubert
v.
Merrell
Dow
Pharmaceuticals
(
see
Chapter
4).

Condition
1:
It
can
be
(
and
has
been)
tested.
The
ML
meets
this
condition.
The
ML
has
been
used
experimentally
since
1979
and
in
the
regulatory
context
since
1984.
The
ML
is
tested
each
time
a
laboratory
calibrates
its
instrument
because
methods
that
employ
the
ML
require
that
it
be
included
as
the
lowest
nonzero
standard
in
these
calibrations.

Moreover, in preparation for this reassessment of detection and quantitation limit concepts, EPA exhaustively tested the MDL and ML procedures with 10 different techniques at decreasing spike concentrations to evaluate how well they characterized the region of interest.
Results
of
the
study
suggest
that (1)
although
the
calculated
MDL
and
ML
could
vary
depending
on
the
spike
level
used,
the
procedure
was
capable
of
reasonably
estimating
detection
and
quantitation
limits
when
the
full
iterative
procedure
in
the
MDL
was
employed,
and
(2)
the
rounding
process
employed
to
determine
the
ML
generally
yielded
consistent
MLs
even
with
slight
variations
in
the
calculated
MDL.
In
other
words,
if
the
procedure
for
establishing
an
ML
is
properly
implemented
for
a
given
method,
it
will
yield
an
ML
value
that
is
consistent
with
the
concept,
and
this
ML
value
will
be
verified
(
tested)
by
a
laboratory
each
and
every
time
it
calibrates
the
instrument
used
to
analyze
samples
by
the
method.

Condition
2:
It
has
been
subjected
to
peer
review
and
publication.
The
ML
has
not
been
published
in
a
peer-reviewed
journal.
However,
the
present
definition
of
the
ML
describes
the
concept
and
the
procedures
used
to
establish
the
ML.
This
definition
is
included
in
EPA
Method
1631,
which
was
extensively
peer
reviewed
in
accordance
with
EPA
policies
on
peer
review
prior
to
publication
and
promulgation.
Given
that
EPA's
policies
on
peer
review
are
as
stringent
as
or
more
stringent
than
those
used
by
many
published
journals,
EPA
believes
that
the
ML
has
met
a
high
standard
of
scientific
review
and
scrutiny,
and
therefore,
meets
the
intent
of
this
condition.

Condition
3:
The
error
rate
associated
with
the
procedure
is
either
known
or
can
be
estimated.
The
uncertainty
associated
with
any
ML
value
can
be
calculated.
EPA
performed
such
calculations
during
this
re­
assessment
and
found
that,
on
average
across
all
techniques
tested,
the
relative
standard
deviation
of
replicate
measurements
at
the
ML
was
approximately
7%.
Median
RSD
values
calculated
for
each
multianalyte
method
tested
ranged
from
6
to
14
percent.
RSD
values
calculated
for
each
single­
analyte
method
tested
ranged
from
4
to
16
percent.
(
See
Appendix
C
to
this
TSD
for
a
detailed
discussion
and
presentation
of
results.)

Condition
4:
Standards
exist
and
can
be
maintained
to
control
its
operation.
The
ML
meets
this
criterion.
Detailed
procedures
(
i.e.,
standards)
for
establishing
the
ML
are
given
in
the
definition
itself,
although,
as
noted
above,
minor
revision
of
this
definition
is
recommended
to
ensure
that
the
ML
is
properly
calculated
when
more
than
7
replicates
are
used
to
determine
the
MDL.

Condition
5:
It
has
attracted
widespread
acceptance
within
a
relevant
scientific
community.
EPA
believes
the
ML
meets
this
condition.
The
ML
is
functionally
analogous
to
the
American
Chemical
Society's
LOQ
and
to
the
ISO/
IUPAC
quantification
limit,
suggesting
widespread
acceptance.

5.2.1.2.2
Criterion
2:
The
quantitation
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability.

The
ML
procedure
is
designed
to
provide
a
means
by
which
laboratories
can
demonstrate
their
performance
with
a
method
under
routine
laboratory
operating
conditions.
All
recently
developed
EPA
CWA
methods
require
that
laboratories
calibrate
their
instruments
prior
to
analyzing
environmental
samples.
The
ML
is
defined
as
the
lowest
nonzero
standard
in
the
laboratory's
calibration,
and
therefore,
reflects
realistic
expectations
of
laboratory
performance
with
a
given
method
under
routine
laboratory
conditions
(
i.e.,
under
conditions
of
routine
variability).

Also,
the
ML
is
based
on
the
standard
deviation
of
replicate
analyses
used
to
establish
the
MDL.
As
described
in
Section
5.1.1.2.2,
these
analyses
are
performed
to
characterize
laboratory
and
method
performance,
including
routine
variability,
at
low
concentrations.
When
laboratories
perform
MDL
studies
and
yield
results
that
are
approximately
3.18
times
lower
than
the
ML,
they
demonstrate
that
they
can
achieve
expected
levels
of
performance
at
the
specified
quantitation
level.

EPA
recognizes
that
one
laboratory
may
obtain
MDLs
and
MLs
that
are
lower
or
higher
than
those
in
another
laboratory.
If
the
ML
is
being
developed
during
method
development,
it
is
important
to
determine
the
ML
at
more
than
one
laboratory
to
ensure
that
the
published
ML
reflects
demonstrated
expectations
of
method
performance
in
a
community
of
laboratories.
EPA
does
not
believe
this
community
should
be
so
broad
as
to
include
the
entire
universe
of
possible
laboratories
that
might
desire
to
practice
the
method.
Rather, EPA believes that this community should include well-operated laboratories that are experienced with the techniques used in the method and that have some familiarity with the method. See Section 5.1.1.2.2 for additional discussion of this topic.

5.2.1.2.3 Criterion 3: The quantitation limit concept must be supported by a practical and affordable procedure that a single laboratory can use to evaluate method performance.

The ML is designed for use by a single laboratory. The ML can be directly determined from the MDL, which is among the most affordable of procedures that have been suggested for determining detection limits (see discussion in Section 5.1.1.2.3 for additional details regarding affordability). As a result, the ML is among the most affordable of procedures for determining quantitation limits.

5.2.1.2.4 Criterion 4: The quantitation limit concept should identify a concentration at which the reliability of the measured results is consistent with the capabilities of the method when a method is performed by experienced staff in a well-operated laboratory.

The ML meets this criterion. The ML is determined in individual laboratories each time they calibrate their instruments to perform a method, and the goal in doing so is to identify a concentration at which the reliability of the measured results is consistent with the capabilities of the method. EPA methods that employ the ML include criteria for demonstrating acceptable calibration performance, thereby ensuring that the ML is determined under controlled laboratory conditions. EPA methods that employ the ML also specify that the analytical method is to be performed by qualified staff who are experienced with the techniques used in the method.

5.2.1.2.5 Criterion 5: The quantitation limit concept must be applicable to the variety of decisions made under the Clean Water Act and should support State and local obligations to implement measurement requirements that are at least as stringent as those set by the Federal Government.

The ML meets this criterion. It has been successfully used to support State and local obligations under the Clean Water Act since 1984.

5.2.2 Assessment of the IQE

The Interlaboratory Quantitation Estimate (IQE) was developed by ASTM with support from members of the regulated industry in an attempt to provide a scientifically sound, comprehensive quantitation limit procedure that addresses the concerns of the regulated industry, of statisticians, and of analysts involved in ASTM Committee D 19 on water. A brief summary of the procedure for establishing an IQE is given in Section 5.2.2.1. Section 5.2.2.2 presents EPA's assessment of the IQE against the five criteria established for evaluating quantitation limit concepts.

5.2.2.1 Description of the IQE Concept and Procedure

ASTM Designation D 6512 is the Standard Practice for the Interlaboratory Quantitation Estimate. As stated in the practice:

"
IQEZ
%
is
computed
to
be
the
lowest
concentration
for
which
a
single
measurement
from
a
laboratory
selected
from
the
population
of
qualified
laboratories
represented
in
an
interlaboratory
study
will
have
an
estimated
Z
%
relative
standard
deviation
(
Z
%
RSD,
Assessment
of
Detection
and
Quantitation
Concepts
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
5­
20
based
on
interlaboratory
standard
deviation),
where
Z
is
typically
an
integer
multiple
of
10,
such
as
10,
20,
or
30,
but
Z
can
be
less
than
10."

The IQE is determined and verified using a procedure containing 5 major steps with approximately 46 substeps and conditions. The full text of the IQE procedure is available from ASTM International. The 5 major steps and their functions are given in Section 6 of the IQE procedure and are as follows:

1. Overview of the procedure.

2. IQE Study Plan, Design, and Protocol - in this section, the task manager (study supervisor) chooses the analyte, matrix, and analytical method. Details are given for the appropriate range of study concentrations; the model of recovery vs. concentration; the study protocol (ASTM Practice D 2777 is suggested); the instructions to be given to the participating laboratories, including reporting requirements; the allowable sources of variation; and the number of laboratories, analysts, measurement systems, and days over which the study will be conducted.

3. Conduct the IQE Study, Screen the Data, and Choose a Model - after the study data are collected and screened according to ASTM Practice D 2777, the interlaboratory standard deviation (ILSD) versus concentration data are tabulated and one of three models is fit to the data. The first attempt is at fitting a constant model. If that attempt fails, a straight-line model is attempted. If the straight-line model fails, a hybrid (Rocke/Lorenzato) model is fit. After fitting, the model is evaluated for reasonableness and lack of fit. If the model fails, the study supervisor determines if a subset of the data should be analyzed or if more data are needed.

4. Compute the IQE - the IQE is computed using the ILSD model selected in Step 3 to estimate the relative standard deviation as a function of concentration. The first attempt is at 10% RSD (IQE10%). If this attempt fails, IQE20% is tried, then IQE30%. IQEs greater than 30% are not recommended.

5. Nontrivial Amount of Censored Data - this section of the IQE procedure addresses the effect of "nondetects" or "less-thans." Suggestions are given to see if uncensored data can be obtained from the laboratories or if the study needs to be augmented with additional data. Suggestions are given for fitting a model to data that contain less than 10% nondetects or less-thans to produce an IQE.
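Steps 3 and 4 can be illustrated with a simplified, single-laboratory sketch. The hybrid model is represented here in a commonly used simplified form, sd(T) = sqrt(a² + (b·T)²), and the coefficients are hypothetical; the actual ASTM D 6512 procedure involves many more substeps and checks than shown here.

```python
import math

# Hypothetical coefficients of a fitted hybrid-type model:
#   sd(T) = sqrt(a**2 + (b*T)**2)
# a approximates the standard deviation near zero concentration;
# b approximates the asymptotic relative standard deviation.
a, b = 0.5, 0.05

def iqe(z):
    """Concentration T at which sd(T)/T equals the target RSD z.

    Solving sqrt(a**2 + (b*T)**2) / T = z gives T = a / sqrt(z**2 - b**2),
    which exists only when z exceeds the asymptotic RSD b.
    """
    if z <= b:
        raise ValueError("target RSD is not achievable under this model")
    return a / math.sqrt(z**2 - b**2)

# Try 10% RSD first, as the IQE procedure does, then 20%, then 30%.
for z in (0.10, 0.20, 0.30):
    print(f"IQE{int(z * 100)}% = {iqe(z):.2f}")
```

The closed form makes the trade-off visible: the larger the allowed RSD, the lower the resulting quantitation estimate.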

5.2.2.2 Assessment of the IQE Against the Evaluation Criteria

The following five subsections discuss the IQE concept and procedure in the context of the five evaluation criteria that concern quantitation limit concepts.

5.2.2.2.1 Criterion 1: The quantitation limit concept must be scientifically valid. (Validity comprises five conditions.)

Condition 1: It can be (and has been) tested. EPA is not aware of any organization, including ASTM, that has conducted a study to test the IQE procedure as written (i.e., designed and implemented an interlaboratory study involving multi-laboratory analysis of multiple concentrations of each matrix of interest). It has been tested by its developers using simulated datasets and on interlaboratory data sets that do not adequately characterize the low-level region of interest. As part of this reassessment, EPA tested a variant of the IQE procedure on single-laboratory data sets that were designed to characterize an analytical method in the region of detection and quantitation. Despite the lack of comprehensive testing performed to date, EPA believes that the IQE procedure can be tested if sufficient resources are invested. In other words, the IQE meets the condition that it 'can be' tested, but only partially meets the condition that it 'has been' tested.
Condition 2: It has been subjected to peer review and publication. Although the IQE has not been published in the peer-reviewed scientific literature, it has undergone extensive review and ballot by members of ASTM Committee D 19, many of whom are qualified peer reviewers. Therefore, although the IQE does not meet this condition in the sense of formal peer review and publication, EPA believes it does meet the intent of this condition (i.e., submission to scrutiny of the scientific community).

Condition 3: The error rate associated with the procedure is either known or can be estimated. In theory, an expert statistician could estimate the error rate of the IQE. However, the IQE procedure is extremely complex from an analytical chemistry and statistical perspective. As a result, it is unlikely that the error rate could be estimated by the staff of an environmental testing laboratory. Moreover, in attempting to follow the IQE procedure during this reassessment, EPA found the procedure to be highly subjective, particularly with respect to selection of an appropriate model. The subjective nature of the procedure is likely to yield different IQEs from the same data set, depending on the staff involved in analyzing the data and performing the calculations. (The likelihood of this problem is illustrated in Appendix C to this TSD.) EPA believes such conditions make it difficult, if not impossible, to estimate the actual error associated with the IQE. Therefore, the IQE fails this condition.

Condition 4: Standards exist and can be maintained to control its operation. The IQE concept is supported by a published procedure (standard) to control its operation. The procedure gives the steps to be followed in determining the IQE and instructs the study supervisor how to gather the data and compute an IQE.

However, there are several "gray areas" in the published procedure. The most significant gray area is in model selection. The procedure provides insufficient guidance on the use of residual plots as a basis for selecting models and, as a result, selection of the model may be very subjective, especially if the number of concentrations is low. The discussion of what model to use after rejecting the hybrid and linear models also is very vague. The exponential model is mentioned, as are models with more than one coefficient. Much of the data evaluated by EPA have tended to suggest the exponential model, based on the statistical tests discussed, but those data have almost always shown residual "patterns" when this model is used, which would then lead to consideration of other models. In addition, fitting the "constant model" is never discussed in detail. Most likely, this is simply done by calculating a mean (weighted if necessary) of the variances from the different concentrations; however, this is never explicitly stated.

As discussed under Condition 4 of Section 5.1.2.2.1 (scientific validity of the IDE procedure), EPA also is concerned about inconsistencies between the IDE and IQE that suggest conceptual problems with these standards. Finally, EPA observed that the IQE contains statistical errors that, if followed as written, could produce inaccurate IQE values. For example, the computations for weighted least squares given in Table 1 of the procedure are incorrect. The formulas for the weighted means of the spike values and results given in Table 1 of D 6512 would be appropriate only if the weighting were based on the number of replicates per spike level, rather than on the estimated variance calculated using the chosen standard deviation model.
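The distinction between the two weighting schemes can be sketched numerically; the spike levels, replicate counts, and variances below are hypothetical. When weights come from a fitted standard deviation model, each level should be weighted by the reciprocal of its estimated variance; weighting by replicate counts generally produces a different weighted mean.

```python
# Hypothetical spike levels, replicate counts, and model-estimated variances
spikes    = [1.0, 5.0, 10.0]
n_reps    = [7, 7, 14]            # replicates analyzed at each spike level
variances = [0.04, 0.25, 1.00]    # from a fitted standard deviation model

def weighted_mean(values, weights):
    return sum(v * w for v, w in zip(values, weights)) / sum(weights)

# Weighting by replicate count per spike level
mean_by_counts = weighted_mean(spikes, n_reps)

# Inverse-variance weighting, as the chosen model actually calls for
mean_by_invvar = weighted_mean(spikes, [1.0 / v for v in variances])

print(mean_by_counts, mean_by_invvar)  # the two schemes disagree
```

Because variance typically grows with concentration, inverse-variance weighting pulls the weighted mean toward the low, precisely measured levels, while count-based weighting does not.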

Based on these findings (along with those discussed under Criterion 2 below), EPA believes that, although the IQE is supported by a published procedure, the procedure is not sufficient to control operation of the IQE because of the high degree of subjectivity involved in implementing the procedure, statistical errors in the procedure, and internal inconsistencies with the IDE. Therefore, the IQE fails this condition.

Condition 5: It has attracted widespread acceptance within a relevant scientific community. The IQE fails this condition because it is familiar to, and has been accepted only by, a very narrow segment of the scientific community. Although the IQE has been approved by ASTM for more than 2 years, EPA has not seen an IQE in the open literature or in an analytical method, including an ASTM method.
5.2.2.2.2 Criterion 2: The quantitation limit concept must address demonstrated expectations of laboratory and method performance, including routine variability.

The IQE procedure is designed to reflect expectations of interlaboratory performance, including routine variability. The procedure contains extensive instructions for dealing with unusual conditions, including sources of variability. However, based on studies of the single-laboratory variant of the procedure, in which model selection proved to be highly subjective, EPA is skeptical that the procedure can demonstrate realistic expectations of laboratory and method performance.

The IQE procedure suggests attempting to fit study results to a constant, linear, or hybrid model. If all of these fail, the procedure suggests trying a different model, such as the exponential model. (The exponential model figures more prominently in the IDE procedure, where it is one of the three main models discussed, replacing the Rocke and Lorenzato model.) Although the exponential model may be appropriate for the IDE (which is not tied to a fixed RSD), it yields unacceptable results when applied to the IQE procedure. Under the exponential model, relative variability (standard deviation divided by the true concentration) is a parabolic function; i.e., as concentration increases, relative variability decreases down to a specific percentage and then begins to increase. This is not a realistic description of laboratory and method performance. In addition, the exponential model will often result in two possible values each for IQE10%, IQE20%, and IQE30%.
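The two-value behavior can be demonstrated numerically. Assuming a simple exponential form sd(T) = a·exp(c·T) with hypothetical coefficients, the RSD curve a·exp(c·T)/T falls to a minimum at T = 1/c and then rises again, so any achievable target RSD is crossed at two distinct concentrations.

```python
import math

# Hypothetical exponential standard deviation model: sd(T) = a * exp(c * T)
a, c = 0.02, 0.1

def rsd(t):
    """Relative standard deviation at true concentration t."""
    return a * math.exp(c * t) / t

t_min = 1.0 / c        # concentration at which the RSD curve is minimized
target = 0.10          # 10% RSD, as for IQE10%

def bisect(lo, hi):
    """Locate T in [lo, hi] with rsd(T) == target by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2.0
        if (rsd(lo) - target) * (rsd(mid) - target) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2.0

low_root = bisect(1e-6, t_min)    # root on the falling branch
high_root = bisect(t_min, 1e3)    # root on the rising branch

# Two distinct concentrations both satisfy the 10% RSD target
print(low_root < t_min < high_root)
```

A procedure that reports "the" concentration at 10% RSD is therefore ambiguous under this model, which is the problem noted above.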

Another concern with the IQE procedure is that use of the non-mandatory appendices in ASTM D 6512 to determine the fit of a model may produce results that differ from those that would be obtained by using the default procedures for testing model fit that are built into off-the-shelf statistical software.

Given the subjectivity and confusion involved in selecting the model, EPA used the same data set to calculate a single-laboratory variant of the IQE with each of the available models and found that the calculated IQEs varied widely when different models were used.

Based on the problems described above, EPA believes the IQE fails this criterion.

5.2.2.2.3 Criterion 3: The quantitation limit concept must be supported by a practical and affordable procedure that a single laboratory can use to evaluate method performance.

The IQE procedure is neither practical nor affordable. It is designed for use by an ASTM study supervisor or task manager, not as a procedure that a single laboratory can use to evaluate method performance. EPA is aware that ASTM Committee D 19 is contemplating development of a within-laboratory quantitation estimate (WQE), but the WQE has not been approved through an ASTM ballot, so it cannot be adequately evaluated at this time. The WQE may meet this criterion, but the IQE does not.

Regarding affordability, EPA estimates that the cost of implementing the IQE procedure would be more than twice the cost of EPA's present implementation of the ML. The increased cost stems from the additional low-level data required to ensure that variability vs. concentration is characterized in the region of detection and quantitation, from the challenges involved in applying the statistical procedures in the IQE, and from the anticipated reanalysis and rework required if either the procedure failed to produce an IQE or the resulting IQE failed to meet the specifications in the IQE procedure.
5.2.2.2.4 Criterion 4: The quantitation limit concept should identify a concentration at which the reliability of the measured results is consistent with the capabilities of the method when a method is performed by experienced staff in a well-operated laboratory.

If the IQE were developed in an interlaboratory study that met the requirements of D 6512, the calculated IQE would likely be achievable by experienced staff in a well-operated laboratory. Therefore, the IQE passes this criterion. However, EPA notes that, although it passes the criterion, the IQE may not identify the lowest concentration at which the reliability of the measured results is consistent with the capabilities of the method when it is performed by experienced staff in a well-operated laboratory. Also, EPA is concerned that the IQE is an interlaboratory concept and may be limiting in a single-laboratory setting; i.e., the IQE may actually prevent a laboratory from demonstrating improved performance, because any redevelopment of the IQE would require an interlaboratory study that no single laboratory would undertake.

5.2.2.2.5 Criterion 5: The quantitation limit concept must be applicable to the variety of decisions made under the Clean Water Act and should support State and local obligations to implement measurement requirements that are at least as stringent as those set by the Federal Government.

Although the IQE could be applied to some decisions to be made under the CWA, it may not support decisions when pollutant levels need to be protective of human health and the environment, because the IQE may be considerably higher than these levels. At best, the IQE only partially passes this criterion.

5.2.3 Assessment of the ACS Limit of Quantitation

The Limit of Quantitation (LOQ) was developed by the Committee on Environmental Improvement of the American Chemical Society (ACS) and published in the same two papers as the LOD.

5.2.3.1 Description of the ACS LOQ Concept and Procedure

The 1983 "Principles" define the LOQ as:

"... the level above which quantitative results may be obtained with a specified degree of confidence."

The same relationship used to define the LOD is used for the LOQ:

St = Sb + Kd × σb

but the recommended minimal value of Kd is set at 10. Thus, the LOQ is 10σ above the gross blank signal, Sb. According to the 1983 publication, the LOQ corresponds to an uncertainty of ±30% (10σ ± 3σ). This uncertainty statement is based on σ equal to 10% of the LOQ. Other statements of uncertainty are, of course, possible using knowledge of σ and/or the RSD.

Neither the 1980 nor the 1983 ACS publication provides a specific procedure for estimating the LOQ, nor do they provide a minimum number of observations needed to estimate the gross blank signal or the variability term σb.
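Under the definitions above, the calculation itself is straightforward once blank replicates are available; the blank responses below are hypothetical, with Kd = 3 used for the LOD and Kd = 10 for the LOQ.

```python
import statistics

# Hypothetical replicate blank responses (instrument signal units)
blanks = [0.12, 0.15, 0.11, 0.14, 0.13, 0.15, 0.11]

s_b = statistics.fmean(blanks)       # gross blank signal, Sb
sigma_b = statistics.stdev(blanks)   # blank standard deviation, sigma_b

lod = s_b + 3 * sigma_b              # ACS LOD: Sb + 3 sigma_b
loq = s_b + 10 * sigma_b             # ACS LOQ: Sb + 10 sigma_b

print(f"LOD = {lod:.3f}, LOQ = {loq:.3f}")
```

The sketch also makes the testability problem concrete: if every blank response were zero, sigma_b would be zero and both limits would collapse to zero, which is the conceptual dependence on blank variability discussed below.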
5.2.3.2 Assessment of the ACS LOQ Against the Evaluation Criteria

The following five subsections discuss the ACS LOQ concept and procedure in the context of the five evaluation criteria that concern quantitation limit concepts.

5.2.3.2.1 Criterion 1: The quantitation limit concept must be scientifically valid (validity comprises five conditions)

Condition 1: It can be (and has been) tested. Testing of the LOQ is hampered by 1) the lack of a supporting procedure for establishing an LOQ, and 2) its conceptual dependence on the variability of blank measurements. If the blank measurements fail to produce a response, it is impossible to calculate an LOQ because the standard deviation of zero is zero. One solution for testing the concept is to assume that the LOQ is functionally equivalent to the ML as the blank signal approaches zero. EPA believes this is a reasonable assumption and, therefore, that the ML procedure is a viable means for testing the LOQ concept. Therefore, the LOQ meets this condition.

Condition 2: It has been subjected to peer review and publication. The ACS LOQ definition was published in the peer-reviewed journal Analytical Chemistry in 1980 and 1983. Therefore, the ACS LOQ meets this condition.

Condition 3: The error rate associated with the procedure is either known or can be estimated. The definition of the LOQ specifically estimates the uncertainty associated with a concentration at the LOQ as ±30%, based on 10% RSD. Other valid statements in terms of % RSD may be made based on study requirements, policy judgments, and/or specific results. For example, the estimate of an uncertainty of ±30% based on 10% RSD is inconsistent with EPA and ISO/IUPAC estimations, which place the uncertainty at ±20% (at ±2σ), and is inconsistent with the Episode 6000 data, which place the median RSD at 7% and, therefore, the ±2σ uncertainty at approximately ±14%.

Condition 4: Standards exist and can be maintained to control its operation. The ACS LOQ lacks a clearly defined procedure for estimating the important terms required to derive it. Although it may be possible to derive ACS LOQ values from data used to derive EPA MDL values, there is no discussion of using replicate blanks or replicate spiked samples, and no minimum recommendation for the number of replicates. Therefore, the ACS LOQ fails this condition.

Condition 5: It has attracted widespread acceptance within a relevant scientific community. Because the ACS does not develop and publish reference analytical methods, it is difficult to determine the degree of acceptance of the LOQ. EPA has not investigated the number of papers published in ACS journals that include LOQ values, but EPA's literature search for detection and quantitation concepts did not uncover a large number of citations that promote the LOQ in particular.

5.2.3.2.2 Criterion 2: The quantitation limit concept must address demonstrated expectations of laboratory and method performance, including routine variability.

The LOQ concept is designed to address demonstrated expectations of laboratory and method performance, including routine variability, so it appears to meet this criterion. Unfortunately, ACS has not published a procedure to implement the concept. In other words, the LOQ addresses demonstrated expectations of laboratory and method performance in theory but, in practice, provides no direct means for performing these demonstrations. Therefore, EPA believes the ACS LOQ partially meets this criterion.
5.2.3.2.3 Criterion 3: The quantitation limit concept must be supported by a practical and affordable procedure that a single laboratory can use to evaluate method performance.

The ACS LOQ concept is not supported by a clearly defined procedure for establishing the LOQ. Therefore, it fails this criterion.

5.2.3.2.4 Criterion 4: The quantitation limit concept should identify a concentration at which the reliability of the measured results is consistent with the capabilities of the method when a method is performed by experienced staff in a well-operated laboratory.

Given the relationship of the ACS LOQ to the ML, EPA believes the LOQ meets this criterion for the reasons outlined in Section 5.2.1.2.4, which discusses EPA's assessment of the ML against Criterion 4 for evaluating quantitation limit concepts.

5.2.3.2.5 Criterion 5: The quantitation limit concept must be applicable to the variety of decisions made under the Clean Water Act and should support State and local obligations to implement measurement requirements that are at least as stringent as those set by the Federal Government.

In the absence of a procedure for determining LOQ values, the ACS LOQ fails to meet this criterion because it cannot be used in a regulatory context. The LOQ passes only if it is assumed to be functionally equivalent to the ML (i.e., the ML procedure is used to establish an LOQ).

5.2.4 Assessment of the IUPAC/ISO Limit of Quantitation

A similar LOQ concept was developed by IUPAC/ISO and published in the same papers as the CRV and detection limit (Sections 5.1.4 and 5.1.5).

5.2.4.1 Description of the ISO/IUPAC LOQ Concept

The 1995 "Recommendations" define the LOQ as:

"... the ability of a CMP [chemical measurement process] to adequately 'quantify' an analyte. The ability to quantify is generally expressed in terms of the signal or analyte (true) value that will produce estimates having a specified relative standard deviation (RSD), commonly 10%."

The relationship used to define the LOQ is:

LQ = kQ × σQ

The recommended value for kQ is 10. Thus, the LOQ is 10σQ above the blank signal, where σQ is the standard deviation of measurements at the LOQ.
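The relation fixes the RSD at the LOQ: since LQ = kQ·σQ, the relative standard deviation at LQ is σQ/LQ = 1/kQ, i.e., 10% for the recommended kQ = 10. A minimal numeric check, using a hypothetical σQ:

```python
# Hypothetical standard deviation of measurements at the LOQ level
sigma_q = 0.25
k_q = 10                       # recommended ISO/IUPAC multiplier

l_q = k_q * sigma_q            # L_Q = k_Q * sigma_Q
rsd_at_loq = sigma_q / l_q     # RSD at the LOQ

print(rsd_at_loq)              # 1/k_Q = 0.1, i.e. 10% RSD
```

This makes explicit why the ISO/IUPAC LOQ and the ACS LOQ are functionally analogous: both pin the quantitation level to a 10% relative standard deviation.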

5.2.4.2 Assessment of the IUPAC/ISO LOQ Against the Evaluation Criteria

The following five subsections discuss the IUPAC/ISO LOQ concept and procedure in the context of the five evaluation criteria that concern quantitation limit concepts.

5.2.4.2.1 Criterion 1: The quantitation limit concept must be scientifically valid (validity comprises five conditions)
Condition 1: It can be (and has been) tested. Testing of the IUPAC/ISO LOQ is hampered by 1) the lack of a supporting procedure for establishing an LOQ, and 2) its conceptual dependence on the variability of blank measurements. If the blank measurements fail to produce a response, it is impossible to calculate an LOQ because the standard deviation of zero is zero. One solution for testing the concept is to assume that the ISO/IUPAC LOQ is functionally equivalent to the ML as the blank signal approaches zero. EPA believes this is a reasonable assumption and, therefore, that the ML procedure is a viable means for testing the LOQ concept. Therefore, the ISO/IUPAC LOQ meets this condition.

Condition 2: It has been subjected to peer review and publication. The IUPAC/ISO LOQ definition has been published in the peer-reviewed journals Pure and Appl. Chem. in 1985; Anal. Chim. Acta and Chemometrics and Intelligent Lab. Systems in 1997; and J. Radioanal. and Nuclear Chem. in 2000. Therefore, the IUPAC/ISO LOQ meets this condition.

Condition 3: The error rate associated with the procedure is either known or can be estimated. EPA used data generated in the Episode 6000 study to estimate the error rate associated with the LOQ. The Episode 6000 results show that the median error across all analytes and analytical techniques at 10σ is approximately ±14%, at approximately 95% confidence.

Condition 4: Standards exist and can be maintained to control its operation. The IUPAC/ISO LOQ lacks a clearly defined procedure for estimating the important terms required to derive it. Although it may be possible to derive IUPAC/ISO LOQ values from data used to derive EPA MDL values, there is no discussion of using replicate blanks or replicate spiked samples, and no minimum recommendation for the number of replicates. Therefore, EPA believes that the IUPAC/ISO LOQ fails this condition.

Condition 5: It has attracted widespread acceptance within a relevant scientific community. Acceptance by the scientific community is not known. Acceptance would be indicated by use of the LOQ in ISO methods, but EPA did not perform a search of ISO methods because of copyright restrictions. However, EPA's literature search for detection and quantitation concepts in the open technical literature did not uncover a large number of citations that reference the LOQ. Therefore, it is difficult to determine if the ISO/IUPAC LOQ meets this condition.

5.2.4.2.2 Criterion 2: The quantitation limit concept must address demonstrated expectations of laboratory and method performance, including routine variability.

The most recent publication on the IUPAC/ISO LOQ (J. Radioanal. and Nuclear Chem., op. cit.) provides insight into this issue through measurements of carbon-14 by accelerator mass spectrometry. Therefore, EPA believes that the IUPAC/ISO LOQ passes this criterion.

5.2.4.2.3 Criterion 3: The quantitation limit concept must be supported by a practical and affordable procedure that a single laboratory can use to evaluate method performance.

The ISO/IUPAC LOQ concept is not supported by a clearly defined procedure for establishing the LOQ. Therefore, it fails this criterion.

5.2.4.2.4 Criterion 4: The quantitation limit concept should identify a concentration at which the reliability of the measured results is consistent with the capabilities of the method when a method is performed by experienced staff in a well-operated laboratory.
Given the relationship of the IUPAC/ISO LOQ to the ML, EPA believes that the LOQ satisfies this criterion for the reasons outlined in Section 5.2.1.2.4, which discusses EPA's assessment of the ML against Criterion 4 for evaluating quantitation limit concepts.

5.2.4.2.5 Criterion 5: The quantitation limit concept must be applicable to the variety of decisions made under the Clean Water Act and should support State and local obligations to implement measurement requirements that are at least as stringent as those set by the Federal Government.

In the absence of a procedure for determining LOQ values, the ISO/IUPAC LOQ fails to meet this criterion because it cannot be used in a regulatory context. The ISO/IUPAC LOQ passes only if the ML procedure is used to establish an LOQ.
Table
5­
1.
Assessment
of
Detection
Limit
Concepts
Against
Evaluation
Criteria
Evaluation
Criteria
MDL
IDE
ACS
LOD
ISO/
IUPAC
CRV
ISO/
IUPAC
MDV
The
detection
limit
concept
must
be
scientifically
valid:

°
It
can
be
(
and
has
been
tested)

°
Has
undergone
peer
review
and
publication
°
Has
an
error
rate
that
is
known
or
can
be
estimated
°
Has
standards
that
can
be
maintained
to
control
its
operation
°
Has
achieved
widespread
acceptance
in
a
relevant
scientific
community
Meets
all
5
conditions
for
scientific
validity
with
slight
modifications
noted
to
clarify
understanding
of
error
rate
Meets
1,
partially
meets
1,
and
fails
3
of
the
5
conditions
for
scientific
validity.

°
Can
be,
but
has
not
been
fully
tested
(
partial)

°
Subjectivity
makes
calculation
of
error
rate
impossible
(
fails)

°
Has
a
standard
but,
due
to
the
high
degree
of
subjectivity,

errors,
and
conceptual
inconsistency,
it
is
unlikely
to
control
its
operation
(
fails)

°
Is
familiar
to
and
accepted
by
a
very
narrow
segment
of
the
scientific
community
(
fails)
Meets
4
of
the
5
conditions
for
scientific
validity.

°
No
standards
exist
to
control
its
operation
Meets
3
of
the
5
conditions
for
scientific
validity.

°
No
standards
exist
to
control
its
operation
°
Degree
of
acceptance
is
unclear.
Meets
3
of
the
5
conditions
for
scientific
validity.

°
No
standard
exist
to
control
its
operation
°
Degree
of
acceptance
is
unclear.

The
detection
limit
concept
must
address
demonstrated
expectations
of
laboratory
and
method
performance,
including
routine
variability
Can
meet
this
criterion
if
properly
applied
Conceptually
passes
this
criterion,

but
fails
in
practice
due
to
problems
with
model
selection
Partially
meets
the
criterion.
Concept
meets
the
criterion
but
no
procedure
for
implementing
the
concept
is
given.

Passes
the
criterion
only
if
equivalency
to
the
MDL
is
assumed.
Partially
meets
this
criterion.
Concept
meets
the
criterion
but
no
procedure
for
implementing
the
concept
is
given.

Passes
the
criterion
only
if
equivalency
to
the
MDL
is
assumed.
Could
be
used
in
planning
method
development
and
evaluation
studies
as
recommended
but
not
in
operational
detection
decision
making.
Table
5­
1.
Assessment
of
Detection
Limit
Concepts
Against
Evaluation
Criteria
Evaluation
Criteria
MDL
IDE
ACS
LOD
ISO/
IUPAC
CRV
ISO/
IUPAC
MDV
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
5­
29
The
detection
limit
concept
must
be
supported
by
a
practical
and
affordable
procedure
that
a
single
laboratory
can
use
to
evaluate
method
performance
Meets
this
criterion.
Procedure
can
be
performed
by
a
single
laboratory
during
a
single
shift,
or
for
method
development
by
multiple
labs
in
a
single
shift
Fails
this
criterion.
Requires
interlaboratory
study
involving
a
reference
lab
or
coordinating
body,
a
minimum
of
6
complete
data
sets,

and
a
skilled
statistician.
The
cost
of
implementing
this
procedure
would
exceed
most
method
development
budgets.
Fails
this
criterion.
Fails
this
criterion.
Fails
this
criterion.

Criterion: The detection limit concept should result in a measured concentration at which there is 99% confidence that a substance will be detected when the analytical method is performed by experienced staff in a well-operated laboratory.
MDL: Meets this criterion.
IDE: When the allowance for false negatives and the prediction and tolerance intervals are taken into account, the resulting detection limit (IDE) is raised to the point at which detection probability is estimated to be greater than 99.999999%; this yields numerical values that have no practical meaning as a detection standard. Therefore, the IDE fails this criterion.
ACS LOD: Meets this criterion.
ISO/IUPAC CRV: Meets this criterion.
ISO/IUPAC MDV: The MDV is a true concentration value not used in the actual detection decision. Does not meet the criterion.

The
detection
limit
concept
must
be
applicable
to
the
variety
of
decisions
made
under
the
Clean
Water
Act,

and
should
support
State
and
local
obligations
to
implement
measurement
requirements
that
are
at
least
as
stringent
as
those
set
by
the
Federal
Government
Meets
this
criterion
At
best,
only
partially
passes
this
criterion.
Not
likely
to
meet
this
criterion
in
instances
in
which
a
compliance
limit
is
close
to
detection
by
a
procedure
such
as
the
MDL.
In
the
absence
of
a
procedure
for
determining
LOD
values,

fails
to
meet
this
criterion.
In
the
absence
of
a
procedure
for
determining
CRV
values,
fails
to
meet
this
criterion.
In
the
absence
of
a
procedure
for
determining
MDV
values,
fails
to
meet
this
criterion
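For context on the 99%-confidence criterion above: the MDL is computed from at least seven replicate measurements of a low-level spike as the one-sided Student's t value at 99% confidence times the replicate standard deviation (the procedure of 40 CFR part 136, Appendix B). A minimal sketch, with hypothetical replicate data:

```python
import statistics

def mdl(replicates):
    """Method Detection Limit: one-sided t(0.99, n-1) times the
    sample standard deviation of replicate low-level spike results."""
    n = len(replicates)
    if n < 7:
        raise ValueError("the MDL procedure calls for at least 7 replicates")
    # Tabulated one-sided t values at 99% confidence, df = n - 1
    t99 = {7: 3.143, 8: 2.998, 9: 2.896, 10: 2.821}[n]
    return t99 * statistics.stdev(replicates)

# Seven hypothetical replicate measurements of a low-level spike (ug/L)
reps = [1.9, 2.2, 2.1, 1.8, 2.3, 2.0, 2.1]
print(round(mdl(reps), 3))  # → 0.54
```

Note how the MDL depends only on the observed spread of routine single-laboratory replicates, which is why the table credits it with being performable in a single shift.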
Table 5-2. Assessment of Quantitation Limit Concepts Against Evaluation Criteria
Evaluation Criteria | ML | IQE | ACS LOQ | ISO/IUPAC LOQ
Criterion: The quantitation limit concept must be scientifically valid.
° It can be (and has been) tested
° Has undergone peer review and publication
° Has an error rate that is known or can be estimated
° Has standards that can be maintained to control its operation
° Has achieved widespread acceptance in a relevant scientific community
ML: Meets all 5 conditions for scientific validity, though a slight modification to the definition is suggested to improve operation when more than 7 replicates are used to estimate the ML.
IQE: Meets 1 condition, partially meets 1 condition, and fails 3 conditions.
° Can be, but has not been, fully tested (partial)
° Error rate cannot be estimated due to problems with the procedure (fail)
° Standards are not likely to control its operation (fail)
° Has not achieved widespread acceptance (fail)
ACS LOQ: Meets 3 of the 5 conditions for scientific validity.
° Lacks a standard to control its operation
° Difficult to determine the degree of acceptance
ISO/IUPAC LOQ: Meets 4 of the 5 conditions for scientific validity.
° Lacks a standard to control its operation
° Difficult to determine the degree of acceptance
Criterion: The quantitation limit concept must address demonstrated expectations of laboratory and method performance, including routine variability.
ML: Meets this criterion. The procedure can be performed by a single laboratory during a single shift or, for method development, by multiple laboratories in a single shift.
IQE: Fails this criterion due to subjectivity, errors, and theoretical inconsistencies in the procedure.
ACS LOQ: Partially meets this criterion. The concept is designed to address these expectations, but in practice there is no procedure for performing such demonstrations.
ISO/IUPAC LOQ: Meets this criterion.
Criterion: The quantitation limit concept must be supported by a practical and affordable procedure that a single laboratory can use to evaluate method performance.
ML: Meets this criterion.
IQE: Fails this criterion. Requires an interlaboratory study involving a reference laboratory or coordinating body, 6 complete data sets, and a highly skilled statistician. The cost of implementing this procedure would exceed most method development budgets.
ACS LOQ: Fails this criterion.
ISO/IUPAC LOQ: Fails this criterion.
Criterion: The quantitation limit concept should identify the concentration at which the reliability of the measured results is consistent with the capabilities of the method when the method is performed by an experienced analyst in a well-operated laboratory.
ML: Meets this criterion.
IQE: Meets this criterion, but is not likely to estimate the lowest level at which reliable measurements can be made by an experienced analyst in a well-operated laboratory.
ACS LOQ: Meets this criterion.
ISO/IUPAC LOQ: Meets this criterion.
Criterion: The quantitation limit concept must be applicable to the variety of decisions made under the Clean Water Act, and should support State and local obligations to implement measurement requirements that are at least as stringent as those set by the Federal Government.
ML: Meets this criterion.
IQE: At best, only partially passes this criterion. Fails for those instances in which the IQE limit is greater than an effluent limit or water quality-based limit.
ACS LOQ: Fails this criterion. In the absence of a procedure for determining ACS LOQ values, the ACS LOQ cannot be used in a regulatory context.
ISO/IUPAC LOQ: Fails this criterion. In the absence of a procedure for determining LOQ values, the ISO/IUPAC LOQ cannot be used in a regulatory context.
Chapter 6
Conclusions

EPA's assessment of detection and quantitation limit concepts was based on:

° Identification of relevant concepts to include in the assessment (Chapter 2).
° Identification of issues that may be relevant to the assessment from an analytical chemistry, statistical, or regulatory perspective (Chapter 3).
° Development of criteria that reflect EPA's views concerning these issues (Chapter 4). These criteria formed the primary basis for evaluating the ability of each concept to meet EPA needs under the Clean Water Act.
° Assessment of how well each concept met the evaluation criteria (Chapter 5).
° Use of real-world data to evaluate both the theoretical and practical limitations of each concept (Appendices B and C).

EPA evaluated four sets of detection and quantitation limit concepts advanced by EPA, ASTM-International, ACS, and ISO/IUPAC. Of these, only the concepts advanced by EPA and ASTM-International were supported by clearly defined procedures for implementing the concept. The lack of supporting procedures for the ACS and ISO/IUPAC concepts is reflected in EPA's overall assessment.

EPA's overall assessment of each concept against each of the evaluation criteria suggests that 1) no single pair of detection and quantitation limit concepts perfectly meets EPA's criteria, and 2) the MDL and ML come closest to meeting EPA's criteria. EPA's assessment of the theoretical and practical applications of each concept (see Appendices B and C) is summarized in Exhibit 6-1. This exhibit suggests that no concept produces the "right" answer, and that different concepts produce different detection and quantitation limits. Observed differences are largely due to differences in the sources of variability accounted for by each concept.

These findings suggest that the MDL and ML are at least as good as, if not better than, other detection and quantitation limit concepts. EPA has suggested modifications to the MDL and ML that would allow these concepts to fully meet EPA's evaluation criteria. (A revised version of the MDL procedure that reflects these modifications is provided in Appendix D to this TSD; a revised version of the ML definition was given in Chapter 5.)
As noted in Chapter 3, however, outside organizations use different detection and quantitation concepts that meet their own needs. Given these alternate needs and EPA's desire to 1) encourage the development of improved measurement techniques, and 2) provide the stakeholder community with a variety of options whenever possible, EPA believes it would be impractical to require that all methods promulgated for use in Clean Water Act programs employ the MDL and ML concepts.

One possible alternative is for EPA to revise the current MDL and ML procedures and retain them as the standard concepts for use in CWA applications, while allowing alternate procedures for specific CWA applications as long as those procedures meet the needs of the specific application (i.e., provide detection and quantitation capabilities that would protect human health and the environment).
Exhibit 6-1: Theoretical and Practical Application of Each Concept

Finding 1: Each concept yields different values.

Detection Limit Concepts
° The EPA MDL and ACS LOD concepts, which are functionally analogous, produced detection limits that are a median of 1.25 times higher than the CRV advanced by ISO and IUPAC.
° The Minimum Detectable Value (MDV) advanced by ISO and IUPAC produced detection limits that are a median of 1.2 times higher than the MDL and LOD concepts.
° A single-laboratory variant of the IDE (the IDE has been advanced by ASTM-International) produced detection limits that are a median of 2.9 times higher than the MDL and LOD concepts.

Quantitation Limit Concepts
° The EPA ML and the functionally equivalent ACS LOQ produced quantitation limits that are a median of 1.1 times higher than the LOQ advanced by ISO and IUPAC.
° A single-laboratory variant of the IQE (the IQE has been advanced by ASTM) produced median quantitation limits that are equivalent to the EPA ML and ACS LOQ.

Finding 2: More than the 5 levels specified by ASTM are required to produce a reliable IDE and IQE.

° EPA found that the IDEs produced with a subset of data generated from the 5 concentrations recommended in the IDE procedure differed widely from the IDEs produced with a larger set of data involving 16 concentrations (including the subset of 5).
° These findings suggest that more than 5 levels are needed to produce a reliable IDE, due to the limited power of the statistical tests for significant model parameters and the difficulty of drawing conclusions based on residual plots with only 5 points.
° Parallel reasoning can be applied to the IQE based on its similarity to the IDE.
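The limited-power point can be illustrated with a back-of-the-envelope calculation that is not part of the ASTM procedure: for a straight-line fit with constant noise sigma, the confidence interval on a fitted model slope has half-width tcrit * sigma / sqrt(Sxx), and with only 5 levels both the spread of the design (Sxx) and the residual degrees of freedom are small. A hypothetical comparison of a 5-level and a 16-level design over the same concentration range:

```python
import math

def slope_ci_halfwidth(xs, sigma, tcrit):
    """Half-width of the confidence interval for a fitted slope:
    tcrit * sigma / sqrt(Sxx), where Sxx is the spread of the design."""
    xbar = sum(xs) / len(xs)
    sxx = sum((x - xbar) ** 2 for x in xs)
    return tcrit * sigma / math.sqrt(sxx)

sigma = 1.0
five = [1, 2, 4, 8, 16]           # hypothetical 5-level spiking design
sixteen = list(range(1, 17))      # 16 levels over the same range
# Two-sided 95% t critical values at df = n - 2
print(round(slope_ci_halfwidth(five, sigma, 3.182), 3))     # df = 3  → 0.261
print(round(slope_ci_halfwidth(sixteen, sigma, 2.145), 3))  # df = 14 → 0.116
```

With the same measurement noise, the 5-level design yields a slope interval more than twice as wide, which is consistent with the finding that tests for model parameters have little power at 5 levels.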

Finding 3: The ML procedure yields quantitation limits that are generally in the range of the 10% RSD intended in the ML (and the functionally analogous ACS LOQ) concept.

° EPA calculated the uncertainty associated with replicate measurements made at the ML for a large number of analytes and techniques.
° EPA found that on average, across all techniques tested, the RSD of replicate measurements at the ML was approximately 7%. Median RSDs calculated for each multi-analyte method ranged from 6-14%, and RSD values calculated for each single-analyte method ranged from 4-16%.
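The RSD statistic behind this finding is simply the sample standard deviation of replicate measurements at the ML expressed as a percentage of their mean. A minimal sketch, with hypothetical replicate values invented to land near the roughly 7% average reported:

```python
import statistics

def rsd_percent(measurements):
    """Relative standard deviation (coefficient of variation), in percent."""
    return 100 * statistics.stdev(measurements) / statistics.mean(measurements)

# Hypothetical replicate measurements at a nominal ML of 10 ug/L
reps = [9.1, 10.6, 9.8, 11.0, 10.2, 9.5, 10.8]
print(round(rsd_percent(reps), 1))  # → 7.0
```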

Finding 4: No single model adequately predicts the behavior of all analytes and all methods across the measurement range.

° EPA produced graphs representing hundreds of analyte/method combinations. Selection of an appropriate model based on these graphs is highly subjective at best, due to the lack of clear patterns and the residuals observed with each model applied.
° The IDE and IQE concepts are the only concepts other than the MDL and ML that are supported by a procedure for implementing the concept. The IDE and IQE procedures rely heavily on model selection, and the degree of subjectivity involved in selecting these models makes implementation of the IDE and IQE difficult.

Finding 5: Use of a recovery correction when establishing detection and quantitation limits may not be appropriate.

° EPA found that using a regression line to estimate a recovery correction at zero concentration causes great swings in the resulting detection and quantitation limits.
° Use of a recovery-correction procedure also can result in 'double-correcting' for recovery, because 1) nearly all methods already contain specifications for acceptable recovery performance, and 2) some methods include recovery correction in the computation of results.
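The sensitivity of a zero-concentration extrapolation can be sketched with ordinary least squares. In the hypothetical example below, two spike-recovery runs that differ only by typical measurement noise yield noticeably different extrapolated values at zero concentration; a detection limit adjusted by such an extrapolated correction inherits that instability. This is an illustration only, not the procedure EPA evaluated:

```python
def fit_line(xs, ys):
    """Ordinary least squares fit; returns (intercept, slope)."""
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    sxx = sum((x - xbar) ** 2 for x in xs)
    sxy = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
    slope = sxy / sxx
    return ybar - slope * xbar, slope

spikes = [10, 20, 40, 80, 160]           # spike concentrations (hypothetical)
run1 = [8.9, 18.1, 37.0, 74.5, 150.2]    # measured results, run 1
run2 = [9.4, 17.6, 36.2, 75.8, 148.9]    # measured results, run 2
b0_1, _ = fit_line(spikes, run1)
b0_2, _ = fit_line(spikes, run2)
# Extrapolated response at zero concentration shifts between runs
print(round(b0_1, 2), round(b0_2, 2))  # → -0.69 -0.47
```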
References
ASTM D6091, Standard Practice for 99 %/95 % Interlaboratory Detection Estimate (IDE) for Analytical Methods with Negligible Calibration Error, 1991.

ASTM Practice D2777, Standard Practice for Interlaboratory Quantitation Estimate, 2000.

Clayton, C. A., J. W. Hines, and P. D. Elkins (1987), Detection Limits with Specified Assurance Probabilities, Analytical Chemistry 59: 2506-2514.

Currie, Lloyd A. (1968), Limits for Qualitative Detection and Quantitative Determination, Analytical Chemistry 40: 586-593.

Currie, Lloyd A. (1995), Nomenclature in Evaluation of Analytical Methods including Detection and Quantification Capabilities, Pure and Appl. Chem. 67: 10, 1699-1722.

Currie, Lloyd A. (1999a), Nomenclature in Evaluation of Analytical Methods Including Detection and Quantification Capabilities (IUPAC Recommendations 1995), Anal. Chim. Acta 391, 105-126.

Currie, Lloyd A. (1999b), Detection and Quantification Limits: Origins and Historical Overview, Anal. Chim. Acta 391, 127-134.

Currie, Lloyd A. (2000), Detection and quantification capabilities and the evaluation of low-level data: Some international perspectives and continuing challenges, J. Radioanalytical and Nuclear Chem., Vol. 245, 145-156.

Daubert v. Merrell Dow Pharmaceuticals, 509 U.S. 579 (1993).

Federal Register (1979), Guidelines Establishing Test Procedures for the Analysis of Pollutants Under the Clean Water Act; Proposed Rule (44 FR 69463).

Federal Register (1984), Guidelines Establishing Test Procedures for the Analysis of Pollutants Under the Clean Water Act; Final Rule and Interim Final Rule, and Proposed Rule (49 FR 43234).

Federal Register (1987), National Primary Drinking Water Regulations - Synthetic Organic Chemicals Monitoring for Unregulated Contaminants; Final Rule, 52 FR 25699.

Federal Register (1992), National Primary Drinking Water Regulations; Synthetic Organic Chemicals and Inorganic Chemicals; Final Rule, 57 FR 31800.

Federal Register (1998), Federal Participation in the Development and Use of Voluntary Consensus Standards and in Conformity Assessment Activities, OMB Circular A-119, 63 FR 8546.

Federal Register (1999), Guidelines Establishing Test Procedures for the Analysis of Pollutants; Measurement of Mercury in Water (EPA Method 1631, Revision B); Final Rule, 64 FR 30417.

Gibbons, R. D., F. H. Jarke, and K. P. Stoub (1991), Detection Limits: For Linear Calibration Curves with Increasing Variance and Multiple Future Detection Decisions, Waste Testing and Quality Assurance: ASTM STP 1075, D. Friedman, Ed., American Society for Testing and Materials, Philadelphia 3: 337-390.
Gibbons, Robert D., David E. Coleman, and Raymond F. Maddalone (1997), Response to Comment on "An Alternative Minimum Level Definition for Analytical Quantification," Env. Sci. and Tech. 31: 7, 2071-2077.

Glaser, John, Denis Foerst, Gerald McKee, Stephan Quave, and William Budde (1981), Trace Analyses for Wastewaters, Env. Sci. & Tech. 15: 1426-1435.

Hahn and Meeker, Statistical Intervals, Wiley, 1991.

Hubaux, A. and G. Vos (1970), Decision and Detection Limits for Linear Calibration Curves, Analytical Chemistry 42: 849-855.

Keith, Lawrence H., et al. (1983), Principles of Environmental Analysis, Anal. Chem. 55: 14, 2210-2218.

Maddalone, Raymond, James Rice, Ben Edmondson, Babu Nott, and Judith Scott (1993), Defining Detection and Quantitation Levels, Water Environment and Technology 5, 41-44.

McDougal, Daniel, et al. (1980), Guidelines for Data Acquisition and Data Quality Evaluation in Environmental Chemistry, Anal. Chem. 52: 14, 2242-2249.

Patterson, Clair C. and Dorothy M. Settle (1976), NBS Special Publication 422, Accuracy in Trace Analysis: Sampling, Sample Handling, and Analysis 1, 321.

Pratt and Gibbons, Concepts of Non-parametric Theory, Springer-Verlag, 1981.

Rocke, David and Stefan Lorenzato (1995), A Two-Component Model for Measurement Error in Analytical Chemistry, Technometrics 37: 176-184.

U.S. Environmental Protection Agency (1991), Technical Support Document for Water Quality Based Toxics Control, EPA 505/2-90/001.

U.S. Environmental Protection Agency (1993), Guidance on Evaluation, Resolution, and Documentation of Analytical Problems Associated with Compliance Monitoring, EPA 821-B-93-001.

U.S. Environmental Protection Agency (1994), Draft National Guidance for the Permitting, Monitoring, and Enforcement of Water Quality-based Effluent Limitations Set Below Analytical Detection/Quantitation Levels.

U.S. Environmental Protection Agency (2001), EPA's Guidance for Implementation and Use of EPA Method 1631 for the Determination of Low-Level Mercury, EPA 821-R-01-023.

Youden, W. J. and E. H. Steiner, Statistical Manual of the Association of Official Analytical Chemists, AOAC International, 481 N. Frederick Ave., Suite 500, Gaithersburg, MD 20877-2417.
Appendix A
Literature Search Regarding Detection and Quantitation Limit Concepts

Introduction

Beginning in 2001, DynCorp staff conducted a search of published literature to identify articles that discuss detection and quantitation limit concepts. This literature search effort was conducted under EPA Contract No. 68-C-01-091 to support an evaluation of detection and quantitation limit concepts by the EPA Office of Water.

The principal goal of this literature search effort was to determine if any new detection or quantitation limit concepts had been published in the literature since an earlier search conducted for EPA by Science Applications International Corporation (SAIC) in 1997 and 1998. That search resulted in an annotated bibliography developed by SAIC and delivered to EPA in 1998.

The results of the DynCorp literature search are summarized in the table at the end of this appendix.

How the search was conducted

This search was conducted using two major techniques:

° a search of an on-line citation index (an index of articles cited by other authors), and
° a general on-line search of literature.

On­
line
citation
index
search
Because
the
search
was
intended
to
identify
detection
and
quantitation
limit
concepts
and
not
specific
numeric
limits
associated
with
a
particular
analytical
method,
DynCorp
began
by
searching
for
references
to
the
major
concepts
known
to
EPA.
These
included
the
Agency's
method
detection
limit
(
MDL)
and
any
other
terms
that
have
been
suggested
to
the
Agency
as
alternative
detection
or
quantitation
limit
concepts.
In
addition
to
searching
for
these
concepts,
DynCorp
also
searched
the
citation
index
to
identify
references
to
the
original
authors
of
these
concepts
and
for
any
other
authors
who
either
cited
the
original
concepts,
the
original
papers
underlying
those
concepts,
or
the
authors
of
those
concepts.
DynCorp
used
a
similar
approach
to
find
any
papers
that
cited
the
references
identified
in
the
earlier
literature
search
by
SAIC.

DynCorp
staff
evaluated
the
full
title
of
each
identified
citation
to
determine
its
relevance
to
EPA's
objective.
Where
available
electronically
and
at
no
additional
cost,
DynCorp
staff
also
reviewed
the
abstract
and/
or
full
paper
to
further
characterize
relevance.
All
papers
that
were
determined
to
be
relevant,
or
even
possibly
relevant,
were
obtained
in
hardcopy
or
electronic
format
for
evaluation
by
EPA.

After
reviewing
all
papers
determined
to
be
relevant
to
EPA's
objective,
DynCorp
examined
all
of
the
references
cited
in
those
papers
to
identify
additional
papers
of
interest.
These,
too,
were
obtained
in
hardcopy
or
electronic
format
for
evaluation
by
EPA,
except
where
noted
below
General on-line literature search

DynCorp performed an on-line direct search of published literature (e.g., a literature database of published articles, not a citation index) using general terms such as "detection limit," "quantitation limit," or "calibration." As expected, this approach returned a very large number of papers that mention these terms, even if the focus of the paper was on something far removed from the development or assessment of concepts about detection and quantitation, and it proved to be of limited value in serving EPA's objectives for the search. Therefore, DynCorp discontinued this effort and narrowed the on-line literature search to a search for additional, uncited works by authors of the concepts known to EPA or identified through the citation index approach.

Papers determined to be relevant to EPA's objective were obtained in electronic or hardcopy format for evaluation by EPA, except where noted below.

How the results are presented

DynCorp identified a total of 160 relevant publications using the approach described above. Thirty-three (33) of these publications were also identified in the earlier search by SAIC. Of the 127 remaining publications, 35 were published since the SAIC search was completed.

Each of the 160 publications identified in the search is listed in Attachment 1, which provides the title, year of publication, authors, and source citation. The citations for the 33 papers identified in the earlier search by SAIC are included in the attachment, and can be identified by the phrase "annotated only" in parentheses after the title of the paper.

The final column of the attached spreadsheet is labeled "Category." All of the citations identified in the SAIC literature search and the current search conducted by DynCorp were placed in one of the six following categories, based on the principal characteristic of the article:

° Background - The citation discusses background information (including early works by Currie, Kaiser, and others).
° Calibration concept - The citation primarily deals with calibration of analytical instrumentation.
° Critique - The major thrust of the citation is to critique one or more concepts, as opposed to introducing a new concept.
° Multi-laboratory concept - The citation describes an approach to developing detection and/or quantitation limits that relies on multi-laboratory measurements.
° Single-laboratory concept - The citation describes an approach to developing detection and/or quantitation limits that relies on single-laboratory measurements.
° Single-laboratory, multi-level concept - The citation describes an approach to developing detection and/or quantitation limits that relies on single-laboratory measurements but explicitly includes multiple concentrations.

Although there is some degree of overlap between categories, and some papers could probably be classified in more than one category, each citation was classified into only one category for the purposes of this search.

A seventh category called "Not found" was used for three papers that were identified in the literature search but copies of which could not readily be obtained. One paper is from a German journal that was not available via interlibrary loan. A second article was also not available via interlibrary loan. The third citation is an abstract by Currie from 1983. Given that the work of Currie is well represented in the other citations, and the fact that this citation appears to be only an abstract, additional efforts were not expended to obtain a copy.

The references presented in the table were sorted by category and year of publication, and are displayed with the most recent citations in each category first.
Summary

The principal goal of this literature search effort was to determine if any new detection or quantitation limit concepts had been published in the literature since the search by SAIC in 1997-1998. As anticipated, citations were identified that relate to the recent efforts of the International Organization for Standardization (ISO), the International Union of Pure and Applied Chemistry (IUPAC), and the American Society for Testing and Materials (ASTM). Additional articles critiquing various concepts were identified as well.

However, no previously unknown detection or quantitation limit concepts were uncovered as a result of this effort.
Results of the 2001 Literature Search

Title | Year | Author | Source | Category
Some Case Studies of Skewed (and other abnormal) Data Distributions Arising in Low-Level Environmental Research | 2001 | L. A. Currie | Fresenius Journal of Analytical Chemistry 370: 705-718 | Background
Legislative Limits Below Detection Capability | 2000 | S. L. R. Ellison, V. J. Barwick, A. Williams | Accreditation Quality Assurance 5: 308-313 | Background
International Recommendations Offered on Analytical Detection and Quantification Concepts and Nomenclature | 1999 | L. A. Currie | Analytica Chimica Acta 391: 103 | Background
Detection and Quantification Limits: Origins and Historical Overview | 1999 | L. A. Currie | Analytica Chimica Acta 391: 127-134 | Background
1996 ASMS Fall Workshop: Limits to Confirmation, Quantitation, and Detection | 1997 | R. Baldwin, R. A. Bethem, R. K. Boyd, W. L. Budde, T. Cairns, R. D. Gibbons, J. D. Henion, M. A. Kaiser | Journal of the American Society for Mass Spectrometry 8: 1180-1190 | Background
Measurement Precision and 1/f Noise in Analytical Instruments | 1996 | Y. Hayashi, R. Matsuda, R. B. Poe | Journal of Chromatography A 722: 157-167 | Background
Fossil- and Bio-mass Combustion: C-14 for Source Identification, Chemical Tracer Development, and Model Validation | 1994 | L. A. Currie, G. A. Klouda, D. B. Klinedinst, A. E. Sheffield, A. J. T. Jull, D. J. Donahue, M. V. Connolly | Nuclear Instr. and Methods in Physics Res. B 92: 404-409 | Background
Interlaboratory Comparison of Instruments Used for the Determination of Elements in Acid Digestates of Solids | 1994 | D. E. Kimbrough, J. Wakakuwa | Analyst 119: 383-388 | Background
Throwaway Data | 1994 | L. H. Keith | Environmental Science & Technology 28: 389A-390A | Background
EPA's Office of Water Surges Toward MDL Solution | 1994 | Larry Keith | Radian | Background
In Pursuit of Accuracy: Nomenclature, Assumptions, and Standards (annotated only) | 1992 | L. A. Currie | Pure & Applied Chemistry 64: 455-472 | Background
Interlaboratory Aspects of Detection Limits Used for Regulatory and Control Purposes | 1988 | L. B. Rogers | ACS Symposium Series 361: 94-108 | Background
Noise and Detection Limits in Signal-Integrating Analytical Methods | 1988 | H. C. Smit, H. Steigstra | ACS Symposium Series 361: 126-148 | Background
Effects of Analytical Calibration Models on Detection Limit Estimates | 1988 | K. G. Owens, C. F. Bauer, C. L. Grant | ACS Symposium Series 361: 194-207 | Background
Real-World Limitations to Detection | 1988 | D. Kurtz, J. Taylor, L. Sturdivan, W. Crummett, C. Midkiff, R. Watters Jr., L. Wood, W. Hanneman, W. Horwitz | ACS Symposium Series 361: 288-316 | Background
Detection Limits - A Systematic Approach to Detection Limits is Needed When Trace Determinations are to be Performed | 1986 | S. A. Borman | Analytical Chemistry 58: A986 | Background
Chemometrics and Analytical Chemistry | 1984 | L. A. Currie | Chemometrics 56: 115-146 | Background
Quality Control in Water Analyses | 1983 | C. Kirchmer | ES&T 17: 174A-181A | Background
Validation of Analytical Methods | 1983 | J. K. Taylor | Analytical Chemistry 55: 600A-602A, 608A | Background
Trace Analyses for Wastewaters - Author's response | 1982 | D. Foerst | Envir. Sci. & Tech. 16: 430A-431A | Background
Zur Theorie der Eichfunktion bei der spektrochemischen Analyse [On the theory of the calibration function in spectrochemical analysis] | 1982 | V. H. Kaiser | DK 535: 309-319 | Background
The Reliability of Detection Limits in Analytical Chemistry | 1980 | J. D. Winefordner, J. L. Ward | Analytical Letters 13: 1293-1297 | Background
A Review and Tutorial Discussion of Noise and Signal-to-Noise Ratios in Analytical Spectrometry - I. Fundamental Principles of Signal-to-Noise Ratios | 1978 | C. T. J. Alkemade, W. Snelleman, G. D. Boutilier, B. D. Pollard, J. D. Winefordner, T. L. Chester, N. Omenetto | Spectrochimica Acta 33B: 383-399 | Background
A Review and Tutorial Discussion of Noise and Signal-to-Noise Ratios in Analytical Spectrometry - II. Fundamental Principles of Signal-to-Noise Ratios | 1978 | G. D. Boutilier, B. D. Pollard, J. D. Winefordner, T. L. Chester, N. Omenetto | Spectrochimica Acta 33B: 401-415 | Background
A Tutorial Review of Some Elementary Concepts in the Statistical Evaluation of Trace Element Measurements | 1978 | P. W. J. M. Boumans | Spectrochimica Acta 33B: 625-634 | Background
Analysis of Lead in Polluted Coastal Seawater | 1976 | C. Patterson, D. Settle, B. Glover | Marine Chemistry 4: 305-319 | Background
Multielement Analysis with an Inductively Coupled Plasma/Optical Emission System | 1976 | R. M. Ajhar, P. D. Dalager, A. L. Davison | American Laboratory 72-78 | Background
Interlaboratory Lead Analyses of Standardized Samples of Seawater | 1974 | P. Brewer, N. Frew, N. Cutshall, J. J. Wagner, R. A. Duce, P. R. Walsh, G. L. Hoffman, J. W. R. Dutton, W. F. Fitzgerald | Marine Chemistry 2: 69-84 | Background
Statistical and Mathematical Methods in Analytical Chemistry | 1972 | L. A. Currie, J. J. Filliben, J. R. DeVoe | Anal. Chem. 44: 497R-512R | Background
Studies of Flame and Plasma Torch Emission for Simultaneous Multi-Element Analysis - I. Preliminary Investigations | 1972 | P. W. J. M. Boumans, F. J. De Boer | Spectrochimica Acta 27B: 391-414 | Background
Limits for Qualitative Detection and Quantitative Determination: Application to Radiochemistry | 1968 | Lloyd Currie | Anal. Chem. 40: 586-593 | Background
Qualitative and Quantitative Sensitivity in Flame Photometry | 1966 | J. Ramirez-Munoz | Talanta 13: 87-101 | Background
The Limit of Detection of Analytical Methods | 1962 | J. B. Roos | Analyst 87: 832-833 | Background
A Careful Consideration of the Calibration Concept | 2001 | S. D. Phillips, W. T. Estler, T. Doiron, K. R. Eberhardt, M. S. Levenson | Journal of Research of the National Institute of Standards and Technology 106: 371-379 | Calibration
Weighted Random-Effects Regression Models with Application to Interlaboratory Calibration | 2001 | R. D. Gibbons, D. K. Bhaumik | Technometrics 43: 192-198 | Calibration
Guidelines for Calibration in Analytical Chemistry - Part I. Fundamentals and Single Component Calibration (IUPAC recommendations 1998) | 1998 | K. Danzer, L. A. Currie | Pure and Applied Chemistry 70: 993-1014 | Calibration
A Comparison of Uncertainty Criteria for Calibration (annotated only) | 1996 | R. W. Mee, K. R. Eberhardt | Technometrics 38: 221-229 | Calibration
Constant-Width Calibration Intervals for Linear Regression (annotated only) | 1994 | K. R. Eberhardt, R. W. Mee | Journal of Quality Technology 26: 21-29 | Calibration
Regression and Calibration with Nonconstant Error Variance | 1990 | M. Davidian, P. D. Haaland | Chemometrics and Intelligent Laboratory Systems 9: 231-248 | Calibration
Calibration with Randomly Changing Standard Curves (annotated only) | 1989 | D. F. Vecchia, H. K. Iyer, P. L. Chapman | Technometrics 31: 83-90 | Calibration
Linear Calibration When the Coefficient of Variation is Constant (annotated only) | 1988 | Y. C. Yao, D. F. Vecchia, H. K. Iyer | Probability and Statistics: Essays in Honor of Franklin A. Graybill, 297-309 | Calibration
Analytical Method Comparisons by Estimates of Precision and Lower Detection Limit | 1986 | D. M. Holland, F. F. McElroy | Environmental Science & Technology 20: 1157-1161 | Calibration
Design Considerations for Calibration (annotated only) | 1986 | J. P. Buonaccorsi | Technometrics 28: 149-155 | Calibration
Multivariate Calibration when the Error Covariance Matrix is Structured (annotated only) | 1985 | T. Naes | Technometrics 27: 301-311 | Calibration
An Implementation of the Scheffé Approach to Calibration Using Spline Functions, Illustrated by a Pressure-Volume Calibration (annotated only) | 1982 | J. A. Lechner, C. P. Reeve, C. H. Spiegelman | Technometrics 24: 229-234 | Calibration
Measuring and Maximizing Precision in Analyses Based on Use of Calibration Graphs | 1982 | D. G. Mitchell, J. S. Garden | Talanta 29: 921-929 | Calibration
Calibration in Quantitative Analysis: Part 2. Confidence Regions for the Sample Content in the Case of Linear Calibration Relations | 1981 | J. Agterdenbos, F. J. M. J. Maessen, J. Balke | Analytica Chimica Acta 132: 127-137 | Calibration
Design Aspects of Scheffé's Calibration Theory using Linear Splines (annotated only) | 1980 | C. H. Spiegelman, W. J. Studden | Journal of Research of the National Bureau of Standards 85: 295-304 | Calibration
Nonconstant Variance Regression Techniques for Calibration-Curve-Based Analysis | 1980 | J. S. Garden, D. G. Mitchell, W. N. Mills | Anal. Chem. 52: 2310-2315 | Calibration
Calibration in Quantitative Analysis | 1979 | J. Agterdenbos | Analytica Chimica Acta 108: 315-323 | Calibration
Calibration Curves with Nonuniform Variance | 1979 | L. Schwartz | Analytical Chem. 51: 723-727 | Calibration
Elimination of the Bias in the Course of Calibration (annotated only) | 1978 | L. J. Naszódi | Technometrics 20: 201-205 | Calibration
Optimal Designs for the Inverse Regression Method of Calibration (annotated only) | 1973 | M. A. Thomas, R. H. Myers | Communications in Statistics 2: 419-433 | Calibration
A Statistical Theory of Calibration (annotated only) | 1973 | H. Scheffé | The Annals of Statistics 1: 1-37 | Calibration
On the Problem of Calibration (annotated only) | 1972 | G. K. Shukla | Technometrics 14: 547-553 | Calibration
Statistical Processing of Calibration Data in Quantitative Analysis by Gas Chromatography | 1970 | P. Bocek, J. Novak | J. Chromatog. 51: 375-383 | Calibration
Estimation of a Linear Function for a Calibration Line: Consideration of a Recent Proposal | 1969 | J. Berkson | Technometrics 11: 649-660 | Calibration
A Note on Regression Methods in Calibration (annotated only) | 1969 |
E.
J.
Williams
Technometrics
11:
189­
192
Calibration
Classical
and
Inverse
Regression
Methods
of
Calibration
in
Extrapolation
(
annotated
only)
1969
R.
G.
Krutchkoff
Technometrics
11:
605­
608
Calibration
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
7
Optimal
Experimental
Designs
for
Estimating
the
Independent
Variable
in
Regression
(
annotated
only)
1968
R.
L.
Ott,
R.
H.
Myers
Technometrics
10:
811­
823
Calibration
Classical
and
Inverse
Regression
Methods
of
Calibration
(
annotated
only)
1967
R.
G.
Krutchkoff
Technometrics
9:
425­
439
Calibration
The
Interpretation
of
Certain
Regression
Methods
and
their
Use
in
Biological
and
Industrial
Research
(
annotated
only)
1939
C.
Eisenhart
The
Annals
of
Mathematical
Statistics
10:
162­
186
Calibration
The
Three
"
Rs"
for
Relevant
Detection,
Reliable
Quantitation
and
Respectable
Reporting
Limits
2000
Ann
Rosecrance
Env.
Testing
&
Anal.
9:
13,50
Critique
Detection
and
Quantification
Capabilities
and
the
Evaluation
of
Low­
Level
Data:
Some
International
Perspectives
and
Continuing
Challenges
2000
L.
A.
Currie
Journal
of
Radioanalytical
and
Nuclear
Chemistry
245:
145­
156
Critique
Realistic
Detection
Limits
from
Confidence
Bands
1999
J.
R.
Burdge,
D.
L.
McTaggart,
S.
O.
Farwell
Journal
of
Chemical
Education
76:
434­
439
Critique
Response
to
Comment
of
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1999
Henry
Kahn,
William
Telliard,
Chuck
White
Env.
Sci.
&
Tech.
33:
1315
Critique
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1999
H.
G.
Rigo
Env.
Sci
&
Tech.
33:
1311­
1312
Critique
Response
to
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1999
Robert
Gibbons,
David
Coleman,
Ray
Maddalone
Env.
Sci.
&
Tech.
33:
1313­
1314
Critique
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1998
Henry
Kahn,
William
Telliard,
Chuck
White
Envir.
Sci
&
Tech
32:
2346­
2348
Critique
Response
to
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1998
Robert
Gibbons,
David
Coleman,
Ray
Maddalone
Envir.
Sci
&
Tech
32:
2349­
2353
Critique
A
Discussion
of
Issues
Raised
by
Lloyd
Currie
and
a
Cross
Disciplinary
View
of
Detection
Limits
and
Estimating
Parameters
that
are
Often
At
or
Near
Zero
(
annotated
only)
1997
C.
H.
Spiegelman
Chemometrics
and
Intelligent
Laboratory
Systems
37:
183­
188
Critique
A
Mock
Trial
for
Critical
Values
(
Detection
Limits)
(
annotated
only)
1997
C.
H.
Spiegelman,
P.
Tarlow
STATS:
The
Magazine
for
Students
of
Statistics
20:

13­
16
Critique
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1997
David
Kimbrough
Envir.
Sci.
&
Tech.
31:
3727­
3728
Critique
The
Smallest
Concentration
1997
R.
F.
Moran,
E.
N.
Brown
Clinical
Chemistry
43:
856­
857
Critique
A
Statistical
Overview
of
Standard
(
IUPAC
and
ACS)
and
New
Procedures
for
Determining
the
Limits
of
Detection
and
Quantification:
Application
to
Voltammetric
and
Stripping
Techniques
(
Technical
Report)
1997
J.
Mocak,
A.
M.
Bond,
S.
Meitchell,
G.

Scollary
Pure
and
Applied
Chemistry
69:
297­
328
Critique
Response
to
Comment
on
"
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification"
1997
R.
D.
Gibbons,
D.
E.
Coleman,
R.
F.

Maddalone
Envir.
Sci.
&
Tech
31:
3729­
3731
Critique
Some
Conceptual
and
Statistical
Issues
in
Analysis
of
Groundwater
Monitoring
Data
1996
R.
D.
Gibbons
Environmetrics
7:
185­
199
Critique
Some
Statistical
and
Conceptual
Issues
in
the
Detection
of
Low
Level
Environmental
Pollutants
1995
Robert
Gibbons
Environ.
&
Ecol.
Statistics
2:
125­
167
Critique
Asessment
of
Detetion
and
Quantitation
Concepts
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
8
Comment
on
"
Method
Detection
Limits
in
Solid
Waste
Analysis"
1995
D.
E.
Coleman
Environmental
Science
&
Technology
29:
279­
280
Critique
Comment
on
"
Method
Detection
Limits
in
Solid
Waste
Analysis"
1995
Janice
Wakakuwa,
David
Kimbrough
Envir.
Sci.
&
Tech.
29:
281­
282
Critique
"
You
Can't
Compute
With
Less­
Thans"
1994
Ken
Osborn,
Ann
Rosecrance
East
Bay
Municipal
Utility
District,
Core
Laboratories
Critique
Limits
of
Detection
1994
N.
Cressie
Chemometrics
Intelligent
Laboratory
Systems
22:

161­
163
Critique
Conflicting
Perspectives
About
Detection
Limits
and
About
the
Censoring
of
Environmental
Data
1994
M.
J.
R.
Clark,
P.
H.
Whitfield
Water
Resources
Bulletin
30:
1063­
1079
Critique
Limit
of
Discrimination,
Limit
of
Detection
and
Sensitivity
in
Analytical
Systems
1994
R.
Ferrus,
M.
R.
Egea
Analytica
Chimica
Acta
287:
119­
145
Critique
Discussion
of:
A
Study
of
the
Precision
of
Lead
Measurements
at
Concentrations
Near
the
Method
Limit
of
Detection
1994
B.
R.
Nott,
R.
R.
Maddalone
Water
Environment
Research
66:
853­
854
Critique
Limits
of
Detection
Methodologies
1993
J.
Lindstedt
Plating
and
Surface
Finishing
80:
81­
86
Critique
Method
Detection
Limits
in
Solid
Waste
Analysis
1993
David
Kimbrough,
Janice
Wakakuwa
Enviro.
Sci.
&
Tech
27:
2692­
2699
Critique
Defining
the
Limits
1993
G.
Stanko,
W.
Krochta,
A.
Stanley,
T.
Dawson,

K.
Hillig,
R.
Javick,
R.
Obrycki,
B.
Hughes,
F.

Saksa
Environmental
Lab
1:
16­
20
Critique
A
Study
of
the
Precision
of
Lead
Measurements
at
Concentrations
Near
the
Method
Limit
of
Detection
1993
P.
M.
Berthouex
Water
Environment
Research
65:
620­
629
Critique
Detection
Limit
Concepts:
Foundations,
Myths,
and
Utilization
1992
D.
A.
Chambers,
S.
S.
Dubose,
E.
L.

Sensintaffar
Health
Phys.
63:
338­
340
Critique
Difficulties
Related
to
Using
Extreme
Percentiles
for
Water
Quality
Regulations
1991
P.
M.
Berthouex,
Ian
Hau
Research
Journal
WPCF
63:
873­
879
Critique
A
Simple
Rule
for
Judging
Compliance
Using
highly
Censored
Samples
1991
P.
M.
Berthouex,
Ian
Hau
Research
Journal
WPCF
63:
880­
886
Critique
Current
Method
for
Setting
Dioxin
Limits
in
Water
Requires
Reexamination
1990
J.
LaKind,
E.
Rifkin
Env.
Sci.
&
Tech
24:
963­
965
Critique
Kaiser
3­
Sigma
Criterion
­
A
Review
of
the
Limit
of
Detection
1990
L.
S.
Oresic,
V.
Grdinic
Acta
Pharmaceutica
Jugoslavica
40:
21­
61
Critique
MCL
Noncompliance:
Is
the
Laboratory
at
Fault?
1990
Steven
Koorse
AWWA
p53­
58
Critique
Qualitative
or
Quantitative
Characterization
of
Spectrographic
Methods?
The
Detection
and
Determination
Limits
1990
Karol
Florian
Chemia
Analityczna
35:
129­
139
Critique
False
Positives,
Detection
Limits,
and
Other
Laboratory
Imperfections:
The
Regulatory
Implications
1989
Steven
Koorse
Environmental
Law
Reporter
19:
10211­
10222
Critique
Evaluation
of
Detection
Limit
Estimators
(
annotated
only)
1988
F.
C.
Garner,
G.
L.
Robertson
Chemometrics
and
Intelligent
Laboratory
Systems
3:

53­
59
Critique
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
9
Chemometrics
­
Measurement
Reliability
1988
K.
Castaneda­
Mendez
Clinical
Chemistry
34:
2494­
2498
Critique
The
Detection
Limit
1988
P.
S.
Porter,
R.
C.
Ward,
H.
F.
Bell
Environmental
Science
&
Technology
22:
856­
861
Critique
Estimation
of
Detection
Limits
for
Environmental
Analytical
Procedures
­
A
Tutorial
1988
Cliff
Kirchmer
ACS
Symposium
Series
361:
78­
93
Critique
Limits
of
Detection
1984
J.
K.
Taylor
Analytical
Chemistry
56:
130A
Critique
Clarification
of
the
Limit
of
Detection
in
Chromatography
1984
J.
P.
Foley,
J.
G.
Dorsey
Chromatographia
18:
503­
511
Critique
Limit
of
Detection:
A
Closer
Look
at
the
IUPAC
Definition
1983
Gary
Long,
J.
D.
Winefordner
Analytical
Chem.
55:
712­
724
Critique
Trace
Analyses
for
Wastewaters
1982
C.
J.
Kirchmer
Envir.
Sci.
&
Tech.
16:
430A
Critique
A
comparison
of
statistical
and
empirical
detection
limits
1998
G.
C.
C.
Su
Journal
of
AOAC
International
81:
105­
110
Multilab
Challenges
in
Regulatory
Environmetrics
1997
C.
B.
Davis
Chemometrics
Intelligent
Laboratory
Systems
37:

43­
53
Multilab
Determining
Quantitation
Levels
for
Regulatory
Purposes
1996
P.
F.
Sanders,
R.
L.
Lippincott,
A.
Eaton
Journal
American
Water
Works
Association
88:
104­

114
Multilab
Defining
Detection
and
Quantitation
Levels
1993
Raymond
Maddalone,
James
Rice,
Ben
Edmondson,
Babu
Nott,
Judith
Scott
Water
Envir.
&
Tech.
Jan.
93:
41­
44
Multilab
Concept
2000­
A
Statistical
Approach
for
Analytical
Practice
­
Part
I:
Limits
of
Detection,

Identification,
and
Determination
1999
Hadrich
J
et
al
Deutsche
Lebensmittel­
Rundschau
1999,
95(
10),

428­
436
not
found
Statistics
and
Environmental
Policy:
Case
Studies
from
Long­
Term
Environmental
Monitoring
Data
1999
Goudey
R
et
al
Novart
FDN
Sym
220;
144­
157
not
found
The
Many
Dimensions
of
Detection
in
Chemical
Analysis
1983
Currie
LA
Abstracts
of
Papers
of
the
American
Chemical
Society,
185
(
Mar),
63­
PEST
not
found
A
Practical
Strategy
for
Determining
and
Verifying
Detection
Limits
2001
T.
Georgian,
K.
E.
Osborn
Env.
Testing
&
Analysis
10:
13­
14
Single
lab
Review
of
the
Methods
of
the
US
Environmental
Protection
Agency
for
Bromate
Determination
and
Validation
of
Method
317.0
for
Disinfection
By­
Product
Anions
and
Low
­
Level
Bromate
2001
D.
P.
Hautman,
D.
J.
Munch,
C.
Frebis,
H.
P.

Wagner,
B.
V.
Pepich
Journal
of
Chromatography
A
920:
221­
229
Single
lab
Comparison
of
Detection
Limits
in
Environmental
Analysis
­
Is
it
Possible?
An
Approach
on
Quality
Assurance
in
the
Lower
Working
Range
by
Verification
2001
S.
Geib,
J.
W.
Einax
Fresenius
Journal
of
Analytical
Chemistry
370:
673­

678
Single
lab
On
the
Assessment
of
Compliance
with
Legal
Limits,
Part
I:
Signal
and
Concentration
Domains
2001
E.
Desimoni,
S.
Mannino,
B.
Brunetti
Accreditation
Quality
Assurance
6:
452­
458
Single
lab
Capability
of
Detection
­
Part
2
2000
ISO
ISO
11843­
2
Single
lab
Nomenclature
in
Evaluation
of
Analytical
Methods
Including
Detection
and
Quantifation
Capabilities
(
IUPAC
Recommendations
1995)
1999
L.
A.
Currie
Analytica
Chimica
Acta
391:
105­
126
Single
lab
Asessment
of
Detetion
and
Quantitation
Concepts
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
10
New
Reporting
Procedures
Based
on
Long­
Term
Method
Detection
Limits
and
Some
Considerations
for
Interpretations
of
Water­
Quality
Data
Provided
by
the
U.
S.

Geological
Survey
National
Water
Quality
Laboratory
1999
C.
J.
Obinger
Childress,
W.
T.
Foreman,
B.

F.
Connor,
and
T.
J.
Maloney
USGS
Open­
File
Report
99­
193,
19
pages.
Single
lab
Analyses
of
Polychlorinated
Biphenyls
and
Chlorinated
Pesticides
in
Biota:
Method
and
Quality
Assurance
1999
P.
Cleemann,
G.
B.
Paulsen
Journal
of
AOAC
International
82:
1175­
1184
Single
lab
Detection
Limits
of
Organic
Contaminants
in
Drinking
Water
1998
W.
M.
Draper,
J.
S.
Dhoot,
J.
S.
Dhaliwal,

J.
W.
Remoy,
S.
K.
Perera,
F.
J.
Baumann
Journal
of
the
American
Water
Works
Association
90:
82­
90
Single
lab
Detection:
International
Update,
and
Some
Emerging
Di­
lemmas
Involving
Calibration,

the
Blank,
and
Multiple
Detection
Decisions
(
annotated
only)
1997
L.
A.
Currie
Chemometrics
and
Intelligent
Laboratory
Systems
37:
151­
181
Single
lab
Regulations
­
From
an
Industry
Perspective
or
Relationships
Between
Detection
Limits,

Quantitation
Limits,
and
Significant
Digits
(
annotated
only)
1997
D.
Coleman,
J.
Auses,
N.
Grams
Chemometrics
and
Intelligent
Laboratory
Systems
37:
71­
80
Single
lab
Capability
of
Detection
­
Part
1
1997
ISO
ISO
11843­
1
Single
lab
Determination
of
Site­
Specific
Effluent
Detection
Limits
1996
George
Neserke,
Harold
Taylor
Water
Env.
Res.
66:
115­
119
Single
lab
Multivariate
Detection
Limits
Estimators
1996
R.
Boque,
F.
X.
Rius
Chemometrics
and
Intelligent
Laboratory
Systems
32:
11­
23
Single
lab
Nomenclature
in
Evaluation
of
Analytical
Methods
including
Detection
and
Quantification
Capabilities
1996
Lloyd
Currie
Pure
&
Appl.
Chem.
67:
1699­
1723
Single
lab
Reporting
Low­
Level
Analytical
Data
Third
Draft
(
1995­
11­
08)
­­
New
Project
of
Commission
V.
I.,
International
Union
of
Pure
and
Applied
Chemistry
1995
William
Horwitz
IUPAC
Single
lab
IUPAC
Recommendations
for
Defining
and
Measuring
Detection
and
Quantification
Limits
1994
LA
Currie,
W.
Horwitz
Analuses
Magazine
22:
24­
26
Single
lab
Recommendations
for
the
Presentation
of
Results
of
Chemical
Analysis
(
annotated
only)
1994
L.
A.
Currie,
G.
Svehla
Pure
&
Applied
Chemistry
66:
595­
608
Single
lab
Detarchi­
A
Program
for
Detection
Limits
with
Specified
Assurance
Probabilites
and
Characteristic
Curves
of
Detection
1994
L.
Sarabia,
M.
C.
Ortiz
TRAC­
Trends
in
Analytical
Chemistry
13:
1­
6
Single
lab
Quality
Control
Level:
An
Alternaltive
to
Detection
Levels
1994
D.
E.
Kimbrough,
J.
Wakakuwa
Environmental
Science
&
Technology
28:
338­
345
Single
lab
Multivariate
Decision
and
Detection
Limits
(
annotated
only)
1993
A.
Singh
Analytica
Chimica
Acta
277:
205­
214
Single
lab
A
Model
of
Measurement
Precision
at
Low
Concentrations
1993
P.
M.
Berthouex,
D.
R.
Gan
Water
Environment
Research
65:
759­
763
Single
lab
Robust
Procedure
for
Calibration
and
Calculation
of
the
Detection
Limit
of
Trimipramine
by
Adsorptive
Stripping
Voltametry
at
a
Carbon
Paste
Electrode
1993
M.
C.
Ortiz,
J.
Arcos,
J.
V.
Jurarros,
J.
Lopez­

Palacios,
L.
A.
Sarabia
Analytical
Chemistry
65:
678­
682
Single
lab
Nondetects,
Detection
Limits,
and
the
Probability
of
Detection
1991
D.
Lambert,
B.
Peterson,
I.
Terpenning
JASA
86:
266­
277
Single
lab
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
11
Detection
Limits:
For
Linear
Calibration
Curves
with
Increasing
Variance
and
Multiple
Future
Detection
Decisions
(
annotated
only)
1991
R.
D.
Gibbons,
F.
H.
Jarke,
K.
P.
Stoub
Waste
Testing
and
Quality
Assurance:
ASTM
STP
1075,
D.
Friedman,
Ed.,
American
Society
for
Testing
and
Materials,
Philadelphia
3:
337­
390
Single
lab
Limits
of
Detection
in
Multivariate
Calibration
1991
G.
Bauer,
W.
Wegscheider,
H.
M.
Ortner
Fresenius
Journal
of
Analytical
Chemistry
340:
135­

139
Single
lab
Estimating
Detection
Limits
in
Ultratrace
Analysis.
Part
I:
The
Variability
of
Estimated
Detection
Limits
1991
C.
L.
Stevenson,
J.
D.
Winefordner
Applied
Spectroscopy
45:
1217­
1224
Single
lab
Reporting
Low­
Level
Data
for
Computerized
Data
Bases
1988
M.
Brossman,
G.
McKenna,
H.
Kahn,
D.

King,
R.
Kleopfer,
J.
Taylor 
ACS
Symposium
Series
361:
317­
327
Single
lab
Detection
Limits
with
Specified
Assurance
Probabilities
1987
C.
A.
Clayton,
J.
W.
Hines,
and
P.
D.
Elkins
Analytical
Chemistry
59:
2506­
2514
Single
lab
Limit
of
Detection
and
Limit
of
Determination
­
Application
of
Different
Statistical
Approaches
to
an
Illustrative
Example
of
Residue
Analysis
1987
J.
Vogelgesang
Fresenius
Zeitschrift
Fur
Analytsche
Chemie
328:

213­
220
Single
lab
Determining
the
Lowest
Limit
of
Reliable
Assay
Measurement
(
annotated
only)
1983
L.
Oppenheimer,
T.
P.
Capizzi,
R.
M.

Weppelman,
H.
Mehta
Analytical
Chemistry
55:
638­
643
Single
lab
Principles
of
Environmental
Analysis
1983
L.
H.
Keith,
W.
Crummett,
J.
Deegan
Jr,
R.
A.

Libby,
J.
K.
Taylor,
G.
Wentler
Analytical
Chemistry
55:
2210­
2218
Single
lab
Trace
Analyses
for
Wastewaters
1981
John
Glaser,
Denis
Foerst,
Gerald
McKee,

Stephan
Quave,
William
Budde
Env.
Sci.
&
Tech
15:
1426­
1435
Single
lab
Guidelines
for
Data
Acquisition
and
Data
Quality
Evaluation
in
Environmental
Chemistry
1980
ACS
Committee
of
Environmental
Improvement
Anal.
Chem.
52:
2242­
2249
Single
lab
Sensitivity
and
Limit
of
Detection
in
Quantitative
Spectrometric
Methods
(
annotated
only)
1974
J.
D.
Ingle
Jr.
Journal
of
Chemical
Education,
51,
100­
105.
Single
lab
Decision
and
Detection
Limits
for
Linear
Calibration
Curves
(
annotated
only)
1970
A.
Hubaux,
G.
Vos
Analytical
Chemistry
42:
849­
855
Single
lab
Limits
for
Quantitative
Detection
and
Quantitative
Determination
(
annotated
only)
1968
L.
A.
Currie
Analytical
Chemistry
40:
586­
593
Single
lab
A
Statistical
Method
for
Evaluation
of
Limiting
Detectable
Sample
Concentrations
1967
P.
A.
St.
John,
W.
J.
McCarthy,
J.
D.

Winefordner
Analytical
Chem.
39:
1495­
1597
Single
lab
Initial
Evaluation
of
Quantitative
Performance
of
Chromatographic
Methods
Using
Replicates
at
Multiple
Concentrations
2001
M.
A.
Castillo,
R.
C.
Castells
Journal
of
Chromatography
A
921:
121­
133
Single
lab
­

multilevel
Multivariate
Detection
Limits
with
Fixed
Probabilities
of
Error
1999
R.
Boque,
M.
S.
Larrechi,
F.
X.
Rius
Chemometrics
and
Intelligent
Laboratory
Systems
45:
397­
408
Single
lab
­

multilevel
Evaluation
of
Approximate
Methods
for
Calculating
the
Limit
of
Detection
and
Limit
of
Quantification
1999
M.
E.
Zorn,
R.
D.
Gibbons,
W.
C.
Sonzogni
Environmental
Science
&
Technology
33:
2291­

2295
Single
lab
­

multilevel
Asessment
of
Detetion
and
Quantitation
Concepts
Title
Year
Author
Source
Category
Draft
Document
for
Peer
Review
­
Do
Not
Circulate
­
August
2002
A­
12
Limits
of
Detection,
Identification
and
Determination:
A
Statistical
Approach
for
Practitioners
1998
J.
Vogelgesang,
J.
Hadrich
Accreditation
Quality
Assurance
3:
242­
255
Single
lab
­

multilevel
Weighted
Least­
Squares
Approach
to
Calculating
Limits
of
Detection
and
Quantification
by
Modeling
Variability
as
a
Function
of
Concentration
(
annotated
only)
1997
M.
E.
Zorn,,
R.
D.
Gibbons,
W.
C.
Sonzogni
Analytical
Chemistry
69:
3069­
3075
Single
lab
­

multilevel
Detection
Limits
in
GC­
MS
Multivariate
Analysis
1997
Boque
R
et
al
Quimica
Analytica,
16(
2),
81­
86
Single
lab
­

multilevel
An
Alternative
Minimum
Level
Definition
for
Analytical
Quantification
1997
Robert
Gibbons,
David
Coleman,
Raymond
Maddalone
Environmental
Science
&
Technology
31:
2071­

2077
Single
lab
­

multilevel
A
Two­
Component
Model
for
Measurement
Error
in
Analytical
Chemistry
1995
David
Rocke,
Stefan
Lorenzato
Technometrics
37:
176­
184
Single
lab
­

multilevel
Practical
Quantitation
Limits
(
annotated
only)
1992
R.
D.
Gibbons,
N.
E.
Grams,
F.
H.
Jarke,
K.
P.

Stoub
Chemometrics
and
Intelligent
Laboratory
Systems
12:
225­
235.
Single
lab
­

multilevel
Experimental
Comparison
of
EPA
and
USATHAMA
Detection
and
Quantitation
Capability
Estimators
1991
C.
L.
Grant,
A.
D.
Hewitt,
T.
F.
Jenkins
American
Laboratory
23:
15­
33
Single
lab
­

multilevel
High
Pressure
Liquid
Chromatography
Determination
of
the
Intermediates
Side
Reaction
Products
in
FD&
C
Red
No.
2
and
FD&
C
Yellow
No.
5:
Statistical
Analysis
of
Instrument
Response
(
annotated
only)
1978
C.
J.
Bayley,
E.
A.
Cox,
J.
A.
Springer
J.
Assoc.
Off.
Anal.
Chem
61:
1404­
1414.
Single
lab
­

multilevel
Appendix B is not done yet, but it will go here when it is. It will be sent to the reviewers shortly.
Appendix C is not done yet, but it will go here when it is. It will be sent to the reviewers shortly.
Appendix D
Draft Revised MDL Procedure

Definition and Procedure for the Determination of the Method Detection Limit, Revision 4.0, Draft, July 1, 2002
Definition

The method detection limit (MDL) is the measured concentration at which there is 99% confidence that a given analyte is present in a given sample matrix. The MDL is estimated from replicate analyses of a matrix containing the analyte.

Scope and Application

This procedure is for determination of an MDL for a given analyte (parameter) in a given matrix (the component or substrate that contains the analyte) using a given test procedure (analytical method). It is applicable to a wide variety of analytes, to matrices ranging from air through water to solid, and to a broad variety of physical, chemical, and biological methods. The MDL for a given analyte measured using a given analytical method may vary as a function of the matrix tested.

This procedure requires a complete, specific, and well-defined analytical method. It is essential that all sample processing steps of the analytical method be included in determination of an MDL. The procedure is independent of the analytical system employed for the measurement.

The MDL must be determined in a reference matrix to demonstrate the capability of the analytical method in the absence of interferences. It may be determined in other matrices if desired.

Procedure

1. Estimate the MDL

The experience of the laboratory is important to properly estimate the MDL. However, the laboratory must include the following considerations in producing this initial estimate:

1.1 The concentration of analyte that produces an instrument signal-to-noise ratio in the range of 2.5 to 5 for those instances in which an instrument is used for the determination.

1.2 The concentration equivalent to three times the standard deviation of replicate measurements of the analyte in a blank. If analysis of the blank produces no response (zero), the concentration equivalent to three times the standard deviation of replicate measurements at the lowest concentration that produces a response.

1.3 A concentration in the region of constant or near-constant standard deviation at low concentrations.

1.4 The lowest concentration that can be measured by analyzing samples containing successively lower concentrations of the analyte.

2. Select a reference matrix

To establish the capability of an analytical method, the MDL must be determined in a matrix free of the analyte and free of matrix effects (interferences). The most common reference matrix is reagent water. Reagent water is defined as water in which the analyte and interferences are not detected at the MDL or, if this is the initial estimate, detected at the MDL estimate produced in Step 1. An interference is defined as a systematic error in the measured analytical signal caused by the presence of a substance other than the analyte. Other common reference matrices are playground sand for soils, sediments, and other solid samples; and corn oil for tissue samples.

3. Establish the test concentration

3.1 Reference matrix

Spike the reference matrix with the analyte to produce a test concentration in the range of one to five times the MDL estimated in Step 1 or, if this is an iteration from Step 7, the last MDL calculated. A sufficient quantity of spiked reference matrix should be prepared for a minimum of 7 analyses. Proceed to Step 4.

3.2 Matrix other than reference matrix

Analyze a sample of the matrix in triplicate.

3.2.1 If the average measured concentration of the analyte in the matrix is in the range of one to five times the MDL determined in the reference matrix (Steps 3.1 and 4 - 7), proceed to Step 4.

3.2.2 If the average measured concentration of the analyte in the matrix is less than the MDL determined in the reference matrix, spike a sample of the matrix to bring the concentration of the analyte to between one and five times the MDL determined in the reference matrix. Proceed to Step 4.

3.2.3 If the average measured concentration of the analyte in the matrix is greater than five times the MDL determined in the reference matrix, reduce the concentration of the analyte to between one and five times the MDL determined in the reference matrix using one of the following:

3.2.3.1 Selectively remove the analyte from the matrix.

3.2.3.2 Obtain another matrix with a lower concentration of the analyte.

3.2.3.3 Dilute a sample of the matrix with the reference matrix. For example, if the matrix is aqueous, dilute the sample with reagent water.

3.3 If this is an iteration from Step 7, spike the matrix at the last MDL calculated.

4. Perform the analyses

4.1 It may be economically and technically desirable to evaluate the estimate of the MDL (Step 1) before proceeding with determination of the MDL in Step 4.2. This will prevent repeating this entire procedure when the costs of analyses are high, and helps ensure that the procedure is being conducted at the correct concentration. To evaluate the estimated MDL, proceed as follows:

4.1.1 Process three aliquots of the sample to be used to calculate the MDL (Step 3) through the entire method per Step 4.2.

4.1.2 Calculate the standard deviation of results for the three aliquots as follows:

$$ s = \sqrt{\frac{\sum_{i=1}^{3} (X_i - \bar{X})^2}{2}} $$

where:

X_i, i = 1 to 3, are the analytical results in the method reporting units obtained from analysis of the 3 sample aliquots, X̄ is their mean, and 2 is the degrees of freedom for the 3 measurements.

4.1.3 Calculate a preliminary MDL as follows:

Preliminary MDL = 7s

where:

7 = the Student's t value appropriate for a 99% confidence level and a standard deviation estimate with 2 degrees of freedom (3 replicates).
s = standard deviation of results of analyses of the 3 replicates.

4.1.4 If the Preliminary MDL is in the range of 0.2 - 1.0 times the concentration in the spiked sample (Step 3), take four additional aliquots and proceed using the procedure in Step 4.2. Use all seven measurements for calculation of the MDL. Otherwise, produce a new sample per Step 3 with the analyte at the concentration of the Preliminary MDL and either repeat Step 4.1 or proceed to Step 4.2 for determination of the MDL.
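The Step 4.1 screen reduces to a short calculation. The sketch below is a minimal illustration in Python; the spike concentration and three aliquot results are hypothetical values, not data from this procedure:

```python
import statistics

# Hypothetical spike concentration and three aliquot results
# (method reporting units)
spike_conc = 0.50
aliquots = [0.46, 0.52, 0.43]

# Step 4.1.2: sample standard deviation with n - 1 = 2 degrees of freedom
s = statistics.stdev(aliquots)

# Step 4.1.3: Preliminary MDL = 7s, where 7 is approximately the Student's t
# value for 99% confidence and 2 degrees of freedom
preliminary_mdl = 7 * s

# Step 4.1.4: the test concentration is suitable if the preliminary MDL
# falls between 0.2 and 1.0 times the spiked concentration
suitable = 0.2 * spike_conc <= preliminary_mdl <= 1.0 * spike_conc
print(round(preliminary_mdl, 3), suitable)
```

If `suitable` is false, the spike level would be adjusted per Step 4.1.4 before the full seven-replicate determination.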

4.2 Process seven aliquots of the test sample chosen in Step 3 or Step 4.1 through the entire analytical method. Make all computations according to the method with final results in the method reporting units. To obtain a valid MDL, the seven analytical results must all be positive numbers. If any of the seven results is negative or zero, increase the test concentration (per Step 3) and repeat the MDL procedure.

5. Calculate the standard deviation (s) from the last set of seven replicate analyses, as follows:

$$ s = \sqrt{\frac{\sum_{i=1}^{7} (x_i - \bar{x})^2}{6}} $$

where:

x_i, i = 1 to 7, are the results from the last (kth) set of seven replicate analyses, and x̄ is their mean.

6. Calculate the MDL as follows:

MDL = 3.14s

where:

MDL = the method detection limit.
3.14 = the Student's t value appropriate for a 99% confidence level and a standard deviation estimate with 6 degrees of freedom.
s = standard deviation of analyses of the 7 replicates.
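Steps 5 and 6 can be sketched in a few lines of Python; the seven replicate results below are hypothetical values used only to show the arithmetic:

```python
import statistics

# Hypothetical results from seven replicate analyses (method reporting units)
replicates = [0.48, 0.39, 0.52, 0.45, 0.41, 0.50, 0.44]

# Step 5: sample standard deviation with n - 1 = 6 degrees of freedom
s = statistics.stdev(replicates)

# Step 6: MDL = 3.14 * s, where 3.14 is the Student's t value for
# 99% confidence and 6 degrees of freedom
mdl = 3.14 * s

print(round(s, 4), round(mdl, 4))
```

Note that `statistics.stdev` computes the n - 1 (sample) form of the standard deviation, which is what the divisor of 6 in Step 5 requires.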

7. Iterate and verify the reasonableness of the MDL

7.1 If two MDL estimates have been produced, proceed to step 7.2; otherwise, return to step 3 and produce a second MDL estimate.

7.2 Calculate:

$$ F = \frac{s_h^2}{s_l^2} $$

where:

s_h^2 = the variance estimate from the larger spike concentration in the last two iterations.
s_l^2 = the variance estimate from the smaller spike concentration in the last two iterations.

7.3 If F > 3.05, return to step 3 and produce another MDL estimate; otherwise continue.

7.4 The final MDL is the last (most recent) MDL calculated from two successive iterations that pass the F-ratio test (F ≤ 3.05).
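The Step 7 convergence check reduces to a single ratio of the two variance estimates. In the sketch below the two variances are hypothetical values:

```python
# Hypothetical variance estimates from the last two iterations
# (squared method reporting units)
s2_h = 0.0031  # variance at the larger spike concentration
s2_l = 0.0016  # variance at the smaller spike concentration

# Step 7.2: F ratio of the two variance estimates
F = s2_h / s2_l

# Steps 7.3 and 7.4: the two iterations agree if F <= 3.05;
# otherwise return to Step 3 and produce another MDL estimate
converged = F <= 3.05
print(round(F, 4), converged)
```

When `converged` is true, the MDL from the most recent iteration is reported as the final MDL per Step 7.4.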

Reporting

The following information must be reported to support the MDL for each analyte:

- Analytical method by number and title
- Matrix
- Mean recovery (if a laboratory standard or a sample that contained a known amount of analyte was used for the MDL determination)

The following information must also be reported, and expressed in the appropriate method reporting units:

- Concentration at which the test was performed
- The measured concentration in each aliquot for the iterations used to calculate the MDL (14 results)
- The final MDL

If the analytical method permits a change in analytical conditions that could affect the MDL, these conditions must be reported.
